LocalDev on a budget: Consul DNS with systemd-resolved on Ubuntu 22.04

I'm running very minimalist and lean these days. My “homelab” consists of a 1 year-old Lenovo X1 laptop running Xubuntu 22.04 plus an account on sdf.org I've had for so long, I can't even remember when I signed up for it. I want to see how far I can get with what I have before I switch over to to a Digital Ocean or an AWS or whomever. So my laptop is my entire development environment. And for creating and testing new projects on the fly, I decided to implement HashiCorp's consul service discovery with its included DNS server. Basically, I want a new service or node on my laptop to be able to register itself with my consul server so I can query my laptop's DNS for the new service or node's IP address. This is a very handy addition because I have docker, multipass, and KVM running. And all of those add their own interfaces and networks to my laptop.

Usually, something like dnsmasq handles the consul server/local DNS integration pretty well. But I wasn't interested in installing something new. I don't consider myself an expert on the inner workings of Ubuntu's DNS architecture, so I decided that leaving the default install alone as much as I could, while still implementing what I wanted, would help ensure that things “just worked” no matter what new virtual machine manager or container orchestrator I installed. Systemd-resolved is included by default on Xubuntu and it supports split DNS, so I wanted to give it a try.

For my setup, I use NextDNS as the default DNS provider on my laptop. When adding consul, I want consul's DNS to resolve queries only for the domain consul manages; all other queries should go to NextDNS. Systemd-resolved's split DNS uses routing domains to know which DNS server to query for which domain. Adding a new routing domain means adding a new interface on my laptop, with the consul server configured as that interface's DNS server and the consul domain configured as that interface's DNS domain. Simply put, I configured my global/default routing domain to use NextDNS and added a brand new interface to act as the routing domain for my consul server's domain.

The global routing domain

NextDNS is pretty trivial to set up with systemd-resolved. I simply added a new file to /etc/systemd/resolved.conf.d named (un-originally enough) nextdns.conf.

/etc/systemd/resolved.conf.d/nextdns.conf

[Resolve]
DNS=45.90.28.0#XXX-XXXXXX.dns1.nextdns.io
DNS=45.90.30.0#XXX-XXXXXX.dns2.nextdns.io
Domains=~.
DNSSEC=no
DNSOverTLS=yes
MulticastDNS=no
LLMNR=no

A quick overview of the above settings:

DNS= The IP addresses of the NextDNS servers. The X'ed out text above represents some metadata used by NextDNS for analytics – namely, my laptop's hostname and the ID of my NextDNS account. You can, of course, switch these IPs out with your preferred DNS provider such as Cloudflare or Google.

Domains= The ~. identifies this route as the default route all queries will be directed to if no other routes are available for the domain.

DNSSEC= Currently disabled because Zoom isn't playing nicely with DNSSEC. But I normally prefer to set this to allow-downgrade.

DNSOverTLS= Enabled to enforce encrypted DNS traffic only.

MulticastDNS=/LLMNR= Multicast DNS and Link-Local Multicast Name Resolution. Two protocols I'm not using, so I disabled them.

Note: the above works great if you do not have a separate configuration in the file /etc/systemd/resolved.conf. By default, Xubuntu adds this file but leaves all of its contents commented out.

The above configures a global routing domain but does not pair it with an interface on my laptop. I want NextDNS used for all outbound queries, and I want to permanently stop any DHCP server from injecting its own DNS hosts into my wifi or wired connections. So instead of attaching this route to a specific interface, I configured NetworkManager to use it for every interface NetworkManager manages. Implementing this was trivial: just add the file dns.conf to /etc/NetworkManager/conf.d.

/etc/NetworkManager/conf.d/dns.conf

[main]
dns=systemd-resolved

[global-dns-domain-*]
servers=127.0.0.53

The above sets up NetworkManager to use systemd-resolved directly and to configure systemd-resolved's stub address (127.0.0.53) as the global DNS host for all domains. I configured this in NetworkManager because, by default, Xubuntu hands network management over to NetworkManager. We can verify this by looking at the file(s) in /etc/netplan. On my recently installed Xubuntu 22.04 laptop, with no network customizations added during the install process, there is only one file: 01-network-manager-all.yaml.

/etc/netplan/01-network-manager-all.yaml

network:
  version: 2
  renderer: NetworkManager

No interfaces configured here, just a line indicating NetworkManager is the place to go to find all of that.

The final step, to enable these changes and bring up the NextDNS global/default routing domain, is to restart systemd-resolved and NetworkManager.
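
Those restarts are just a couple of systemctl commands (a minimal sketch, assuming the stock Ubuntu service names):

sudo systemctl restart systemd-resolved
sudo systemctl restart NetworkManager

After restarting those services, the output of resolvectl should look similar to...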

resolvectl

Global
         Protocols: -LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
Current DNS Server: 45.90.30.0#XXX-XXXXXX.dns2.nextdns.io
       DNS Servers: 45.90.28.0#XXX-XXXXXX.dns1.nextdns.io 45.90.30.0#XXX-XXXXXX.dns2.nextdns.io
        DNS Domain: ~.

Link 2 (wlp0s20f3)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 4 (virbr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 6 (docker0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 7 (mpqemubr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

You can see I've got quite a few interfaces configured on my laptop: wlp0s20f3 is my wifi, virbr0 is used by libvirt/KVM, docker0 is used by docker, and mpqemubr0 is used by multipass. All of these interfaces have their own network CIDRs configured, with routes set up to use the laptop's default gateway (which is the gateway of my wifi device). And the output of resolvectl above confirms that all of these interfaces use the default/global routing domain for NextDNS, because none of them have their own DNS servers or DNS domains configured. Also, none of these interfaces is the default route, as indicated by the -DefaultRoute value in the Protocols settings.

Note: when I installed docker, KVM, and multipass (via apt and snap), I did not change their default configurations. It also does not appear that the docker, KVM, or multipass interfaces are managed by NetworkManager. But, my testing so far has demonstrated that their default configurations just seem to work with the changes I implemented above.
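
If you want to double-check how queries are being routed at this point, resolvectl has a few handy subcommands (output omitted here, since it will vary per machine):

resolvectl query ubuntu.com
resolvectl dns
resolvectl domain

The first resolves a name through the normal path (so it should go out via NextDNS), and the other two list the DNS servers and routing domains assigned to each link.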

Adding consul to the mix

To keep things simple, I configured a new dummy interface named dummy0 on my laptop. From what I can figure out about Ubuntu these days, there are a few ways to do this (including using NetworkManager). But, it appears that only systemd-networkd supports implementing the actual routing of the consul DNS domain to the new interface. So, I opted to use systemd-networkd to create and configure this dummy interface. To do that, I added two new files to /etc/systemd/network: dummy0.netdev (to create) and dummy0.network (to configure).

/etc/systemd/network/dummy0.netdev

[NetDev]
Name=dummy0
Kind=dummy

/etc/systemd/network/dummy0.network

[Match]
Name=dummy0

[Network]
Address=169.254.32.1
DNS=169.254.32.1:8600
Domains=~meh
DNSOverTLS=no
DNSSEC=no
DNSDefaultRoute=no

For the dummy0.network file, here's a quick overview of the settings:

Address= The IP address of the new interface.

DNS= The DNS server to use when routing the request for the specified domain. This is the client_addr setting of the consul server on the user's laptop/desktop. The port, 8600, is configured here as well.

Domains= The specified consul domain. Mine is .meh which, at the time of this writing, is not a real top level domain on the internet. Prepending this value with the ~ indicates that this is a routing domain.

DNSOverTLS= I set this to “no” as I currently do not have any encryption enabled on my consul server.

DNSSEC= I set this to “no” as consul DNS doesn't support DNSSEC at the time of this writing.

DNSDefaultRoute= This should be “no” as the default route should always be the default/global configuration with NextDNS.

To activate this interface, I first enabled systemd-networkd (Xubuntu installs it but disables it by default) and then I started it.
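
That boils down to something like (a minimal sketch):

sudo systemctl enable systemd-networkd
sudo systemctl start systemd-networkd

Afterwards, networkctl status dummy0 is a quick way to confirm the new interface came up with the address configured above.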

And now I am ready to install my consul server :) I just use systemd to manage it on my laptop and, for now, my server's configuration is very very basic. My consul.hcl config file is below.

consul.hcl

bind_addr = "169.254.32.1"
bootstrap_expect = 1
client_addr = "169.254.32.1"
datacenter = "dev"
data_dir = "/opt/consul"
domain = "meh"
enable_syslog = true
#encrypt = "..."
log_level = "INFO"
ports {
  serf_wan = -1
}
server = true
ui_config {
  enabled = true
}

All of the settings above can be customized to suit the individual's needs, but the most important setting is client_addr. The client_addr IP should match the IP configured in the DNS= setting above in /etc/systemd/network/dummy0.network. And if the port of the consul DNS server is changed, then the port in /etc/systemd/network/dummy0.network also needs to be changed so they match. Neither client_addr nor bind_addr needs to be the same IP address as the dummy0 interface, but (depending on the consul server configuration) both IP addresses should be routable from each network that will register new nodes and services with consul. In my testing so far, I discovered that using the dummy0 interface IP just seems to work. Also note that the domain setting needs to match the Domains= setting in /etc/systemd/network/dummy0.network (but without the ~).
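
As an aside, since I run the server under systemd, a minimal unit file along the lines of HashiCorp's documented example might look like the sketch below. The binary path, user, and config file location here are assumptions – adjust them to match your install.

/etc/systemd/system/consul.service

[Unit]
Description=HashiCorp Consul server
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target

[Service]
Type=notify
User=consul
Group=consul
ExecStart=/usr/bin/consul agent -config-file=/etc/consul/consul.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

With a unit like that in place, sudo systemctl enable --now consul brings the server up.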

After the consul server is enabled and started (check out the consul systemd service file example here), querying the laptop's DNS for the IP of the consul service should just work...

❯ dig consul.service.meh

; <<>> DiG 9.18.1-1ubuntu1-Ubuntu <<>> consul.service.meh
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55291
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;consul.service.meh.		IN	A

;; ANSWER SECTION:
consul.service.meh.	0	IN	A	169.254.32.1

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Thu Apr 28 17:01:57 AEST 2022
;; MSG SIZE  rcvd: 63

And if the ui is enabled, the server should be accessible in a web browser at http://consul.service.meh:8500.
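
As a quick sanity check from the command line, consul's status API answers on that same port (a hedged example; in a single-server setup it should return the server's advertised address):

curl http://consul.service.meh:8500/v1/status/leader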

Finally, the output from resolvectl should now include this new routing domain...

resolvectl

Global
         Protocols: -LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
Current DNS Server: 45.90.30.0#XXX-XXXXXX.dns2.nextdns.io
       DNS Servers: 45.90.28.0#XXX-XXXXXX.dns1.nextdns.io 45.90.30.0#XXX-XXXXXX.dns2.nextdns.io
        DNS Domain: ~.

Link 2 (wlp0s20f3)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 3 (dummy0)
    Current Scopes: DNS
         Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 169.254.32.1:8600
       DNS Servers: 169.254.32.1:8600
        DNS Domain: ~meh

Link 4 (virbr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 6 (docker0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Link 7 (mpqemubr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS +DNSOverTLS DNSSEC=no/unsupported

Registering a new node or service

It's pretty quick and easy to use multipass to provision a new virtual machine. So to run a quick test, I started a new Ubuntu jammy server and installed the consul agent on it. My agent's consul config file is this...

/etc/consul/consul.hcl

bind_addr = "0.0.0.0"
client_addr = "127.0.0.1"
datacenter = "dev"
data_dir = "/opt/consul"
#encrypt = "..."
ports {
  dns = -1
}
retry_join = ["169.254.32.1"]
server = false

Again, all these settings can be customized to fit the individual's needs. The most important settings are bind_addr and retry_join. With bind_addr, the IP address must be routable to the laptop consul server's IP address, which usually means the virtual machine's address on the multipass mpqemubr0 CIDR network. And with retry_join, the IP address must be the same one configured as bind_addr on the laptop's consul server. I also chose to disable the virtual machine consul agent's DNS service because I noticed that, by default, the multipass virtual machine automagically queries the laptop's consul server DNS directly.
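
For completeness, provisioning the VM itself is just a couple of multipass commands (the VM name here is an arbitrary example):

multipass launch jammy --name consul-client
multipass shell consul-client

From inside that shell I installed the consul agent and dropped the config above into place.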

Once I start up this consul agent on the virtual machine, the node registers itself with the consul server and can immediately be queried using my laptop's DNS. And when I shut down the virtual machine's consul agent, its entry in the laptop's consul server is removed.
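
For example, if the VM's hostname (and therefore its default consul node name) were consul-client, a node lookup from the laptop would look like:

dig consul-client.node.meh +short

which should return the VM's address on the mpqemubr0 network.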

With docker, I set up a quick test by combining the sample app from docker's getting-started repo with gliderlabs' registrator container. The registrator container automatically registers any running docker service with a consul server. I took the docker-compose.yml file at the root of the getting-started repo and edited it to include registrator...

docker-compose.yml

version: "3.7"

services:
  docs:
    build:
      context: .
      dockerfile: Dockerfile
      target: dev
    ports:
      - 8000:8000
    volumes:
      - ./:/app
    depends_on:
      - registrator

  registrator:
    container_name: registrator
    image: gliderlabs/registrator:master
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
    command: consul://169.254.32.1:8500
    restart: always

In the configuration above, in the registrator stanza, the command setting contains the consul server's client_addr IP address and port 8500 (the consul server's http port), which registrator uses to update the consul server via consul's api.
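
A quick way to see what registrator has pushed is to ask that same API for the service catalog (a hedged example):

curl http://169.254.32.1:8500/v1/catalog/services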

When I run docker compose up from the root of the getting-started repo and after the getting-started app container successfully starts, it is registered to my laptop's consul server. I can then query for it using my laptop's DNS. And when I run docker compose down at the root of the getting-started repo, my containers shut down and the getting-started app service is removed from my laptop's consul server.
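
By default registrator derives the service name from the container's image name (it can also be pinned with a SERVICE_NAME environment variable on the container), so the exact name depends on how compose tags the locally built image. Hypothetically, if it came out as getting-started-docs, the lookup would be:

dig getting-started-docs.service.meh +short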

Note: there are a number of ways to configure an application service to register itself with a consul server. I'm just demonstrating one of those options here.

A note about the ufw firewall

If you are super security conscious like me, then you are probably running the UFW firewall on your laptop/desktop. But the default UFW settings on Xubuntu block access to the consul server's IP address on the dummy interface. To fix this, I added some custom rules to my laptop's UFW configuration. After I enabled the UFW firewall, I ran the commands below:

sudo ufw allow to 169.254.32.1/255.255.255.255 port 8500 proto tcp
sudo ufw allow to 169.254.32.1/255.255.255.255 port 8301 proto tcp

The first rule opens access to the consul server http api port using the IP address specified as client_addr in the consul server configuration. And the second rule opens access to the consul server LAN port using the IP address specified as bind_addr in the consul server configuration.

The output of sudo ufw status should now look similar to...

Status: active

To                         Action      From
--                         ------      ----
169.254.32.1 8500/tcp      ALLOW       Anywhere                  
169.254.32.1 8301/tcp      ALLOW       Anywhere 

Personally, I'd like to further refine these rules to drop access to consul from my internet facing interfaces. But for now, the above rules work just fine.
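
If I do get around to tightening this up, ufw can scope a rule to a specific interface, so something along these lines (an untested sketch) would limit consul to just the multipass and docker bridges:

sudo ufw allow in on mpqemubr0 to 169.254.32.1 port 8301 proto tcp
sudo ufw allow in on docker0 to 169.254.32.1 port 8500 proto tcp

with the broader rules above deleted once the scoped versions are in place.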

The end?

If you made it this far into the article, congratulations! When I started writing this, I had no idea it would be this long. And there are still a few topics I chose to leave out, such as ipv6 DNS hosts and search domains with systemd-resolved. But I hope the information provided here helps others sort out one path to integrating consul service discovery and DNS with their own Ubuntu hosts. And with consul added to an individual's development environment, a growing suite of tools becomes available to expand consul's features and enhance the localdev experience.

#ubuntu #consul #dns #systemd-resolved #routingDomain #localDev #development #test #serviceDiscovery #ephemeral #virtualMachine #docker #container #orchestrator #multipass #kvm #libvirt #nextDNS #xubuntu

