Automatic TLS certificate rotation with lego on FreeBSD

I’ve been using Let’s Encrypt to manage certificates on my systems for some time now. I started off using the excellent acme-client, which has since been integrated into OpenBSD. Previously, there was a portable version which had been ported to FreeBSD, but it is no longer maintained. I continued running it for some time without realizing this. Fortunately, the FreeBSD port has since been removed.

One of the difficulties I faced with acme-client was generating the required challenge for a system with no HTTP daemon listening on port 80, which the HTTP-01 challenge requires. acme-client had stubs in place to generate the bits required for the DNS-01 challenge, a good fit for this use case, but getting the DNS records in place was a manual and error-prone process, which meant that my certificates were always expiring.

When I finally got around to figuring out how to get everything automated, I discovered that acme-client-portable had been abandoned, so I began looking for an alternative. My requirements were that it have minimal dependencies (my systems are lightweight VMs at ARP Networks, with minimal resources) and be able to automate the DNS-01 challenge. I also needed something that could handle the fact that my authoritative DNS servers do not support any sort of dynamic DNS — more on that later.

I came across the excellent lego project. lego is a lightweight Let’s Encrypt client written in — you guessed it — Go. While lego didn’t have a FreeBSD port at the time, it did meet all the requirements above. It is lightweight with minimal dependencies, and handles the DNS-01 challenge. It was the last requirement — working with my authoritative DNS servers — that presented a challenge.

Getting lego up and running with Google’s Cloud DNS, Amazon Route 53, and other providers, is quite easy. However, the domains I would be securing don’t use Cloud DNS or Route 53, so I needed to find another solution. Fortunately, some other Let’s Encrypt clients had pioneered a solution using CNAMEs. I found that someone had started to bring support for this to lego and submitted a pull request to finish the effort.

The pull request introduced a new environment variable, LEGO_EXPERIMENTAL_CNAME_SUPPORT, which would force lego to resolve CNAMEs when looking for the authoritative server when creating DNS records for the associated challenge. This meant that I could use my existing authoritative DNS servers but put a CNAME in place to Cloud DNS (or Route 53) and lego would traverse the CNAME to deploy the required TXT record for the DNS-01 challenge.

Setting this all up was pretty simple. The DNS-01 challenge looks for a TXT record corresponding to _acme-challenge.$DOMAIN. When using LEGO_EXPERIMENTAL_CNAME_SUPPORT, one would configure a CNAME as follows (placeholder names, substituted for my own domains):

_acme-challenge.example.org. 3600 IN CNAME _acme-challenge.example.net.

This presumes that example.net is hosted on a DNS provider supported by lego. The CNAME points to _acme-challenge.example.net, and lego will traverse that CNAME in order to set the corresponding TXT record for example.org.

With all this in place, I needed a script to automate running lego using the DNS-01 challenge.

#!/bin/sh -e

BASEDIR="/usr/local/etc/lego"
SSLDIR="/usr/local/etc/ssl/lego"
DOMAINSFILE="${BASEDIR}/domains.txt"

# Email used for registration and recovery contact.
EMAIL=""

if [ -z "${EMAIL}" ]; then
	echo "Please set EMAIL to a valid address in ${BASEDIR}/"
	exit 1
fi

if [ ! -e "${DOMAINSFILE}" ]; then
	echo "Please create ${DOMAINSFILE} as specified in ${BASEDIR}/"
	exit 1
fi

# /usr/local/etc/lego/domains.txt:
#
# example.com www.example.com
#
# Generates a certificate with example.com and www.example.com.
# Each line will generate a separate certificate.

while read line ; do
	output=$(/usr/local/bin/lego --path "${SSLDIR}" \
		--email="${EMAIL}" \
		$(printf -- "--domains=%s " $line) \
		--dns="gcloud" \
		renew --days 30) || (echo "$output" && exit 1)
done < "${DOMAINSFILE}"

The above script passes an email address to the lego client to ensure that expiry notices can be sent from Let’s Encrypt if the script fails for any reason. It then reads a list of domains from /usr/local/etc/lego/domains.txt and passes each line to lego’s --domains flag. I’ve set the LEGO_EXPERIMENTAL_CNAME_SUPPORT environment variable and the corresponding Cloud DNS environment variables so that lego can create the corresponding TXT records.
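One subtlety worth calling out is the unquoted printf expansion, which turns a whitespace-separated line from domains.txt into one --domains flag per name. A quick sketch with placeholder domains:

```shell
# A hypothetical line read from domains.txt (placeholder names):
line="example.com www.example.com"
# printf repeats its format string for each remaining argument, so the
# unquoted $line fans out into one --domains flag per name:
args=$(printf -- "--domains=%s " $line)
echo "$args"
```

This is also why the script reads domains.txt line by line: each line becomes a single lego invocation, and therefore a single certificate covering all the names on that line.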

With all this in place, I can run lego on a weekly basis via periodic and it will automatically renew my certificates via the DNS-01 challenge when they are within 30 days of expiry.

Since I needed to deploy lego across multiple FreeBSD systems, I also created and maintain a FreeBSD port. The port supports using the HTTP-01 challenge out of the box, and will attempt to restart nginx via a deploy script if the corresponding periodic variables are set.

The following is all that needs to be added to /etc/periodic.conf to get up and running via the port on FreeBSD with nginx:


You’ll need to edit /usr/local/etc/lego/ to set EMAIL as appropriate, but otherwise the domains will be read from /usr/local/etc/lego/domains.txt. Each line of domains.txt will be passed to lego, so this can be used to generate certificates for related domains. The packaged deploy script, /usr/local/etc/lego/, will copy the generated certificates from /usr/local/etc/ssl/lego/certificates into /usr/local/etc/ssl/certs. It will also place the corresponding private keys in /usr/local/etc/ssl/lego/private. It will then attempt to reload nginx.

It was great getting lego ported to FreeBSD and automating my use of the DNS-01 challenge. I no longer have expiring certificates on my systems. To ensure that certificates are being renewed as expected, I set the EMAIL environment variable as discussed above, and also set up Prometheus to monitor my hosts’ TLS certificates. I was able to put alerting in place so that if certificates are about to expire, I’ll receive a notification.

Budget GKE deployment

In an effort to better understand Kubernetes, the need to stand up monitoring infrastructure, and the desire to reduce the burden of maintaining a MySQL instance, I decided to check out Google’s GKE offering. As I’d be using this for hosting personal projects, I wanted to keep the cost as low as possible. This, plus latency to my ARP Networks VPSes, is why I chose GKE over other cloud providers.

Note: when I embarked on this journey, Google provided the Kubernetes control plane free of charge to all GKE users. Unfortunately this policy has since been rolled back, though for hobbyist users like myself, one zonal cluster per billing account still qualifies for a free Kubernetes control plane.

My first attempt to deploy GKE involved using f1-micro instances. Unfortunately the micro instances are too memory constrained and once Kubernetes components like kubelet, kube-proxy, and kube-dns are up and running, there was very little RAM left over for my workloads. At the time, e2 instances had not yet been rolled out, so I went with g1-small. Since then, the e2 instances have been released, and I’d recommend a minimum instance size of e2-small. You can get away with running g1-small, but you might need to disable some default services.

A GKE cluster can be created as follows:

gcloud container clusters create cluster-1 \
  --release-channel regular \
  --workload-pool=example.svc.id.goog
This will create a new GKE cluster named cluster-1 running the “regular” release channel. The last bit, workload-pool, is something I discovered after deploying my first GKE cluster. This enables GKE workload identity, which makes it possible for GKE workloads to authenticate transparently to Google services. This will be important later when deploying external-dns, or if using other Google Cloud services from GKE.

The cluster just gives you a Kubernetes control plane with no nodes. This means that no workloads can be deployed. A node pool can be deployed as follows:

gcloud container node-pools create --cluster=cluster-1 \
  --machine-type=e2-small --workload-metadata=GKE_METADATA \
  --num-nodes=1 --zone us-west2-b pool-1 --disk-size=10GB

This will stand up a node pool named pool-1 inside the GKE cluster cluster-1 with one instance of machine type e2-small and a 10GB persistent disk in the us-west2-b zone. The small disk size was chosen to save money, as the Kubernetes workloads that require persistent disk will have their own persistent volume claims.

At this point the cluster is up and running and workloads can be deployed. However, generally one would want to expose those services to the outside world. That can easily be done by using a load balancer, but for hobbyist projects, that can be cost prohibitive. At the time of this writing, a load balancer on Google Cloud would cost $20/month at a minimum — excluding data costs. This is because Google’s load balancer is globally distributed — a great feature — but at quite a hefty charge. My entire deployment as documented here averages $30/month.

In order to avoid using Google’s load balancer, I deployed the nginx ingress controller. Kubernetes ingresses allow “services” deployed in the cluster to be “exposed” to the outside world. I say “exposed” because there’s a bit more magic required to make the ingress itself publicly accessible — most commonly by using a load balancer, which I was trying to avoid — but I’ll cover an alternative approach.

While Helm can be used to deploy the nginx ingress controller, I decided to stand it up manually to get a better understanding of how it works. Deploying the nginx ingress controller manually is covered in the nginx ingress controller installation guide. The controller can be deployed on GKE with one simple command:

kubectl apply -f

By default, the nginx ingress controller is set up so that it can be used with Google’s load balancer. Since I wanted to save money and not use the load balancer, I had to change a few things. To edit the nginx ingress controller deployment, use the following command:

kubectl edit deployment -n ingress-nginx ingress-nginx-controller

I had to make the following changes to the deployment for the budget setup. First, the deployment strategy should be set to Recreate. This is because we’ll be using hostPort to expose the nginx directly on the Kubernetes node, which will cause issues with RollingUpdate. Second, the following arg needs to be removed from the container spec:

- --publish-service=ingress-nginx/ingress-nginx-controller

This is because we’ll not be using the service to expose the nginx ingress controller via a load balancer, but instead using a hostPort and DNS to reach nginx from outside the cluster. Finally, the container spec’s ports need to be modified to allow use of hostPort. This should be done for the http and https ports. There’s no need to change the webhook port. The container spec’s ports property should look as follows:

- containerPort: 80
  hostPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  hostPort: 443
  name: https
  protocol: TCP
- containerPort: 8443
  name: webhook
  protocol: TCP

This will cause the container ports 80 and 443 to map directly to the node’s ports 80 and 443, making the nginx available on the node’s external IP.

With all this in place, the nginx should be reachable from the outside world on the node’s external IP, which can be found via the following command:

gcloud compute instances list

Kubernetes ingresses deployed to the cluster will now be exposed by the nginx controller. For example, to expose Prometheus running on the cluster, deploy the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/ttl: "60"
    kubernetes.io/ingress.class: nginx
  name: prometheus-ingress
  namespace: default
spec:
  rules:
  - host: prometheus.example.org
    http:
      paths:
      - backend:
          serviceName: prometheus-service
          servicePort: 9090

Tagging this ingress with the kubernetes.io/ingress.class: nginx annotation will cause it to be exposed by our nginx ingress controller. The external-dns.alpha.kubernetes.io/ttl annotation configures the TTL for the DNS record of our ingress (more on that later). The nginx.ingress.kubernetes.io/server-alias annotation configures an nginx server alias, if desired.

At this point, presuming workloads are up and running and the corresponding services and ingresses have been deployed, everything should be reachable through the node’s external IP and nginx as previously discussed. However, manual DNS configuration isn’t a great solution as the node’s IP will change whenever it is upgraded or taken down for maintenance. To solve this problem, DNS can be automated by using external-dns.

The external-dns repository includes a GKE deployment tutorial. However, this tutorial requires that the node pools be granted read/write access to Cloud DNS. While this may work fine for testing, the impact is that all workloads on these Kubernetes nodes will have read/write access to Cloud DNS. This may not be desired. I first deployed my node pool using the Cloud DNS read/write scope in order to get external-dns up and running, but later discovered workload identity, which is much better from a security standpoint. I’ll cover deploying external-dns with workload identity here.

external-dns will manage a zone in Cloud DNS. Above I’ve referenced a dedicated zone, but it can also manage a top-level zone so long as the names do not collide. Extra metadata is stored in Cloud DNS to determine whether external-dns is the owner of the associated records. When used with the nginx ingress controller server-alias annotation shown above, a top-level CNAME can be set up to point to records in the external-dns managed subdomain, and the nginx ingress controller will route traffic to the correct workload.
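Concretely, the delegation described above can be sketched in a zone file. These are placeholder names, assuming gke.example.org is the Cloud DNS zone managed by external-dns:

```
; top-level record in example.org, served by the existing authoritative DNS
www.example.org.    300  IN  CNAME  www.gke.example.org.
; www.gke.example.org lives in the Cloud DNS zone gke.example.org,
; where external-dns keeps it pointed at the current node IP
```

The server-alias annotation is what lets nginx answer for www.example.org even though the ingress host is in the gke subdomain.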

To set up the credentials as needed for workload identity, a service account must be created:

gcloud iam service-accounts create external-dns

This service account needs to be granted the DNS administrator role:

gcloud projects add-iam-policy-binding example \
  --role roles/dns.admin \
  --member "serviceAccount:external-dns@example.iam.gserviceaccount.com"

Change both references to example to your project ID.

Finally, a mapping needs to be made between the Kubernetes Service Account (which will be created below) and the Google Service Account created above:

gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:example.svc.id.goog[default/external-dns]" \
  external-dns@example.iam.gserviceaccount.com

Again, change both references to example to your project ID.

Now that the service accounts have been set up, the following manifest can be used to deploy external-dns:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  annotations:
    iam.gke.io/gcp-service-account: external-dns@example.iam.gserviceaccount.com
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.14.0
        args:
        - --source=ingress
        - --domain-filter=example.org
        - --provider=google
        - --google-project=example
        - --registry=txt
        - --txt-owner-id=my-identifier

Change the iam.gke.io/gcp-service-account annotation and the --domain-filter and --google-project arguments as required.

Now that external-dns has been deployed, any ingresses will automatically have corresponding DNS entries published to Cloud DNS. The TTL can be configured as shown above, and a low TTL will result in minimal downtime should the node IP change. For production workloads, one would want to have multiple nodes and ensure that workloads are always distributed across those nodes so there is zero downtime, and would likely want to use a load balancer anyway — but for low cost setups that can incur downtime, this should be fine.

I’ve been running this setup for well over a year now and it’s been working great. I first set this up just to run Prometheus in order to monitor my systems, but eventually deployed a stateful MySQL workload on the cluster as well. The MySQL service requires a patch to external-dns as it’s running as a “headless” service. There is an upstream issue and corresponding fix, which I’m hoping will be merged soon.

Encrypted root disk migration for FreeBSD

I’ve had a VPS with ARP Networks for a long time now. Things were a bit different back then. The default FreeBSD installer suggested setting up multiple partitions (slices) on a disk by default. This is no longer the case. Encryption wasn’t a thing that people generally worried about. I’ve had it on my todo list for a while to figure out how to converge to a single encrypted partition on my VPS, both to save me from running out of space in /var and to protect the data on the underlying “disk”. I finally worked it out a few weeks ago.

The helpful folks in the #arpnetworks IRC channel on Freenode pointed me in the right direction. A tutorial on full disk encryption got me started. However, this was only helpful on a new install. My VPS is over 10 years old, and I didn’t really want to set everything up from scratch. Also helpful was FreeBSD’s dump/restore combined with an additional disk attached to my VPS.

To start, I backed up all my existing partitions to an additional disk:

# dump -C16 -b64 -0aL -h0 -f /mnt/root.dump /
# dump -C16 -b64 -0aL -h0 -f /mnt/tmp.dump /tmp
# dump -C16 -b64 -0aL -h0 -f /mnt/var.dump /var
# dump -C16 -b64 -0aL -h0 -f /mnt/usr.dump /usr

Note that I backed up the partitions in order of volatility, figuring /usr would have the most changes by the time I finished the dump process. The -L option made it possible to dump a mounted filesystem.

Once the dump was complete, I booted from a FreeBSD install CD. When the installer popped up, I chose to start a shell.

Because I already had an existing partition table, I first had to destroy it before creating a new one:

# gpart destroy -F da0
# gpart create -s gpt da0

I laid out my partition table with the unencrypted boot partition first, then a swap partition, and then my root partition. To do that, I used the following commands:

# gpart add -t freebsd-boot -s 512k -a 4k da0
# gpart add -t freebsd-ufs -l bootfs -s 1g -a 1m da0
# gpart add -t freebsd-swap -l swap -s 2g -a 1m da0
# gpart add -t freebsd-ufs -l encrypted -a 1m da0

I then installed the bootcode:

# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

Initializing geli was easy:

# geli init -b -s 4096 da0p4

Once initialized, the device must be attached, which creates /dev/da0p4.eli:

# geli attach da0p4

Now the partitions can be formatted:

# newfs -U /dev/da0p2
# newfs -U /dev/da0p4.eli

With everything formatted, I then mounted /dev/da0p4.eli to /mnt as this is where all subsequent work would be performed:

# mount /dev/da0p4.eli /mnt

Because my backups were on /dev/da1, I had to mount them somewhere to perform the restore. I created a new directory in /mnt/ (/mnt/mnt) and mounted /dev/da1s1 there:

# mkdir /mnt/mnt
# mount /dev/da1s1 /mnt/mnt

Now I was ready to restore the filesystems atop each other. I wasn’t sure if this would work, but it did!

# cd /mnt
# restore -ruf /mnt/root.dump
# rm restoresymtable
# cd /mnt/tmp
# restore -ruf /mnt/tmp.dump
# rm restoresymtable
# cd /mnt/var
# restore -ruf /mnt/var.dump
# rm restoresymtable
# cd /mnt/usr
# restore -ruf /mnt/usr.dump
# rm restoresymtable

Note that each dump will be restored to the current directory, hence the cd before each restore.

This restored /boot to my encrypted partition, which isn’t what I wanted. I had to remove the restored /boot and then set up the unencrypted partition and restore again. I discovered the -x option for restore which let me restore only the /boot directory to the unencrypted partition.

# rm -rf /mnt/boot
# mkdir /mnt/unenc
# mount /dev/da0p2 /mnt/unenc
# cd /mnt/unenc
# restore -xuf /mnt/root.dump boot
You have not read any tapes yet.
If you are extracting just a few files, start with the last volume
and work towards the first; restore can quickly skip tapes that
have no further files to extract. Otherwise, begin with volume 1.
Specify next volume #: 1
set owner/mode for '.'? [yn] y

With everything in place, I could then add the symlink for /boot to the unencrypted partition:

# cd /mnt
# ln -s unenc/boot /mnt/boot

Having restored the backups, I then had to set up my /etc/fstab and /boot/loader.conf. I replaced /mnt/etc/fstab with the following:

# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/da0p3.eli          none            swap    sw              0       0
/dev/da0p2              /unenc          ufs     rw              1       1
/dev/da0p4.eli          /               ufs     rw              2       2

Note that while I did not geli init or geli attach /dev/da0p3, /sbin/swapon will detect the .eli suffix and encrypt the partition with a one-time key.

The FreeBSD bootloader must be configured to load geli and told where the encrypted root partition lives. I added the following to /mnt/unenc/boot/loader.conf:

geom_eli_load="YES"
vfs.root.mountfrom="ufs:/dev/da0p4.eli"
Note that if you have a CPU with AESNI (ARP Networks VPSes do) then you should also add aesni_load="YES" to loader.conf for increased performance.

With everything in place I rebooted and was prompted to enter my password:

Enter passphrase for da0p4:

I was truly impressed at how easy this all was, especially with dump and restore, which I’d never used before! After finishing the migration, I discovered the geli init option -g, which supposedly obviates the need for the unencrypted /boot partition. I’ve not tried this out myself, but perhaps that’s something to explore on a rainy day.

Setting up two-factor authentication on FreeBSD

I typically utilize public key authentication when connecting via SSH to my server. However, there are times when I’m away from a device which has my private key and need access to my server. I reluctantly enabled password authentication for those occasions, but after enabling two-factor authentication for most of the services that I use regularly, I wanted to do the same for my own server.

Setting it up turned out to be more difficult than expected. FreeBSD includes an OPIE module, but it doesn’t integrate with the existing two-factor app I use to generate codes for GitHub, Google, etc. I wanted something that supported the HOTP/TOTP algorithm so that I could use it with Google Authenticator. I then discovered OATH Toolkit.

The documentation for pam_oath provides some cursory details on how to set it up. Unfortunately, the instructions will result in an insecure configuration which produces predictable authentication codes. Here I’ll outline how to configure pam_oath on FreeBSD for a single user in the wheel group, while allowing all other users to log in without OATH.


SSH on FreeBSD utilizes PAM to authenticate users. The default SSH configuration enables ChallengeResponseAuthentication and disables PasswordAuthentication, which is what pam_oath requires to function properly. If ChallengeResponseAuthentication is disabled and PasswordAuthentication enabled while UsePAM is on, two-factor authentication will not function.
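Spelled out explicitly, the relevant settings in /etc/ssh/sshd_config (these match the FreeBSD defaults, so no change should be needed) are:

```
# /etc/ssh/sshd_config -- matches the FreeBSD defaults
ChallengeResponseAuthentication yes
PasswordAuthentication no
UsePAM yes
```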


PAM is configured per service. The PAM configuration for ssh is in /etc/pam.d/sshd.

We are concerned with the entries pertaining to the auth facility, located at the top of the file. Mine looked like:

auth		sufficient	pam_opie.so		no_warn no_fake_prompts
auth		requisite	pam_opieaccess.so	no_warn allow_local
#auth		sufficient	pam_krb5.so		no_warn try_first_pass
#auth		sufficient	pam_ssh.so		no_warn try_first_pass
auth		required	pam_unix.so		no_warn try_first_pass

I removed the pam_opie modules and added the OATH module:

auth		requisite	pam_unix.so		no_warn try_first_pass
auth		sufficient	pam_group.so		deny luser
auth		required	/usr/local/lib/security/pam_oath.so	usersfile=/usr/local/etc/users.oath

I also added the pam_group module, which short-circuits OATH for normal users (those not in the wheel group). With the deny option, pam_group rejects any user in the wheel group, forcing PAM to evaluate the next module. If a user is not in the wheel group, pam_group succeeds and, being sufficient, breaks out of the auth chain, skipping the pam_oath module.

Changing pam_unix to requisite ensures that a mistyped password won’t prompt for an authentication code until the correct UNIX password is entered.


Once PAM has been configured, it is necessary to configure the pam_oath module itself. The usersfile argument to pam_oath must point to a file that contains an entry for each OATH user. The entries are simple: the token type, username, optional password, and secret.

I generated the secret with SECRET=$(head -c 1024 /dev/urandom | openssl sha1).
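One caveat with this command: depending on the OpenSSL version, `openssl sha1` prefixes its output with "(stdin)= ", which would end up inside users.oath. A sketch that keeps only the hex digest either way (taking the last whitespace-separated field):

```shell
# Generate 1024 bytes of randomness and hash it; awk keeps just the
# 40-character hex digest regardless of the "(stdin)= " prefix some
# OpenSSL versions print:
SECRET=$(head -c 1024 /dev/urandom | openssl sha1 | awk '{ print $NF }')
echo "$SECRET"
```

Check that the resulting value is 40 hex characters before putting it in users.oath.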

I then added an entry for myself to /usr/local/etc/users.oath, specifying:

HOTP/T30/6 mhoran - $SECRET

This file must be owned by root and mode 600:

chmod 600 /usr/local/etc/users.oath
chown root /usr/local/etc/users.oath

While the OATH configuration file expects a hexadecimal secret, the Google Authenticator app expects a Base32 encoded secret. The OATH Toolkit may be used to transform the hexadecimal secret to Base32: SECRET32=$(oathtool -v --totp $SECRET | grep Base32 | awk '{ print $3; }').
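oathtool handles the conversion, but as a sanity check the same transformation can be reproduced with common tools. This sketch assumes xxd and a base32 utility are installed, and uses the well-known RFC 6238 test secret rather than a real one:

```shell
# RFC 6238 test secret: hex encoding of the ASCII string "12345678901234567890"
SECRET=3132333435363738393031323334353637383930
# Decode the hex to raw bytes, then Base32-encode them (assumes xxd and
# base32 are available); padding is stripped for authenticator apps:
SECRET32=$(printf %s "$SECRET" | xxd -r -p | base32 | tr -d '=')
echo "$SECRET32"   # GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ
```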

Once the Base32 secret has been procured, the last step is to set up a token generator. I used the Google Charts API to generate a QR code, which can be scanned into the Google Authenticator app. While insecure, the following command will return an HTTPS URI of a QR code from the Google Charts API:

echo "https://chart.googleapis.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/$USER@$HOST%3Fsecret%3D$SECRET32"

Alternatively the Base32 encoded secret can be manually entered into any authenticator app which supports HOTP/TOTP with the token type set to time based, or an otpauth URI provided to a trusted QR code generator.
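For manual entry, the otpauth URI itself is simple to construct. These are placeholder values (my username and an example host, with the RFC test secret from above); substitute your own:

```shell
USER=mhoran
HOST=example.org
SECRET32=GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ
# Build the otpauth URI understood by most TOTP authenticator apps:
echo "otpauth://totp/${USER}@${HOST}?secret=${SECRET32}"
# prints otpauth://totp/mhoran@example.org?secret=GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ
```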


Once it’s all set up, SSH in and you should be prompted for a “One-time password” when logging in as a user in the wheel group:


If the module has been properly configured and the token generator is working, login should succeed.

Setting up Postfix and Dovecot to play nicely with mutt, mbox, Maildir and FreeBSD

I’m one of two people I’m aware of who still runs their own e-mail servers. I do this for a multitude of reasons, mostly because I love mutt and nothing else quite stacks up to it. Running a local mail server with mutt reading a local inbox is relatively simple. I run Postfix and it gets the job done with relatively little configuration. However, if you’d like to check mail remotely, and not rely on ConnectBot to SSH into your ARP Networks VPS, you’ll have to venture into the world of IMAP servers.

To check mail on the go, I use K-9 Mail. As an Android user, the stock e-mail app leaves much to be desired, and K-9 brings handy features like IMAP IDLE support, which provides push e-mail support.

On the server side, I decided to go with Dovecot. Dovecot is a relatively simple IMAP server which has great Postfix support. I originally went with Dovecot because it integrates with Postfix SASL, and is a great alternative to Cyrus SASL. SASL is important in the context of SMTP servers because it allows remote clients to send e-mail authenticated through your server, therefore not having to be an open relay in order to send e-mail on the go.

As a FreeBSD user, I’ll be providing FreeBSD specific steps for getting Postfix, Dovecot, mutt and K-9 mail playing nicely together. The process isn’t too involved, but I’ll explain the steps and what I know about them along the way.

Getting everything installed is relatively simple:

portsnap fetch update
portmaster mail/postfix mail/dovecot2 mail/mutt

This will install the latest version of Postfix, Dovecot 2 and mutt (non-devel). It’s important to note the dovecot2, as the mail/dovecot port is for Dovecot 1.

I also recommend installing mail/dovecot-sieve as an alternative to procmail. I found procmail locking support to be lacking on FreeBSD, which matters when multiple processes access your inbox, as is the case with mbox. This is of course less of a concern with Maildir. However, Sieve is a great alternative to procmail and can be configured via third-party plugins available for various mail clients.

make config will prompt with a few questions for the mail/postfix and mail/dovecot2 packages. There are a couple of options for each which are important. For mail/dovecot2, I recommend enabling SSL, otherwise your credentials will be sent in the clear across the Internet. If you’re using a non-PAM based authentication mechanism, other authentication schemes such as CRAM-MD5 can be used, however this will only protect your password. All of your mail will still be sent unencrypted. For mail/postfix, you’ll want to be sure to enable the Dovecot 2.x SASL authentication method and SSL/TLS support.

There’re a few configuration changes which need to be made to get everything playing together nicely. I tinkered with this configuration for about a year or so before finally getting it right. For a long while, I noticed that I would sometimes delete a message from my inbox via mutt, but then it would reappear in K-9 mail, and vice versa. I didn’t really mind this at first, but I was afraid that this would ultimately lead to a corrupt mailbox, which it did. I eventually figured out the right combination of settings which got the combination of local mail and IMAP playing nicely, but switching to Maildir instead of mbox will also do the trick.

There are two options for local mail delivery with Dovecot and Postfix. One is dovecot-lda, a simple solution for a small installation. The other is Dovecot’s built-in LMTP server. Each has advantages and disadvantages. dovecot-lda is relatively simple, and similar in concept to procmail. It’s a process that runs, via Postfix mailbox_command, for local mail delivery. LMTP definitely makes more sense on larger installations, as it runs as a daemon and hooks in with Postfix via mailbox_transport. However, configuration is non-trivial, and on my installation it wasn’t worth the effort.

There are a few caveats with dovecot-lda. If using mbox, dotlocking for /var/mail inbox spools won’t work. This is because dovecot-lda runs as the user being delivered to, and this user typically isn’t in the mail group, and can’t create a dotlock file in /var/mail. One option is to enable the sticky bit on /var/mail, but there are various security implications with this approach. I’ve switched to Maildir, which is better for a variety of reasons. However, if you are using mail spools in shared directories which can’t be written by the user (read: you’re not using Maildir), you’re going to have to set mbox_read_locks = flock and mbox_write_locks = flock in /usr/local/etc/dovecot/conf.d/10-mail.conf. The issues with message deletion were solved by setting mbox_dirty_syncs = no and mbox_lazy_writes = no, which were both yes by default in my installation. Again, these changes are not necessary if you’re running Maildir.
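Collected together, the mbox-specific settings described above would look like this in /usr/local/etc/dovecot/conf.d/10-mail.conf (none of this is needed with Maildir):

```
# Locking for /var/mail spools not writable by the user:
mbox_read_locks = flock
mbox_write_locks = flock
# Avoid local client and IMAP disagreement over deleted messages:
mbox_dirty_syncs = no
mbox_lazy_writes = no
```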

The Dovecot LMTP server is a great alternative to dovecot-lda for large installations. It also plays nicely with /var/mail inbox spools, as the mail_privileged_group setting gives the LMTP process escalated permissions when necessary. However, there is a caveat with the LDA and Postfix virtual aliases, which is that the LMTP daemon will look for the fully qualified username, e.g. root@hostname instead of just root. If you’re using the default PAM database, this will of course fail. If you’re using the LMTP transport with mbox, you’ll also need to set the client_limit = 1 option inside the service lmtp block in /usr/local/etc/dovecot/conf.d/10-master.conf. As I’m not running LMTP, setup is outside the scope of this post, however all the necessary documentation can be found on the Dovecot wiki.

The key for setting up Maildir, or mbox delivery should you prefer, is the mail_location setting in /usr/local/etc/dovecot/conf.d/10-mail.conf. I have this option set to mail_location = maildir:~/Maildir. As the Maildir is stored in the home directory, there are no shared locking issues to worry about, as the user has control over all the files that need to be created. As stated earlier, if you use an alternative configuration with mbox, mail_location = mbox:~/mail/:INBOX=/var/mail/%u, you’re going to have to be concerned with locking, dirty and lazy writes.

Once locking and mailbox delivery are all set up, you’ll have to tell Dovecot to do its business. You must enable the IMAP protocol by setting protocols = imap in /usr/local/etc/dovecot/dovecot.conf. The default configuration will use PAM authentication by default, which requires setup of the /etc/pam.d/dovecot service. My file looks like this:

auth    required
account required

If you’ve decided to go down the route of CRAM-MD5 authentication, you’re not going to be able to use PAM authentication, as the password database has to be stored in the clear.

To setup Postfix SASL authentication, you must set up a Dovecot auth listener. This can be done by adding the following to /usr/local/etc/dovecot/conf.d/10-master.conf, inside the service auth block:

  # Postfix smtp-auth
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }

SSL configuration is outside the scope of this post; however, an ssl_cert and ssl_key must be set in /usr/local/etc/dovecot/conf.d/10-ssl.conf. smtpd_tls_cert_file must be set in /usr/local/etc/postfix/ as well.
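As a sketch, the Dovecot side of that SSL configuration looks like the following (the certificate paths here are placeholders, not my actual paths):

```
# /usr/local/etc/dovecot/conf.d/10-ssl.conf
# The leading < tells Dovecot to read the value from the named file
ssl_cert = </usr/local/etc/ssl/certs/mail.pem
ssl_key = </usr/local/etc/ssl/private/mail.key
```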

To get the Dovecot LDA up and running, you must set postmaster_address in /usr/local/etc/dovecot/conf.d/15-lda.conf. I’ve also set lda_mailbox_autocreate = yes so that saving mail to a new mailbox creates the mailbox automatically. To enable the sieve plugin for the Dovecot LDA, don’t forget to add sieve to the mail_plugins under protocol lda. This is the only thing that needs to be set to enable sieve, with the magic coming (by default) from ~/.dovecot.sieve.
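Taken together, my 15-lda.conf changes amount to something like the following (the postmaster address is a placeholder; the protocol lda block follows the stock file's layout):

```
# /usr/local/etc/dovecot/conf.d/15-lda.conf
postmaster_address = postmaster@example.com  # placeholder address

# Create mailboxes on demand when sieve files into them
lda_mailbox_autocreate = yes

protocol lda {
  # Enable the sieve plugin for delivery-time filtering
  mail_plugins = $mail_plugins sieve
}
```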

My ~/.dovecot.sieve is relatively simple. Here’s an excerpt:

require ["fileinto", "envelope"];

if header :contains "List-Id" "" {
  fileinto "freebsd";
} elsif header :contains "List-Id" "" {
  fileinto "void";
}
The path to the user’s dovecot.sieve can be changed in /usr/local/etc/dovecot/conf.d/90-sieve.conf.
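For reference, that knob sits inside the plugin block of 90-sieve.conf; pointing it at the default location described above would look something like this:

```
plugin {
  sieve = ~/.dovecot.sieve
}
```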

Once these changes have been made, Dovecot is ready to go. Just set dovecot_enable="YES" in /etc/rc.conf and start it up via /usr/local/etc/rc.d/dovecot start. Tail /var/log/maillog to check for any configuration errors.

I’m using Postfix virtual aliases for local mail delivery. This allows me to have multiple email accounts at domains I host, all delivered to various local system accounts. It’s a relatively simple solution for a relatively simple setup. Virtual aliases are out of the scope of this post, but I’ll assume you’ve got some sort of local delivery happening with Postfix.

To hook up Postfix delivery to Dovecot, if going down the route of dovecot-lda, simply add mailbox_command = /usr/local/libexec/dovecot/dovecot-lda to /usr/local/etc/postfix/. This will have a similar effect to the inline example of using procmail for local delivery.

If going down the route of LMTP, you’ll not only need to get a working userdb mapping for incoming email addresses (virtual aliases are delivered via LMTP at their fully qualified name, e.g. root@hostname), but also set mailbox_transport = lmtp:unix:private/dovecot-lmtp in /usr/local/etc/postfix/

I also recommend setting mailbox_size_limit = 0 and message_size_limit = 0 in /usr/local/etc/postfix/, else you may experience issues with local delivery. Postfix applies limits to the size of a mailbox even when delivered via dovecot-lda. I ran into this issue with mbox delivery, though I’m not sure if it’s also an issue with maildir.

To get Postfix playing nicely with Dovecot SASL, add the following to /usr/local/etc/postfix/

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes

smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

You’ll want to merge smtpd_recipient_restrictions with any other custom restrictions you may already have in place. The key is permit_sasl_authenticated, which bypasses reject_unauth_destination, preventing your mail server from becoming an open relay.

By default, SASL will be enabled on port 25, accepting plaintext auth (your username and password) across the Internet. I recommend setting up SSL, as with Dovecot. Once you’ve got this set up, you should disable plaintext authentication. This is done by default in Dovecot, except for localhost. To do this in Postfix, you must set smtpd_tls_auth_only = yes. Then you’ll need to enable an alternative, secure port for accepting SASL authenticated mail. I’ve set up the submission port (587) to do this in /usr/local/etc/postfix/

submission inet n       -       n       -       -       smtpd
  -o smtpd_tls_security_level=may
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

Once this is all set, you should be ready to disable FreeBSD sendmail in favor of Postfix. Add the following to /etc/rc.conf:

sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
postfix_enable="YES"
At this point, you should be ready to start up Postfix via /usr/local/etc/rc.d/postfix start. Again, tail /var/log/maillog for any error messages.

Once both Dovecot and Postfix are successfully up and running, you should be able to get your favorite IMAP client connected. You’ll be unable to check mail via IMAP by default without setting up SSL. However, if you don’t care about security, you can set disable_plaintext_auth = no in /usr/local/etc/dovecot/conf.d/10-auth.conf. Setting up SSL is really simple and totally worth it. I use StartCom Free SSL certificates, which work on every device I own.

K-9 mail plays nicely with this configuration, and Dovecot supports IMAP IDLE out of the box. I’m loving push IMAP to my Galaxy Nexus, and I get all the benefits of Gmail without the pain of setting up OfflineIMAP or something similar.

FreeRADIUS on FreeBSD and OpenLDAP

Instead of relying on PAM or /etc/passwd for authentication and authorization, I decided to store account information in an OpenLDAP database. Of course I could have used NIS or flat file databases, but OpenLDAP proved to be the best solution for my situation.

sudo portinstall net/openldap23-server

This will install the OpenLDAP 2.3 client and server.

sudo portinstall net/freeradius

This will install the FreeRADIUS server. To enable the LDAP backend, check the LDAP option.

Once FreeRADIUS has finished compiling, OpenLDAP can be configured.

First, the RADIUS LDAP schema must be copied to the OpenLDAP schema directory.

sudo cp /usr/local/share/freeradius/openldap.schema /usr/local/etc/openldap/radius.schema

Next, slapd must be configured.

sudoedit /usr/local/etc/openldap/slapd.conf

The RADIUS schema must be added to the include section. The Cosine schema provides the account objectclass.

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/radius.schema

You must also configure at least one database backend.

database bdb
suffix "dc=matthoran,dc=com"
rootdn "cn=Manager,dc=matthoran,dc=com"
rootpw secret
directory /var/db/openldap-data
index objectClass eq
index uid eq

Indexing the objectclass and uid will speed up LDAP queries.

To enable slapd on FreeBSD, add the following to /etc/rc.conf and run the init script.

slapd_flags="-h ldap://"
sudo /usr/local/etc/rc.d/slapd start

Note that this will cause slapd to listen on localhost only.

The ldapadd tool is used to add entries to the LDAP database. The ldapmodify command is used to modify entries in the LDAP database. Both tools take a number of arguments as well as an LDIF file.

dn: dc=matthoran,dc=com
objectclass: dcObject
objectclass: organization
o: Matt Horan
dc: matthoran

dn: cn=Manager,dc=matthoran,dc=com
objectclass: organizationalRole
cn: Manager

If not saved to a file, the above could be fed to ldapadd through stdin.

ldapadd -x -D "cn=Manager,dc=matthoran,dc=com" -W -f base.ldif

The -x flag causes ldapadd to use simple bind instead of SASL. The -D flag indicates the distinguished name used to connect to the database and the -W flag causes ldapadd to prompt for simple authentication.

Now the first user may be added to the database.

dn: uid=test,dc=matthoran,dc=com
objectclass: account
objectclass: simpleSecurityObject
objectclass: radiusprofile
uid: test
userPassword: {SSHA}53jF+nBYXuouGDSpKNaUIvOkyxCCEsah

ldapadd -x -D "cn=Manager,dc=matthoran,dc=com" -W -f test.ldif

This will add a user test with password test to the database. The slappasswd command may be used to generate hashed passwords. The default hash algorithm is SSHA and can be changed with the -h flag.
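slappasswd’s default {SSHA} scheme is simply a salted SHA-1: base64(SHA1(password + salt) + salt). As an illustration, here’s a sketch in Python of how such a hash is built and checked (the function names are my own, not part of OpenLDAP):

```python
import base64
import hashlib
import os

def ssha(password, salt=None):
    """Build an {SSHA} hash the way slappasswd does:
    base64(SHA1(password + salt) + salt)."""
    if salt is None:
        salt = os.urandom(4)  # slappasswd uses a short random salt
    digest = hashlib.sha1(password + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

def ssha_verify(password, hashed):
    """Check a password against an {SSHA} hash by re-hashing with its salt."""
    raw = base64.b64decode(hashed[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
    return hashlib.sha1(password + salt).digest() == digest

print(ssha_verify(b"test", ssha(b"test")))  # True
```

Because the salt is random, two hashes of the same password differ, which is why verification re-hashes with the salt extracted from the stored value.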

The ldapsearch command may be used to ensure that the above entry was successfully added to the database.

ldapsearch -x -b "dc=matthoran,dc=com" "(uid=test)"

This should return an LDIF identical to the content of test.ldif.

Now that the LDAP database contains a test user, FreeRADIUS can be configured.

Since my FreeRADIUS server is only performing AAA for EAP clients, there are only two parts of the default configuration file that need to be changed. Many of the others can be slimmed down or completely removed.

To get FreeRADIUS to talk to the LDAP server, the following LDAP configuration options must be changed, assuming that slapd is listening on localhost, that your rootdn is cn=Manager,dc=matthoran,dc=com and that your rootpw is secret.

server = "localhost"
identity = "cn=Manager,dc=matthoran,dc=com"
password = secret
basedn = "dc=matthoran,dc=com"
password_attribute = userPassword
set_auth_type = no

I commented out access_attr so that all users were allowed access, added password_attribute so that the PAP module can extract the userPassword attribute and set set_auth_type to no so that the PAP module will handle authentication.

Now that the LDAP module has been configured, the authorization module must be told to use LDAP for authorization. To do so, just uncomment the ldap line from the authorization section.

PAP is last in the default authorization chain. The default users file sets Auth-Type = System, which will cause authentication to fail. Setting set_auth_type = yes in the LDAP configuration section will not solve this problem, as the files database is checked before LDAP due to the order of the modules in the authorization section, and it would also require an additional LDAP bind. As I am not using the local passwd database for authorization or authentication, I commented out the files module in the authorization section. This prevents Auth-Type from being set to System. The System Auth-Type may also be commented out, since it will not be used.
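After those changes, my trimmed-down authorize section ends up looking something like the following (a sketch based on the stock configuration with files commented out and ldap uncommented; the exact module list varies by FreeRADIUS version):

```
authorize {
    preprocess
    suffix
    eap
#   files
    ldap
    pap
}
```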

Now that the core is configured, EAP can be configured. By default, EAP is enabled, but the default configuration won’t get you too far.

I decided to use PEAP with GTC, which allows passwords to be stored hashed in the database. Because incoming EAP messages do not specify which EAP type they are using, the default must be set for the EAP module. Be sure that you change the default EAP type for the EAP module and not PEAP.

sudoedit /usr/local/etc/raddb/eap.conf

default_eap_type = peap

The TLS and PEAP configuration sections are commented out by default. Because PEAP relies on TLS to set up a secure channel, the TLS module must be configured. The following configuration options must be uncommented:

private_key_password = whatever
private_key_file = ${raddbdir}/certs/cert-srv.pem
certificate_file = ${raddbdir}/certs/cert-srv.pem
CA_file = ${raddbdir}/certs/demoCA/cacert.pem
dh_file = ${raddbdir}/certs/dh
random_file = ${raddbdir}/certs/random

This configuration is sufficient for testing. Be sure to replace the distributed certificates with a signed certificate in a production environment.

If GTC is to be used as the innermost PEAP protocol, the default_eap_type must be set to gtc for the PEAP module. Be sure that you are setting default_eap_type for the PEAP module and not for the EAP module.

default_eap_type = gtc

Now that FreeRADIUS has been configured, it is time to start up the server.

sudo /usr/local/etc/rc.d/radiusd start

The radtest command line tool provides a quick and easy way to make sure that RADIUS can authenticate users. The default clients file includes an entry for localhost with the shared key testing123. Be sure to change this in a production environment.

radtest test test localhost 10 testing123

The first and second arguments to radtest are the username and password of the test user, respectively. The third argument is the server to connect to, here localhost. The fourth argument is the NAS-Port, used for accounting. The final argument is the shared secret.

If you do not receive an Access-Accept packet, run FreeRADIUS from the command line in debug mode.

sudo /usr/local/etc/rc.d/radiusd stop
sudo radiusd -X

Now you can go ahead and test EAP interactively. I will not go into how to configure your NAS device to communicate with the RADIUS server. Make sure that you have an entry in /usr/local/etc/raddb/clients.conf for the NAS device.