Building a Reusable Image on Digital Ocean With Packer

As mentioned in my previous server deployment post, the first stage of my workflow is to build a base image on Digital Ocean. Here’s the Packer config:

"variables": {
"builders": [{
"type": "digitalocean",
"api_token": "{{user `DIGITAL_OCEAN_TOKEN`}}",
"image": "ubuntu-16-04-x64",
"region": "lon1",
"ssh_username": "root",
"size": "512mb",
"droplet_name": "helm108",
"snapshot_name": "helm108-base-{{timestamp}}"
"provisioners": [
"type": "shell",
"script": "",
"pause_before": "5s"
"type": "puppet-masterless",
"manifest_file": "puppet-env/base/manifests/site.pp",
"module_paths": "modules/",
"puppet_bin_dir": "/opt/puppetlabs/bin/"
"post-processors": [
"output": "packer-base-manifest.json",
"strip_path": true,
"type": "manifest"

What’s it doing?


Variables

This section pulls in my token for the Digital Ocean API. The token is stored in a file in the root directory. This file is not committed to git, so whenever I check out the repo I have to manually rebuild it with the various secrets it contains. I’ve seen some people suggest that secrets can be stored somewhere secure like Amazon S3, but I haven’t had to recreate the file frequently enough to bother with that yet.
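For reference, the secrets file is just a JSON map of Packer user variables, passed in with packer build -var-file. It looks something like this (the value here is obviously fake, and the exact shape of my file may differ):

```json
{
  "DIGITAL_OCEAN_TOKEN": "not-a-real-token"
}
```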

A security note: I went to the Velocity DevOps conference in October, and according to some of the talks there the current best practice is no longer to store secrets as environment variables. There’s a risk of secrets being dumped into logs by anything that decides to print out the environment variables as part of its debugging. A better solution is to mount a folder containing the encrypted secrets onto a memory file system; this means that the secrets only ever exist in plaintext in RAM and not on the filesystem itself, and can’t be read by anything with environment variable access. HashiCorp’s Vault does this when working with Nomad.
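As a rough sketch of that idea: on Linux, /dev/shm is a tmpfs mount that lives in RAM, so a secret written there never touches the disk. This is my own illustration of the pattern, not how Vault actually wires it up:

```shell
# Create a private directory on the RAM-backed tmpfs mount
secret_dir=$(mktemp -d /dev/shm/secrets.XXXXXX)

# Write the (fake) secret as a file readable only by us,
# rather than exporting it into the environment
printf 'not-a-real-token' > "$secret_dir/do_token"
chmod 600 "$secret_dir/do_token"

# Consumers read the file on demand instead of inheriting an env var
token=$(cat "$secret_dir/do_token")

# Once removed, no copy of the secret ever existed on disk
rm -rf "$secret_dir"
```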


Builders

Defines the Digital Ocean builder. It creates a 512MB droplet called helm108 running Ubuntu 16.04 x64 in the lon1 region, and once provisioning has finished it generates a snapshot called helm108-base-{{timestamp}}.


Shell Provisioner

This section describes how to provision the snapshot. The first defined provisioner is a shell script that Packer copies onto the image over SSH and then runs as root, the user defined in the builder step. The script looks like this:

#!/bin/sh -x
export DEBIAN_FRONTEND=noninteractive
sudo locale-gen en_GB.UTF-8
cd ~ && wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb
dpkg -i puppetlabs-release-pc1-trusty.deb
apt-get update
apt-get install -y puppet-agent

export DEBIAN_FRONTEND=noninteractive tells the system that it is not running in a mode in which a user can interact with it. This skips steps that would typically require a user to accept or authorise something, but I’ve read that it can also hide some errors. So far I don’t seem to have had any problems, so I’m happy with it for now.

sudo locale-gen en_GB.UTF-8 was to stop the box telling me that I didn’t have a locale set all the time.

The rest of the file downloads puppet-agent and then installs it. This allows the following Puppet provisioner to work; Puppet cannot install itself, and a vanilla Ubuntu image doesn’t ship with Puppet.

Puppet Provisioner

This is where the bulk of the work is done. This provisioner runs a Puppet manifest that calls two modules I have written: one creates the helm108 user on the box, and the other installs all of the low-level dependencies for my projects, things like git and nginx that aren’t unique to any given project.

Almost all of the following steps are done using either built-in Puppet resource types or the most popular Puppet module I could find, so for that reason (and because I want to replace a bunch of this) I’m not going to go into detail on how I achieved each step.

The User Config module

This module handles the following tasks:

  • Creates a user and group of a given name, in this case ‘helm108’
  • Adds that user to appropriate groups such as www-data
  • Creates home and .ssh directories for that user
  • Adds my public keys to the helm108 user so that I can shell into it from my desktop and laptop; this just uses the ssh_authorized_key resource type
  • Generates a keypair for the user with maestrodev/ssh_keygen so that it can pull from GitHub and GitLab
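A rough sketch of the user-creation part in Puppet, with illustrative names and a placeholder key (this is not my actual module):

```puppet
group { 'helm108':
  ensure => present,
}

user { 'helm108':
  ensure     => present,
  gid        => 'helm108',
  groups     => ['www-data'],
  home       => '/home/helm108',
  managehome => true,
  shell      => '/bin/bash',
  require    => Group['helm108'],
}

# Lets me SSH in from my own machines; the key material is a placeholder
ssh_authorized_key { 'desktop':
  ensure => present,
  user   => 'helm108',
  type   => 'ssh-rsa',
  key    => 'AAAAB3...placeholder...',
}
```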

The Dependencies module

This module installs all of the non-project-specific dependencies that are needed, and handles various other tasks that don’t really fall under the ‘installing dependencies’ umbrella but that I couldn’t be bothered to make another module for.

The things it does are:

  • Creates a 2GB swap file using petems/swap_file, because Digital Ocean droplets do not have swap space out of the box
  • Creates a .sh file that is populated in the deploy build phase
  • Creates a folder for my git repos to be created in
  • Creates a folder that my projects get served out of
  • Installs
    • Git
    • The agent for Digital Ocean’s server monitoring feature
    • NodeJS, after updating the package reference
    • PM2 (this and other node packages installed using the exec command)
    • yarn
    • nginx
    • fail2ban
    • logwatch
  • Reads a template file that contains environment variables and applies them to a shell script in /etc/profile.d so that they load on boot
  • Sets up some firewall rules using a UFW module I wrote and haven’t published.
  • Sets up unattended upgrades
  • Generates a new root password
  • Creates a cron job to email me the output of pm2 status every day
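The cron job for that last step looks something like this (the schedule, path, and address are illustrative, not my real config):

```shell
# /etc/cron.d entry (illustrative): mail pm2's status table to me once a day.
# MAILTO makes cron email the job's stdout; the job runs as the helm108 user.
MAILTO=me@example.com
0 8 * * * helm108 /usr/bin/pm2 status
```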

Post Processors and how to build a droplet from a snapshot

Once all of the provisioning steps have completed, Packer’s manifest post-processor generates a file that contains, amongst other things, the ID of the generated snapshot. This allows the Packer config that sets up the actual deployment image to reference the base snapshot. In the "image": "ubuntu-16-04-x64" section of the Packer config, you can replace ubuntu-16-04-x64 with the ID of a snapshot and your new box will boot from that snapshot.
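In the deploy config that might look something like this (the variable name is my own for illustration; the Digital Ocean builder accepts a snapshot ID in the image field):

```json
{
  "builders": [{
    "type": "digitalocean",
    "api_token": "{{user `DIGITAL_OCEAN_TOKEN`}}",
    "image": "{{user `BASE_SNAPSHOT_ID`}}",
    "region": "lon1",
    "ssh_username": "root",
    "size": "512mb"
  }]
}
```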

The manifest file that Packer generates returns the snapshot ID in a slightly annoying format: "artifact_id": "lon1:27614034". All you actually want is the number after lon1:, though I suppose in a more complex system knowing which region the snapshot is in would be important. As I don’t need the region, I wrote the following bash one-liner to pull that number out and store it as an environment variable:

export BASE_SNAPSHOT_ID=$(jq -r '.builds[-1].artifact_id' packer-base-manifest.json | awk -F':' '{print $2}')

jq is a tool for parsing JSON. Here it gets the last item in the builds array and pipes its artifact_id value to awk, which splits it on the colon and returns the second field. The result is stored as the BASE_SNAPSHOT_ID environment variable, which is then usable in my Packer deploy config.
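To see what the awk stage is doing in isolation, here is the split run on a sample artifact_id (the ID itself is just the example value from above):

```shell
# The value the manifest hands us: region and snapshot ID joined by a colon
artifact_id="lon1:27614034"

# -F':' sets the field separator; $2 is the second field, i.e. the bare ID
snapshot_id=$(printf '%s' "$artifact_id" | awk -F':' '{print $2}')

echo "$snapshot_id"   # prints 27614034
```

If you would rather stay inside jq, I believe its built-in split filter can do the same job without awk: jq -r '.builds[-1].artifact_id | split(":")[1]' packer-base-manifest.json.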