Server Deployment Processes

Now that I know I can run Puppet manifests on a local dev environment and on Digital Ocean, I need to be able to define my projects and write Puppet manifests that will get them onto my server and host them.

My server is built in three steps:

  1. A Packer config that uses Puppet to build a snapshot on Digital Ocean that has all of my dependencies installed and sets up the helm108 user.
  2. A Packer config that uses Puppet to take the snapshot from step one and install all of my projects on it. This involves creating folders, empty git repositories and nginx vhosts. This is saved as a new snapshot.
  3. A Terraform config that takes the snapshot from step two and deploys it as an actual running Digital Ocean server. This step pulls all of my projects from their git repositories and starts their Express servers.

This post will give an overview of some of that stuff, and I’ll be writing more posts that go a bit further into the interesting problems I had and how I solved them. If the entire solution to something was “I looked in the documentation and it told me what to do”, I’m not going to write about it.

Step One - Building the base image

This step builds an image that serves as a base for building more complex images later.

build-base.sh

To build a new base image I run ./build-base.sh, which contains the following:

#!/usr/bin/env bash
# Ensure modules are up to date.
librarian-puppet install
# Load in secrets.
source secrets.sh
# Run packer.
packer build packer-base.json
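
The librarian-puppet install line reads a Puppetfile that declares which Puppet modules the manifests depend on and installs them into modules/. I’m not listing my actual module set here, but as a rough illustration a Puppetfile looks something like this:

forge "https://forgeapi.puppetlabs.com"

# Illustrative module list - the real Puppetfile declares whatever
# the base and deploy environments actually use.
mod 'puppetlabs/stdlib'
mod 'puppetlabs/vcsrepo'
mod 'puppet/nginx'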

packer-base.json

The last line of build-base.sh kicks off the actual work of building an image to use. Some of this is covered in my previous post, but here’s the whole packer-base.json file:

{
  "variables": {
    "DIGITAL_OCEAN_TOKEN": "{{env `DIGITAL_OCEAN_TOKEN`}}"
  },
  "builders": [{
    "type": "digitalocean",
    "api_token": "{{user `DIGITAL_OCEAN_TOKEN`}}",
    "image": "ubuntu-16-04-x64",
    "region": "lon1",
    "ssh_username": "root",
    "size": "512mb",
    "droplet_name": "helm108",
    "snapshot_name": "helm108-base-{{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "provision-base.sh",
      "pause_before": "5s"
    },
    {
      "type": "puppet-masterless",
      "manifest_file": "puppet-env/base/manifests/site.pp",
      "module_paths": "modules/",
      "puppet_bin_dir": "/opt/puppetlabs/bin/"
    }
  ],
  "post-processors": [
    [
      {
        "output": "packer-base-manifest.json",
        "strip_path": true,
        "type": "manifest"
      }
    ]
  ]
}

packer-base.json sets up the Digital Ocean builder and then runs provision-base.sh and the base Puppet manifest against the new droplet. Once the image has finished provisioning, the ‘manifest’ post-processor writes out a JSON file containing data about the image, which is used in step two.
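
provision-base.sh itself isn’t shown in this post. With a puppet-masterless provisioner, the usual job of a shell script like this is to get the Puppet agent installed on the stock image before any manifests run (which is also why puppet_bin_dir points at /opt/puppetlabs/bin/). A minimal sketch for Ubuntu 16.04 would be something like:

#!/usr/bin/env bash
# Sketch only - install the Puppet agent on a stock Ubuntu 16.04 (xenial)
# droplet so the puppet-masterless provisioner has a puppet binary to run.
set -e
wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
dpkg -i puppetlabs-release-pc1-xenial.deb
apt-get update
apt-get install -y puppet-agent
# puppet-agent installs under /opt/puppetlabs, matching puppet_bin_dir above.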

The ‘base’ Puppet environment

The base Puppet manifest is responsible for setting up the helm108 user and for getting various dependencies in place, whether that means installing packages or just configuring them.

Puppet module: userconfig

  - Creates a user and group which everything else uses.
  - Creates the user’s home and .ssh directories.
  - Adds various keys to the user’s authorized_keys file.
  - Generates a keypair for the user to access GitHub/GitLab with.
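
I’m not reproducing the whole module, but its shape is roughly the sketch below. The resources are standard Puppet types; the key values are placeholders.

# Rough sketch of a userconfig-style module; key values are placeholders.
class userconfig {
  group { 'helm108':
    ensure => present,
  }

  user { 'helm108':
    ensure     => present,
    gid        => 'helm108',
    managehome => true,
    shell      => '/bin/bash',
    require    => Group['helm108'],
  }

  file { '/home/helm108/.ssh':
    ensure  => directory,
    owner   => 'helm108',
    group   => 'helm108',
    mode    => '0700',
    require => User['helm108'],
  }

  # Keys that are allowed to SSH in as helm108.
  ssh_authorized_key { 'my-laptop':
    ensure => present,
    user   => 'helm108',
    type   => 'ssh-rsa',
    key    => 'AAAA...publickeygoeshere',
  }

  # Generate the keypair the box will use to talk to GitHub/GitLab.
  exec { 'generate-deploy-key':
    command => '/usr/bin/ssh-keygen -t rsa -b 4096 -N "" -f /home/helm108/.ssh/id_rsa',
    creates => '/home/helm108/.ssh/id_rsa',
    user    => 'helm108',
    require => File['/home/helm108/.ssh'],
  }
}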

Puppet module: dependencies

  - Creates a swap file.
  - Creates the folders for storing git repos and hosting websites.
  - Installs git, node, pm2, nginx, etc.
  - Sets environment variables.
  - Sets up the ufw firewall.
  - Handles security stuff - fail2ban, unattended upgrades, emailing pm2 status to myself once a day.
  - Creates the update_repos.sh script that gets run in a later step.
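
Most of that is ordinary packages, files and the odd exec. As a flavour of it (paths and the package list are illustrative, and the node/pm2 setup is more involved in practice):

# Illustrative fragment of a dependencies-style module, not the full thing.
class dependencies {
  # Folders for bare git repos and for the sites nginx will serve.
  file { ['/var/repos', '/var/www']:
    ensure => directory,
    owner  => 'helm108',
    group  => 'helm108',
  }

  package { ['git', 'nginx', 'ufw', 'fail2ban', 'unattended-upgrades']:
    ensure => installed,
  }

  # A 1GB swap file; making it permanent means an extra /etc/fstab entry.
  exec { 'create-swap':
    command  => 'dd if=/dev/zero of=/swapfile bs=1M count=1024 && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile',
    creates  => '/swapfile',
    provider => shell,
    path     => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
  }
}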

Step Two - Building the deployment image

This step takes the image generated in step one and installs my projects on it.

build-deploy.sh

Similar to step one, I run ./build-deploy.sh.

#!/usr/bin/env bash
# Ensure modules are up to date.
librarian-puppet install
# Load in secrets.
source secrets.sh
# Get base snapshot id.
export BASE_SNAPSHOT_ID=$(jq -r '.builds[-1].artifact_id' packer-base-manifest.json | awk -F':' '{print $2}')
echo "Snapshot ID: $BASE_SNAPSHOT_ID"
packer build packer-deploy.json

The base image only needs to be rebuilt when a core dependency changes, so build-deploy.sh gets run much more frequently. It’s very similar to build-base.sh, except that it also reads the manifest file written out in step one to get the ID of the base snapshot on Digital Ocean.
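
That works because the manifest post-processor records each build’s artifact_id, and for the Digital Ocean builder that contains the snapshot ID after a colon (roughly region:snapshot-id). So with a manifest containing something like this (ID made up):

{ "builds": [ { "name": "digitalocean", "artifact_id": "lon1:12345678" } ] }

jq -r '.builds[-1].artifact_id' pulls out lon1:12345678 from the last build, and awk -F':' '{print $2}' drops the region prefix, leaving 12345678 as BASE_SNAPSHOT_ID.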

packer-deploy.json

{
  "variables": {
    "DIGITAL_OCEAN_TOKEN": "{{env `DIGITAL_OCEAN_TOKEN`}}",
    "BASE_SNAPSHOT_ID": "{{env `BASE_SNAPSHOT_ID`}}"
  },
  "builders": [{
    "type": "digitalocean",
    "api_token": "{{user `DIGITAL_OCEAN_TOKEN`}}",
    "image": "{{user `BASE_SNAPSHOT_ID`}}",
    "region": "lon1",
    "ssh_username": "root",
    "size": "512mb",
    "droplet_name": "helm108",
    "snapshot_name": "helm108-deploy-{{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "puppet-masterless",
      "manifest_file": "puppet-env/deploy/manifests/site.pp",
      "module_paths": "modules/",
      "puppet_bin_dir": "/opt/puppetlabs/bin/"
    }
  ],
  "post-processors": [
    [
      {
        "output": "packer-deploy-manifest.json",
        "strip_path": true,
        "type": "manifest"
      }
    ]
  ]
}

The ‘deploy’ Puppet environment

This step configures nginx and sets up each project being hosted on the server.

The server needs to pull each project from its repository, which means it needs to trust GitLab’s and GitHub’s host keys for SSH connections. Normally, the first time you connect to a server over SSH your machine asks you to confirm that you trust the server you are connecting to. Since this build process is automatic we don’t want to (and sort of can’t) sit and wait for that prompt and type in ‘yes’, so we grab the host keys beforehand using:

ssh-keyscan -t rsa gitlab.com

and then add them to the box’s known hosts using Puppet’s sshkey resource:

sshkey { "gitlab.com":
  ensure => present,
  type   => "ssh-rsa",
  key    => "keygoeshere"
}
sshkey { "github.com":
  ensure => present,
  type   => "ssh-rsa",
  key    => "keygoeshere"
}

Puppet module: sitebuilder

The deploy step then runs my sitebuilder module which defines all of the projects that will run on the server. Its main tasks are to set up the script that will pull and start each project, and to set up nginx proxies for each project. I’ll go into this more in a future blog post.
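
As a rough idea of the shape of it, each project boils down to something like the defined type below. This is a sketch rather than the real module - names, paths and the vhost template are illustrative, and it assumes an nginx service is managed elsewhere.

# Sketch of a sitebuilder-style defined type; names and paths are illustrative.
define sitebuilder::project (
  String  $domain,
  Integer $port,
) {
  # Bare repository the project can be pushed to / pulled from.
  exec { "git-init-${name}":
    command => "/usr/bin/git init --bare /var/repos/${name}.git",
    creates => "/var/repos/${name}.git",
    user    => 'helm108',
  }

  # Folder the project will be checked out into.
  file { "/var/www/${name}":
    ensure => directory,
    owner  => 'helm108',
    group  => 'helm108',
  }

  # nginx vhost proxying the domain through to the project's Express port.
  # Assumes Service['nginx'] is declared elsewhere (e.g. in dependencies).
  file { "/etc/nginx/sites-enabled/${name}.conf":
    ensure  => file,
    content => "server {
  listen 80;
  server_name ${domain};
  location / {
    proxy_pass http://127.0.0.1:${port};
  }
}
",
    notify  => Service['nginx'],
  }
}

In that shape, declaring a project becomes a one-liner like sitebuilder::project { 'someproject': domain => 'someproject.example.com', port => 3000 }.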

SSH key

Finally, the deploy step echoes out the box’s public key so that I can add it to GitHub and GitLab. This is the only really boring manual part of the process, but security versus convenience and all that. I could possibly do something to ensure the box always has the same key, but then I’m probably committing a private key somewhere and that’s a bad move. I don’t rebuild this often enough to find out whether there’s a safe and sane way around it, so it’ll stay like this for now.

Step Three - Deploy

This step takes the image generated in step two and uses it to launch a Digital Ocean server. It then runs a script that pulls each repo and starts server.js with pm2.
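
The pull-and-start script itself isn’t shown in this post; conceptually it’s a loop along these lines (project names, remotes and paths below are made up):

#!/usr/bin/env bash
# Sketch only - not the actual script. Pull each project and (re)start
# its Express entry point under pm2.
set -e
PROJECTS="projectone projecttwo"

for name in $PROJECTS; do
  dir="/var/www/${name}"
  if [ -d "${dir}/.git" ]; then
    git -C "$dir" pull
  else
    # Hypothetical remote - the real remotes live on GitHub/GitLab.
    git clone "git@gitlab.com:example/${name}.git" "$dir"
  fi
  (cd "$dir" && npm install --production && pm2 start server.js --name "$name")
done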

deploy.sh
#!/usr/bin/env bash
# Re-export each secret with the TF_VAR_ prefix so Terraform can see it.
while read secret; do
  final="${secret/export /export TF_VAR_}"
  eval "$final"
done <secrets.sh

# Get deploy snapshot id.
export TF_VAR_DEPLOY_SNAPSHOT_ID=$(jq -r '.builds[-1].artifact_id' packer-deploy-manifest.json | awk -F':' '{print $2}')
terraform apply

So this is a bit weird. Basically I have a script, secrets.sh, that declares a bunch of environment variables for things to use, but Terraform expects any environment variables it reads to be prefixed with TF_VAR_. My solution was to read that file in line by line, prepend TF_VAR_ to each variable name, and then eval each line so the variable is declared for Terraform.
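
For example, a line like the first one below in secrets.sh comes out of the loop as the second (token made up), which Terraform then exposes as var.DIGITAL_OCEAN_TOKEN:

# In secrets.sh:
export DIGITAL_OCEAN_TOKEN=abc123
# After the substitution and eval:
export TF_VAR_DIGITAL_OCEAN_TOKEN=abc123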

Once that’s done, it gets the ID of the snapshot generated in step two from the Packer manifest and exports it as the snapshot ID to use in the deploy.

Finally, it calls terraform apply.

I then have a digitalocean.tf file that specifies the droplet to create in Digital Ocean, along with a provisioner that inserts any project-specific environment variables into the appropriate .env files. Finally, it updates Cloudflare to point my domains at the new server.
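
I’m not including the full digitalocean.tf here, but structurally it’s a few resources along these lines. This is a sketch - the variable names, the project secret and the Cloudflare record are illustrative, and connection/credential details are left out:

# Sketch of a digitalocean.tf in the shape described above (Terraform 0.11-era syntax).
variable "DIGITAL_OCEAN_TOKEN" {}
variable "DEPLOY_SNAPSHOT_ID" {}
variable "SOME_PROJECT_API_KEY" {}   # hypothetical project-specific secret

provider "digitalocean" {
  token = "${var.DIGITAL_OCEAN_TOKEN}"
}

resource "digitalocean_droplet" "helm108" {
  image  = "${var.DEPLOY_SNAPSHOT_ID}"
  name   = "helm108"
  region = "lon1"
  size   = "512mb"

  # Drop project-specific environment variables into the right .env files.
  # (SSH connection details omitted from this sketch.)
  provisioner "remote-exec" {
    inline = [
      "echo 'API_KEY=${var.SOME_PROJECT_API_KEY}' >> /var/www/someproject/.env"
    ]
  }
}

# Point a domain at the new droplet. Cloudflare provider credentials
# come from environment variables and are omitted here.
resource "cloudflare_record" "site" {
  domain = "example.com"
  name   = "www"
  type   = "A"
  value  = "${digitalocean_droplet.helm108.ipv4_address}"
}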

Done

Helm108 has been running on this setup for almost a year now, and every time I deploy a new project to it everything just works the way I hoped it would. Since this is my first foray into automating my server infrastructure, I’m pretty pleased with how stable it’s been. I really like that everything is destroyed and rebuilt every time; it means I can’t ever just shell in and fix something manually - I have to follow my process or the changes will be lost forever.

There are definitely better ways I could do some of this stuff, but considering how basic my needs are it doesn’t feel like the best use of my time to chase down small optimisations.