Building a Deployment Image

In my original server deployment post I described the three stages of my deployment process. This post describes, in not much actual detail, the second stage: using the base image built in stage one to create an image to be deployed.

Booting up the Base Image

When the previous step created the base image, it finished the Packer run by creating a manifest file that, amongst other things, contains the DigitalOcean snapshot ID of the created image.

When I run my shell command to build a deploy image, it parses the manifest file, extracts the snapshot ID, and exports it as an environment variable. It then runs packer build packer-deploy.json, which contains this:

"variables": {
  "BASE_SNAPSHOT_ID": "{{env `BASE_SNAPSHOT_ID`}}"
  ...
},
"builders": [{
  "type": "digitalocean",
  "image": "{{user `BASE_SNAPSHOT_ID`}}",
  ...
}]
...

This means that it will spin up a new box using the snapshot created in the previous step. Neato!
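
For reference, here’s a minimal sketch of what that wrapper script does. The jq field path and manifest filename are assumptions based on Packer’s standard manifest post-processor output, not my exact script:

#!/bin/bash
# Pull the snapshot ID out of the manifest written by the base build.
# The DigitalOcean artifact ID may carry a region prefix (e.g. "nyc3:12345"),
# so strip everything up to the colon.
BASE_SNAPSHOT_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d: -f2)
export BASE_SNAPSHOT_ID

# Build the deploy image on top of that snapshot.
packer build packer-deploy.json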

“Install” NGINX

So this threw me for a while when I was building all of this. The step that creates the base image installs NGINX with

class {'nginx':
  manage_repo    => true,
  package_source => 'nginx-stable'
}

So in the deploy build phase I thought I could just reference the nginx class anywhere and, since NGINX was already installed, everything would just work. Wrong! Because stage two is a separate Puppet module, it has no idea it can do anything with nginx unless I tell it that it has nginx, so I needed to start this stage with:

class {'nginx': }

So now it knows it has the nginx module available.

SSH keys for known hosts

The final deploy phase needs to pull repos from GitHub and GitLab, and it does that over SSH. The problem I ran into was that SSH expects a user to say ‘yes, I trust this remote host’, which isn’t something you can, or want to, do in an automated build. The solution was to add their respective public host keys to my user’s known_hosts file.

sshkey { "gitlab.com":
  ensure => present,
  type   => "ssh-rsa",
  key    => "AAAAB3NzaC1yc2EAAAADAQABAAABAQ..."
}

sshkey { "github.com":
  ensure => present,
  type   => "ssh-rsa",
  key    => "AAAAB3NzaC1yc2EAAAABIwAAAQEAq2..."
}

To get a host’s key for this, you need to use ssh-keyscan.

ssh-keyscan -t rsa gitlab.com
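
The key parameter of the sshkey resource only wants the base64 blob, not the hostname or key type, so something like this (a sketch, not from the original setup) prints it ready for pasting into the sshkey resource above:

# ssh-keyscan prints "gitlab.com ssh-rsa AAAAB3Nza..."; the third field
# is the value the sshkey resource's key parameter expects.
ssh-keyscan -t rsa gitlab.com | awk '{ print $3 }'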

Setting up Each Defined Site

The previous steps were basically admin; the following steps enable the server to perform its ACTUAL PURPOSE: running my websites.

I made a module called sitebuilder which allows me to define a URL and a git repo that should be deployed at that URL.

If the site runs from a fully qualified domain name:

sitebuilder::builder { 'www.helm108.com':
  domain    => 'www.helm108.com',
  port      => 4000,
  remoteUrl => 'git@gitlab.com:helm108/helm108.com.git',
}

If the site runs from a subfolder:

sitebuilder::builder { 'iomante':
  domain    => '/games/iomante/play',
  location  => 'www.helm108.com',
  port      => 4005,
  remoteUrl => 'git@gitlab.com:helm108/iomante.git'
}

Currently I have to define the port number manually, but it’s not that much of a hassle. Each of my projects runs on Express, and that port number is used both to tell Express which port to listen on and to configure an NGINX reverse proxy to point at it.

So, what does sitebuilder’s builder function do?

Create an NGINX location

The first thing it does is create an NGINX server resource:

if 'www.' in $domain {
  $non_www = delete($domain, 'www.')

  nginx::resource::server { $non_www:
    server_cfg_append => {
      return => sprintf('301 $scheme://%s$request_uri', $domain)
    }
  }
}

nginx::resource::server { $domain:
  proxy => "http://localhost:$port/",
}

This creates a reverse proxy from the given domain to the Express server running on the defined port number. It also sets up a redirect from the non-www domain to www. The version of puppet/nginx I’m using has a ‘www to non-www’ config option but nothing the other way round. I tried to upgrade to the latest version of puppet/nginx but so many things broke that I gave up. That seems like an operation for a more structured refactor of my work.

Set Up a Git Repo

The next step is to initialize a bare git repo for the site and then set up post-checkout and post-receive hooks. The hooks check the files out into the appropriate /var/www folder and then run pm2 start server.js on the project whenever changes are made. There was actually a fair amount of work to get this functioning correctly, so it’s going to be its own post. Consider this the tl;dr.
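
For a rough idea of what those hooks end up doing (the site name and paths here are placeholders, not my actual values), a post-receive hook boils down to something like this:

#!/bin/bash
# post-receive (sketch): check the pushed code out into the site's
# web root, then start (or restart) it under pm2.
SITE="example-site"               # placeholder site name
REPO_DIR="/var/repos/$SITE"       # bare repo location (assumed)
WORK_TREE="/var/www/$SITE"        # deployed files

git --work-tree="$WORK_TREE" --git-dir="$REPO_DIR" checkout -f master
cd "$WORK_TREE" && pm2 start server.js --name "$SITE"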

Create Project .env File

file { "/var/www/$title/.env":
  ensure  => present,
  owner   => 'helm108',
  content => template('sitebuilder/env.erb'),
  require => Githook::Githook[$title],
}

That env file contains the port that Express should use. It’s read by the dotenv npm package at the start of each repo’s server.js.
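
For context, the rendered file is tiny; assuming env.erb only exposes the port (the real variable name may differ), it boils down to:

# /var/www/<site>/.env (sketch), loaded by dotenv at the top of server.js
PORT=4000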

Add Site to update_repos.sh

Once all of that is done, the site is ready to be pulled and pm2’d into existence. That’s done by Terraform in the Deploy stage, which runs /update_repos.sh. Sitebuilder populates that script by ending each builder call with this:

file_line { "Append update_repos.sh for $title":
  path => '/update_repos.sh',
  line => "cd $repoPath/$title && git fetch origin && git --work-tree=${sitePath}/${title} --git-dir=${repoPath}/${title} checkout -f master",
}

The deploy phase runs this script to pull down the master branch of each repo. The previously created git hooks then ensure that pm2 start server.js is run for each repo.
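
With a couple of sites defined, the generated script ends up looking something like this (the repo and site paths are placeholders for whatever $repoPath and $sitePath resolve to):

#!/bin/bash
# Built up one line at a time by sitebuilder's file_line resources.
cd /var/repos/www.helm108.com && git fetch origin && git --work-tree=/var/www/www.helm108.com --git-dir=/var/repos/www.helm108.com checkout -f master
cd /var/repos/iomante && git fetch origin && git --work-tree=/var/www/iomante --git-dir=/var/repos/iomante checkout -f master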

Display the server’s public SSH key

As the server will need to fetch from various git repos, its public key needs to be added to the sites it will be pulling the repos from. The base build phase generates a new keypair every time it is run, so I need to be given the public key in order to add it to GitHub and GitLab. I could avoid this by ensuring that the server always uses the same keypair, but that requires that I securely store a private key somewhere to be loaded onto the server, and I’d rather deal with an infrequent inconvenience than do the keypair storing thing.
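
Surfacing the key is nothing fancy; a final provisioning step that just cats it is enough (the path is an assumption about where the base build puts the keypair):

# Print the public key at the end of the build so it can be copied
# into GitHub and GitLab.
cat /home/helm108/.ssh/id_rsa.pub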