DeployBot at WonderProxy

by Gemma Anible on

Previously, I described WonderProxy's classic git pull deploys, and explained how we eventually grew out of them:

  • The pre- and post-deploy checklists got too long. They were unwieldy for team members, and they made reverting to a previous state difficult.
  • We were unnecessarily limiting "people who can deploy" to "people who have easy SSH access and all the necessary permissions and deploy knowledge".

As we explored the world of automated build/deploy solutions, we found two general categories.

Batteries included

The solution gives you an entire environment already set up with all the tools you need. Your build will run a couple of commands, and the solution will drop the final product wherever you need it.

Since the solution is handing you a pre-built environment, you're limited to the environments it has pre-built. If it targets PHP and Ruby and you're trying to deploy a NodeJS project, you're out of luck. If you're trying to deploy a PHP project that needs some esoteric extensions, you're out of luck unless the solution gives you a way to install them. Batteries-included solutions are similar to the shared hosting providers of yore: convenient if they meet your requirements, but unusable otherwise.

DIY

The solution doesn't define the environment, or at least doesn't limit your options to the environments it has available. It probably uses either your own infrastructure or containers you can control.

This is the category we wanted at WonderProxy. It would be more work to set up, but it would let us customize exactly the environment we needed.

DeployBot

DeployBot falls into the latter category. It defines a process, rather than an environment, and lets the project define where the process will run and what will happen at each step. The process you use could be as simple as Connect to my server and run a series of shell commands, or as complicated as managing a series of releases on an external server with symlinks and caching.

Our Where's It Up API website is relatively small, and while it has pre- and post-deploy tasks to complete, it doesn't have many external systems to connect. It was a perfect test candidate for DeployBot.

Setting up the environment

DeployBot lets projects set up any number of environments, and any number of "servers" (really, deploy processes) per environment. Each environment is linked to a specific Git branch, and can optionally trigger a deploy whenever the branch updates. Environments can also be hooked into notification systems like Slack or HipChat.

We started with one environment for Where's It Up: staging, which would automatically deploy updates from the project's master branch to a new staging server. We also hooked it up to our Slack channel.

Setting up the deploy process

We chose DeployBot's Atomic SFTP deploy process, which defines a deploy environment as "whatever the current symlink points to". It accomplishes a couple of goals:

  1. It's all-or-nothing. Either the entire deploy completes successfully, or the whole thing is discarded and nothing deploys.
  2. It keeps downtime to a minimum, since the act of deploying is updating a symlink instead of waiting for a big build to finish.

The process lets us define a series of tasks as shell scripts.
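The symlink scheme itself is easy to sketch in a few lines of shell. Here's a minimal illustration (directory names are made up for the example, not DeployBot's own layout):

```shell
#!/bin/sh
# Sketch of a symlink-style atomic deploy; paths are illustrative.
set -e

# Each deploy lands in its own release directory...
mkdir -p releases/100 releases/101
echo "old" > releases/100/version
echo "new" > releases/101/version

# ...and "current" always points at a complete release.
ln -sfn releases/100 current
cat current/version    # -> old

# "Deploying" is just repointing the symlink, so visitors never see
# a half-finished release. (ln -sfn replaces the existing link.)
ln -sfn releases/101 current
cat current/version    # -> new
```

If anything goes wrong before the final `ln`, the old release directory is untouched and `current` never moves, which is where the all-or-nothing property comes from.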

Build

DeployBot's pre-upload build task runs in a Docker container. DeployBot provides a default container, but we replaced it with one customized to our needs. We use this step for most of our old pre-deploy checklist:

  • Run unit tests
  • Run linters
  • Build CSS and JS assets

If anything in the build task fails, the entire deploy halts and we get notified in Slack.
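The fail-fast behavior falls out of ordinary shell semantics. A minimal sketch of such a build task, with echo placeholders standing in for the real tools (the commented command names are examples, not our actual configuration):

```shell
#!/bin/sh
# Minimal fail-fast build task sketch.
set -e    # any non-zero exit stops the script, which halts the deploy

echo "unit tests passed" >> build.log   # e.g. vendor/bin/phpunit
echo "linters passed"    >> build.log   # e.g. vendor/bin/phpcs src
echo "assets built"      >> build.log   # e.g. npm run build

echo "build ok"
```

Because of `set -e`, a failing test runner or linter exits the script immediately, DeployBot sees the non-zero status, and nothing gets uploaded.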

Upload

When the build completes, DeployBot uses scp to push the resulting set of files up to our staging web server, using a local user and SSH key we specify. Since we already performed the build, we exclude a whole collection of now-redundant source files and build-only requirements from the upload.

After the upload completes, we get one more opportunity to make changes to the build on our server before it deploys. We use it to optimize the Composer autoloader.

Again, if the upload or anything in the post-upload task fails, the entire deploy halts.

Deploy

With the Atomic SFTP process, "deploying" means "updating the current symlink". DeployBot provides one more task that runs after the deploy, which we use to reload the web server.

Extending the functionality

With the environment and deploy process set up, DeployBot started automatically deploying Where's It Up master updates to our staging environment. We could push or merge, and the finished product would show up minutes later. It was exciting!

The branching strategy at WonderProxy is a lot like what's become known as "GitHub flow": master is always deployable, and all new development happens in separate branches created from and eventually merged back to master. That means we've usually got lots of topic branches in progress, and they don't hit master until they're done.

Up to this point, if we wanted to demo a new branch for the team, we had to manually deploy it to our own development environments on a development server. More distressingly, if a team member unfamiliar with the deploy wanted to see anything in progress, they would need to ask someone to deploy it somewhere.

Now that we had DeployBot, we wanted to use our staging deploy process for arbitrary topic branches. DeployBot environments are tied to specific branches for automatic deploys, but each environment has a webhook endpoint that can deploy any <commit-ish> (branch, tag, or commit hash) in the repository. It looks something like this:

https://wondernetwork.deploybot.com/webhook/deploy?env_id=<staging environment ID>&revision=<commit-ish>
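With a hypothetical environment ID of 12345, for example, triggering a deploy is a single HTTP request:

```shell
#!/bin/sh
# Build the webhook URL; the ENV_ID value here is a made-up placeholder.
ENV_ID="12345"
REVISION="my-awesome-branch"
URL="https://wondernetwork.deploybot.com/webhook/deploy?env_id=${ENV_ID}&revision=${REVISION}"
echo "$URL"

# Triggering the deploy is then just:
#   curl "$URL"
```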

We connected that webhook to our internal IRC-cum-Slack bot, and presto:

!deploy wheresitup staging my-awesome-branch

Now anyone in Slack can deploy any Where's It Up branch to the staging environment, and the only people who need to know the details of the deploy process are the people who maintain the DeployBot tasks!

Drawbacks

We've been (mostly!) satisfied DeployBot users for a little over a year. As the number and variety of projects we've DeployBot-ified has grown, we've become increasingly attuned to some of the drawbacks.

No automatic deploys for arbitrary branches

As noted above, each DeployBot environment is tied to a single branch in your repository. We hacked together our !deploy IRC command to get around that restriction for ad-hoc deploys, but there is no way to automatically run a build or deploy for every branch update. That means we can't e.g. set up DeployBot with GitHub Status Checks without a lot of extra wiring on our part.

Build uploads do not preserve file modes

Git tracks a simplified version of each file's mode: in particular, whether the file is executable. If you git commit an executable shell script, it'll still be executable when you git checkout.
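You can see this in a scratch repository (the repository and file names are made up):

```shell
#!/bin/sh
# Demo: git records the executable bit (mode 100755 vs 100644).
set -e
git init -q mode-demo
cd mode-demo

printf '#!/bin/sh\necho hi\n' > job.sh
chmod +x job.sh

git add job.sh
git -c user.name=demo -c user.email=demo@example.com commit -qm "add job.sh"

git ls-files --stage job.sh    # mode 100755 = executable
```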

The Atomic SFTP process we use does not preserve file modes when it uploads a build to our deploy environment, so any executable files in our projects (e.g. cron jobs) will not be executable after the upload. We solved that problem by uploading a tarball of the build, and discovered that...
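The workaround is straightforward to demonstrate: tar records mode bits in the archive, so the executable bit survives the round trip even when a per-file upload would drop it (file names here are made up):

```shell
#!/bin/sh
# Demo: a tarball preserves the executable bit that a plain
# per-file upload can lose.
set -e
mkdir -p build
printf '#!/bin/sh\necho hi\n' > build/cronjob.sh
chmod +x build/cronjob.sh

tar -czf build.tar.gz -C build .    # pack the build output
mkdir -p deployed
tar -xzf build.tar.gz -C deployed   # unpack on the "server"

test -x deployed/cronjob.sh && echo "still executable"
```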

Build uploads are slooooow

When we solved the file mode problem, we found that uploads of large binary files are painfully slow. One of our projects results in a 26MB tarball; DeployBot takes 10-15 minutes to upload it to our servers. We've confirmed with their support team that the problem is on their end, but to date they don't have a resolution.

Deploy processes are not pipelines

The deploy processes available in DeployBot are flexible, but they are still a fixed set of tasks taking you from zero to deployed. There are no "build artifacts" that result from a process, so it's not possible to e.g. create a build-only process that feeds its output directly into a deploy-only process. That ends up meaning a lot of duplication among similarly-structured projects.

The Future

We're not quite at the point that we're shopping for a new solution, but we're close. GitHub Status Checks would be a welcome addition to our collaboration process, pipelines would simplify the process maintenance, and I'm not sure how much longer Paul and Will will be able to stomach fifteen-minute deploys. (I, naturally, have the patience of a gnat saint.) For now, I'm investigating alternatives and taking note of our DeployBot pain points.