Nerd Alert: Sprintly’s Continuous Integration

Published on May 10, 2012.

A few of our customers have asked for more details about the technology that makes Sprint.ly tick behind the scenes. As a result, we’re going to be writing a series of “Nerd Alert” posts covering our technology stack and other hackery we’re working on.

Here at Sprint.ly, our mission is to make people more efficient. As part of this philosophy, we believe Continuous Integration, or "CI," environments help teams move faster. They can be configured to automatically test code on commit, build packages for deployment, lint-check code, produce test-coverage reports, or tell you what code doesn't conform to your particular style guide.

While CI environments do take time to implement, and you'll need to keep up with writing tests, we believe that over the long haul they're akin to a snowball rolling down a mountain: they gain momentum over time.

So what does ours look like? Here's our list of ingredients:

  * GitHub
  * Jenkins
  * Repoman
  * Ubuntu
  * Fabric
  * Chef
  * AWS

Our code works its way through this system and, eventually, winds up on our production servers. For our purposes, the process breaks down into two major steps: continuous integration (GitHub, Jenkins, Repoman, Ubuntu, and Fabric) and deployment/systems automation (Chef, Ubuntu, AWS, and Fabric). This post focuses only on our CI environment. So what's the life cycle of a commit as it works its way through our continuous integration?

  1. You’ve just pushed some tasty new code up to GitHub.
  2. GitHub pings our CI environment (Jenkins) via their post-receive hooks.
  3. Jenkins kicks off the build process for our code, which is, essentially, a series of shell scripts. If one of them fails, the build fails; otherwise, Jenkins proceeds to the next step of the build process.
  4. In the first step of the build process, Jenkins creates a virtualenv so our tests can run in a clean environment.
  5. In the second step, Jenkins runs our test suite, generates coverage reports, and so on; a rough sketch of these two build steps follows this list. If any tests fail, an email goes out to the team and the repository is marked as being in a failed state in Jenkins.
  6. If no tests fail, Jenkins builds a Debian/Ubuntu system package for our code. Why apt instead of pip? Because our code, and we'd guess yours too, relies on all sorts of libraries, packages, and services that live outside the scope of Python's packaging system (e.g. libmemcached or Redis). Additionally, Debian/Ubuntu's package system lets you define init.d scripts and shell scripts that run at various points of the package installation process. For instance, we use postinst to compile our LESS to CSS, minify our JavaScript, and restart Apache/Celery (also sketched after this list).
  7. Once a package has been built, we use Repoman's CLI to push the new package into the sprintly-staging distribution of our own custom apt repository.
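
To make steps four and five concrete, here's a minimal sketch of what one of those Jenkins "execute shell" build steps might look like for a stack like ours. The requirements file and test runner are illustrative stand-ins, not our actual build scripts:

    #!/bin/bash -ex
    # Build inside a throwaway virtualenv so stale packages can't mask missing deps.
    virtualenv --clear ve
    . ve/bin/activate
    pip install -r requirements.txt  # requirements.txt is a stand-in for your dep list

    # Run the suite under coverage; any non-zero exit code fails the Jenkins build.
    coverage run -m pytest           # pytest is a stand-in for whatever runner you use
    coverage xml                     # writes coverage.xml for Jenkins' coverage plugins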
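
Step six's postinst hook deserves a sketch too. The case statement is the standard shape of a Debian maintainer script; the asset-compilation commands and paths here are hypothetical:

    #!/bin/sh -e
    # debian/postinst: dpkg runs this after the package's files are unpacked.
    case "$1" in
        configure)
            # Compile LESS to CSS and minify JS (commands and paths are stand-ins).
            lessc /srv/snowbird/static/app.less > /srv/snowbird/static/app.css
            # Restart the services so they pick up the new code.
            /etc/init.d/apache2 restart
            /etc/init.d/celeryd restart
            ;;
    esac
    exit 0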

At this point we have a fresh Debian/Ubuntu package of our code sitting in the sprintly-staging distribution of our package repository. Much like Debian/Ubuntu itself, we have multiple distributions that packages are promoted through: sprintly-development (not currently used), sprintly-staging, and sprintly.

The nice thing about using distributions this way is that different environments can track different distributions, each running a different version of our software. It also gives our code a sane promotion path: development ⇢ staging ⇢ production.
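
Concretely, pointing a box at an environment is just a matter of which distribution its apt sources reference. A sketch, with a hypothetical repository hostname and component:

    # /etc/apt/sources.list.d/sprintly.list on a staging box (hostname illustrative)
    deb http://apt.example.com/ sprintly-staging main

    # A production box differs only in the distribution it points at:
    # deb http://apt.example.com/ sprintly main

Promote a package to a new distribution, and every box tracking that distribution can pick it up on its next apt-get update.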

Luckily, Repoman's CLI lets us promote packages with ease, and we've wrapped it in a simple Fabric command that moves a package from one distribution to another. For example, the following promotes our snowbird package from sprintly-staging to sprintly:

$ fab promote_package:snowbird,sprintly-staging,sprintly

So after a successful build we can immediately run apt-get update && apt-get install -y snowbird to deploy the package on any machine tracking our sprintly-staging distribution. After running the Fabric command above, the same apt-get workflow deploys the code to our production environment.

We’ve invested heavily in this type of automation and it’s paying dividends for us. Our test coverage and CI environment are, essentially, our last line of defense against shipping crappy code. Do we lament our Jenkins overlord from time to time? Yes, but more often we’re thankful that it caught a bug or revealed an issue prior to deployment.

Next up will be more information about how we use Fabric, Chef, Debian/Ubuntu, and AWS to automate our systems and infrastructure. More to come!