Stuff That Works, Part 1: One-Command Build Environment

We are going to write a series of posts on good practices in software development, configuration management, building, testing and methodologies. The material in these posts will not be vague, abstract ideas but real lessons learned by veterans in the trenches.

This is the first post in the series, so let’s start from the beginning.

Every one of us has been in this situation: you are the new guy on the team. You have just set up your computer. Email and the web browser work. You are eager and ready to start coding!

What is the first thing you must do to start coding? You must install some tools and libraries, and you must find out where to get the source code. Which version control tool do they use here? Where is the repository? Who do you ask for access?

Do you remember how long it took before you were able to build the source code you were supposed to be working on? And build it in such a way that it really resulted in something that installs and works?

It probably took you anywhere from a few hours to a few days (and I have read horror stories where it took weeks).

Why?

Because the old hands on the team already have their own build environments set up and have had them for years. Their workstations have all the tools, all the libraries and all the little undocumented tweaks needed to make the perfect build. The build environment was never really set up; it has grown and evolved over time. Documentation, if there is any, is probably outdated.

The only way to avoid this mess is to make it possible to set up a complete build environment by running one (1) command on a clean workstation.

Why one command? If there are two commands, you might not remember them both or you might run them in the wrong order. If there are more commands, you definitely will not remember all of them.

And why would you bother with anything more than one command? If you currently need to run two commands, make a script that runs them both. Now you have only one command. You have just automated the whole setup process and reduced the number of things you need to remember about it by 50%!
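To make that concrete, here is a minimal sketch of such a wrapper (the two scripts it calls are made-up placeholders for whatever your project actually needs):

#!/bin/sh
# setup-build-env.sh -- the one command a new team member runs.
set -e                      # stop at the first error instead of limping on half set up

./install-tools.sh          # placeholder: install compilers, libraries and other tools
./checkout-sources.sh       # placeholder: check out the source trees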

What this command does is entirely up to you. It can install software from network drives, copy files, check out source code, whatever is needed to turn a freshly installed Windows, Linux or Mac machine into something that could replace your build server.

If you have such a command, it is probably a script. You do not need documentation; just read the script. If there is some obscure stuff in the script, write a comment right next to it.
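To give an idea of what such a comment looks like in practice, here is a made-up example of documenting an obscure tweak right where it lives:

# The build tools choke on some locales on a fresh install (made-up example),
# so force a known-good locale before anything else runs.
export LC_ALL=C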

Test the script. Get your boss to buy you a new, shiny, fast workstation and use the script to set up your build environment on it. (If your boss refuses, set up a fresh virtual machine or something.)
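If a spare machine is not available either, a throwaway container will do. For example, assuming Docker is installed, something like this runs the script against a completely clean Debian image (the image tag is just an example, and the URL is the same placeholder as in the transcript further down):

$ docker run --rm debian:stable sh -c \
    'apt-get update && apt-get install -y wget && wget -O- http://server/1cmdenv.sh | sh'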

And best of all, if one day you see smoke coming out of your build server, you know how to make a new server in no time at all.

We in the Linux team have a script that uses apt-get to install Bazaar, configures it, and then checks out an Arch project that has a Makefile. That Makefile is used to check out the various source trees we work with; it also knows which tools each source tree needs and installs them before checking out the source.
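Very roughly, and with all details simplified, that script looks something like this (the package names, repository URL and identity below are illustrative, not the real ones):

#!/bin/sh
# 1cmdenv.sh -- simplified sketch of the one-command setup script
set -e

sudo apt-get update
sudo apt-get install -y bzr make                  # Bazaar plus make; the real script installs more

bzr whoami "Build User <build@example.com>"       # minimal Bazaar configuration (example identity)

# Check out the top-level project whose Makefile knows about all our source trees.
bzr branch http://server/repo/universe universe   # placeholder repository URL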

This is how it would look if you wanted to make a build on a clean computer:

# One command build environment setup
$ wget -O- http://server/1cmdenv.sh | sh
$ cd universe
# Check out one source tree and install the tools it needs
$ make client-server-5.5x
$ cd client-server-5.5x
$ make testbuild
# Now install and test the build in the ./release directory

Of course, it may not be a good idea to depend on tools and libraries that are part of your OS. When you upgrade the OS, you sometimes get new versions of them that are not compatible with the way you use them. But that is a topic for another post.