Yesterday, just a night before EuroDjangoCon 2010, I was writing deployment scripts for my personal project. I have been using similar scripts for at least a year on work projects of different scopes, and I can attest that such scripts are really worth having: they save a lot of time and help you avoid confusion and mistakes.
Most of my projects are continuously improved, so the main purpose of deployment in my case is not the initial installation, but convenient and fast updating.
For each website that I manage, there are two environments set up under different domains: one is the production environment and the other is the staging/testing environment. The configuration of the staging environment mirrors the production environment. Both have their own webserver configurations, codebases, databases, and uploaded media files. The newest features are first tested in the staging environment with a copy of the live data, and only then published in the production environment.
I am using subversion for version control, with third-party modules defined as svn:externals and kept under the same codebase. Unfortunately, some of the modules have to be copied in manually if they originally come from Git, Bazaar, or other places. Still, keeping the whole codebase under SVN makes the initial installation easier and ensures that you can update the whole combination of module dependencies in one step.
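For example, a third-party app can be pinned as an external roughly like this (the repository URL, revision number, and directory layout here are placeholders, not my actual setup):

```shell
# declare an external on the directory that holds third-party modules;
# "tagging -r157 URL" pins the checkout to a specific revision
svn propset svn:externals \
    "tagging -r157 http://example.com/svn/django-tagging/trunk/tagging" externals/
svn commit -m "Pin django-tagging as an svn external" externals/
```

After that, every `svn update` of the project also brings the externals to their pinned revisions.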
In addition, I am using south migrations to make the changes to the database schema automagically.
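For reference, the south workflow looks roughly like this (the app name "blog" is a placeholder; `schemamigration` is the command name as of South 0.7):

```shell
# generate a migration file from the latest model changes
./manage.py schemamigration blog --auto
# apply any pending migrations to the database
./manage.py migrate blog
```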
Earlier the process of updating the staging environment consisted of these manual steps:
- Set up an under-construction page for the staging site, e.g. by changing httpd.conf and restarting the webserver (the exact steps depend on the server configuration).
- Export a dump of the production database and import it into the staging database.
- Copy the media uploads from the production environment to the staging environment.
- Update code from subversion trunk.
- Migrate database schema and/or do any database changes.
- Restart apache and memcached.
- Test if everything is working as expected.
- Unset under-construction page.
Then similar instructions had to be executed in the production environment:
- Set up under-construction page for the production site.
- Back up the database in case anything goes wrong, so that you can recover.
- Update code from subversion.
- Run database migrations.
- Test everything.
- Unset under-construction page.
That's quite a long routine to do manually, and when the changes were small, I was always tempted to skip some steps, which increased the risk of failure. But then I started using Fabric to automate the process. Fabric is a python-based deployment tool providing a simple API for SSH and SFTP connections as well as for local shell calls.
To use Fabric, you install it on your local machine and then write scripts called fabfiles, one for each website you are updating. For example, updating the staging environment is in all my cases as simple as running
# cd path/to/directory/with/fabfile.py/
# fab staging deploy
and then entering user passwords and answering questions like
Backup database ([y]/n)? _
The fabfiles could also run without user interaction, but I added those dialogs to be able to monitor the process and stop it at any point in case I need to do some specific additional manual work.
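Fabric ships a helper for exactly this kind of yes/no dialog (fabric.contrib.console.confirm). Just to illustrate the "([y]/n)" pattern, a minimal plain-python equivalent might look like this (the `confirm` function below is a hypothetical stand-in, not Fabric's own code):

```python
# Hypothetical stand-in for Fabric's confirm() dialog: an empty answer
# accepts the default, anything starting a "y"/"yes" means yes.
def confirm(question, default=True, ask=input):
    suffix = "([y]/n)" if default else "(y/[n])"
    answer = ask("%s %s? " % (question, suffix)).strip().lower()
    if not answer:
        return default
    return answer in ("y", "yes")
```

The `ask` parameter only exists so the prompt can be replaced in tests; in a real script you would just press Enter to accept the default.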
For security reasons, you should avoid keeping passwords under subversion or in the deployment script. It is much better to write some bash scripts on the remote servers which do the work, and then call them from your fabfile.
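As a sketch, such a server-side helper might be no more than a couple of lines (the database name, backup path, and the reliance on ~/.my.cnf for credentials are assumptions for illustration):

```shell
#!/bin/bash
# backup_db.sh -- dumps the database to a timestamped, compressed file;
# mysqldump picks up the credentials from ~/.my.cnf on the server, so no
# password appears in the fabfile or under version control
mysqldump mysite_production | gzip > "$HOME/backups/mysite_$(date +%Y%m%d_%H%M%S).sql.gz"
```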
Here is an example of a possible fabfile. Keep in mind that the different apache configurations and some bash scripts are things you have to write yourself, depending on your server configuration. The fabfile itself might also differ slightly depending on the server specifics.
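A minimal sketch of what such a fabfile might look like (Fabric 0.9-era API; the host names, paths, and the backup_db.sh helper script are placeholders, not a description of any particular server setup):

```python
# Sketch of a fabfile: one task per environment plus a shared deploy task.
from fabric.api import env, run, sudo, prompt

def staging():
    """Target the staging environment: fab staging deploy"""
    env.hosts = ["staging.example.com"]
    env.project_root = "/var/www/staging/myproject"

def production():
    """Target the production environment: fab production deploy"""
    env.hosts = ["www.example.com"]
    env.project_root = "/var/www/production/myproject"

def deploy():
    """Update the code, migrate the database, restart the services."""
    if prompt("Backup database ([y]/n)?", default="y").lower() != "n":
        # a bash helper on the server knows the database password,
        # so no credentials live in the fabfile or in subversion
        run("~/bin/backup_db.sh")
    run("cd %s && svn update" % env.project_root)
    run("cd %s && python manage.py migrate" % env.project_root)
    sudo("/etc/init.d/apache2 restart")
    sudo("/etc/init.d/memcached restart")
```

Running `fab staging deploy` from the directory containing this file would then execute deploy() against the staging host, prompting for the SSH password and the backup question along the way.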
According to Thilo Fromm, with whom I talked today at EuroDjangoCon, another way to create deployment scripts is to build linux-distribution packages which you can install using the distribution's own tools, for example apt-get for Debian or rpm for SUSE. However, in my opinion, that kind of deployment fits better in cases where you don't have control of the production server and need to deliver a final product to corporate clients, or where you need to install the same project/product on multiple servers.