Running Python apps is not that hard once you’ve done it and learned the ins and outs a few times. The bigger issue is dealing with deployment and automated continuous integration. I have worked my way toward an approach that works fantastically well. It is probably not new to most, but if it is new to you, read on.
Using env vars
First and foremost, use environment variables, à la 12factor.net. This is a hugely important way to improve portability across configuration management tools, as well as across whatever infrastructure your application lives on. Env vars are a conduit for passing information easily across just about anything. You can even do cooler stuff, like feature flipping.
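In Python this amounts to reading `os.environ` at startup. A minimal sketch, where the variable names (`DATABASE_URL`, `FEATURE_NEW_UI`) are illustrative placeholders rather than fixed conventions:

```python
import os

# Configuration comes from the environment, with safe local defaults.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local.db")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

# Feature flipping: toggle behavior per deployment without a code change.
def new_ui_enabled():
    return os.environ.get("FEATURE_NEW_UI", "off") == "on"
```

The same app can then run unchanged on a laptop, a CI box, or production; only the environment differs.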
Deploying apps with VCS
This is pretty common. You have a source repository where your code lives under version control (right!?), whether it be git, subversion, bazaar, mercurial, etc., and you download the code into some directory on the node you wish to deploy to. Then you do your service setup (say, launching the appropriate web server and starting a database) to get things running.
The first problem with the above
There is one issue that consistently came up when I did deployments this way: authentication. When you want to pull down your code, you have to provide some kind of authentication mechanism for your source control. This is typically done with ssh for security reasons. However, if your source code is publicly available, it can be done with https.
But this requires maintaining ssh keys on the host, or hard-coding the username/password.
You have app dependencies, right?
Chances are great that your app has dependencies. Let’s say you’re developing a Flask app. You will need Flask, itself having a plethora of dependencies, along with any other app extensions you want. So what do you do? Typically this is approached with a `requirements.txt` file, which is either maintained by hand or generated with `pip freeze > ...`. Unfortunately, your app itself is still deployed through version control, while its dependencies enjoy the simple pip method. And if for some reason you use setuptools, you have to keep `requirements.txt` and `install_requires=[]` in sync (and don’t think about reading the file from your setuptools script; there are pathing issues that make this a brittle solution on many systems!)
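To illustrate the pathing issue, here is a sketch of the naive pattern and the usual workaround; the helper names are hypothetical, not from any library:

```python
import os

# The naive pattern: read requirements.txt with a relative path.
# open() resolves against the current working directory, not the
# location of setup.py, so this breaks whenever setup.py is invoked
# from another directory (e.g. `pip install /path/to/app`).
def brittle_requirements():
    with open("requirements.txt") as f:
        return [line.strip() for line in f if line.strip()]

# The usual workaround anchors the path to the setup script itself,
# but it is still extra machinery you only need because the
# dependency list lives outside setup.py.
def anchored_requirements():
    here = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(here, "requirements.txt")) as f:
        return [line.strip() for line in f if line.strip()]
```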
Enter packaging
So the answer turns out to be really simple: make your app a package! This produces more than one useful effect, right out of the gate:
- Makes your app more dependable, because packaging enforces good standards and makes it work with a plain `pip install` (à la the 12-factor app).
- Makes your dependencies install automatically before your app (via the setuptools `install_requires=[]` keyword argument).
- Allows you to do a `pip install myapp` and be done.
I know this has its caveats…
If, say, you have a self-hosted PyPI, then with the above approach you would still have to authenticate to it. And there is some potential configuration to do here (e.g. `pip.conf` and `.pypirc` files). But the ease of having pip be your one handler for all packages and dependencies negates a lot of this, in my opinion, and these caveats only apply in certain cases.
And when it comes to a simple install, if the package is public there is no authentication to worry about.
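For the self-hosted case, the configuration amounts to a couple of small files. The server URL and credentials below are placeholders for your own PyPI:

```ini
# ~/.config/pip/pip.conf -- point pip at your self-hosted index
[global]
index-url = https://pypi.example.com/simple/

# ~/.pypirc -- credentials for uploading packages to the same server
[distutils]
index-servers = internal

[internal]
repository = https://pypi.example.com/
username = deploy
```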