It seems that Docker is all the rage these days. Docker has popularized a powerful paradigm for repeatable, isolated deployments of pretty much any application you can run on Linux. There are numerous highly sophisticated orchestration systems which can leverage Docker to deploy applications at massive scale. At the other end of the spectrum, there are quick ways to get started with automated deployment or orchestrated multi-container development environments.
When you're just getting started, this dazzling array of tools can be as bewildering as it is impressive.
A big part of the promise of Docker is that you can build your app in a standard format on any computer, anywhere, and then run it. As docker.com puts it:
“... run the same app, unchanged, on laptops, data center VMs, and any cloud ...”
So when I started approaching Docker, my first thought was: before I mess around with any of this deployment automation stuff, how do I just get an arbitrary Docker container that I've built and tested on my laptop shipped into the cloud?
There are a few documented options that I came across, but they all had drawbacks, and none made quite the right tradeoff for just starting out:
- I could push my image up to the public registry and then pull it down. While this works for me on open source projects, it doesn't really generalize: I don't want every private project's image published to the world.
- I could run my own registry on a server and push it there. I can either run it over plain text and risk the unfortunate security implications that implies, deal with the administrative hassle of running my own certificate authority and propagating trust out to my deployment node, or spend money on a real TLS certificate. Since I'm just starting out, I don't want to deal with any of these hassles right away.
- I could re-run the build on every host where I intend to run the application. This is easy and repeatable, but unfortunately it means that I'm missing part of that great promise of Docker: I'm potentially running subtly different images in development, test, and production.
I think I have figured out a fourth option that is super fast to get started with, as well as being reasonably secure.
What I have done is:
- run a local registry
- build an image locally - testing it until it works as desired
- push the image to that registry
- use SSH port forwarding to "pull" that image onto a cloud server, from my laptop
Before running the registry, you should set aside a persistent location for the registry's storage. Since I'm using boot2docker, I stuck this in my home directory, like so:
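As a minimal sketch - the `/home/docker/registry` path here is just the home directory of boot2docker's default `docker` user; any persistent path on your docker host will do:

```shell
# create a directory on the boot2docker VM to hold the registry's data
boot2docker ssh mkdir -p /home/docker/registry
```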
To run the registry, you need to do this:
```shell
# the host path (/home/docker/registry) is the persistent directory
# set aside above; adjust it to your own location
docker run --name=registry --rm=true \
    -p 5000:5000 \
    -e GUNICORN_OPTS=[--preload] \
    -e STORAGE_PATH=/registry \
    -v /home/docker/registry:/registry \
    registry
```
To briefly explain each of these arguments: `--name` is just there so I can quickly identify this as my registry container in `docker ps` and the like; `--rm=true` is there so that I don't create detritus from subsequent runs of the registry; `-p 5000:5000` exposes the registry to the docker host; `GUNICORN_OPTS=[--preload]` is a workaround for a small bug in the registry image; `STORAGE_PATH=/registry` tells the registry to look in `/registry` for its images; and the `-v` option points `/registry` at the directory we previously set aside for persistent storage.
It's important to understand that this registry container only needs to be running for the duration of the commands below. Spin it up, push and pull your images, and then you can just shut it down.
Next, you want to build your image, tagging it with the address of your registry.
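For example - assuming boot2docker's default VM address of `192.168.59.103` and a hypothetical image name of `mydockerapp` (substitute your own):

```shell
# the registry's host:port prefix in the tag tells docker where to push it later
docker build -t 192.168.59.103:5000/mydockerapp .
```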
Assuming the image builds without incident, the next step is to send the image to your registry.
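Continuing with the same hypothetical tag from the build step:

```shell
# upload the freshly built image to the registry running on the docker host
docker push 192.168.59.103:5000/mydockerapp
```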
Once that has completed, it's time to "pull" the image on your cloud machine, which - again, if you're using boot2docker, like me - can be done like so:
```shell
# "my.cloud.server.example.com" and "mydockerapp" are placeholders for your own;
# -R forwards the server's port 5000 back to the registry on the boot2docker VM
ssh -t -R 127.0.0.1:5000:$(boot2docker ip 2>/dev/null):5000 \
    root@my.cloud.server.example.com \
    'docker pull 127.0.0.1:5000/mydockerapp'
```
If you're on Linux and simply running Docker on a local host, then you don't need the "boot2docker ip" indirection; you can forward directly from your local docker host:
```shell
# same as above, but the registry is listening on localhost directly
ssh -t -R 127.0.0.1:5000:127.0.0.1:5000 \
    root@my.cloud.server.example.com \
    'docker pull 127.0.0.1:5000/mydockerapp'
```
Finally, you can now run this image on your cloud server. You will of course need to decide on appropriate configuration options for your application, such as which ports to expose, environment variables to set, and volumes to mount; the name, port, and addresses below are hypothetical placeholders:

```shell
# -d detaches; --restart=always keeps the container running across reboots
ssh -t root@my.cloud.server.example.com \
    'docker run -d --restart=always --name=mydockerapp \
        -p 8080:8080 \
        127.0.0.1:5000/mydockerapp'
```
To avoid network round trips, you can even run the previous two steps as a single command:
```shell
# pull and run over a single ssh connection; names and ports are placeholders
ssh -t -R 127.0.0.1:5000:$(boot2docker ip 2>/dev/null):5000 \
    root@my.cloud.server.example.com \
    'docker pull 127.0.0.1:5000/mydockerapp && \
     docker run -d --restart=always --name=mydockerapp \
        -p 8080:8080 \
        127.0.0.1:5000/mydockerapp'
```
I would not recommend setting up any intense production workloads this way; those orchestration tools I mentioned at the beginning of this article exist for a reason, and if you need to manage a cluster of servers you should probably take the time to learn how to set up and manage one of them.
However, as far as I know, there's also nothing wrong with putting your application into production this way. If you have a simple single-container application, then this is a reasonably robust way to run it: the docker daemon will take care of restarting it if your machine crashes, and running this command again (with a `docker rm -f mydockerapp` before the `docker run`) will re-deploy it in a clean, reproducible way.
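That re-deploy sequence, sketched on the cloud server with the hypothetical `mydockerapp` name (the port and image address are placeholders too):

```shell
# tear down the old container, then start a fresh one from the pulled image
docker rm -f mydockerapp
docker run -d --restart=always --name=mydockerapp \
    -p 8080:8080 \
    127.0.0.1:5000/mydockerapp
```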
So if you're getting started exploring Docker and you're not sure how to get a couple of apps up and running just to give it a spin, hopefully this can set you on your way quickly!
(Many thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Jean-Paul Calderone, Alex Gaynor, and Julian Berman for their thoughtful review. Any errors are surely my own.)