I've started using Docker in my home setup for a side project that I tinker with.
I just finished setting up multiple build systems in which I can compile that project for x86 and x64 using Docker containers.
Why?
The compilation environment before Docker was a set of virtual machines that had the exact dependencies required to compile the application. To keep the environments clean, these virtual machines were used for nothing other than compilation. This is wasteful, but unavoidable ... until I learnt about Docker.
Now the dependencies required to compile the application are described in a Dockerfile and the image is pushed to my in-house repository. Side note: there was no need for the repository to be in-house, it was just some fun thing I was trying out ... and it worked, so why not use it?
The Dockerfile is the perfect place to document the dependencies and the uploaded image allows me to instantiate it as a container on any machine at home.
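Publishing the image is just a build, a tag, and a push. A sketch of those commands, assuming a hypothetical in-house registry at registry.home:5000 and an image named builder:

    # Build the image from the Dockerfile in the current directory
    docker build -t builder .
    # Tag it for the in-house registry (registry.home:5000 is illustrative)
    docker tag builder registry.home:5000/builder
    # Push it so any machine at home can pull and run it
    docker push registry.home:5000/builder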
How?
I first started with a Dockerfile that would create a monolithic big image. Something like
FROM ubuntu:15.04
RUN apt-get update ; apt-get -y dist-upgrade ; apt-get -y install dependencies
This generated an image that was about a GB in size. The moment I started making the next image I realized that it would eat up a whole lot of space on my laptop. This would not do.
Docker's storage layers are based on sharing. The next step therefore was to begin sharing.
For this, I created a Dockerfile that would first be an "updated base image" of Ubuntu.
The second Dockerfile would build on top of that to create a layer that held the first set of common dependencies.
The third Dockerfile would build on top of the second to create a layer for the not-so-shareable dependencies.
This organisation allowed me to create three different final images that shared about 400-500 MB of data between themselves.
So the first Dockerfile was (named firstlevel):
FROM ubuntu:15.04
RUN apt-get update ; apt-get -y dist-upgrade ; apt-get -y install binutils vim build-essential
The second was:
FROM firstlevel
RUN apt-get update ; apt-get -y install second_dependencies
And then the third was
FROM secondlevel
RUN apt-get update ; apt-get -y install third_dependencies
And to make the creation of these images simple, I created a directory for each Dockerfile with a Makefile next to it, plus a controller Makefile that invokes make in each sub-directory.
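The controller Makefile might look something like this (a sketch; the directory names firstlevel, secondlevel, and thirdlevel are illustrative, and each sub-directory's own Makefile just runs docker build -t with the matching tag):

    # Controller Makefile: build each image in dependency order.
    SUBDIRS = firstlevel secondlevel thirdlevel

    .PHONY: all $(SUBDIRS)
    all: $(SUBDIRS)

    # Later levels build FROM the earlier images, so order matters.
    secondlevel: firstlevel
    thirdlevel: secondlevel

    $(SUBDIRS):
    	$(MAKE) -C $@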
Running the topmost make builds all 6 images automatically within about 10 minutes. There's very little interaction required to recreate everything.
Compare and contrast this against what I need to do with my virtual machines for a moment.
To recreate my virtual machines, I would have to install the operating system or clone it, then run the script (which I should have created first to document what the dependencies are), and only then is it finally ready. Recreating one build system would take close to 2-3 hours.
Instead, I have 6 Docker images ready in 10 minutes.
Cool!
Now that they're ready, it's also simple to start off a compilation on any physical machine. I'm not limited to running the compilation ONLY on the virtual machine created for that purpose.
The container can be started on my laptop, my desktop, my friend's laptop, any virtual machine I have or even on an Amazon EC2 instance.
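Kicking off a build is then just a matter of mounting the source tree into a container on whatever machine is handy. A sketch, assuming the final image is tagged thirdlevel and the build is driven by make:

    # Mount the current directory into the container and build inside it;
    # --rm throws the container away afterwards, keeping the machine clean.
    docker run --rm -v "$PWD":/src -w /src thirdlevel make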
Wow!
Machines ought to be made simple to use. Even for hackers like me.
Remember: Cattle, not pets.
*Wow intensifies*