Title: A Brief Introduction to Docker
Author: Bob Schmidt
Date: Wed, 08 November 2017 17:00:11 +00:00
Summary: Silas S. Brown shares his experiences of setting up a virtual appliance.
Body:
Docker is basically a convenient way of setting up chroot jails on the GNU/Linux platform, but some companies now use it to deploy software to their servers. Docker is like having a lightweight virtual machine, except only on Linux (don’t expect to be able to run it on Windows or Mac except inside a Linux VM). One advantage of Docker over virtual machines is ease of initial setup. For example, on several versions of Red Hat Linux as used in some corporate environments, VirtualBox won’t run without significant extra effort, but Docker ‘just works’. Want to do something on a virtual Debian box? Install Docker, ensure sudo dockerd is running, and do:

docker pull debian
docker run -it debian /bin/bash

and you should be away (except you’ll need an apt-get update before doing any serious amount of package installation).
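For instance, a first session inside the container might go like this (build-essential is just an illustrative package choice):

apt-get update
apt-get install -y build-essential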
But as soon as that first shell exits, any changes you made to the system (such as installing extra packages and changing the configuration) will be lost. That might be OK for a one-off experiment, but for serious use you probably want some of the configuration to persist. That’s usually done by writing build instructions for your own derived Docker image.
Just as the make command uses a file typically named Makefile, so Docker uses a file called Dockerfile, which should be placed at the top of your source tree (or at least the part of it relevant to the Docker image you’re creating). Dockerfiles almost always name an existing GNU/Linux distribution to use as a base, followed by files or directories to copy into the container and setup commands to run:
FROM centos:6.8
COPY myDirectory /etc/myDirectory
COPY src/*.c /home/user/src/
RUN yum install -y myPackage
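Building and running the resulting image is then just (my-image being an arbitrary tag chosen for this sketch):

docker build -t my-image .
docker run -it my-image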
If you need to start daemons, however, you shouldn’t do so as an effect of the RUN commands here, since these run only when the image is generated, not every time it’s started. You may add a single CMD command to the Dockerfile saying what command should be run when docker run is called on the image (there’s also an alternative called ENTRYPOINT, which can take additional command-line arguments from the docker run command, in which case CMD is repurposed to specify default arguments to add when these are missing). This command will be responsible for starting any necessary daemons, running the foreground process and, if necessary, cleanly shutting down the daemons afterwards (otherwise they’ll all be aborted as soon as the master process exits).
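As a sketch (start.sh, mydaemon and my-program are placeholders, and I assume a base image that provides the service command), that arrangement might look like:

COPY start.sh /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]
CMD ["--default-option"]

with start.sh along the lines of:

#!/bin/sh
# Start the daemon, run the foreground process
# (passing through any arguments), then shut down cleanly.
service mydaemon start
my-program "$@"
status=$?
service mydaemon stop
exit $status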
It may also be worth noting that Docker will try to cache all intermediate states between commands in the Dockerfile, so combining multiple RUN commands into one can save disk space. (RUN commands are also expected to produce practically-identical results each time they are run: the cache will be refreshed if the source file of a COPY is changed, but it will not be refreshed just because the expected result of some RUN command changes, unless the RUN command itself is changed. This might affect you if you try to install your own work via a network-fetching RUN instead of via a COPY: changes you make upstream will not be reflected in the Docker build unless you give Docker some other reason to invalidate its cache before reaching that RUN, such as by making changes to a file that’s COPY’d in first.)
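For example (the package names are illustrative), the two commands

RUN yum install -y packageA
RUN yum install -y packageB

produce two cached intermediate states, whereas the single command

RUN yum install -y packageA packageB

produces only one.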
Base images can be found on https://hub.docker.com/explore/, but the presence of application-specific base images (nginx, golang etc.) is slightly misleading: you can’t ‘import’ multiple applications by depending on multiple base images, so it’s not as much of a ‘package manager’ as it seems. Granted, if you need one base environment to compile something, but then wish to copy only its final binary into another base environment (discarding the compiler etc.), you can do this with Docker’s ‘multi-stage builds’:
FROM golang:1.7.3 as builder
RUN build-my-Go-program
FROM centos:6.8
COPY --from=builder /path/to/my/binary .
but you’d then have to make sure all the right libraries are in the final image. This might be useful if you need a compiler that’s harder to set up on all development machines due to distribution differences, and don’t want the bloat of putting the compiler in the final image. But beyond this, there unfortunately doesn’t yet seem to be a way of asking Docker to ‘merge’ base images, so you can’t use Docker itself as a package manager. At least it makes it easier to obtain minimal distributions (which can be different from the distribution you’re running) and use their own package managers. Please change the distro’s package-manager configuration to use your nearest mirror before downloading large packages with it, especially in a Docker image that’s likely to be re-built frequently (in extreme cases it might even be a good idea to cache some packages locally).
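As a sketch for a Debian-based image (mirror.example.com stands in for whichever mirror is nearest to you, and I assume the default sources list points at deb.debian.org):

RUN sed -i 's|deb.debian.org|mirror.example.com|' /etc/apt/sources.list && apt-get update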
Further documentation can be found on docker.com; the EXPOSE command is worth a look if you run a server inside the container that you wish to be visible from the outside.
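For instance (8080 is an arbitrary port chosen for this sketch), EXPOSE 8080 in the Dockerfile documents the port the server listens on, and

docker run -p 8080:8080 my-image

actually publishes it on the host.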