Table of Contents
Objective
After completing this training, you will be able to deploy Docker and Docker containers
using best practices.
Overview
You will:
Know what's next: the future of Docker, and how to get help.
Agenda
Part 1
Part 2
About Docker
Instant Gratification
Install Docker
Initial Images
Connecting Containers
Building on a Container
Daemonizing Containers
About Docker.
Founded in 2009.
Docker Hub - online home and hub for managing your Docker containers.
Docker Services & Training - professional services and training to help you get
the best out of Docker.
The problem
Container technology has been around for a while (cf. LXC, Solaris Zones, BSD
Jails).
But their use was largely limited to specialized organizations with special tools
and training. Containers were not portable.
Analogy: Shipping containers are not just steel boxes. They are steel boxes that
are a standard size, and have hooks and holes in all the same places
Process Simplification
Docker can simplify both workflows and communication, and that usually starts with
the deployment story. Traditionally, the cycle of getting an application to production
often looks something like this.
Revision Control
The first thing that Docker gives you out of the box is two forms of revision control.
One is used to track the filesystem layers that images are made up of, and the other is
a tagging system for built containers.
Filesystem layers
Docker containers are made up of stacked filesystem layers, each identified by a
unique hash, where each new set of changes made during the build process is laid on
top of the previous changes. That's great because it means that when you do a new
build, you only have to rebuild the layers that include and build upon the change
you're deploying. This saves time and bandwidth, because containers are shipped
around as layers and you don't have to ship layers that a server already has stored.
Image tags
Docker has a built-in mechanism for versioning: it provides image tagging at
deployment time. You can leave multiple revisions of your application on the
server and just tag them at release.
Building
Docker doesn't solve all the problems, but it does provide a standardized
tool configuration and tool set for builds. That makes builds a lot easier
to share and reproduce.
The Docker command-line tool contains a 'build' command that will consume a Dockerfile
and produce a Docker image. Each command in a Dockerfile generates a new layer
in the image. The great part of all of this standardization is that any engineer who
has worked with a Dockerfile can dive right in and modify the build of any other
application.
Isolation
Containers are isolated from each other. You can put limits on their resources,
but the default container configuration just has them all sharing CPU and memory
on the host system.
Stateless Applications
A good example of the kind of application that containerizes well is a web application.
If you think about your web application, though, it probably has local state that you rely on,
like configuration files. That might not seem like a lot of state, but it means that you've limited
the reusability of your container, and made it more challenging to deploy into different
environments without maintaining configuration data in your codebase.
In many cases, the process of containerizing your application means that you move
configuration state into environment variables that can be passed to your application
from the container. This allows you to easily do things like use the same container to
run in either production or staging environments. In most companies, those environments
would require many different configuration settings, from the names of databases to
the hostnames of other service dependencies.
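A sketch of this pattern (variable names hypothetical): a container's start script reads everything it needs from the environment, so the same image runs unchanged in any environment.

```shell
# Hypothetical start script: all configuration comes in via environment
# variables, e.g.  docker run -e APP_DB_HOST=db.prod.example.com ...
APP_DB_HOST="${APP_DB_HOST:-localhost}"   # falls back to a dev default
APP_DB_NAME="${APP_DB_NAME:-app_dev}"
echo "connecting to ${APP_DB_NAME} at ${APP_DB_HOST}"
```

Staging and production differ only in the `-e` flags passed to docker run, not in the image.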
With containers, you might also find that you are always decreasing the size of your
containerized application as you optimize it down to the bare essentials required to run.
Installing Docker
Objectives
At the end of this lesson, you will be able to:
Install Docker.
Installing Docker
Docker is easy to install.
It runs on:
Ubuntu;
Debian;
Fedora;
Gentoo.
Hello World
$ sudo docker run busybox echo datreon
datreon
Docker architecture
Log out
$ exit
Install Docker
Section summary
We've learned how to:
Install Docker.
Having a Docker Hub account will allow us to store our images in the registry.
Note: if you have an existing Index/Hub account, this step is not needed.
Login
Let's use our new account to login to the Docker Hub!
$ docker login
Username: my_docker_hub_login
Password:
Email: my@email.com
Login Succeeded
{
  "https://index.docker.io/v1/": {
    "auth":"amFtdHVyMDE6aTliMUw5ckE=",
    "email":"education@docker.com"
  }
}
The auth section is the Base64 encoding of your user name and password.
It should be owned by your user with permissions of 0600.
You should protect this file!
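To see why the file needs protecting, you can decode the auth value yourself — it is nothing more than base64("username:password"). A sketch using the example value shown above:

```shell
# Decode the "auth" value from the config file above; it is just the
# Base64 encoding of "username:password".
AUTH="amFtdHVyMDE6aTliMUw5ckE="
CREDS=$(echo "$AUTH" | base64 -d)
echo "$CREDS"   # -> jamtur01:i9b1L9rA
```

Anyone who can read the file can recover the password, hence the 0600 permissions.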
Section summary
We've learned how to:
Download an image.
Images
What are they?
Base images (ubuntu, busybox, fedora etc.) are what you build your own custom
images on top of.
Images are layered, and each layer represents a diff (what changed) from the
previous layer. For instance, you could add Python 3 on top of a base image.
Images are like templates or stencils that you can create containers from.
In a Docker registry.
$ docker search ...
NAME   DESCRIPTION   STARS   OFFICIAL   AUTOMATED
...    ...           0       [OK]       [OK]
Root namespace
The root namespace is for official images. They are put there by Docker Inc., but they
are generally authored and maintained by third parties.
Those images include some barebones distro images, for instance:
ubuntu
fedora
centos
User namespace
The user namespace holds images for Docker Hub users and organizations.
For example:
datreon/docker-fundamentals-image
Self-Hosted namespace
This namespace holds images which are not hosted on Docker Hub, but on third party
registries.
They contain the hostname (or IP address), and optionally the port, of the registry
server.
For example:
localhost:5000/ubuntu
Note: self-hosted registries used to be called private registries, but this is misleading!
Downloading images
We already downloaded two root images earlier:
$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED       VIRTUAL SIZE
...          latest    8144a5b2bc0c   5 days ago    835 MB
ubuntu       13.10     9f676bd305a4   7 weeks ago   178 MB
ubuntu       saucy     9f676bd305a4   7 weeks ago   178 MB
ubuntu       raring    eb601b8965b8   7 weeks ago   166.5 MB
ubuntu       13.04     eb601b8965b8   7 weeks ago   166.5 MB
ubuntu       12.10     5ac751e8d623   7 weeks ago   161 MB
ubuntu       quantal   5ac751e8d623   7 weeks ago   161 MB
ubuntu       10.04     9cc9ea5ea540   7 weeks ago   180.8 MB
ubuntu       lucid     9cc9ea5ea540   7 weeks ago   180.8 MB
ubuntu       12.04     9cd978db300e   7 weeks ago   204.4 MB
ubuntu       latest    9cd978db300e   7 weeks ago   204.4 MB
ubuntu       precise   9cd978db300e   7 weeks ago   204.4 MB
Flag              Explanation
-a, --all         Show all images, including intermediate layers.
-f, --filter=[]   Filter the output based on the given conditions.
--no-trunc        Don't truncate output (show full image IDs).
-q, --quiet       Only show image IDs.
The rmi command removes images. Removing an image also removes all the
underlying images that it depends on and were downloaded when it was pulled:
$ docker rmi [OPTIONS] IMAGE [IMAGE...]
Flag          Explanation
-f, --force   Force removal of the image.
--no-prune    Do not delete untagged parent layers.
This command removes one of the images from your machine, e.g. an image named 'test':
$ docker rmi test
The -i flag allows us to specify a file instead of trying to get a stream from the
stdin file.
Building Docker Images
Objectives
At the end of this lesson, you will be able to:
FROM specifies a source image for our new image. It's mandatory.
CMD defines the default command to run when a container is launched from this
image.
EXPOSE lists the network ports to open when a container is launched from this
image.
You can only have one CMD and one ENTRYPOINT instruction in a
Dockerfile.
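Put together, a minimal sketch of these three instructions (the application path and port are hypothetical) looks like:

```dockerfile
FROM ubuntu:14.04                   # mandatory source image
EXPOSE 8080                         # port to open when launched with docker run -P
CMD ["node", "/srv/app/server.js"]  # default command when a container starts
```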
A user image:
FROM datreon/ubuntu
Or self-hosted image:
FROM localhost:5000/node
The FROM instruction can be specified more than once to build multiple images.
FROM ubuntu:14.04
. . .
FROM fedora:20
. . .
Each FROM instruction marks the beginning of the build of a new image.
The -t command-line parameter will only apply to the last image.
An optional version tag can be added after the name of the image.
E.g.: ubuntu:14.04.
Or using the exec method, which avoids shell string expansion, and allows execution in
images that don't have /bin/sh:
RUN [ "apt-get", "update" ]
Execute a command.
If you want to start something automatically when the container runs, you should use
CMD and/or ENTRYPOINT.
When you docker run -p <port> ..., that port becomes public.
(Even if it was not declared with EXPOSE.)
When you docker run -P ... (without port number), all ports declared
with EXPOSE become public.
A public port is reachable from other containers and from outside the host.
A private port is not reachable from outside.
Note: /src/node is not relative to the host filesystem, but to the directory
containing the Dockerfile.
Otherwise, a Dockerfile could succeed on host A, but fail on host B.
The ADD instruction can also be used to get remote files.
ADD http://www.example.com/node /opt/
ADD is cached. If you recreate the image and no files have changed then a cache
is used.
If the local source is a zip file or a tarball it'll be unpacked to the destination.
Any files created by the ADD instruction are owned by root with permissions of
0755.
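A short sketch of both ADD forms (the file names are hypothetical):

```dockerfile
FROM ubuntu:14.04
# Local tarball: unpacked into the destination directory.
ADD app.tar.gz /opt/app/
# Remote file: downloaded as-is (remote files are not unpacked).
ADD http://www.example.com/node /opt/
```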
The VOLUME instruction will create a data volume mount point at the specified path.
VOLUME [ "/opt/mongo/data" ]
Data volumes persist until all containers referencing them are destroyed.
You can specify WORKDIR again to change the working directory for further operations.
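For instance (paths hypothetical):

```dockerfile
FROM ubuntu:14.04
WORKDIR /srv            # later instructions run from /srv
RUN touch app.log       # creates /srv/app.log
WORKDIR /srv/app        # changes the working directory again
CMD ["./start.sh"]      # runs with /srv/app as the working directory
```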
This will result in an environment variable being created in any containers created from
this image of
node_PORT=8080
You can also specify environment variables when you use docker run.
$ docker run -e node_PORT=8000 -e node_HOST=www.example.com ...
Means we don't need to specify nginx -g "daemon off;" when running the
container.
Instead of:
$ docker run -d --name=node_container
node:1.0
If we were to run:
$ docker run datreon/ls -l
Instead of trying to run -l, the container will run /bin/ls -l.
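A Dockerfile producing that behaviour might look like this (a sketch; the actual datreon/ls image may differ):

```dockerfile
FROM ubuntu:14.04
ENTRYPOINT ["/bin/ls"]   # always runs; docker run arguments are appended to it
CMD ["-a"]               # default arguments, replaced by e.g. -l at run time
```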
This will override the options CMD provided with the new flags.
Working with Images
Pulling images
Earlier in this training we saw how to pull images down from the Docker Hub.
$ docker pull ubuntu:14.04
This will connect to the Docker Hub and download the ubuntu:14.04 image to allow
us to build containers from it.
We can also do the reverse and push an image to the Docker Hub so that others can use
it.
You will login using the Docker Hub account name, password and email address you
created earlier in the training.
{
  "https://index.docker.io/v1/": {
    "auth":"amFtdHVyMDE6aTliMUw5ckE=",
    "email":"datreon@gmail.com"
  }
}
If the image doesn't exist on the Docker Hub, a new repository will be created.
You can push an updated image on top of an existing image. Only the layers
which have changed will be updated.
When you pull down the resulting image, only the updates will need to be
downloaded.
You can also browse to the Tags tab to see image tags, or navigate to a link in
the "Settings" sidebar to configure the repo.
Automated Builds
In addition to pushing images to Docker Hub you can also create special images called
Automated Builds. An Automated Build is created from a Dockerfile in a GitHub
repository.
This provides a guarantee that the image came from a specific source and allows you to
ensure that any downloaded image is built from a Dockerfile you can review.
If you don't have a repository with a Dockerfile, you can fork https://github.com/
docker-training/staticweb, for instance.
Automated Building
Once configured, your Automated Build will automatically start building an image from
the Dockerfile contained in your Git repository.
Every time you make a commit to that repository a new version of the image will be
built.
Section summary
We've learned how to:
Daemonized containers
The docker run command is launched with the -d command line flag.
Interactive containers
Attaches a pseudo-terminal, i.e. lets you get input and output from the container.
The container runs until its controlling process stops, or until it is stopped or
killed.
While starting the daemon, you can run it with arguments that control
the Domain Name (DNS) configurations, storage drivers, and execution
drivers for the containers:
$ docker -d -D -e lxc -s btrfs --dns 8.8.8.8 --dns-search example.com
Flag                       Explanation
-d                         Run Docker as a daemon.
-D                         Enable debug mode.
-e [option]                Set the execution driver (e.g. lxc).
-s [option]                Set the storage driver (e.g. btrfs).
--dns [option(s)]          This sets the DNS server (or servers) for all Docker containers.
--dns-search [option(s)]   This sets the DNS search domain (or domains) for all Docker
                           containers.
-H [option(s)]             This is the socket (or sockets) to bind to. It can be one or more of
                           tcp://host:port, unix:///path/to/socket, fd://*
                           or fd://socketfd.
Flag                 Explanation
-a, --attach=[]      Attach to stdin, stdout or stderr.
-d, --detach         Run the container in the background (detached mode).
-i, --interactive    Run the container in interactive mode (keeps the stdin file open).
-t, --tty            Allocate a pseudo-terminal.
-p, --publish=[]     Publish a container's port to the host.
--rm                 Automatically remove the container when it exits.
--privileged         Give extended privileges to the container.
-v, --volume=[]      Bind mount a volume.
--volumes-from=[]    Mount volumes from the specified container(s).
-w, --workdir=""     Set the working directory inside the container.
--name=""            Assign a name to the container.
-h, --hostname=""    Set the container's hostname.
-u, --user=""        Run as the specified user name or UID.
-e, --env=[]         Set environment variables.
--env-file=[]        Read environment variables from a file.
--dns=[]             Set custom DNS servers for the container.
--dns-search=[]      Set custom DNS search domains for the container.
--link=[]            Add a link to another container.
-c, --cpushares=0    CPU shares (relative weight).
--cpuset=""          CPUs in which to allow execution (e.g. 0-3).
-m, --memory=""      Memory limit for the container.
--restart=""         Restart policy to apply when the container exits.
--cap-add=""         Add a Linux capability to the container.
--cap-drop=""        Drop a Linux capability from the container.
--device=""          Add a host device to the container.
Launching a container
Let's create a new container from the ubuntu image:
$ docker run -i -t ubuntu:14.04 /bin/bash
root@268e59b5754c:/#
We've specified the ubuntu:14.04 image from which to create our container.
Container status
You can see container status using the docker ps command.
e.g.:
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Since the container has stopped we can show it by adding the -l flag. This shows the
last run container, running or stopped.
$ docker ps -l
CONTAINER ID   IMAGE          COMMAND     CREATED         STATUS   PORTS   NAMES
a2d4b003d7b6   ubuntu:12.04   /bin/bash   5 minutes ago   Exit 0           sad_pare
$ docker ps
CONTAINER ID   IMAGE                          COMMAND                CREATED         STATUS           PORTS                               NAMES
...            ubuntu:12.04                   /bin/bash              5 minutes ago   Up 5 minutes                                         sad_pare
...            nathanleclaire/devbox:latest   /bin/bash -l           16 hours ago    Up ... minutes                                       ...
b2d...         training/webapp:latest         python -m SimpleHTTP   16 hours ago    Up ... minutes   5000/tcp, 0.0.0.0:49154->8000/tcp   ...
...            training/webapp:latest         python -m SimpleHTTP   16 hours ago    Up ... minutes                                       ...
...            nathanleclaire/devbox:latest   /bin/bash -l           16 hours ago    Up ... minutes                                       bar
$ docker ps -l -q
a2d4b003d7b6
This shows the ID (and only the ID!) of the last container we started.
Flag           Explanation
-a, --all      Show all containers, including stopped ones.
-q, --quiet    Only display container IDs.
-s, --size     Display container sizes.
-l, --latest   Show only the last created container, running or stopped.
-n=""          Show the n last created containers, running or stopped.
--before=""    Show containers created before the given container ID or name.
--after=""     Show containers created after the given container ID or name.
Here we've used the --format flag and specified a single value from our inspect hash
result. This will return its value, in this case a Boolean value for the container's status.
(Fun fact: the argument to --format is a template using the Go text/template
package.)
Being lazy
The container will be restarted using the same options you launched it with.
Note: if the prompt is not displayed after running docker attach, just press "Enter"
one more time. The prompt should then appear.
You can also attach to the container using its name.
$ docker attach sad_pare
root@<yourContainerID>:/#
If we ran exit here, the container would stop again because the /bin/bash process
would be ended.
You can detach from the running container using <CTRL+p><CTRL+q>.
The -a flag combines the function of docker attach when running the docker
start command.
The rm command
CPU shares
Docker thinks of CPU in terms of CPU shares. The computing power of all the CPU
cores in a system is considered to be the full pool of shares, and 1024 is the number
that Docker assigns to represent the full pool. By configuring a container's CPU
shares, you can dictate how much CPU time the container gets. If you want the
container to be able to use at most half of the computing power of the system, then
you would allocate it 512 shares. Note that these are not exclusive shares, meaning
that assigning all 1024 shares to a container does not prevent all other containers
from running.
$ docker run -ti -c 1024 ubuntu:14.04 /bin/bash
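The arithmetic behind the flag is simple enough to sketch: a container's `-c` value is its fraction of the 1024-share pool, and it only matters while other containers are competing for CPU.

```shell
# 512 of the 1024 total shares means at most half of the host's CPU
# while other containers are competing for it.
TOTAL_SHARES=1024
CONTAINER_SHARES=512
echo "$(( CONTAINER_SHARES * 100 / TOTAL_SHARES ))% of the CPU under contention"
# The corresponding (hypothetical) invocation would be:
#   docker run -ti -c 512 ubuntu:14.04 /bin/bash
```

When no other container wants the CPU, the same container may use all of it.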
Memory
We can control how much memory a container can access in a manner similar to
constraining the CPU. There is, however, one fundamental difference: while
constraining the CPU only impacts the application's priority for CPU time, the
memory limit is a hard limit. Even on an unconstrained system with 96 GB of free
memory, if we tell a container that it may only have access to 24 GB, then it will
only ever get to use 24 GB, regardless of the free memory on the system.
When you docker commit, the content of volumes is not brought into the
resulting image.
In another terminal, let's start another container with the same volume.
$ docker run --volumes-from alpha ubuntu cat /var/log/now
Fri May 30 05:06:27 UTC 2014
You have a separate disk with better performance (SSD) or resiliency (EBS) than
the system disk, and you want to put important data on that disk.
You want to share your source directory between your host (where the source
gets edited) and the container (where it is compiled or executed).
We can see that our volume is present on the file system of the Docker host.
All modifications done to /numbers in the container will also change /tmp/numbers
on the host!
This pattern is frequently used to give access to the Docker socket to a given
container.
Warning: when using such mounts, the container gains access to the host. It can
potentially do bad things.
Section summary
We've learned how to:
We've used the datreon/node image, which happens to have node & npm installed.
IMAGE   COMMAND   CREATED   STATUS   PORTS                     NAMES
...     ...       ...       ...      0.0.0.0:49153->8080/tcp   ...
The -P flag maps a random high port, here 49153, to port 8080 inside the
container. We can see it in the PORTS column.
The other columns have been scrubbed out for legibility here.
We specify the container ID and the port number for which we wish to return a mapped
port.
We can also find the mapped network port using the docker inspect command.
$ docker inspect -f "{{ .HostConfig.PortBindings }}" <yourContainerID>
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"49153"}]}
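If you only want the mapped host port from that output, a little shell is enough to extract it. This sketch uses the literal JSON above as a stand-in for a live `docker inspect` call:

```shell
# Stand-in for the `docker inspect` PortBindings output shown above.
BINDINGS='{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"49153"}]}'
# Pull out the first HostPort value with sed.
PORT=$(echo "$BINDINGS" | sed -n 's/.*"HostPort":"\([0-9]*\)".*/\1/p')
echo "$PORT"   # -> 49153
```

In practice the simpler route is the purpose-built command: docker port <yourContainerID> 80.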
This maps port 8080 on the host to port 8080 inside the container.
Note that this style prevents you from spinning up multiple instances of the
same image (the ports will conflict).
We've again used the --format flag which selects a portion of the output
returned by the docker inspect command.
Connecting Containers
Connecting containers
Objectives
At the end of this lesson, you will be able to:
Connecting containers
We will learn how to use names and links to expose one container's port(s) to
another.
Why? So each component of your app (e.g., DB vs. web app) can run
independently with its own dependencies.
We're going to get two images: a node image and a mongodb image.
We're going to link the containers running our node backend and mongodb
using Docker's link primitive.
This time we will expose a port to our application in another container, not to the
host. But the Dockerfile syntax is identical.
Connecting Containers
Section summary
We've learned how to: