RUN <build> RUN <install test dependencies> COPY <test data and fixtures> RUN <unit tests> FROM <baseimage> RUN <install dependencies> COPY <source> RUN <build> CMD, EXPOSE ... ``` * The build fails as soon as an instruction fails * If `RUN <unit tests>` fails, the build doesn't produce an image * If it succeeds, it produces a clean image (without test libraries and data) .debug[[intro/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Dockerfile_Tips.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://upload.wikimedia.org/wikipedia/commons/4/4d/Locomotive_4700_with_a_container_train_at_Concordancia_de_Poceirao.jpg)] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous section](#toc-tips-for-efficient-dockerfiles) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-container-networking-basics) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers ![Markings on container door](images/title-naming-and-inspecting-containers.jpg) .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Easily reference a container. * Ensure the uniqueness of a specific container. We will also see the `inspect` command, which gives a lot of details about a container. .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. 
``` .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce the uniqueness of a given resource. .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect <containerID> [{ ... (many pages of JSON here) ... ``` There are multiple ways to consume that information. .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use `jq`! (It's great.) ```bash $ docker inspect <containerID> | jq . 
``` * We will see a better solution which doesn't require extra tools. .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package. ```bash $ docker inspect --format '{{ json .Created }}' "2015-02-24T07:21:11.712240394Z" ``` * The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.) .debug[[intro/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Naming_And_Inspecting.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://upload.wikimedia.org/wikipedia/commons/7/7e/ShippingContainerSFBay.jpg)] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous section](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics ![A dense graph network](images/title-container-networking-basics.jpg) .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Manipulate container networking basics. * Find a container's IP address. We will also explain the different network models used by Docker. 
.debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## A simple, static web server Run the Docker Hub image `nginx`, which contains a basic web server: ```bash $ docker run -d -P nginx 66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e ``` * Docker will download the image from the Docker Hub. * `-d` tells Docker to run the image in the background. * `-P` tells Docker to make this service reachable from other computers. (`-P` is the short version of `--publish-all`.) But, how do we connect to our web server now? .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Finding our web server port We will use `docker ps`: ```bash $ docker ps CONTAINER ID IMAGE ... PORTS ... e40ffb406c9e nginx ... 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp ... ``` * The web server is running on ports 80 and 443 inside the container. * Those ports are mapped to ports 32769 and 32768 on our Docker host. We will explain the whys and hows of this port mapping. But first, let's make sure that everything works properly. .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Connecting to our web server (GUI) Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80. ![Screenshot](images/welcome-to-nginx.png) .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Connecting to our web server (CLI) You can also use `curl` directly from the Docker host. Make sure to use the right port number if it is different from the example below: ```bash $ curl localhost:32769 Welcome to nginx! ... 
``` .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Why are we mapping ports? * We are out of IPv4 addresses. * Containers cannot have public IPv4 addresses. * They have private addresses. * Services have to be exposed port by port. * Ports have to be mapped to avoid conflicts. .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful. There is a command to help us: ```bash $ docker port <containerID> 80 32769 ``` .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration. * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use a network plugin, connecting your containers with e.g. VLANs, tunnels... 
* Enable *Swarm Mode* to deploy across a cluster. The container will then be reachable through any node of the cluster. When using Docker through an extra management layer like Mesos or Kubernetes, these will usually provide their own mechanism to expose containers. .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container. ```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID> 172.17.0.3 ``` * `docker inspect` is an advanced command that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Pinging our container We can test connectivity to the container using the IP address we've just discovered. Let's see this now by using the `ping` tool. ```bash $ ping <ipaddress> 64 bytes from <ipaddress>: icmp_req=1 ttl=64 time=0.085 ms 64 bytes from <ipaddress>: icmp_req=2 ttl=64 time=0.085 ms 64 bytes from <ipaddress>: icmp_req=3 ttl=64 time=0.085 ms ``` .debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Manipulate container networking basics. * Find a container's IP address. In the next chapter, we will see how to connect containers together without exposing their ports. 
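The port-discovery flow from this section can be condensed into a small script. This is a sketch, not part of the workshop: the `docker` and `curl` calls assume a Docker host and are left as comments; only the little `extract_port` helper (a made-up name) actually runs here.

```shell
# Sketch of the flow above (requires a Docker host, hence the comments):
#
#   CID=$(docker run -d -P nginx)
#   MAPPING=$(docker port "$CID" 80)    # e.g. "0.0.0.0:32769"
#   curl "localhost:$(extract_port "$MAPPING")"

# Reduce an "address:port" mapping like "0.0.0.0:32769" to the bare port.
extract_port() {
  echo "${1##*:}"    # strip everything up to the last colon
}

extract_port "0.0.0.0:32769"
```

The helper uses plain parameter expansion, so it works in any POSIX shell without extra tools.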
.debug[[intro/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Networking_Basics.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://upload.wikimedia.org/wikipedia/commons/c/c7/Copper_%26_Kings_Distillery_Shipping_Containers.jpg)] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous section](#toc-container-networking-basics) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports many different network drivers. The built-in drivers include: * `bridge` (default) * `none` * `host` * `container` The driver is selected with `docker run --net ...`. The different drivers are explained in more detail on the following slides. .debug[[intro/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[intro/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. 
* Useful for isolated/untrusted workloads. .debug[[intro/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[intro/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) 
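The four driver invocations can be summed up in a tiny helper. This is purely illustrative: `net_flag` is a hypothetical function (not a Docker tool), and the container-driver example in the comments assumes a Docker host with the made-up container name `web`.

```shell
# Hypothetical helper: build the --net flag for each built-in driver.
net_flag() {
  case "$1" in
    bridge|none|host) echo "--net $1" ;;
    container)        echo "--net container:$2" ;;  # join $2's network stack
    *) echo "unknown driver: $1" >&2; return 1 ;;
  esac
}

# Example (requires a Docker host): two containers sharing one network stack,
# so the second one can reach nginx over localhost.
#
#   docker run -d --name web nginx
#   docker run --rm $(net_flag container web) alpine wget -qO- http://localhost

net_flag container web
```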
.debug[[intro/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Network_Drivers.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://cdn.pixabay.com/photo/2017/08/01/21/54/container-2568197_1280.jpg)] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous section](#toc-container-network-drivers) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model ![A denser graph network](images/title-the-container-network-model.jpg) .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## The Container Network Model The CNM was introduced in Engine 1.9.0 (November 2015). The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`. ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## What's in a network? 
* Conceptually, a network is a virtual switch. * It can be local (to a single Engine) or global (spanning multiple hosts). * A network has an IP subnet associated with it. * Docker will allocate IP addresses to the containers connected to a network. * Containers can be connected to multiple networks. * Containers can be given per-network names and aliases. * The names and aliases can be resolved via an embedded DNS server. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Network implementation details * A network is managed by a *driver*. * All the drivers that we have seen before are available. * A new multi-host driver, *overlay*, is available out of the box. * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Differences with the CNI * CNI = Container Network Interface * CNI is used notably by Kubernetes * With CNI, all the nodes and containers are on a single IP network * Both CNI and CNM offer the same functionality, but with very different methods .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. 
```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses In Docker Engine 1.9, name resolution is implemented with `/etc/hosts`, which is updated each time containers are added or removed. 
.small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] In Docker Engine 1.10, this has been replaced by a dynamic resolver. (This avoids race conditions when updating `/etc/hosts`.) .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](http://s0.geograph.org.uk/geophotos/05/04/71/5047160_cc034d65.jpg)] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous section](#toc-the-container-network-model) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-ambassadors) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a Redis data store. * We will place them both on the `dev` network created before. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it so we will try to run it and see what happens! 
Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page: ![Trainingwheels error](images/trainingwheels-error.png) * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using an FQDN or an IP address here; just `redis`. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right name (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --name redis -d redis ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly: ![Trainingwheels OK](images/trainingwheels-ok.png) * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## A few words on *scope* * What if we want to run multiple copies of our application? * Since names are unique, there can be only one container named `redis` at a time. * However, we can give our container a network *alias* with `--net-alias`. 
* `--net-alias` is scoped per network, and independent from the container name. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Using a network alias instead of a name Let's remove the `redis` container: ```bash $ docker rm -f redis ``` And create one that doesn't block the `redis` name: ```bash $ docker run --net dev --net-alias redis -d redis ``` Check that the app still works (but the counter is back to 1, since we wiped out the old Redis container). .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: x-extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. 
.debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... 
} ``` ] .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Good to know ... * Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). 
* `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` lets you specify a list of reserved addresses (which won't be allocated to containers). .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet of the network the container is attached to. A full example would look like this. ```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host. * If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically.* .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of the scope for this intro-level workshop! 
Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` See http://jpetazzo.github.io/container.training for all the deets about clustering! .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of the scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- ## Section summary We've learned how to: * Create private networks for groups of containers. * Assign IP addresses to containers. * Use container naming to implement service discovery. 
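As a recap, the whole section fits in a handful of commands. The script below is a hedged sketch: the `docker` commands (shown as comments) require a Docker host, and `valid_name` is a made-up helper performing a loose, plain-shell sanity check on a name, not an official Docker rule.

```shell
# Recap of this section's commands (require a Docker host, hence comments):
#
#   docker network create dev                        # private network
#   docker run -d --name es --net dev elasticsearch:2
#   docker run --rm --net dev alpine ping -c1 es     # resolves by name
#   docker run -d --net dev --net-alias redis redis  # alias; name stays free
#
# Loose sanity check for a network/container name (runnable, pure shell):
valid_name() {
  case "$1" in
    ""|*[!A-Za-z0-9._-]*) return 1 ;;  # empty, or has a char outside [A-Za-z0-9._-]
    *) return 0 ;;
  esac
}

valid_name dev && echo "ok"
```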
.debug[[intro/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://c1.staticflickr.com/2/1513/25530579783_57d6dd3d9c_b.jpg)] --- name: toc-ambassadors class: title Ambassadors .nav[ [Previous section](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- class: title # Ambassadors ![Two serious-looking persons shaking hands](images/title-ambassador.jpg) .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## The ambassador pattern Ambassadors are containers that "masquerade" or "proxy" for another service. They abstract the connection details for these services, and can help with: * discovery (where is my service actually running?) * migration (what if my service has to be moved while I use it?) * fail over (how do I know to which instance of a replicated service I should connect?) * load balancing (how do I spread my requests across multiple instances of a service?) * authentication (what if my service requires credentials, certificates, or otherwise?) .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Introduction to Ambassadors The ambassador pattern: * Takes advantage of Docker's per-container naming system and abstracts connections between services. * Allows you to manage services without hard-coding connection information inside applications. To do this, instead of directly connecting containers, you insert ambassador containers.
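At its core, an ambassador is just a process that listens where the application expects the service, and forwards traffic to wherever the service really runs. This minimal TCP forwarder is a sketch of the idea in Python (our own illustration, not the code of any actual ambassador image):

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def start_ambassador(backend_host, backend_port):
    """Listen locally; forward every incoming connection to the backend."""
    server = socket.socket()
    # A real ambassador would listen on the service's well-known port;
    # we use an ephemeral port here so the sketch runs anywhere.
    server.bind(("127.0.0.1", 0))
    server.listen()

    def serve():
        while True:
            client, _ = server.accept()
            backend = socket.create_connection((backend_host, backend_port))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return server.getsockname()[1]  # the port the application should connect to
```

The application connects to the ambassador as if it were the real service; moving the backend only requires reconfiguring (or restarting) the ambassador, not the application.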
.debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ![ambassador](images/ambassador-diagram.png) .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Interacting with ambassadors * The web container uses normal Docker networking to connect to the ambassador. * The database container also talks with an ambassador. * For both containers, the ambassador is totally transparent. (There is no difference between normal operation and operation with an ambassador.) * If the database container is moved (or a failover happens), its new location will be tracked by the ambassador containers, and the web application container will still be able to connect, without reconfiguration. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Ambassadors for simple service discovery Use case: * my application code connects to `redis` on the default port (6379), * my Redis service runs on another machine, on a non-default port (e.g. 12345), * I want to use an ambassador to let my application connect without modification. The ambassador will be: * a container running right next to my application, * using the name `redis` (or linked as `redis`), * listening on port 6379, * forwarding connections to the actual Redis service. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Ambassadors for service migration Use case: * my application code still connects to `redis`, * my Redis service runs somewhere else, * my Redis service is moved to a different host+port, * the location of the Redis service is given to me via e.g. DNS SRV records, * I want to use an ambassador to automatically connect to the new location, with as little disruption as possible. 
The ambassador will be: * the same kind of container as before, * running an additional routine to monitor DNS SRV records, * updating the forwarding destination when the DNS SRV records are updated. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Ambassadors for credentials injection Use case: * my application code still connects to `redis`, * my application code doesn't provide Redis credentials, * my production Redis service requires credentials, * my staging Redis service requires different credentials, * I want to use an ambassador to abstract those credentials. The ambassador will be: * a container using the name `redis` (or a link), * passed the credentials to use, * running a custom proxy that accepts connections on Redis default port, * performing authentication with the target Redis service before forwarding traffic. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Ambassadors for load balancing Use case: * my application code connects to a web service called `api`, * I want to run multiple instances of the `api` backend, * those instances will be on different machines and ports, * I want to use an ambassador to abstract those details. The ambassador will be: * a container using the name `api` (or a link), * passed the list of backends to use (statically or dynamically), * running a load balancer (e.g. HAProxy or NGINX), * dispatching requests across all backends transparently. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## "Ambassador" is a *pattern* There are many ways to implement the pattern. Different deployments will use different underlying technologies. * On-premise deployments with a trusted network can track container locations in e.g. 
Zookeeper, and generate HAProxy configurations each time a location key changes. * Public cloud deployments or deployments across unsafe networks can add TLS encryption. * Ad-hoc deployments can use a master-less discovery protocol like Avahi to register and discover services. * It is also possible to do one-shot reconfiguration of the ambassadors. It is slightly less dynamic, but has far fewer requirements. * Ambassadors can be used in addition to, or instead of, overlay networks. .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- ## Section summary We've learned how to: * Understand the ambassador pattern and what it is used for (service portability). For more information about the ambassador pattern, including demos on Swarm and ECS: * AWS re:invent 2015 [DVO317](https://www.youtube.com/watch?v=7CZFpHUPqXw) * [SwarmWeek video about Swarm+Compose](https://youtube.com/watch?v=qbIvUvwa6As) .debug[[intro/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Ambassadors.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://static.pexels.com/photos/163726/belgium-antwerp-shipping-container-163726.jpeg)] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous section](#toc-ambassadors) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker ![Construction site](images/title-local-development-workflow-with-docker.jpg) .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host.
* Use a simple local development workflow. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Containerized local development environments We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby MAINTAINER Education Team at Docker COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied to `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292.
.debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening. -- ```bash $ docker ps -l ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. -- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*) ![web application 1](images/webapp-in-blue.png) .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *volume* to mount local files into the container * Make changes locally * Changes are reflected into the container .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. 
* `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * If `[host-path]` or `[container-path]` doesn't exist, it is created. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. There will be a full chapter about volumes! .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. -- * The color should have changed.
![web application 2](images/webapp-in-red.png) .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Understanding volumes * Volumes do *not* copy or synchronize files between the host and the container. * Volumes are *bind mounts*: a kernel mechanism mapping one path to another. * Bind mounts are *kind of* similar to symbolic links, but at a very different level. * Changes made on the host or in the container will be visible on the other side. (Since under the hood, it's the same file on both sides anyway.) .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html) by Chad Fowler, where he explains the concept of immutable infrastructure.)* -- * Let's seriously mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*.
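The bind-mount behavior described above (one file, reachable through two paths, with changes instantly visible through both) has no exact userland equivalent, since bind mounts live in the kernel's mount machinery. Hard links give a loose analogy, though; this small Python snippet illustrates the "same file, two names" property:

```python
import os
import pathlib
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
host_side = tmp / "seen-by-the-host.txt"
container_side = tmp / "seen-by-the-container.txt"

host_side.write_text("hello from the host\n")
os.link(host_side, container_side)  # hard link: two names, one underlying file

container_side.write_text("hello from the container\n")
print(host_side.read_text(), end="")  # prints: hello from the container
```

Again, this is only an analogy: a bind mount maps whole directory trees across mount namespaces, but the key observable behavior (no copying, no syncing, just one file) is the same.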
.debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Improving the workflow The workflow that we showed is nice, but it requires us to: * keep track of all the `docker run` flags required to run the container, * inspect the `Dockerfile` to know which path(s) to mount, * write scripts to hide that complexity. There has to be a better way! .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Docker Compose to the rescue * Docker Compose allows us to "encode" `docker run` parameters in a YAML file. * Here is the `docker-compose.yml` file that we can use for our "namer" app: ```yaml www: build: . volumes: - .:/src ports: - 80:9292 ``` * Try it: ```bash $ docker-compose up -d ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Working with Docker Compose * When you see a `docker-compose.yml` file, you can use `docker-compose up`. * It can build images and run them with the required parameters. * Compose can also deal with complex, multi-container apps. (More on this later!) .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the containers, using regular tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 
5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation. .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run Ruby commands in the environment where the app is running, and more! $ docker exec -it <yourContainerID> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <yourContainerID> ``` And remove it. ```bash $ docker rm <yourContainerID> ``` .debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow.
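The `host-path:container-path[:rw|ro]` mount syntax used with `-v` throughout this section is simple enough to model in a few lines. Here is a simplified sketch (real Docker also accepts bare container paths, named volumes, and more mount options):

```python
def parse_volume_flag(spec):
    """Parse a -v value of the form host-path:container-path[:mode].

    Simplified model of the syntax shown earlier; real Docker accepts
    more variants (bare container paths, volume names, extra options).
    """
    parts = spec.split(":")
    if len(parts) == 2:
        host, container, mode = parts[0], parts[1], "rw"  # rw is the default
    elif len(parts) == 3 and parts[2] in ("rw", "ro"):
        host, container, mode = parts
    else:
        raise ValueError(f"bad volume spec: {spec!r}")
    return {"host": host, "container": container, "mode": mode}

print(parse_volume_flag("/home/user/namer:/src"))
print(parse_volume_flag("/data:/var/lib/data:ro"))
```

Note how omitting the third field falls back to `rw`, matching the default described in the mounting-volumes slide.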
.debug[[intro/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Local_Development_Workflow.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://cdn.pixabay.com/photo/2017/03/12/06/18/container-2136505_1280.jpg)] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous section](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes ![volume](images/title-working-with-volumes.jpg) .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways. * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. 
```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either. * If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume). .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories. They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides.
.debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a `/`, it is considered to be a volume name.
Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container. * In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. ```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. Then run `curl localhost:1234` again to see your changes. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Managing volumes explicitly In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ...
``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis alpine telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis alpine telnet redis 6379 ``` Issue a few commands. 
```bash GET counter INFO server QUIT ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and what they are bound to, use `docker inspect` (again): ```bash $ docker inspect <yourContainerID> [{ "ID": "<yourContainerID>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket.
```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [dvol](https://github.com/ClusterHQ/dvol) - lets you commit/branch/rollback volumes; * [Flocker](https://clusterhq.com/flocker/introduction/), [REX-Ray](https://github.com/emccode/rexray) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS); * [Blockbridge](http://www.blockbridge.com/), [Portworx](http://portworx.com/) - provide a distributed block store for containers; * and much more! .debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers.
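There are several ways to consume the JSON emitted by `docker inspect`; a few lines of Python are enough to pull out the volume bindings. The field names below follow the older inspect output shown earlier, and the inlined sample is abridged and hypothetical:

```python
import json

# Abridged, hypothetical docker inspect output, shaped like the sample above.
inspect_output = """
[{
  "ID": "45e3d2c7...",
  "Volumes": {"/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b"},
  "VolumesRW": {"/var/webapp": true}
}]
"""

def volume_bindings(raw):
    """Map each container volume path to (host path, writable?)."""
    data = json.loads(raw)[0]
    return {
        path: (host, data.get("VolumesRW", {}).get(path, True))
        for path, host in data.get("Volumes", {}).items()
    }

for path, (host, writable) in volume_bindings(inspect_output).items():
    print(f"{path} -> {host} ({'rw' if writable else 'ro'})")
```

In practice, `docker inspect --format` (Go templates) or `jq` achieve the same thing without writing a script.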
.debug[[intro/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Working_With_Volumes.md)] --- class: pic .interstitial[![Image separating from the next chapter](http://www.publicdomainpictures.net/pictures/100000/velka/blue-containers.jpg)] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous section](#toc-working-with-volumes) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-advanced-dockerfiles) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfiles are great for building container images. But what if we work with a complex stack made of multiple containers? Eventually, we will want to write some custom scripts and automation to build, run, and connect our containers together. There is a better way: using Docker Compose. In this section, you will use Compose to bootstrap a development environment. .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## What is Docker Compose? Docker Compose (formerly known as `fig`) is an external tool. Unlike the Docker Engine, it is written in Python. It's open source as well. The general idea of Compose is to enable a very simple, powerful onboarding workflow: 1. Check out your code. 2. Run `docker-compose up`. 3. Your app is up and running! .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Compose overview This is how you work with Compose: * You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`. * You run `docker-compose up`. * Compose automatically pulls images, builds containers, and starts them. * Compose can set up links, volumes, and other Docker options for you.
* Compose can run the containers in the background, or in the foreground. * When containers are running in the foreground, their aggregated output is shown. Before diving in, let's see a small example of Compose in action. .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Compose in action ![composeup](images/composeup.gif) .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Checking if Compose is installed If you are using the official training virtual machines, Compose has been pre-installed. You can always check that it is installed by running: ```bash $ docker-compose --version ``` .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash $ cd $ git clone git://github.com/jpetazzo/trainingwheels ... $ cd trainingwheels ``` Second step: start your app. ```bash $ docker-compose up ``` Watch Compose build and run your app with the correct parameters, including linking the relevant containers together. .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose Verify that the app is running at `http://:8000`. ![composeapp](images/composeapp.png) .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When you hit `^C`, Compose tries to gracefully terminate all of the containers. After ten seconds (or if you press `^C` again) it will forcibly kill them. 
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## The `docker-compose.yml` file Here is the file used in the demo: .small[
```yaml
version: "2"

services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
  redis:
    image: redis
```
] .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Compose file versions Version 1 puts the various containers (`www`, `redis`...) directly at the top level of the file. Version 2 has multiple sections: * `version` is mandatory and should be `"2"`. * `services` is mandatory and corresponds to the content of the version 1 format. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-app network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. Version 3 adds support for deployment options (scaling, rolling updates, etc.) .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Containers in `docker-compose.yml` Each service in the YAML file must contain either `build` or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). * If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they offer minor improvements. 
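The `build`-plus-`image` combination described above can be sketched like this (a minimal sketch; the `myapp/www` image name is a hypothetical example):

```yaml
version: "2"

services:
  www:
    # Compose builds the image from the ./www directory...
    build: www
    # ...and tags the build result with this (hypothetical) name.
    image: myapp/www
```

With such a file, `docker-compose build` would produce an image named `myapp/www` instead of an auto-generated `<project>_www` name.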
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Compose commands We already saw `docker-compose up`, but another useful command is `docker-compose build`. It will execute `docker build` for all services that specify a `build` path. It can also be invoked automatically when starting the application: ```bash docker-compose up --build ``` Another common option is to start containers in the background: ```bash docker-compose up -d ``` .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Check container status It can be tedious to check the status of your containers with `docker ps`, especially when running multiple apps at the same time. 
Compose makes it easier; with `docker-compose ps` you will see only the status of the containers of the current stack:
```bash
$ docker-compose ps
Name                     Command             State  Ports
----------------------------------------------------------------------------
trainingwheels_redis_1   /entrypoint.sh red  Up     6379/tcp
trainingwheels_www_1     python counter.py   Up     0.0.0.0:8000->5000/tcp
```
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker-compose kill ``` Likewise, `docker-compose rm` will let you remove containers (after confirmation):
```bash
$ docker-compose rm
Going to remove trainingwheels_redis_1, trainingwheels_www_1
Are you sure? [yN] y
Removing trainingwheels_redis_1...
Removing trainingwheels_www_1...
```
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker-compose down` will stop and remove containers. It will also remove other resources, like networks that were created for the application.
```bash
$ docker-compose down
Stopping trainingwheels_www_1 ... done
Stopping trainingwheels_redis_1 ... done
Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes Compose is smart. If your container uses volumes, when you restart your application, Compose will create a new container, but carefully re-use the volumes it was using previously. This makes it easy to upgrade a stateful service, by pulling its new image and just restarting your stack with Compose. 
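This volume re-use is easiest to observe with a named volume declared in the Compose file; a minimal sketch (the `db` service and `data` volume names are hypothetical):

```yaml
version: "2"

services:
  db:
    image: redis
    volumes:
      # Mount the named volume "data"; it survives container re-creation.
      - data:/data

volumes:
  # Declared at the top level, so Compose creates it once and re-uses it.
  data:
```

After pulling a newer `redis` image and running `docker-compose up -d` again, the new container would be attached to the same `data` volume.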
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Compose project name * When you run a Compose command, Compose infers the "project name" of your app. * By default, the "project name" is the name of the current directory. * For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`. * All resources created by Compose are tagged with this project name. * The project name also appears as a prefix of the names of the resources. E.g. in the previous example, service `www` will create a container `ocarina_www_1`. * The project name can be overridden with `docker-compose -p`. .debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- ## Running two copies of the same app If you want to run two copies of the same app simultaneously, all you have to do is make sure that each copy has a different project name. You can: * copy your code into a directory with a different name * start each copy with `docker-compose -p myprojname up` Each copy will run in a different network, totally isolated from the other. This is ideal for debugging regressions, doing side-by-side comparisons, etc. 
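Since the default project name is just the name of the current directory, you can preview it (and the container names it will produce) from the shell; a minimal sketch using the example path above:

```shell
# The default Compose project name is the basename of the directory
# holding docker-compose.yml. (Compose also strips characters that are
# not alphanumeric; this sketch skips that normalization step.)
dir=/home/zelda/src/ocarina
project=$(basename "$dir")
echo "$project"            # ocarina
echo "${project}_www_1"    # name of the first container of service "www"
```

This is the name you would override with `docker-compose -p` to run a second copy side by side.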
.debug[[intro/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://media.defense.gov/2013/Nov/12/2000897311/-1/-1/0/131108-F-PD986-087.JPG)] --- name: toc-advanced-dockerfiles class: title Advanced Dockerfiles .nav[ [Previous section](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Advanced Dockerfiles ![construction](images/title-advanced-dockerfiles.jpg) .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## Objectives We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will see more Dockerfile commands. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## `Dockerfile` usage summary * `Dockerfile` instructions are executed in order. * Each instruction creates a new layer in the image. * Docker maintains a cache with the layers of previous builds. * When there are no changes in the instructions and files making up a layer, the builder re-uses the cached layer, without executing the instruction for that layer. * The `FROM` instruction MUST be the first non-comment instruction. * Lines starting with `#` are treated as comments. * Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.) .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `MAINTAINER` instruction The `MAINTAINER` instruction tells you who wrote the `Dockerfile`. 
```dockerfile MAINTAINER Docker Education Team ``` It's optional but recommended. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `RUN` instruction The `RUN` instruction can be specified in two ways. With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`: ```dockerfile RUN apt-get update ``` Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`: ```dockerfile RUN [ "apt-get", "update" ] ``` .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## More about the `RUN` instruction `RUN` will do the following: * Execute a command. * Record changes made to the filesystem. * Work great for installing libraries, packages, and various files. `RUN` will NOT do the following: * Record state of *processes*. * Automatically start daemons. If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines:
```dockerfile
RUN apt-get update \
 && apt-get install -y wget \
 && apt-get clean
```
.debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. 
```dockerfile
EXPOSE 8080
EXPOSE 80 443
EXPOSE 53/tcp 53/udp
```
* All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. * The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent:
```dockerfile
COPY . /src
COPY / /src
```
Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. 
`ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. `ADD` will automatically unpack local tar archives (including compressed ones, like `.tar.gz`; zip files are *not* auto-extracted): ```dockerfile ADD ./assets.tar.gz /var/www/htdocs/assets/ ``` This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`. *However,* `ADD` will not automatically unpack remote archives. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup. * For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance for I/O done in those directories. Volumes can be attached to multiple containers, making it possible to move data from one container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. 
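The interplay between `VOLUME` and "read-only" mode can be sketched in a Dockerfile (a minimal sketch; the base image and path are hypothetical examples):

```dockerfile
FROM redis
# The root filesystem becomes read-only when the container is started
# with "docker run --read-only", but /data remains writable because it
# is declared as a volume.
VOLUME /data
```

Starting this image with `docker run --read-only` would give the process a read-only root filesystem with a writable `/data`.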
.debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. ```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` Any container created from this image will have this environment variable set: ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` This means we don't need to specify `nginx -g "daemon off;"` when running the container. 
Instead of: ```bash $ docker run /web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run /web_image ``` .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms. The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it /web_image bash ``` Will run `bash` instead of `nginx -g "daemon off;"`. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. 
```bash
$ docker run -it training/ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
$ docker run -it --entrypoint bash training/ls
root@d902fb7b1fc7:/#
```
.debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together. ```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. On the command line we can then potentially override the options when needed. ```bash $ docker run -d /web_image -t ``` This will override the options `CMD` provided with new flags. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- ## Advanced Dockerfile instructions * `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one. * `LABEL` adds arbitrary metadata to the image. * `ARG` defines build-time variables (optional or mandatory). * `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default). * `HEALTHCHECK` defines a command assessing the status of the container. * `SHELL` sets the default program to use for string-syntax RUN, CMD, etc. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- class: extra-details ## The `ONBUILD` instruction The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built. This is useful for building images which will be used as a base to build other images. ```dockerfile ONBUILD COPY . /src ``` * You can't chain `ONBUILD` instructions with `ONBUILD`. 
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER` instructions. .debug[[intro/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/intro/Advanced_Dockerfiles.md)] --- class: title, self-paced Thank you! .debug[[common/thankyou.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/common/thankyou.md)] --- class: title, in-person That's all folks! Questions? ![end](images/end.jpg) .debug[[common/thankyou.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/common/thankyou.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://upload.wikimedia.org/wikipedia/commons/4/4d/Locomotive_4700_with_a_container_train_at_Concordancia_de_Poceirao.jpg)] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-advanced-dockerfiles) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Docker Community Slack](https://community.docker.com/registrations/groups/4316) - [Docker Community Forums](https://forums.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker Blog](http://blog.docker.com/) - [Docker documentation](http://docs.docker.com/) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Docker on Twitter](http://twitter.com/docker) - [Play With Docker Hands-On Labs](http://training.play-with-docker.com/) .footnote[These slides (and future updates) are at http://container.training/] .debug[[common/thankyou.md](https://github.com/jpetazzo/container.training/tree/qconsf2017intro/slides/common/thankyou.md)]