Understanding Docker

As discussed, containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship them all out as one package. Docker is software that facilitates creating, deploying, and running containers.

The starting point is the source code for the Docker image, the Dockerfile. From there you build the image, store it, and distribute it to any registry – most commonly Docker Hub – and then use that image to run containers.
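As a concrete illustration, the workflow from source to running container can be sketched as follows. The application, image, and registry account names here (`myapp`, `myuser`) are hypothetical, not from the text:

```shell
# Write a minimal Dockerfile -- the "source code" for the image
cat > Dockerfile <<'EOF'
# Start from a public base image
FROM python:3.12-slim
# Package the application code and its library dependencies
COPY app.py /app/app.py
RUN pip install flask
# Default command when a container starts from this image
CMD ["python", "/app/app.py"]
EOF

# Build the image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag the image for a registry account and push it (names are hypothetical)
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0

# Any host can now pull the image and run containers from it
docker run -d --name myapp1 myuser/myapp:1.0
```

The same image can be run on any host with a Docker daemon, which is exactly the portability the packaging model is meant to provide.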

Docker uses the client-server architecture shown in Figure 1. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker daemon does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon communicate using a REST API over UNIX sockets or a network interface.
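Because the client and daemon speak REST, you can talk to the daemon directly without the `docker` client. A quick sketch, assuming a local daemon listening on the default UNIX socket:

```shell
# Query the daemon's version over the UNIX socket (requires a running daemon)
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers -- the same API call the `docker ps` command makes
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

Pointing the client at a remote daemon instead is just a matter of changing the endpoint (for example via the `DOCKER_HOST` environment variable).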

Figure 1: Docker Architecture

Containers don’t exist in a vacuum. In production environments you won’t have just one host with multiple containers, but rather multiple hosts running hundreds, if not thousands, of containers, which raises two important questions:

  • How do these containers communicate with each other on the same host or in different hosts, as well as with the outside world? (Basically, the networking parts of containers.)

  • Who determines which containers get launched on which host, and based on what: an upgrade? The number of containers per application? Basically, who orchestrates all of this?
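For a small taste of the first question on a single host, Docker's built-in bridge networking lets containers on the same user-defined network reach each other by name. A minimal sketch (the network and container names are made up for illustration; multi-host networking is the subject of later chapters):

```shell
# Create a user-defined bridge network on one host
docker network create app-net

# Start two containers attached to that network
docker run -d --name web    --network app-net nginx
docker run -d --name client --network app-net alpine sleep 3600

# Containers resolve each other by name via Docker's embedded DNS
docker exec client ping -c 1 web
```

Once containers span multiple hosts, this simple model no longer suffices, which is where platforms like Contrail come in.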

These two questions are answered, in detail, throughout the rest of the book, but if you want a quick answer, just think Juniper Contrail and Kubernetes!

Let’s start with the basic foundations of the Juniper Contrail Platform.