Outside the Container

Docker and microservices bring new agility and economy to the data center.

Large enterprises are exploring the possibilities enabled by emerging container technologies such as Docker. At Juniper, we see this trend as a milestone in data center innovation, offering significant gains in efficiency, productivity, and agility for large enterprises that offer cloud as a service.

With the accelerating evolution of companion technologies such as Kubernetes, the combination of containerized applications and a microservices architecture holds immense promise for these enterprise companies in 2016.

Containers and Microservices

The primary purpose of containerized applications is to improve the effectiveness of software teams, making it easier for people to work together while lowering the communications overhead. In large enterprises, applications such as ERP or CRM software suites often begin as simple projects, but as time passes, they quickly become clunky and inefficient, with a monolithic code base that slows progress for development teams.

To get beyond this inefficiency, a new approach breaks down the application into smaller, bite-size components known as microservices. Adopting a microservices architecture gives development teams agility and operational efficiency by virtue of the smaller code base in each application component.

As the software goes through its various stages of development, it may move from the developer’s PC to a lab or test environment; it may move from a physical to a virtual environment, and ultimately, to a production environment. In each of these, the app must perform consistently. Containers address the problem of how to make software work in different computing environments. They enable software developers to encapsulate an application component in a single, lightweight package. Inherently Linux-based, containers offer the promise of running consistently from one computing environment to another, virtual or physical.
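
To make this concrete, here is a minimal sketch using the Docker SDK for Python; the component name, directory, and Dockerfile are assumed for illustration and are not from this article. It packages an application component into an image and runs it the same way on any Docker host.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Package the application component (source plus its dependencies) into a
# single, lightweight image. The directory is hypothetical and is assumed
# to contain a Dockerfile describing the component.
image, _ = client.images.build(path="./billing-service", tag="billing-service:dev")

# The same image runs unchanged on a developer laptop, a lab server, a VM,
# or a production host, which is the consistency described above.
logs = client.containers.run("billing-service:dev", remove=True)
print(logs.decode())
```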

Part of a DevOps Approach

Microservices jibe with the shift in IT departments toward a DevOps culture, in which development and operations teams work closely together to support an application throughout its lifecycle. Containers, too, are an ideal DevOps tool for both developers and system administrators. Containers free developers to focus on their core competency, while operations staff benefit from flexibility, a smaller footprint in the data center, and lower overhead.

Containers work best when each one is assigned to a single process. For this reason, the initial setup of a microservices architecture for a large application or software project can be resource-intensive, but the effort offers a worthwhile return in terms of the agility it creates. That agility stems partly from the speed at which containers can be deployed, which ranges from milliseconds to a few seconds. The application workload in the container uses the host server's operating system kernel, removing the need to boot a separate guest OS as part of the startup process.
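
As a rough illustration, the Docker SDK for Python can start a single-process container and time how quickly it comes up; actual numbers depend on the host and on whether the image (here nginx, chosen only as an example) is already cached locally.

```python
import time

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

start = time.time()
# One container, one process: this container runs only the nginx process,
# and the container's lifetime is that process's lifetime. Assumes the
# image has already been pulled, so no download is included in the timing.
container = client.containers.run("nginx:1.25", detach=True)
elapsed = time.time() - start
print(f"container {container.short_id} started in {elapsed:.2f} s")

container.stop()
container.remove()
```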

Another substantial benefit of using containerized applications is the efficiency they bring, both organizationally and in terms of space and power in the data center, because they require very little memory. Unlike virtual machines (VMs), each of which carries a full guest OS and system image that add host CPU overhead, containers are lightweight. They share the host's OS kernel and have the potential to use CPU and memory more efficiently than VMs, lowering computing costs as well as power and real estate demands in the data center.
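
A quick way to see this kernel sharing in practice: a container reports the host's kernel version, whereas a VM boots and reports its own. This sketch again uses the Docker SDK for Python with a generic Alpine image chosen purely for illustration.

```python
import platform

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# The container has its own filesystem and process namespaces, but no
# kernel of its own: `uname -r` inside it reports the host's kernel.
container_kernel = client.containers.run(
    "alpine:3.19", ["uname", "-r"], remove=True
).decode().strip()

print("host kernel:     ", platform.release())
print("container kernel:", container_kernel)  # same value on a Linux host
```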

Managing the Cluster

A group of physical or virtual machines on which containers can be executed is referred to as a cluster, and it requires some form of management. As part of the evolution of container technologies, enterprises can employ cluster management tools, including OpenShift and the Google-developed Kubernetes, which works in conjunction with Docker. Docker serves as the execution engine for containers and also manages their file systems.

Cluster management tools such as Kubernetes typically provide an abstraction at the level of an application component. This abstraction, referred to as a pod, comprises a group of one or more containers, their shared storage, and options for how to run the containers. When the cluster management tool schedules a pod onto a machine in the cluster, Docker executes the pod's containers.
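
As a hedged sketch of the pod abstraction, the snippet below uses the official Kubernetes Python client to define a pod with two containers sharing an ephemeral volume, then submits it to whatever cluster the local kubeconfig points at. The names and images are illustrative, not taken from this article.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # use the cluster from ~/.kube/config

shared = client.V1Volume(name="scratch", empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="scratch", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-demo", labels={"app": "web-demo"}),
    spec=client.V1PodSpec(
        # Two single-purpose containers grouped into one pod: they are
        # scheduled together and share the "scratch" volume.
        containers=[
            client.V1Container(name="web", image="nginx:1.25", volume_mounts=[mount]),
            client.V1Container(name="logger", image="busybox:1.36",
                               command=["sh", "-c", "tail -f /dev/null"],
                               volume_mounts=[mount]),
        ],
        volumes=[shared],
    ),
)

# The scheduler places the pod on a node; the node's container runtime
# (Docker, in the setup described above) then executes its containers.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```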

Building a Network for Containers

The network plays a vital role in containerization. In multitenant environments, one essential need is the ability to provide access control and auditing capabilities for network flows. The access controls provided by the network complement application-based authentication and authorization mechanisms. Together, they provide a common layer across heterogeneous authentication methods. This function addresses a frequent requirement in environments where third-party software—such as virtualized firewalls—is in use, or when multiple generations of software technologies are running simultaneously.

Network access control, combined with security at Layers 3–7, should encompass the clusters that execute containerized workloads, as well as external environments such as existing OpenStack or bare-metal servers. In these heterogeneous environments, the network is the glue that holds together the diverse elements.

A technology such as Contrail Networking from Juniper exemplifies this functionality. Contrail can provide microsegmentation for a container ecosystem, securely isolating networks within a multitenant environment. It enables the cluster management tool to connect virtual networks between applications running in containers and VMs, and also to connect elements outside the cluster management tool, such as legacy infrastructure or databases running on bare-metal servers.
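
One common, vendor-neutral way to express this kind of microsegmentation is a Kubernetes NetworkPolicy, which an SDN plugin can enforce at the network layer; this is a generic Kubernetes resource, not a Contrail-specific configuration. The sketch below, with illustrative namespace, labels, and port, allows only pods labeled app=frontend to reach pods labeled app=db on TCP port 5432.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="db-allow-frontend", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        # The policy applies to the database pods...
        pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
        policy_types=["Ingress"],
        # ...and admits traffic only from frontend pods, only on 5432/TCP.
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(port=5432, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```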

As container technology matures, network technologies will emerge to support containerized applications. There will be a growing need for security that can scale in these environments, including security functions, themselves delivered as containers, that can be spun up at the same speed as the application components they protect.

At Juniper, we see great potential for containers, and we anticipate that IT organizations will increasingly adopt microservices architectures and containerized applications, secured with firewall functionality. Our mission is to protect the network from end to end, automating complex networking problems and developing robust, open platforms that pave the way for adoption of technologies such as these.

Learn more about containerized applications—and how to build a network to support them—in the resources for this article.