Running Third-Party Applications in Containers

To run your own applications on Junos OS Evolved, you have the option to deploy them in a Docker container. The container runs on Junos OS Evolved, and the applications run in the container, keeping them isolated from the host OS. Containers are installed in a separate partition mounted at /var/extensions. Containers persist across reboots and software upgrades.

Note:

Docker containers are not integrated into Junos OS Evolved; they are created and managed entirely through Linux by using Docker commands. For more information on Docker containers and commands, see the official Docker documentation: https://docs.docker.com/get-started/

Containers have default limits for the resources that they can use from the system:

  • Storage – The size of the /var/extensions partition is platform-driven: 8 GB or 30% of the total size of /var, whichever is smaller.

  • Memory – Containers have no default physical memory limit. This can be changed.

  • CPU – Containers have no default CPU limit. This can be changed.

Note:

You can modify the resource limits on containers if necessary. See Modifying Resource Limits for Containers.

Deploying a Docker Container

To deploy a Docker container (a combined example follows this procedure):

  1. Start the Docker service bound to a VRF (for example, vrf0). For Junos OS Evolved Releases 23.4R1 and earlier, all the containers managed by this Docker service will be bound to this Linux VRF. For Junos OS Evolved Release 24.1R1 and later, we recommend binding specific tasks within the container to a VRF. See Selecting a VRF for a Docker Container for more details.
  2. Set the Docker socket for the client by configuring the appropriate environment variable (see the example after this procedure).
  3. Import the image.
    Note:

    The URL for the import command needs to be changed for different containers.

  4. Make sure the image is downloaded, and get the image ID.
  5. Create a container using the image ID and enter a bash session in that container.
  6. Create a container with Packet IO and Netlink capability using the image ID and enter a bash session in that container.
    Note:

    Docker containers are daemonized by default unless you use the -it argument.
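
The following is a minimal sketch of this procedure for vrf0. The image URL, the my-app image name, the DOCKER_HOST variable, the <image-id> placeholder, and the --network=host and --cap-add=NET_ADMIN options are illustrative assumptions; substitute the values appropriate for your container and platform.

    # 1. Start the Docker service instance bound to vrf0
    systemctl start docker@vrf0

    # 2. Point the Docker client at that service's socket
    #    (path assumed to follow the /run/docker-vrf.sock naming)
    export DOCKER_HOST=unix:///run/docker-vrf0.sock

    # 3. Import the image (URL and image name are placeholders)
    docker import http://server.example.com/images/my-app.tgz my-app

    # 4. Confirm the image is present and note its image ID
    docker images

    # 5. Create a container from the image and open a bash session in it
    docker run -it <image-id> bash

    # 6. Create a container with Packet IO and Netlink capability
    #    (example options only; your platform may require additional
    #    capabilities or mounts)
    docker run -it --network=host --cap-add=NET_ADMIN <image-id> bash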

Managing a Docker Container

Docker containers are managed through the standard Docker Linux workflow. Use the docker ps, ps, or top Linux commands to show which Docker containers are running, and use Docker commands to manage the containers. For more information on Docker commands, see: https://docs.docker.com/engine/reference/commandline/cli/
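
For example, a typical management workflow using standard Docker commands might look like the following (my-app is a placeholder container name):

    docker ps                   # list running containers
    docker stats --no-stream    # show a snapshot of per-container resource usage
    docker logs my-app          # view a container's output
    docker stop my-app          # stop a running container
    docker start my-app         # start it again
    docker rm my-app            # remove a stopped container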

Note:

Junos OS Evolved high availability features are not supported for custom applications in Docker containers. If an application has high availability functionality, you should run the application on each Routing Engine (RE) to ensure it can sync itself. Such an application needs to include the required business logic to manage itself and communicate with all of its instances.

Selecting a VRF for a Docker Container

For Junos OS Evolved Releases 23.4R1 and earlier, containers inherit their virtual routing and forwarding (VRF) instance from the Docker process. To run containers in a distinct VRF, a Docker process instance needs to be started in the corresponding VRF. The docker@vrf.service instance allows you to start a process in the corresponding VRF. If the VRF is unspecified, the VRF defaults to vrf0.

The docker.service runs in vrf:none by default.

For Junos OS Evolved Releases 24.1R1 and later, we recommend binding a specific task within the container to a specific Linux VRF by using the ip vrf exec command. This requires the container to be started with the --privileged option, and the container needs to have a compatible version of iproute2 installed. The container should also share the network namespace with the host. Alternatively, you can use the socket option SO_BINDTODEVICE to bind the socket for a specific task or application within the container to a specific Linux VRF device, in which case iproute2 is not needed.

The ip vrf show command lists all available Linux VRFs. If you choose to bind the sockets for a task within the container to a VRF using iproute2, we recommend overriding some environment variables by using --env-file=/run/docker-vrf0/jnet.env so that libnli.so is not preloaded and does not interfere with iproute2.

You can launch a container and bind the socket associated with the container's task to the default vrf vrf0 with the following commands:
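
The following is a minimal sketch, assuming the ubuntu image as a placeholder, bash and ping as placeholder tasks, and vrf0 as the Linux name of the default VRF:

    # Launch a privileged container that shares the host network namespace and
    # overrides the jnet environment variables as recommended above
    docker run --rm -it --privileged --network=host \
        --env-file=/run/docker-vrf0/jnet.env ubuntu bash

    # Inside the container, bind an individual task's sockets to vrf0
    # (requires a compatible iproute2 inside the container)
    ip vrf show
    ip vrf exec vrf0 ping 198.51.100.2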

With this approach, different sockets associated with different tasks within the container can be associated with different VRFs, instead of all sockets being bound to just one VRF.

The Docker process for a specific VRF listens on a corresponding socket located at /run/docker-vrf.sock.

This is the VRF as seen on Linux, not the Junos OS Evolved VRF. The utility evo_vrf_name (available starting in Junos OS Evolved Release 24.1) can be used to find the Linux VRF that corresponds to a Junos OS Evolved VRF.

The Docker client gets associated with the VRF-specific Docker process by using the following arguments:
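
A sketch of the argument form, assuming the VRF-specific socket naming described above:

    # -H (or --host) points the Docker client at the socket for the vrf0 Docker process
    docker -H unix:///run/docker-vrf0.sock <command>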

For example, to run a container in vrf0, enter the following Docker command and arguments:
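
For instance (a sketch; the ubuntu image and bash command are placeholders, and the socket path follows the naming assumed above):

    docker -H unix:///run/docker-vrf0.sock run --rm -it ubuntu bash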

Note:

A container can only be associated with a single VRF.

Modifying Resource Limits for Containers

The default resource limits for containers are controlled through a file located at /etc/extensions/platform_attributes.
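
When you open this file, the EXTENSIONS entries at the bottom look similar to the following (a sketch based on the parameters described below; surrounding comments and other platform attributes vary by platform and release):

    EXTENSIONS_FS_DEVICE_SIZE_MIB=
    EXTENSIONS_CPU_QUOTA_PERCENTAGE=
    EXTENSIONS_MEMORY_MAX_MIB=
    EXTENSIONS_MEMORY_SWAP_MAX_MIB=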

To change the resource limits for containers, add values to the EXTENSIONS entries at the bottom of the file. Make sure to do this prior to starting the Docker process.

  • EXTENSIONS_FS_DEVICE_SIZE_MIB= controls the maximum storage space that containers can use. Enter the value in megabytes. The default value is 8000 or 30% of the total size of /var, whichever is smaller. Make sure to add this entry before starting the Docker process for the first time. If you need to change this value later, you must delete the existing partition, which can lead to loss of data on that partition, so we suggest backing up the partition first to avoid losing important data. If the storage partition needs to be changed after the Docker service has already been started, stop the Docker process with the systemctl stop docker command, then delete the existing partition with the systemctl stop var-extensions.mount command followed by the rm /var/extensions_fs command. Once this attribute has been changed, start the Docker process again and the new partition is created with the specified size; you can also restart var-extensions.mount with the systemctl restart var-extensions.mount command to achieve the same result (see the example at the end of this section). We do not recommend increasing this value beyond 30% of the /var partition, because doing so can affect the normal function of Junos OS Evolved.

  • EXTENSIONS_CPU_QUOTA_PERCENTAGE= controls the maximum CPU that containers can use. Enter the value as a percentage of CPU usage. The default value is 20%, but it can vary depending on the platform.

  • EXTENSIONS_MEMORY_MAX_MIB= controls the maximum amount of physical memory that containers can use. Enter the value in megabytes. The default value is 2000, but it can vary depending on the platform. If you modify this value, also specify the swap value EXTENSIONS_MEMORY_SWAP_MAX_MIB=. Note that Linux cgroups do not allow unreasonable values to be set for memory and CPU limits; if the values you set are not reflected in the cgroup, the most likely reason is that they are wrong (possibly very high or very low).

  • EXTENSIONS_MEMORY_SWAP_MAX_MIB= controls the maximum amount of swap memory that containers can use. Enter the value in megabytes. The default value is 15% of available swap space, but it can vary depending on the platform. Set both EXTENSIONS_MEMORY_MAX_MIB= and EXTENSIONS_MEMORY_SWAP_MAX_MIB= if you modify either one. The recommended value for swap is 15% of EXTENSIONS_MEMORY_MAX_MIB=. The actual cgroup value for swap is EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB.

By default, these attributes are set to platform-specific values, so we recommend setting them before starting any containers.
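
For example, to resize the storage partition after the Docker service has already been started (a sketch of the procedure described above; back up any data on /var/extensions first, because deleting the partition discards it):

    # Stop the Docker process and the extensions mount
    systemctl stop docker
    systemctl stop var-extensions.mount

    # Delete the existing partition backing file (data on it is lost)
    rm /var/extensions_fs

    # Edit EXTENSIONS_FS_DEVICE_SIZE_MIB= in /etc/extensions/platform_attributes,
    # then re-create the partition at the new size and restart Docker
    systemctl restart var-extensions.mount
    systemctl start docker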

CAUTION:

Before modifying the resource limits for containers, be aware of the CPU and memory requirements for the scale you have to support in your configuration. Exercise caution when increasing resource limits for containers to prevent them from causing a strain on your system.