Running Third-Party Applications in Containers
To run your own applications on Junos OS Evolved, you can deploy them inside a Docker container. The container runs on Junos OS Evolved, and your applications run inside the container, keeping them isolated from the host OS. Containers are installed in a separate partition mounted at /var/extensions.
Docker containers are not integrated into Junos OS Evolved; they are created and managed entirely through Linux by using Docker commands. For more information on Docker containers and commands, see the official Docker documentation: https://docs.docker.com/get-started/
Containers have default limits for the resources that they can use from the system:
Storage – The size of the /var/extensions partition is platform driven: 8GB or 30% of the total size of /var, whichever is smaller.
Memory – Containers have a default limit of 2GB or 10% of total physical memory, whichever is smaller.
CPU – Containers have a default limit of 20% max CPU use across all cores.
You can modify the resource limits on containers if necessary. See Modifying Resource Limits for Containers.
Deploying a Docker Container
To deploy a Docker container:
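The individual deployment steps are not listed here. As a minimal sketch under generic assumptions (a public debian image and the default Docker daemon; the image and container names are placeholders, not values from this document), a basic deployment might look like:

```shell
# Pull the container image (requires network reachability from the
# instance running the Docker daemon).
docker pull debian:stable

# Start the container in the background; --restart=always keeps it
# running across system reboots.
docker run -d --name my-app --restart=always debian:stable sleep infinity

# Confirm the container is running.
docker ps --filter name=my-app
```

If the container needs Netlink, Packet IO, or a specific VRF, add the arguments described in the sections below.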
Managing a Docker Container
Docker containers are managed through the standard Linux workflow. Use the ps or top Linux commands to show which Docker containers are running, and use Docker commands to manage the containers. For more information on Docker commands, see: https://docs.docker.com/engine/reference/commandline/cli/
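As a quick reference, these are some commonly used Docker management commands (the container name my-app is a placeholder, not a value from this document):

```shell
# List running containers; add -a to include stopped ones.
docker ps

# Stop, start, or remove a container by name.
docker stop my-app
docker start my-app
docker rm -f my-app

# Inspect a container's logs and a point-in-time view of its resource usage.
docker logs my-app
docker stats --no-stream my-app
```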
Junos OS Evolved high availability features are not supported for custom applications in Docker containers. If an application has high availability functionality, you should run the application on each Routing Engine (RE) to ensure it can sync itself.
Enabling Netlink or Packet IO in a Container
You need to provide additional arguments to Docker commands if your container requires extra capabilities like Netlink or Packet IO. The following example shows how to activate Netlink or Packet IO capabilities for a container by adding arguments to a Docker command:
Create a read-only named persistent volume upon starting Docker services:
--mount source=jnet,destination=/usr/evo
Share the host’s network namespace with the container process:
--network=host
Automatically start the container upon system reboot:
--restart=always
Enable net admin capability, which is required by Netlink and Packet IO libraries:
--cap-add=NET_ADMIN
Enable the environmental variables required for Netlink and Packet IO:
--env-file=/run/docker/jnet.env
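Taken together, the arguments above might be combined into a single docker run invocation. This is a sketch only: the image (debian:stretch) and command (my-agent) are placeholders for your own application, and the container name is illustrative.

```shell
# Run a container with Netlink/Packet IO capabilities enabled:
# host networking, NET_ADMIN capability, the jnet volume, and the
# environment variables required by the Netlink and Packet IO libraries.
docker run -d --name netlink-app \
    --network=host \
    --restart=always \
    --cap-add=NET_ADMIN \
    --mount source=jnet,destination=/usr/evo \
    --env-file=/run/docker/jnet.env \
    debian:stretch my-agent
```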
Selecting a VRF for a Docker Container
Containers inherit virtual routing and forwarding (VRF) from the Docker daemon. In order to run containers in a distinct VRF, a Docker daemon instance needs to be started in the corresponding VRF. The docker@vrf.service instance allows for starting a daemon in the corresponding VRF. If the VRF is unspecified, the VRF defaults to vrf0.
The docker.service runs in vrf:none by default.
The Docker daemon for a specific VRF listens on the corresponding socket located at /run/docker-vrf.sock.
The Docker client gets associated with the VRF-specific Docker daemon by using the following arguments:
--env-file /run/docker-vrf/jnet.env --host unix:///run/docker-vrf.sock

or

export DOCKER_HOST=unix:///run/docker-vrf.sock
For example, to run a container in vrf0, enter the following Docker command and arguments:
[vrf:none] user@host:~# docker -H unix:///run/docker-vrf0.sock run --rm -it --network=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --env-file=/run/docker-vrf0/jnet.env debian:stretch ip link
1002: et-01000000000: <BROADCAST,MULTICAST,UP> mtu 1514 state UP qlen 1
    link/ether ac:a:a:18:01:ff brd ff:ff:ff:ff:ff:ff
1001: mgmt-0-00-0000: <BROADCAST,MULTICAST,UP> mtu 1500 state UP qlen 1
    link/ether 50:60:a:e:08:bd brd ff:ff:ff:ff:ff:ff
1000: lo0_0: <LOOPBACK,UP> mtu 65536 state UP qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
A container can only be associated to a single VRF.
Modifying Resource Limits for Containers
The default resource limits for containers are controlled through a file located at /etc/extensions/platform_attributes. You will see the following text upon opening this file:
## Edit to change upper cap of total resource limits for all containers.
## applies only to containers and does not apply to container runtimes.
## memory.memsw.limit_in_bytes = EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB:-0
## check current defaults, after starting extensions-cglimits.service
## $ /usr/libexec/extensions/extensions-cglimits get
## please start extensions-cglimits.service to apply changes here
## device size limit will be ignored once extensionsfs device is created
#EXTENSIONS_FS_DEVICE_SIZE_MIB=
#EXTENSIONS_CPU_QUOTA_PERCENTAGE=
#EXTENSIONS_MEMORY_MAX_MIB=
#EXTENSIONS_MEMORY_SWAP_MAX_MIB=
To change the resource limits for containers, add values to the EXTENSIONS entries at the bottom of the file:

EXTENSIONS_FS_DEVICE_SIZE_MIB= – controls the maximum storage space that containers can use. Enter the value in bytes. The default value is 8GB or 30% of the total size of /var, whichever is smaller.

EXTENSIONS_CPU_QUOTA_PERCENTAGE= – controls the maximum CPU usage that containers can use. Enter a value as a percentage of CPU usage. The default value is 20% max CPU use across all cores.

EXTENSIONS_MEMORY_MAX_MIB= – controls the maximum amount of physical memory that containers can use. Enter the value in bytes. The default value is 2GB or 10% of total physical memory, whichever is smaller.
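For illustration, an edited /etc/extensions/platform_attributes might look like the fragment below. The numeric values are examples only, not recommendations from this document; per the comments in the file itself, start extensions-cglimits.service afterward to apply the changes and verify the result.

```shell
# Example /etc/extensions/platform_attributes entries (illustrative values).
EXTENSIONS_FS_DEVICE_SIZE_MIB=10240
EXTENSIONS_CPU_QUOTA_PERCENTAGE=25
EXTENSIONS_MEMORY_MAX_MIB=3072

# Apply and verify, as described in the file's own comments:
#   systemctl start extensions-cglimits.service
#   /usr/libexec/extensions/extensions-cglimits get
```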
Before modifying the resource limits for containers, be aware of the CPU and memory requirements for the scale you have to support in your configuration. Exercise caution when increasing resource limits for containers to prevent them from causing a strain on your system.