Running Third-Party Applications in Containers
To run your own applications on Junos OS Evolved, you can deploy them as Docker containers. A container runs on Junos OS Evolved, and your applications run inside the container, which keeps them isolated from the host OS. Containers are installed in a separate partition mounted at /var/extensions, and they persist across reboots and software upgrades.
Docker containers are not integrated into Junos OS Evolved; they are created and managed entirely through Linux by using Docker commands. For more information on Docker containers and commands, see the official Docker documentation: https://docs.docker.com/get-started/
Containers have default limits for the resources that they can use from the system:
- Storage – The size of the /var/extensions partition is platform driven: 8 GB or 30% of the total size of /var, whichever is smaller.
- Memory – Containers have no default physical memory limit. This can be changed.
- CPU – Containers have no default CPU limit. This can be changed.
You can modify the resource limits on containers if necessary. See Modifying Resource Limits for Containers.
Deploying a Docker Container
To deploy a Docker container:
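The exact steps depend on how you obtain the container image. The following is a minimal sketch, assuming you have copied an image archive to the device (the file name my-app.tar.gz and the image name my-app are placeholders):
user@host:~# docker load -i /var/tmp/my-app.tar.gz
user@host:~# docker run -d --name my-app my-app:latest
See the following sections for the additional arguments required for Netlink, PacketIO, and VRF selection.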
Managing a Docker Container
Docker containers are managed through the standard Docker workflow on Linux. Use the docker ps, ps, or top Linux commands to show which Docker containers are running, and use Docker commands to manage the containers. For more information on Docker commands, see: https://docs.docker.com/engine/reference/commandline/cli/
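For example, a typical management workflow looks like the following (the container name my-app is a placeholder):
user@host:~# docker ps                 # list running containers
user@host:~# docker stop my-app        # stop a running container
user@host:~# docker start my-app       # start it again
user@host:~# docker rm my-app          # remove a stopped container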
Junos OS Evolved high availability features are not supported for custom applications in Docker containers. If an application has high availability functionality, you should run the application on each Routing Engine (RE) to ensure it can sync itself. Such an application needs to include the business logic required to manage itself and communicate with all of its instances.
Enabling Netlink or PacketIO in a Container
You need to provide additional arguments to Docker commands if your container requires extra capabilities such as Netlink or PacketIO. On certain releases, you also need to enable the nlsd service to enable Netlink functionality. The following options show how to activate Netlink or PacketIO capabilities for a container by adding arguments to a Docker command; a combined docker run sketch follows the list:
Create a read-only named persistent volume upon starting Docker services. Mounting the jnet volume mounts the libraries required for PacketIO and Netlink functionality over WAN/data ports:
--mount source=jnet,destination=/usr/evo
Share the host's network and IPC namespaces with the container. Containers requiring PacketIO and Netlink functionality over WAN/data ports need to be in the host network and IPC namespaces:
--network=host --ipc=host
Automatically start the container upon system reboot:
--restart=always
Enable the NET_ADMIN capability, which is required by the Netlink and PacketIO libraries:
--cap-add=NET_ADMIN
Set the environment variables required for Netlink and PacketIO over WAN/data ports:
--env-file=/run/docker/jnet.env
Mount the jtd0 device from the host to the container to help with PacketIO:
--device=/dev/jtd0
Mount the host’s /dev/shm directory to the container for Netlink and PacketIO over WAN/data ports:
-v /dev/shm:/dev/shm
If multicast group management is required by the container application, mount the /dev/mcgrp directory from the host to the container:
-v /dev/mcgrp:/dev/mcgrp
Starting in Junos OS Evolved Release 24.1R1, containers in the host network namespace that need DNS resolution must pass the --dns ::1 option to the docker run command. This is not required for Junos OS Evolved Release 23.4 and earlier:
--dns ::1
If your container requires Netlink-related processing, you also need to enable the Netlink asynchronous API (nlsd) process in Junos OS Evolved with the following CLI configuration:
[edit]
user@host# set system processes nlsd enable
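For example, the arguments above can be combined into a single docker run command. The following is a sketch only; the image and container name my-app are placeholders, and you should include only the options that your application actually needs:
user@host:~# docker run -d --name my-app --restart=always --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/shm:/dev/shm -v /dev/mcgrp:/dev/mcgrp --env-file=/run/docker/jnet.env --dns ::1 my-app:latest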
Native Linux or container-based applications that require PacketIO and Netlink functionality should be dynamically linked. We recommend using Ubuntu-based Docker containers, as they are the only containers officially qualified by Juniper Networks. Ubuntu-based containers should use a glibc that is compatible with the glibc of the base Junos OS Evolved.
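As a quick sanity check (a sketch, assuming ldd is available in both environments), you can compare the glibc version on the host with the one inside the container:
user@host:~# ldd --version | head -1          # glibc of the base Junos OS Evolved
root@my-container:/# ldd --version | head -1  # glibc inside the Ubuntu-based container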
Selecting a VRF for a Docker Container
For Junos OS Evolved Releases 23.4R1 and earlier, containers inherit virtual routing and forwarding (VRF) from the Docker process. To run containers in a distinct VRF, a Docker process instance needs to be started in the corresponding VRF. The docker@vrf.service instance allows for starting a process in the corresponding VRF. If the VRF is unspecified, the VRF defaults to vrf0. The docker.service runs in vrf:none by default.
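For example, to start a Docker process instance in vrf0 (a sketch, assuming the systemd template instance name matches the Linux VRF name):
[vrf:none] user@host:~# systemctl start docker@vrf0.service
[vrf:none] user@host:~# systemctl status docker@vrf0.service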
For Junos OS Evolved Releases 24.1R1 and later, we recommend binding a specific task within the container to a specific Linux VRF by using the ip vrf exec command. This requires the container to be started with the --privileged option, and the container needs to have a compatible version of iproute2 installed. The container should also share the network namespace with the host. Alternatively, you can use the socket option SO_BINDTODEVICE to bind the socket for a specific task or application within the container to a specific Linux VRF device, in which case iproute2 is not needed.
The ip vrf show command lists all available Linux VRFs. If you choose to bind the sockets for a task within the container to a VRF using iproute2, we recommend overriding some environment variables by using --env-file=/run/docker-vrf0/jnet.env, so that libnli.so is not preloaded and does not interfere with iproute2.
You can launch a container and bind the socket associated with the container's task to the default VRF vrf0 with the following commands:
[vrf:none] user@host:~# docker -H unix:///run/docker-vrf0.sock run --rm -it --privileged --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/mcgrp:/dev/mcgrp -v /dev/shm:/dev/shm --env-file=/run/docker-vrf0/jnet.env --dns ::1 debian:stretch bash
# explicitly preload libsi.so and avoid libnli.so; bind ping's socket to the default VRF (vrf0)
[vrf:none] user@host: my-container/# LD_PRELOAD=libsi.so.0 ip vrf exec vrf0 ping 1.2.3.4
With this approach, different sockets associated with different tasks within the container can be bound to different VRFs, rather than having all sockets bound to a single VRF.
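For example, inside the same container you could bind different tasks to different VRFs (a sketch, assuming a second Linux VRF named vrf1 exists and iproute2 is installed in the container; the addresses are placeholders):
[vrf:none] user@host: my-container/# LD_PRELOAD=libsi.so.0 ip vrf exec vrf0 ping 1.2.3.4
[vrf:none] user@host: my-container/# LD_PRELOAD=libsi.so.0 ip vrf exec vrf1 ping 5.6.7.8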
The Docker process for a specific VRF listens on a corresponding socket located at /run/docker-vrf.sock, where vrf is the VRF name (for example, /run/docker-vrf0.sock).
This is the VRF as seen on Linux, not the Junos OS Evolved VRF. The evo_vrf_name utility (available starting in Junos OS Evolved Release 24.1) can be used to find the Linux VRF that corresponds to a Junos OS Evolved VRF.
The Docker client is associated with the VRF-specific Docker process by using the following arguments:
--env-file /run/docker-vrf/jnet.env --host unix:///run/docker-vrf.sock
or
export DOCKER_HOST=unix:///run/docker-vrf.sock
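For example, to associate the Docker client with the Docker process for vrf0:
[vrf:none] user@host:~# export DOCKER_HOST=unix:///run/docker-vrf0.sock
[vrf:none] user@host:~# docker ps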
For example, to run a container in vrf0, enter the following Docker command and arguments:
[vrf:none] user@host:~# docker -H unix:///run/docker-vrf0.sock run --rm -it --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/mcgrp:/dev/mcgrp -v /dev/shm:/dev/shm --env-file=/run/docker-vrf0/jnet.env --dns ::1 debian:stretch ip link
1002: et-01000000000: <BROADCAST,MULTICAST,UP> mtu 1514 state UP qlen 1
    link/ether ac:a:a:18:01:ff brd ff:ff:ff:ff:ff:ff
1001: mgmt-0-00-0000: <BROADCAST,MULTICAST,UP> mtu 1500 state UP qlen 1
    link/ether 50:60:a:e:08:bd brd ff:ff:ff:ff:ff:ff
1000: lo0_0: <LOOPBACK,UP> mtu 65536 state UP qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
A container can only be associated with a single VRF.
Modifying Resource Limits for Containers
The default resource limits for containers are controlled through a file located at /etc/extensions/platform_attributes. You will see the following text upon opening this file:
## Edit to change upper cap of total resource limits for all containers.
## applies only to containers and does not apply to container runtimes.
## memory.memsw.limit_in_bytes = EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB:-0
## please start extensions-cglimits.service to apply changes to CPU and Memory values here
## please restart var-extensions.mount to apply changes to partition resize here
## make sure the docker daemon is stopped before changing mount size
## For changing EXTENSIONS_FS_DEVICE_SIZE_MIB, please also remove file rm /var/extensions_fs
## make sure to create a backup before partition resize
## check current defaults, after starting extensions-cglimits.service
## $ /usr/libexec/extensions/extensions_cglimits get
## you can also set current values like this as an alternative to starting extensions-cglimits.service
## $ /usr/libexec/extensions/extensions_cglimits set
## if you set one of the memory values, set the other one as well – mandated by cgroup
## device size limit will be ignored once extensionsfs device is created
#EXTENSIONS_FS_DEVICE_SIZE_MIB=
#EXTENSIONS_CPU_QUOTA_PERCENTAGE=
#EXTENSIONS_MEMORY_MAX_MIB=
#EXTENSIONS_MEMORY_SWAP_MAX_MIB=
To change the resource limits for containers, add values to the EXTENSIONS entries at the bottom of the file. Make sure to do this prior to starting the Docker process.
- EXTENSIONS_FS_DEVICE_SIZE_MIB= controls the maximum storage space that containers can use. Enter the value in megabytes. The default value is 8000 or 30% of the total size of /var, whichever is smaller. Make sure to add this entry before starting the Docker process for the first time. If you need to change this value later on, you will need to delete the existing partition, which can lead to loss of data on this partition. If this storage partition needs to be changed after the Docker service has already been started, first stop the Docker process with the systemctl stop docker command, then delete the existing partition with the systemctl stop var-extensions.mount command followed by the rm /var/extensions_fs command. Once this attribute has been changed, start the Docker process again and the new partition with the specified size will be created. You can also restart var-extensions.mount with the systemctl restart var-extensions.mount command to achieve the same result. We suggest taking a backup of the partition to avoid losing important data. We do not recommend increasing this value beyond 30% of the /var partition, as this can affect the normal functioning of Junos OS Evolved.
- EXTENSIONS_CPU_QUOTA_PERCENTAGE= controls the maximum CPU usage that containers can use. Enter a value as a percentage of CPU usage. The default value is 20%, but it can vary depending on the platform.
- EXTENSIONS_MEMORY_MAX_MIB= controls the maximum amount of physical memory that containers can use. Enter the value in megabytes. The default value is 2000, but it can vary depending on the platform. If this value needs to be modified, the swap value EXTENSIONS_MEMORY_SWAP_MAX_MIB= should also be specified. Note that the Linux cgroup does not allow unreasonable values to be set for memory and CPU limits. If the values you set are not reflected in the cgroup, the most likely reason is that the values are wrong (possibly very high or very low).
- EXTENSIONS_MEMORY_SWAP_MAX_MIB= controls the maximum amount of swap memory that containers can use. Enter the value in megabytes. The default value is 15% of available swap space, but it can vary depending on the platform. Both EXTENSIONS_MEMORY_MAX_MIB= and EXTENSIONS_MEMORY_SWAP_MAX_MIB= should be set if either one is being modified. The recommended value for swap is 15% of EXTENSIONS_MEMORY_MAX_MIB=. The actual cgroup value for swap is EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB.
By default, these attributes are set to platform-specific values, so we recommend setting the values before starting containers.
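For example, the following sketch caps container memory at 4000 MiB with a 600 MiB swap allowance (the values are illustrative only). Edit /etc/extensions/platform_attributes to set:
EXTENSIONS_MEMORY_MAX_MIB=4000
EXTENSIONS_MEMORY_SWAP_MAX_MIB=600
Then apply and verify the limits:
user@host:~# systemctl start extensions-cglimits.service
user@host:~# /usr/libexec/extensions/extensions_cglimits get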
Before modifying the resource limits for containers, be aware of the CPU and memory requirements for the scale you have to support in your configuration. Exercise caution when increasing resource limits for containers to prevent them from causing a strain on your system.