    Packet I/O Driver Selection for a cSRX Container

    The cSRX container exchanges packets with the Linux host through a user-space driver over the VETH interface. The choice of packet I/O driver affects the forwarding performance and scalability of a cSRX container. You can launch a cSRX to use either the poll mode driver (the default setting) or the interrupt mode driver to define how packets are exchanged.

    Note: Poll mode is the default setting for the CSRX_PACKET_DRIVER environment variable.

    Table 1 compares the two packet I/O drivers supported by cSRX.

    Table 1: cSRX Poll and Interrupt Mode Driver Comparison

    Performance
        Poll mode driver: Higher forwarding performance per cSRX.
        Interrupt mode driver: Lower forwarding performance per cSRX.

    Scalability
        Poll mode driver: Reduced scalability; supports a single cSRX per vCPU.
        Interrupt mode driver: Improved scalability; supports multiple cSRX containers per vCPU.

    Scenario
        Poll mode driver: Deployment of a cSRX supporting a virtualized network function (VNF).
        Interrupt mode driver: Deployment of a cSRX supporting a large number of concurrent security services.

    This section includes the following topics:

    Specifying the Poll Mode Driver
    Specifying the Interrupt Mode Driver

    Specifying the Poll Mode Driver

    The poll mode driver uses a PCAP-based DPDK driver to poll packets from the Linux VETH driver. Packets are exchanged between user and kernel space by using a Berkeley Packet Filter (BPF). The poll mode driver delivers the best performance for a single cSRX container (for example, when the cSRX is deployed as a VNF).

    Note: When using the poll mode driver, the srxpfe process will always keep a CPU core at 100% utilization, even when the cSRX has no traffic to process.
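    To illustrate why poll mode pins a core, the sketch below busy-polls a non-blocking file descriptor the way a poll mode driver busy-polls its interfaces: it retries the read in a tight loop instead of sleeping, so it consumes 100% of a CPU even with zero traffic. This is an illustrative Python sketch, not cSRX code; the `busy_poll` helper and the pipe standing in for a VETH interface are assumptions for the example.

    ```python
    import os
    import time

    def busy_poll(duration=0.05):
        """Illustrative busy-poll loop: repeatedly attempt a non-blocking
        read on an idle descriptor. A pipe stands in for a VETH interface;
        this is not the actual cSRX driver code."""
        r, w = os.pipe()
        os.set_blocking(r, False)
        spins = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            try:
                os.read(r, 1)        # no data: raises BlockingIOError
            except BlockingIOError:
                spins += 1           # loop spins instead of sleeping
        os.close(r)
        os.close(w)
        return spins

    # Even with no "packets", the loop runs thousands of iterations,
    # which is why srxpfe shows a core at 100% utilization in poll mode.
    print(busy_poll())
    ```

    The spin count grows with the polling window; the cost is constant CPU burn in exchange for the lowest possible packet-pickup latency.
    
    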

    To configure the cSRX container to use the poll mode driver, include the -e CSRX_PACKET_DRIVER="poll" environment variable in the docker run command.

    root@csrx-ubuntu3:~/csrx# docker run -d --privileged --network=mgt_bridge -e CSRX_FORWARD_MODE="routing" -e CSRX_PACKET_DRIVER="poll" -e CSRX_CTRL_CPU="0x1" -e CSRX_DATA_CPU="0x6" --name=<csrx-container-name> <csrx-image-name>

    Specifying the Interrupt Mode Driver

    The interrupt mode driver receives and transmits packets through a packet socket in user space. By using the epoll mechanism provided by the Linux operating system, the srxpfe process can sleep until packets arrive on the VETH interfaces. When there is no packet load on the revenue ports of a cSRX instance, the srxpfe process remains in a sleep state, which conserves CPU resources. With the epoll mechanism, the Linux server can therefore sustain a large number of cSRX instances, particularly when multiple cSRX instances share a CPU; the scheduler tracks which srxpfe processes are busy and allocates CPU resources to them.
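    The epoll pattern described above can be sketched as follows. This is a minimal illustrative example (the real srxpfe process is not Python, and the pipe stands in for a VETH packet socket): the waiter sleeps while no data is pending and wakes only when data arrives.

    ```python
    import os
    import select  # select.epoll is available on Linux

    def wait_for_packet(timeout=1.0):
        """Sketch of the epoll pattern the interrupt mode driver relies on:
        register a descriptor, sleep until it is readable, then consume the
        data. Illustrative only; a pipe stands in for a VETH packet socket."""
        r, w = os.pipe()
        ep = select.epoll()
        ep.register(r, select.EPOLLIN)
        try:
            # No data yet: poll() times out with no events, so the caller
            # sleeps instead of burning CPU (unlike the poll mode driver).
            idle = ep.poll(timeout=0.05)
            os.write(w, b"pkt")               # "a packet arrives"
            ready = ep.poll(timeout=timeout)  # wakes immediately
            data = os.read(r, 3)
            return idle, ready, data
        finally:
            ep.close()
            os.close(r)
            os.close(w)

    idle, ready, data = wait_for_packet()
    print(idle, data)  # [] b'pkt'
    ```

    Because an idle waiter costs essentially no CPU time, many such processes can share one CPU, which is the property that gives the interrupt mode driver its scalability.
    
    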

    When you launch a cSRX instance, you can include the CSRX_CTRL_CPU and CSRX_DATA_CPU environment variables to specify which CPUs run control plane and data plane tasks. The Linux scheduler distributes the srxpfe processes among those CPUs according to their load. See CPU Affinity for a cSRX Container for details on the CSRX_CTRL_CPU and CSRX_DATA_CPU environment variables.
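    The CSRX_CTRL_CPU and CSRX_DATA_CPU values are hexadecimal CPU affinity bitmasks, where bit N selects CPU N. The helper below (an illustrative function, not part of the cSRX tooling) decodes such a mask into the CPU IDs it selects, using the example values from the docker run commands in this section.

    ```python
    def cpus_from_mask(mask_str):
        """Decode a hex CPU affinity mask (e.g., a CSRX_CTRL_CPU or
        CSRX_DATA_CPU value) into the list of CPU IDs whose bits are set.
        Helper name is illustrative, not part of the cSRX tooling."""
        mask = int(mask_str, 16)
        return [cpu for cpu in range(mask.bit_length()) if mask >> cpu & 1]

    print(cpus_from_mask("0x1"))  # [0]    -> control plane on CPU 0
    print(cpus_from_mask("0x6"))  # [1, 2] -> data plane on CPUs 1 and 2
    ```

    For example, the mask "0x6" is binary 110, so bits 1 and 2 are set and the data plane runs on CPUs 1 and 2.
    
    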

    To configure the cSRX container to use the interrupt mode driver, include the -e CSRX_PACKET_DRIVER="interrupt" environment variable in the docker run command.

    root@csrx-ubuntu3:~/csrx# docker run -d --privileged --network=mgt_bridge -e CSRX_FORWARD_MODE="routing" -e CSRX_PACKET_DRIVER="interrupt" -e CSRX_CTRL_CPU="0x1" -e CSRX_DATA_CPU="0x6" --name=<csrx-container-name> <csrx-image-name>

    Modified: 2018-02-05