Architecture of the Services SDK

The control and data components in an application created with the Services SDK can each be run on multiple Multiservices PICs.

This allows you to scale your applications over time. (For additional details about the control and data components in applications, see Functionality in the Services SDK.)

The following figure gives an overview of the Services SDK architecture.

[Figure: Overview of the Services SDK Architecture (services-architecture-g017460.gif)]

The following figure shows how a complex application could be structured to run on the Routing Engine and on the control and data components. (Not all of the functionality shown here is available through libraries in this release; see the release notes for details on which libraries are currently available.)

[Figure: Architecture of a Complex Application (mp-complex-app-g017462.gif)]

Architecture of the Multiservices PIC CPU

Each Multiservices PIC contains eight processing cores, each with four hardware threads.

Each hardware thread is a virtual CPU (vCPU). The system distributes the cycles of idle vCPUs to other vCPUs in the same core. How the vCPUs are used depends on the type of core (data, control, or user), as described in Allocating Cores below.

The following figure shows one PIC with three of the possible eight cores (for simplification): one data core and two control cores. The four virtual CPUs in the data core are shown as vCPUn through vCPU(n+3).

[Figure: Multiservices PIC Architecture (pic-packet-flow-g016850.gif)]

Allocating Cores

When you configure your system, you can allocate up to seven cores for data; core 0 is reserved by the system for running internal system processes. Data cores run packet loops and can access raw packets. (For introductory information about configuring the system, see System and User Interface Configuration; for a sample system configuration, see Setting Up and Running the Application in the documentation for the sample gateway application.)

Although it is not mandatory to designate any cores as data cores, it is advisable to designate a minimum of five to achieve good performance, depending on the nature of the application.

You must also allocate at least one (and up to six) control cores; these run the JUNOS operating system, have a full protocol stack, and behave as a single multi-processing system.

Finally, you can leave one or more of the remaining cores unallocated as user cores, which can perform any dedicated, non-packet-related processing your application needs.
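The following configuration fragment sketches one possible allocation on an eight-core PIC: core 0 stays reserved, one core runs control, five run data, and one is left over as a user core. The PIC location, the package name (my-data-app), and the core counts are placeholders, and the exact statements available under the extension-provider hierarchy can vary by JUNOS release, so verify them against System and User Interface Configuration.

chassis {
    fpc 1 {
        pic 2 {
            adaptive-services {
                service-package {
                    extension-provider {
                        control-cores 1;
                        data-cores 5;
                        package my-data-app;
                    }
                }
            }
        }
    }
}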

For additional assistance in designing your application architecture, contact JUNOS SDK Developer Support.

Packet Processing Architecture

Control packets are delivered through the JUNOS IP network stack.

For data packets, processing is done within a packet loop that you write, which includes code to retrieve a packet, read and manipulate it, and then forward or drop it. The system performs packet I/O through shared FIFO queues; the packet loop blocks on the input FIFO until a packet is available. Packet transmission, in contrast, is non-blocking: a send either succeeds or fails immediately.
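The skeleton below sketches the shape of such a loop. The function names (pkt_fifo_recv(), pkt_fifo_send(), process_packet(), drop_packet()) are placeholders rather than the actual SDK API; the real data-handling calls and FIFO types are documented in Using the Services SDK Data Handling Functions.

/*
 * Minimal sketch of a data-core packet loop.  All functions and types
 * below except struct jbuf stand in for the Services SDK data-handling
 * API; see Using the Services SDK Data Handling Functions for the real
 * calls and signatures.
 */
#include <stdbool.h>

struct jbuf;                    /* packet descriptor (see jbuf library) */
struct fifo_handle;             /* shared FIFO registered with the kernel */

/* Placeholder prototypes for illustration only. */
extern struct jbuf *pkt_fifo_recv(struct fifo_handle *fh);                   /* blocks until a packet arrives */
extern int          pkt_fifo_send(struct fifo_handle *fh, struct jbuf *pkt); /* non-blocking send */
extern bool         process_packet(struct jbuf *pkt);                        /* application logic */
extern void         drop_packet(struct jbuf *pkt);                           /* release the jbuf */

static void
packet_loop(struct fifo_handle *rx, struct fifo_handle *tx)
{
    for (;;) {
        /* Block on the input FIFO until a packet is available. */
        struct jbuf *pkt = pkt_fifo_recv(rx);

        /* Read and manipulate the packet, then forward or drop it. */
        if (process_packet(pkt)) {
            /* Transmission is non-blocking: it either succeeds or fails. */
            if (pkt_fifo_send(tx, pkt) != 0) {
                drop_packet(pkt);
            }
        } else {
            drop_packet(pkt);
        }
    }
}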

The queues are allocated at application initialization time. Applications call SDK functions to register the queues with the kernel for receiving packets destined to the application.

For details about using the SDK libraries to process packets and allocate FIFOs, see Using the Services SDK Data Handling Functions.

The system implements packet loops as POSIX threads (pthreads). A pthread is always associated with a vCPU. Each pthread has complete access to shared data from other pthreads.

The default size of the shared FIFO queues is 1023 jbufs. A jbuf is a data structure that describes a block of data that can vary in size depending on its contents. Jbufs describe packets, in addition to other data relevant to network protocol handling. (For additional information, see Using the Services SDK jbuf Library for Packet Processing.)
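As a rough illustration of how a packet loop might read header data out of a jbuf, the fragment below first makes the IP header contiguous and then examines it. The calls jbuf_make_contiguous() and jbuf_data_ptr() are placeholders for the jbuf library's pullup and data-access routines; their real names and signatures are given in the jbuf documentation referenced above.

#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/ip.h>

struct jbuf;    /* opaque packet descriptor from the jbuf library */

/* Placeholder prototypes standing in for the jbuf library calls. */
extern struct jbuf *jbuf_make_contiguous(struct jbuf *pkt, unsigned int len);
extern void        *jbuf_data_ptr(struct jbuf *pkt);

/* Return the IP protocol number of the packet, or -1 on failure. */
static int
packet_ip_protocol(struct jbuf *pkt)
{
    /* Ensure the first sizeof(struct ip) bytes are in one contiguous block. */
    pkt = jbuf_make_contiguous(pkt, sizeof(struct ip));
    if (pkt == NULL)
        return -1;

    struct ip *iph = jbuf_data_ptr(pkt);
    return iph->ip_p;    /* for example, IPPROTO_TCP or IPPROTO_UDP */
}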

Note:
The FIFO queues and the functions that access them are only for packet processing; applications cannot use these queues for other purposes.

Multiservices PIC Startup

The Multiservices PIC does not have any persistent storage of its own: all of its file systems are either memory file systems (MFS) or NFS mounts. The PIC software is loaded in the following sequence:

  1. The PIC boots from the loader image (mpsdk.jbf).

  2. The loader decompresses and loads the attached kernel.

  3. The kernel mounts a read-only root file system in the PIC's memory.

  4. Once the file system is mounted, the system decompresses and mounts the packages containing the SDK.

Scope and Persistency Considerations

The Routing Engine has a global view of the router: its kernel knows about all the interfaces in the router, and data (BLOBs) is replicated to the backup Routing Engine. Its file system is persistent, residing on hard disk or flash storage.

The Multiservices PIC has a local view of the PIC's interfaces only. Data is kept only locally, and survives daemon restart but not PIC reboot. The file system is in-memory only and does not survive PIC reboot.

In most cases, the system restarts data applications by default, without rebooting the PIC. For details, see Application Restart.

Reliability and Performance Considerations

By default, in production use, writes from the Multiservices PIC to the NFS-mounted Routing Engine file systems are asynchronous and tuned for maximum performance. For development purposes, or wherever more reliable writes are needed, you can trade some performance for added reliability by setting the NFS version 3 commit-on-close option through the related sysctl call on the Multiservices PIC. This makes write accesses more reliable, particularly if the PIC reboots between the NFS CLOSE RPC and the NFS COMMIT RPC that would normally follow it.

Note that even if the Multiservices PIC crashes before issuing an NFS COMMIT RPC, as long as the Routing Engine does not crash at the same time, the data will be written to disk.

Also, an application that issues fsync(2) and waits for a successful return will always have data on stable storage.
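For instance, an application that must know a record has reached the Routing Engine's disk before proceeding can pair the write with fsync(2), as in the sketch below (the file path and record layout are purely illustrative):

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Append a record to a file on the NFS-mounted Routing Engine file system
 * and wait until it reaches stable storage.  Returns 0 on success, -1 if
 * the write or the commit failed.
 */
static int
write_record_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;      /* data may not be on stable storage */
    }
    return close(fd);   /* fsync(2) succeeded: data is durable */
}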

The Multiservices PIC does not store time persistently: it always starts from a default time when rebooting. To set the time properly, you must enable NTP in the configuration. For example:

system {
    ntp {           
        boot-server 10.227.2.100;
        server 10.227.2.100 prefer;
        server 10.227.2.101;
    }
}
