Route Manipulation and Management

The SDK service daemon (ssd) runs on the Routing Engine and serves both SDK and non-SDK applications, either on the Routing Engine or on the Multiservices PIC.

ssd provides controlled access to Junos system resources from daemons deployed as part of an SDK application. The following figure summarizes how ssd works:

[Figure: SDK Service Daemon]

The SDK libssd library communicates with ssd and provides the ability to add, delete, and manage both simple routes and service routes that point to the Multiservices PIC (both kinds are described in Route Management Using ssd, below). The library also supports graceful restart of client applications.

For a code example that shows how to use the SDK Service Daemon library, see Route Manager Application. For details about debugging ssd functionality in your applications, see Debugging the SDK Service Daemon.

SDK Service Daemon Basics

ssd hides the complexity of the Junos route and event management system. The following diagram illustrates how ssd works with client applications:

[Figure: The TCP-Based IPC Mechanism Provided by libssd]

Route Management Using ssd

You can use libssd to handle two kinds of routes: simple routes and service routes (routes pointing to the Multiservices PIC).

A simple route has no next-hop information that needs to be added separately to the kernel. These routes are managed by the routing protocol daemon (rpd), which handles all the routes and all the next hops, and are modified using the functions provided by librpd. The following diagram depicts simple route addition using libssd; a code sketch follows the figure.

[Figure: Simple Route Addition]
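
A minimal sketch of the client side of this exchange follows, assuming ssd_fd is a descriptor already connected to ssd with route service set up. ssd_request_route_add() is the libssd request used by the Route Manager Application; the ssd_route_parms fields (destination prefix, preference, and so on) are documented in the Library Reference and are assumed here to be populated by the caller. Treat the exact signature as an assumption to verify against ssd_ipc.h.

#include <jnx/ssd_ipc.h>   /* libssd IPC declarations */

/* Sketch: request addition of a simple route. The call is
 * asynchronous; ssd relays the request to rpd, and the acknowledgement
 * arrives later in the client's message handler. req_ctx is an
 * application-chosen value echoed back in that acknowledgement. */
static int
add_simple_route (int ssd_fd, struct ssd_route_parms *rtp,
                  unsigned int req_ctx)
{
    return ssd_request_route_add(ssd_fd, rtp, req_ctx);
}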

For routes pointing to the Multiservices PIC (sometimes called service routes), the next-hop information must be passed from the clients along with the routes. There are two steps in this process:

  1. ssd adds the next hop in the kernel as requested by the client, and the client receives the ID of the added next hop through libssd. As part of next-hop addition, clients can specify the following:

    1. The type of packet distribution required for this next hop. Two types are supported: round-robin and socket affinity. The default value is round-robin.

    2. Application-specific data: Clients have the option to specify data to be added in every packet that is forwarded to the Multiservices PIC as a result of this next hop. ssd processes this as opaque data and transparently adds it to the next hop.

  2. rpd then adds the route with the valid next-hop ID that was specified in the request.

The following diagrams illustrate adding service routes on the Multiservices PIC; a code sketch follows the figures.

Step 1

[Figure: Adding a Service Route, Step 1]

Step 2: This step is similar to the simple route addition, passing a valid next-hop ID with the route add request.

[Figure: Adding a Service Route, Step 2]
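
The sketch below shows how the two steps fit together in client code. Only ssd_request_nexthop_add() and ssd_request_route_add() are libssd calls; the four-argument form of the next-hop request (descriptor, client-chosen next-hop identifier, parameters, request context) follows the Route Manager sample and should be verified against ssd_ipc.h. app_build_route_parms() is a hypothetical helper.

#include <string.h>
#include <jnx/ssd_ipc.h>

/* Hypothetical helper: fills rtp with the destination prefix and the
 * next-hop ID returned by ssd in the step-1 acknowledgement. */
extern void app_build_route_parms (struct ssd_route_parms *rtp, int nh_id);

/* Step 1: ask ssd to add the next hop in the kernel. The resulting
 * next-hop ID arrives asynchronously in the client's message handler. */
static int
service_route_step1 (int ssd_fd, int client_nh_id,
                     struct ssd_nh_add_parms *nh_parms, unsigned int req_ctx)
{
    return ssd_request_nexthop_add(ssd_fd, client_nh_id, nh_parms, req_ctx);
}

/* Step 2: invoked after the next-hop acknowledgement delivers nh_id;
 * the route add now carries a valid next-hop ID, as in the simple case. */
static int
service_route_step2 (int ssd_fd, int nh_id, unsigned int req_ctx)
{
    struct ssd_route_parms rtp;

    memset(&rtp, 0, sizeof(rtp));
    app_build_route_parms(&rtp, nh_id);
    return ssd_request_route_add(ssd_fd, &rtp, req_ctx);
}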

Applications can install next-hop table entries in the routing table for specified addresses, which is useful in multitopology routing. Applications can use libssd to add a table next hop, or they can use dynamic firewall filter actions to redirect traffic to a routing instance. For details on both, see Directing Traffic to a Different Routing Table.

Support for Graceful Restart

ssd handles client restarts as illustrated in the following diagram. Acknowledgements are asynchronous to requests, and multiple outstanding transactions are permitted from the clients; a bookkeeping sketch follows the diagram.

[Figure: Handling Client Restarts]
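
Because acknowledgements are asynchronous and several transactions can be outstanding at once, a client typically tracks its in-flight requests by the context value it passes with each request and matches each acknowledgement against that table. The following sketch is entirely hypothetical application-side code (MAX_PENDING, struct pending_req, and both functions are illustrative, not part of libssd):

#define MAX_PENDING 64   /* hypothetical cap on in-flight requests */

/* One entry per outstanding ssd transaction, keyed by request context. */
struct pending_req {
    unsigned int ctx;      /* context passed with the request */
    int          in_use;   /* nonzero while the request is outstanding */
};

static struct pending_req pending[MAX_PENDING];

/* Record a request before sending it to ssd; returns 0 if the table is full. */
static int
pending_add (unsigned int ctx)
{
    int i;

    for (i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].in_use) {
            pending[i].ctx = ctx;
            pending[i].in_use = 1;
            return 1;
        }
    }
    return 0;   /* too many outstanding transactions */
}

/* Clear the matching entry when an acknowledgement arrives. */
static void
pending_complete (unsigned int ctx)
{
    int i;

    for (i = 0; i < MAX_PENDING; i++) {
        if (pending[i].in_use && pending[i].ctx == ctx) {
            pending[i].in_use = 0;
            return;
        }
    }
}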

The scenarios for graceful restart handling, which apply to both simple routes and service routes, are as follows:

To support graceful restart, the client must store all the routes it manages in persistent storage. Two scenarios are possible:

  1. The client reconnects before the restart window expires: the previous route service is still valid. If the application detects a change in its route database, it is advisable to replay (re-add) the previously added routes using the SSD_ROUTE_ADD_FLAG_OVERWRITE flag.

  2. The client reconnects after more than 15 seconds have elapsed: upon receiving a connection setup request with the previously used client ID, ssd sends the route service reset message SSD_ROUTE_SERVICE_RESET. At this point, the previous route service is invalid and previously added routes have been purged. The client must then request a new route service by calling ssd_setup_route_service() and re-add the routes without the overwrite flag.

A sketch of this recovery logic follows.
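
In this sketch, only SSD_ROUTE_SERVICE_RESET, SSD_ROUTE_ADD_FLAG_OVERWRITE, and ssd_setup_route_service() are libssd names (the single-argument call is an assumption to verify against ssd_ipc.h); app_replay_routes() is a hypothetical helper that re-adds every route from the client's persistent store with the given route-add flags.

#include <jnx/ssd_ipc.h>

/* Hypothetical helper: re-adds all stored routes with the given flags. */
extern void app_replay_routes (int ssd_fd, unsigned int add_flags);

/* Sketch of restart recovery. got_service_reset is nonzero if the
 * client received SSD_ROUTE_SERVICE_RESET after reconnecting. */
static void
recover_after_restart (int ssd_fd, int got_service_reset)
{
    if (!got_service_reset) {
        /* Reconnected in time: the route service is still valid, but
         * the route database may have changed, so replay the routes
         * with the overwrite flag. */
        app_replay_routes(ssd_fd, SSD_ROUTE_ADD_FLAG_OVERWRITE);
    } else {
        /* Old routes were purged: request a fresh route service... */
        ssd_setup_route_service(ssd_fd);
        /* ...and, once the setup acknowledgement arrives, re-add the
         * routes without the overwrite flag. */
        app_replay_routes(ssd_fd, 0);
    }
}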

If the Multiservices PIC reboots or is taken offline and then restored, applications must use ssd to delete and re-add all routes and next hops.

Using libssd

To use libssd, you first configure the route in the CLI (see Route Configuration). You then call functions to communicate with ssd, which allow you to add, delete, and manage routes programmatically. For a sample application that performs these tasks, see Route Manager Application.

The ability to distribute packets using socket affinity and the ability to pass opaque data to a next hop are two advantages of using libssd on the Multiservices PIC.

Flow Affinity

Data packets are delivered to data processing CPUs using one of two algorithms: round-robin distribution, which is the default, or flow affinity (also called socket affinity), which delivers all packets belonging to a given flow to the same CPU.

You specify flow affinity distribution by setting the pkt_dist_type field in the ssd_nh_add_parms structure to SSD_NEXTHOP_PKT_DIST_FA before sending the next-hop request. (You can also specify flow affinity as part of the configuration; for more information, see Flow Affinity on the Data Plane.)

Both packet distribution and opaque data are specified in the ssd_nh_add_parms structure that you pass to the ssd_request_nexthop_add() function in libssd. That structure is defined as follows:

/**
 * Structure for next-hop addition
 */
struct ssd_nh_add_parms {
    ifl_idx_t               ifl;           /* IFL for the next hop */
    struct ssd_nh_opq_data  opq_data;      /* Opaque data to be added to the next hop */
    u_int8_t                pkt_dist_type; /* SSD_NEXTHOP_PKT_DIST_RR: round-robin
                                              SSD_NEXTHOP_PKT_DIST_FA: socket affinity */
};

Handling Opaque Data

You use the following structure to pass opaque data to a next hop:

struct ssd_nh_opq_data {
    char *data;    /* Pointer to the opaque data */
    int   len;     /* Length of the opaque data in bytes */
};
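
Putting the two structures together, a client might fill them in as follows before issuing the next-hop request. This is a minimal sketch: the cookie contents and IFL index are placeholders supplied by the caller, and the call form is the same assumption noted in the earlier two-step sketch.

#include <string.h>
#include <jnx/ssd_ipc.h>

/* Sketch: request a next hop on the Multiservices PIC with flow
 * affinity and an application cookie attached as opaque data.
 * pic_ifl is the IFL index of the target PIC (obtaining it is
 * outside the scope of this sketch). */
static int
request_pic_nexthop (int ssd_fd, ifl_idx_t pic_ifl,
                     char *cookie, int cookie_len,
                     int client_nh_id, unsigned int req_ctx)
{
    struct ssd_nh_add_parms parms;

    memset(&parms, 0, sizeof(parms));
    parms.ifl = pic_ifl;                            /* IFL for the next hop */
    parms.pkt_dist_type = SSD_NEXTHOP_PKT_DIST_FA;  /* flow (socket) affinity */

    /* ssd treats this as opaque and adds it to every packet forwarded
     * to the PIC as a result of this next hop. */
    parms.opq_data.data = cookie;
    parms.opq_data.len  = cookie_len;

    return ssd_request_nexthop_add(ssd_fd, client_nh_id, &parms, req_ctx);
}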

For details on specifying flow affinity and handling opaque data, see the Library Reference documentation for libssd. The libssd functions are declared in sandbox/src/junos/lib/libssd/h/jnx/ssd_ipc.h in your backing sandbox.

