Application Support for Stateful Line Module Switchover

Each application either supports or does not support stateful line module switchover.

The sections that follow describe how applications that support stateful line module switchover behave.

Note: Only the applications discussed in the sections that follow are compatible with stateful line module switchover.

Policy Management

Because the policy application in the line module does not contain the complete state of all the policy definitions in mirrored containers, the SRP module is used to download the policy definitions and attachments to the newly active line module when a stateful switchover occurs. The policy application sends multiple policy attachment requests from the SRP module to the line module in a single notify operation and in a bulk manner, instead of one policy attachment request in each notify event. This method of transferring policy attachment requests in bulk reduces the time to download all the attachments to the newly active line module.
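The speedup from bulk transfer can be sketched as follows. This is a hypothetical illustration (the function names and batch size are assumptions, not the actual SRP-to-line-module protocol); it only shows why packing many attachment requests into one notify operation reduces the number of notify events.

```python
# Hypothetical sketch: batching policy attachment requests into a single
# notify operation instead of sending one notify per attachment.

def notify_per_attachment(attachments):
    """One notify event per attachment request (slower path)."""
    return [[a] for a in attachments]          # N notify operations

def notify_bulk(attachments, batch_size=64):
    """Multiple attachment requests packed into each notify (bulk path)."""
    return [attachments[i:i + batch_size]
            for i in range(0, len(attachments), batch_size)]

attachments = [f"policy-attach-{i}" for i in range(200)]
assert len(notify_per_attachment(attachments)) == 200   # 200 notify events
assert len(notify_bulk(attachments)) == 4               # ceil(200/64) = 4 events
```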

QoS

QoS configuration is maintained in each line module and these settings are mirrored to the standby line module. During a stateful line module switchover, the QoS agent in the line module restores the configuration in the newly active line module. The QoS agent clients (such as IP and Ethernet) bind and register to the QoS agent before they replay the interfaces for creating QoS attachments. The QoS agents ensure that the queues are reestablished appropriately for the interfaces.
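The restore ordering described above can be sketched as follows. All class and interface names here are hypothetical; the sketch only captures the constraint that QoS clients must bind and register with the QoS agent before the interface replay re-creates attachments and queues.

```python
# Hypothetical sketch of the restore ordering on the newly active module:
# clients register with the QoS agent *before* interfaces are replayed to
# re-create QoS attachments and queues.

class QosAgent:
    def __init__(self):
        self.clients = []
        self.queues = {}

    def register(self, client):
        self.clients.append(client)

    def replay_interfaces(self, interfaces):
        # Replaying before any client has registered would lose attachments.
        if not self.clients:
            raise RuntimeError("no QoS clients registered")
        for ifc in interfaces:
            self.queues[ifc] = f"queue-for-{ifc}"
        return self.queues

agent = QosAgent()
agent.register("ip")          # QoS agent clients such as IP and Ethernet
agent.register("ethernet")
queues = agent.replay_interfaces(["ge-0/0", "ge-0/1"])
assert queues["ge-0/0"] == "queue-for-ge-0/0"
```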

Connection Manager and Queue Manager

The queue manager resides on the SRP module, and queue manager agents are present on all the line modules. When the primary line module resets, the spare module takes over the use of the redundancy database. The queue manager identifies a connection by its queue ID; the connection manager recognizes a connection by its stream ID; and the forwarding controller, like the connection manager, uses the stream ID to determine a connection. For example, when slot 2 communicates with slot 1, the queue manager identifies this connection as QID1. Similarly, when slot 3 communicates with slot 2, the link is labeled QID2.

The connection manager uses SID1 to denote the connection from any slot to slot 2, and SID2 to denote the link from any slot to slot 3. The slot 2 address is specified as 2a2, where the first ‘2’ refers to the logical slot, ‘a’ indicates that the slot is active, and the second ‘2’ represents the physical slot. When slot 0 takes over for slot 2, the takeover is identified by the address 2a0. On receiving the controller up event on the SRP module for the spare line module, the queue manager requests the connection manager to create a fresh connection for the address 2a0. The connection manager logically marks the stream ID that refers to slot 2 as down and creates a new stream ID to communicate with slot 0. The forwarding controller database, which holds a mapping of slot ID, stream ID, and traffic class, is updated so that any streams that previously pointed to slot 2 now refer to slot 0. The queue manager agents running on the line modules handle the forwarding controller updates.
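The address and stream remapping above can be sketched as follows. The class and method names are hypothetical illustrations of the described behavior, not the actual connection manager API; only the address encoding (logical slot, active state, physical slot) comes from the text.

```python
# Hypothetical sketch of the failover remapping described above.
# An address such as "2a2" encodes logical slot, state, physical slot.

def parse_address(addr):
    logical, state, physical = addr[0], addr[1], addr[2]
    return int(logical), state, int(physical)

class ConnectionManager:
    def __init__(self):
        self.streams = {}     # stream ID -> address (None = logically down)
        self.next_sid = 1

    def create(self, addr):
        sid = f"SID{self.next_sid}"
        self.next_sid += 1
        self.streams[sid] = addr
        return sid

    def fail_over(self, old_addr, new_addr):
        # Mark streams to the failed slot as down, then open a fresh
        # stream to the spare slot.
        for sid, addr in self.streams.items():
            if addr == old_addr:
                self.streams[sid] = None
        return self.create(new_addr)

cm = ConnectionManager()
sid_old = cm.create("2a2")            # logical slot 2, active, physical slot 2
sid_new = cm.fail_over("2a2", "2a0")  # physical slot 0 takes over logical slot 2
assert cm.streams[sid_old] is None
assert cm.streams[sid_new] == "2a0"
assert parse_address("2a0") == (2, "a", 0)
```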

PPP

The PPP application on the line module keeps the basic protocol, timers, and state machines in a running state. All the dynamic session data collected from protocol negotiations is present in the mirrored storage containers on the line module. For stateful line module switchover, all the mirrored storage data is saved on the standby module, replicating the session on the standby module. After the switchover takes place, the application initialization process on the standby module reconstructs the mirrored data and brings the sessions up to the established state (operational status is up). Sessions that are still being created (alternating between the up and down operational states) are not retained during the switchover. This behavior of not preserving unestablished sessions is the same as during unified ISSU, where sessions that are not completely created retry after the newly active primary line module becomes available.

The total time required for the standby module to become active depends on the size of the configuration. Typically, the new primary module takes about 2–3 minutes to become active, during which time keepalives for clients configured with short keepalive intervals expire. This keepalive expiry is a limitation of the stateful switchover model, similar to the restriction seen during the upgrade phase of the unified ISSU process, in which traffic forwarding is interrupted for a brief period. To work around this restriction, echo requests for the sessions that terminate on the failed line module are redirected to different hardware. For failures on tunnel server modules (ES2 4G LMs with Service IOAs), the access module handles the echo requests.

L2TP

L2TP configuration and operation data are maintained in the line module and this information is mirrored to the standby module. After the switchover of the primary tunnel server module to the secondary module occurs, the L2TP application on the line module restores the configuration and operation data to the newly active primary module. This mechanism is similar to the warm start procedure during unified ISSU. The L2TP application on the SRP module handles the line module events related to the primary and secondary modules.

Forwarding Controller

When a stateful line module switchover occurs, the forwarding controller (FC) tables that refer to the failed line module are updated with stream IDs that map to the line module (ES2 4G LM with Service IOA) that has taken over the role of the primary module. FC tables are keyed by a combination of slot ID, stream ID, and a key hash table. The modifications to the FC tables enable packets to be sent to the newly functioning primary module after the switchover is complete.
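The FC table rewrite can be sketched as follows. The table layout (a dictionary keyed by slot and traffic class) and all names are simplifying assumptions; the sketch only shows entries for the failed slot being rewritten to the new primary's streams.

```python
# Hypothetical sketch of the FC table update: entries keyed by
# (slot ID, traffic class) map to a stream ID; after switchover, streams
# that pointed at the failed slot are rewritten to the new primary's.

def update_fc_table(fc_table, failed_slot, new_slot, new_streams):
    """Rewrite entries for failed_slot to new_slot's streams in place."""
    for (slot, tclass) in list(fc_table):
        if slot == failed_slot:
            fc_table[(new_slot, tclass)] = new_streams[tclass]
            del fc_table[(slot, tclass)]
    return fc_table

fc = {(2, "voice"): "SID1", (2, "data"): "SID2", (3, "data"): "SID3"}
update_fc_table(fc, failed_slot=2, new_slot=0,
                new_streams={"voice": "SID4", "data": "SID5"})
assert fc == {(0, "voice"): "SID4", (0, "data"): "SID5", (3, "data"): "SID3"}
```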

During the stateful line module switchover, PPP subscriber sessions on an LNS device in an L2TP tunnel might be terminated due to the lack of PPP keepalive responses from the LNS device. To prevent the termination of subscriber sessions, the access module in the LNS device handles the PPP echo requests from all active subscriber sessions (on behalf of the failed line module) and responds with valid PPP echo reply messages. After a successful switchover, the access module in the LNS stops responding to the PPP echo request messages.

When the access module in the LNS receives an event from the application, such as PPP, to denote a failure with the primary line module, the access module starts processing the PPP echo requests that are destined for the LNS. The access module in the LNS concludes the handling of PPP echo requests after it receives a notification that the switchover is complete.
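The proxy behavior across the failure and completion events can be sketched as follows. The class and event names are hypothetical; the sketch only captures the window during which the access module answers PPP echo requests locally.

```python
# Hypothetical sketch of the access-module echo proxy: between the
# line module failure event and the switchover-complete notification,
# PPP echo requests are answered locally so sessions do not time out.

class EchoProxy:
    def __init__(self):
        self.proxying = False

    def on_line_module_failure(self):
        self.proxying = True

    def on_switchover_complete(self):
        self.proxying = False

    def handle_echo_request(self, session_id):
        # Reply locally only while the switchover is in progress.
        return f"echo-reply:{session_id}" if self.proxying else None

proxy = EchoProxy()
assert proxy.handle_echo_request("s1") is None   # normal operation
proxy.on_line_module_failure()
assert proxy.handle_echo_request("s1") == "echo-reply:s1"
proxy.on_switchover_complete()
assert proxy.handle_echo_request("s1") is None   # proxying stops
```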

The following configuration events also take place during a stateful switchover on tunnel server modules that are installed on E120 and E320 routers that operate as LNS devices in an L2TP tunnel:

When you perform a stateful switchover on one pair of line modules enabled for high availability, L2TP sessions continue to be established on the other tunnel server modules. The Server Card Manager (SCM) application selects circuits from the other tunnel server modules to reroute the L2TP sessions until the stateful switchover from the primary module to the secondary module is completed. The L2TP application notifies the SCM when the switchover is complete, and the SCM resumes balancing sessions across all the available tunnel server modules.
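The SCM placement behavior can be sketched as follows. The module names and round-robin selection are hypothetical illustrations; the sketch only shows new sessions avoiding the pair that is switching over and balancing across all modules again afterward.

```python
# Hypothetical sketch of SCM behavior: while one tunnel server module pair
# is switching over, new L2TP sessions are placed only on the remaining
# modules; once notified, the SCM balances across all modules again.

def select_module(modules, switching_over, counter):
    """Round-robin over modules not currently in switchover."""
    available = [m for m in modules if m not in switching_over]
    return available[counter % len(available)]

modules = ["tsm-2", "tsm-4", "tsm-6"]
picks = [select_module(modules, {"tsm-4"}, i) for i in range(4)]
assert picks == ["tsm-2", "tsm-6", "tsm-2", "tsm-6"]   # tsm-4 skipped
picks_after = [select_module(modules, set(), i) for i in range(3)]
assert picks_after == ["tsm-2", "tsm-4", "tsm-6"]      # all balanced again
```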

Mirroring Subsystem

The mirroring application is used to synchronize the configuration information available on the line modules. The mirroring state machine resides on both the primary and secondary line modules. The mirroring functionality uses interchassis communication (ICC) sessions to coordinate between line modules. Mirroring is supported for the volatile memory present on the line modules. After an initial bulk synchronization of storage data from the primary line module to the secondary line module occurs, any subsequent data is mirrored as and when transactions are posted. When a stateful switchover occurs, applications recover to the steady state by restoring the configuration data from the mirrored containers.
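The two-phase mirroring flow can be sketched as follows. The class and method names are hypothetical; the sketch only captures an initial bulk copy followed by incremental mirroring of each transaction as it is posted.

```python
# Hypothetical sketch of the mirroring flow: an initial bulk copy of the
# primary's containers, followed by incremental mirroring of each
# subsequent transaction as it is posted.

class MirroredStore:
    def __init__(self, primary_data):
        self.primary = dict(primary_data)
        self.standby = {}

    def bulk_sync(self):
        self.standby = dict(self.primary)   # initial full copy

    def post(self, key, value):
        self.primary[key] = value
        self.standby[key] = value           # mirrored as transactions post

store = MirroredStore({"ppp-1": "established"})
store.bulk_sync()
store.post("ppp-2", "established")
assert store.standby == {"ppp-1": "established", "ppp-2": "established"}
# On switchover, applications restore their state from the standby copy.
```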

State machine-dependent applications, such as PPP, L2TP, and QoS, maintain a dummy forwarding controller database that is populated on the access line module (which receives traffic from low-speed circuits and routes it to uplink modules). This dummy database enables the access line module to respond to the keepalives that it receives until the switchover completes. Responding to hello packets in this way ensures minimal data outage during the switchover of line modules. After the stateful switchover, the stateful applications resume their regular processing by reestablishing their containers and synchronizing dynamic data with the SRP module.

Unified ISSU

A unified ISSU operation proceeds properly even if the configured secondary line module has taken over as the newly active primary line module. When you enter the issu start command to begin the upgrade phase of the unified ISSU process, the secondary line module is disabled. The disabled line module is cold-booted after the unified ISSU operation is complete. Only the primary line module participates in the unified ISSU operation.

ICC

Interchassis Communication Protocol (ICCP) is used to establish communication sessions between line modules that are configured for stateful switchover (configured in a high availability pair). Controller events are generated for existing sessions on the line modules to notify applications about session establishment and session teardown. Applications running on the SRP module that have ICC sessions with the line modules receive these controller events after a stateful line module switchover occurs.
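The event delivery can be sketched as follows. The class and event names are hypothetical; the sketch only shows SRP applications that hold ICC sessions receiving controller events describing session establishment and teardown.

```python
# Hypothetical sketch: SRP applications holding ICC sessions receive
# controller events describing session establishment and teardown.

class IccBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, app_events):
        # app_events: a per-application event queue on the SRP module
        self.subscribers.append(app_events)

    def controller_event(self, slot, event):
        for app_events in self.subscribers:
            app_events.append((slot, event))

srp_app = []                      # an SRP application's event queue
bus = IccBus()
bus.subscribe(srp_app)
bus.controller_event(2, "down")   # session to the failed primary torn down
bus.controller_event(0, "up")     # session to the spare established
assert srp_app == [(2, "down"), (0, "up")]
```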

The line module high availability manager resides on the SRP module to enable the stateful switchover from a failed primary module to the secondary module in a high availability pair. The high availability manager interacts with its peer agents on the line modules using an ICC session and the control bus. After the modules in a high availability pair become operational in primary and secondary modes, the high availability manager notifies the ICC subsystem to enable ICC communication between the line modules.

Related Documentation