Configuring the Service Redundancy Daemon
Before you configure srd processing, we recommend that you be familiar with Configuring ICCP for Multichassis Link Aggregation, which explains peer relationships between gateways that are enabled to exchange master and standby roles.
You use the following configuration statements:
redundancy-policy under the [edit policy-options] hierarchy level
redundancy-event under the [edit event-options] hierarchy level
redundancy-set under the [edit services] hierarchy level
The actions to be performed when configured redundancy events occur are defined in redundancy policies. Redundancy policies are associated with redundancy sets; they are analogous to the rules associated with service sets. Redundancy sets are associated with redundancy groups by redundancy group IDs. Redundancy group details are defined by the underlying ICCP daemon (iccpd) configuration. Finally, service sets and redundancy sets are associated through the redundancy-set statement in the service-set configuration.
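The following skeleton summarizes how these statements reference one another. It is an illustrative sketch only; the names EVENT-1, POLICY-1, and SSET-1, the action shown, and the set and group IDs are placeholder assumptions, not values from this example.
user@gateway# set event-options redundancy-event EVENT-1 monitor link-down interface-name
user@gateway# set policy-options redundancy-policy POLICY-1 redundancy-event EVENT-1 then release-mastership
user@gateway# set services redundancy-set 1 redundancy-group 1
user@gateway# set services redundancy-set 1 redundancy-policy POLICY-1
user@gateway# set services service-set SSET-1 redundancy-set 1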
To configure srd, perform the following configuration tasks in the recommended sequence. Configurations are shown for two gateways for which mastership may change.
In the procedures that follow, redundancy events are configured and associated with a redundancy policy. The redundancy policy is associated with a redundancy set so that the appropriate release-mastership or acquire-mastership action is taken. If an event is associated with a policy that takes the release-mastership action, srd checks whether the redundancy peer's state is ready or warned. If the standby is in a warned state, the release-mastership action fails. You can restore the health check and then manually execute the release-mastership action.
To release mastership in any case, you can either configure the policy action as release-mastership-force or use the force option in the operational CLI. Whether or not your configuration specifies the force option, using the force option in the CLI takes precedence and mastership is released. Similarly, if a redundancy event is configured with a policy that has an acquire-mastership action, srd checks the local redundancy set state. If the local state is a wait state, the action fails unless the force option is used. We recommend that you determine why health checks fail and take action to correct the failure. After that, when the redundancy set state returns to STANDBY, the mastership change action succeeds.
Configuring Redundancy Events
To configure redundancy events:
- Configure any link-down redundancy events for the master gateway.
user@gateway1# set event-options redundancy-event redundancy-event monitor link-down interface-name
For example:
user@gateway1# set event-options redundancy-event RELS_MSHIP_CRIT_EV monitor link-down ms-2/3/0.0
user@gateway1# set event-options redundancy-event RELS_MSHIP_CRIT_EV monitor link-down xe-3/0/0.0
- Configure any process redundancy events for the master gateway.
user@gateway1# set event-options redundancy-event redundancy-event monitor process routing restart
For example:
user@gateway1# set event-options redundancy-event RELS_MSHIP_CRIT_EV monitor process routing restart
- Configure any link-down redundancy events for the standby gateway.
user@gateway2# set event-options redundancy-event redundancy-event monitor link-down interface-name
For example:
user@gateway2# set event-options redundancy-event WARN_EV monitor link-down ms-2/3/0.0
user@gateway2# set event-options redundancy-event WARN_EV monitor link-down xe-3/0/0.0
- Configure any process redundancy events for the standby gateway.
user@gateway2# set event-options redundancy-event redundancy-event monitor process routing restart
For example:
user@gateway2# set event-options redundancy-event WARN_EV monitor process routing restart
- Configure any peer redundancy events for the standby gateway.
user@gateway2# set event-options redundancy-event redundancy-event monitor peer (mastership-acquire | mastership-release)
For example:
user@gateway2# set event-options redundancy-event PEER_MSHIP_ACQU_EV monitor peer mastership-acquire
user@gateway2# set event-options redundancy-event PEER_MSHIP_RELS_EV monitor peer mastership-release
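For reference, the preceding commands on the master gateway would produce a redundancy-event configuration roughly like the following. This is an approximate sketch derived from the set commands above; the exact nesting of the monitor hierarchy can vary by release, so verify it with show event-options in configuration mode.
event-options {
    redundancy-event RELS_MSHIP_CRIT_EV {
        monitor {
            link-down {
                ms-2/3/0.0;
                xe-3/0/0.0;
            }
            process {
                routing {
                    restart;
                }
            }
        }
    }
}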
Configuring Redundancy Policies
Service redundancy policies specify actions triggered by monitored redundancy events.
To configure redundancy policies:
- Specify a redundancy policy and redundancy event for the master gateway. Follow the same steps for the standby gateway.
user@gateway1# edit policy-options redundancy-policy policy-name redundancy-event event-name then
- Specify an action of acquiring or releasing mastership.
user@gateway1# set acquire-mastership
or
user@gateway1# set (release-mastership | release-mastership-force | release-mastership-if-standby-clear)
- (Optional) Specify an action of adding a static route.
user@gateway1# set add-static-route destination (receive | next-hop next-hop) routing-instance vrf-name
Best Practice: We recommend using the receive option.
- (Optional) Specify an action of deleting a static route.
user@gateway1# set delete-static-route destination routing-instance vrf-name
The following example demonstrates configuring redundancy policies for two peer gateways. It is an illustrative sketch that reuses event and policy names from this section; the signal route 10.45.45.0/24 is taken from the routing-policy example later in this section, and the routing-instance name bgp1 is an assumption made for the example.
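On the master gateway (gateway1), the release policy releases mastership and removes the signal route:
user@gateway1# set policy-options redundancy-policy RELS_MSHIP_POL redundancy-event RELS_MSHIP_CRIT_EV then release-mastership
user@gateway1# set policy-options redundancy-policy RELS_MSHIP_POL redundancy-event RELS_MSHIP_CRIT_EV then delete-static-route 10.45.45.0/24 routing-instance bgp1
On the standby gateway (gateway2), the acquire policy acquires mastership and adds the signal route when the peer releases mastership:
user@gateway2# set policy-options redundancy-policy ACQU_MSHIP_POL redundancy-event PEER_MSHIP_RELS_EV then acquire-mastership
user@gateway2# set policy-options redundancy-policy ACQU_MSHIP_POL redundancy-event PEER_MSHIP_RELS_EV then add-static-route 10.45.45.0/24 receive routing-instance bgp1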
Configuring Redundancy Set and Group
The redundancy group IDs that srd uses are associated with those configured for the ICCP daemon (iccpd): you link them by using the same redundancy group ID in the services redundancy group configuration as in the existing ICCP configuration hierarchy. For example:
iccp {
    local-ip-addr 10.1.1.1;
    peer 10.2.2.2 {
        redundancy-group-id-list 1;
        liveness-detection {
            minimum-interval 1000;
        }
    }
}
To configure redundancy sets:
- Specify the redundancy set and group for the master gateway.
user@gateway1# set redundancy-set redundancy-set redundancy-group redundancy-group
For example:
user@gateway1# set redundancy-set 1 redundancy-group 1
- Specify redundancy policies for the redundancy set.
user@gateway1# set redundancy-set redundancy-set redundancy-policy [redundancy-policy-list]
For example:
user@gateway1# set redundancy-set 1 redundancy-policy [ACQU_MSHIP_POL RELS_MSHIP_POL WARN_POL]
- Specify the redundancy set and group for the peer gateway.
user@gateway2# set redundancy-set redundancy-set redundancy-group redundancy-group
For example:
user@gateway2# set redundancy-set 1 redundancy-group 1
- Specify redundancy policies for the redundancy set.
user@gateway2# set redundancy-set redundancy-set redundancy-policy [redundancy-policy-list]
For example:
user@gateway2# set redundancy-set 1 redundancy-policy [ACQU_MSHIP_POL RELS_MSHIP_POL WARN_POL]
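Assuming the commands above are entered, the resulting [edit services] hierarchy on each gateway would look approximately like the following sketch; verify the exact rendering on your release.
services {
    redundancy-set 1 {
        redundancy-group 1;
        redundancy-policy [ ACQU_MSHIP_POL RELS_MSHIP_POL WARN_POL ];
    }
}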
Configuring Routing Policies Supporting Redundancy
To configure routing policies that support redundancy:
- At the [edit policy-options condition] hierarchy level, use the if-route-exists configuration statement to set a condition based on the existence of the signal route that triggers redundancy-related routing changes. Specify the routing table that includes the signal route.
[edit policy-options condition condition-name]
user@gateway# set if-route-exists signal-route table routing-table
For example:
[edit policy-options condition switchover-route-exists]
user@gateway# set if-route-exists 10.45.45.0/24 table bgp1_table
- At the [edit policy-options policy-statement statement-name] hierarchy level, specify routing changes based on the condition indicating the existence of the signal route. For BGP, routing changes typically include changes to local-preference and as-path-prepend values.
To change local-preference, specify local-preference in the then clause of the policy statement.
[edit policy-options policy-statement policy-name]
user@gateway# set term term from protocol [protocol variables] prefix-list prefix-list condition condition-name then local-preference preference-value accept
For example:
[edit policy-options policy-statement ha-export-v6-policy]
user@gateway# set term update-local-pref from protocol static bgp prefix-list ipv4-default-route condition switchover-route-exists then local-preference 350 accept
To change as-path-prepend values, specify as-path-prepend in the then clause of the policy statement.
[edit policy-options policy-statement policy-name]
user@gateway# set term term from prefix-list prefix-list condition condition-name then as-path-prepend [as-prepend-values] next-hop self accept
For example:
[edit policy-options policy-statement ha-export-v6-policy]
user@gateway# set term update-as-prepend prefix-list ipv6-default-route condition switchover-route-exists then as-path-prepend "64674 64674 64674 64674" next-hop self accept
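Put together, the preceding commands produce a policy-options hierarchy roughly like the following sketch. It is shown for orientation only; confirm the rendered configuration on your system.
policy-options {
    condition switchover-route-exists {
        if-route-exists {
            10.45.45.0/24;
            table bgp1_table;
        }
    }
    policy-statement ha-export-v6-policy {
        term update-local-pref {
            from {
                protocol [ static bgp ];
                prefix-list ipv4-default-route;
                condition switchover-route-exists;
            }
            then {
                local-preference 350;
                accept;
            }
        }
        term update-as-prepend {
            from {
                prefix-list ipv6-default-route;
                condition switchover-route-exists;
            }
            then {
                as-path-prepend "64674 64674 64674 64674";
                next-hop self;
                accept;
            }
        }
    }
}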
Configuring Service Sets
To configure stateful synchronization of services for a service set:
- Specify the service set and redundancy set.
[edit]
user@gateway1# set services service-set service-set redundancy-set redundancy-set
For example:
[edit]
user@gateway1# set services service-set CGN4_SP-7-0-0 redundancy-set 1
- Specify the replication threshold and services to be replicated.
[edit]
user@gateway1# set services service-set service-set replicate-services replication-threshold replication-threshold <stateful-firewall> <nat>
For example:
[edit]
user@gateway1# set services service-set CGN4_SP-7-0-0 replicate-services replication-threshold 360 stateful-firewall nat
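For reference, the resulting service-set configuration would look approximately like the following sketch. The service-set name, threshold, and replicated services come from the example above; confirm the exact rendering on your release.
services {
    service-set CGN4_SP-7-0-0 {
        replicate-services {
            replication-threshold 360;
            stateful-firewall;
            nat;
        }
        redundancy-set 1;
    }
}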