Scaling the SSR Cluster
You scale your SSR cluster by adding separately licensed Expansion Kits to the Starter Kit. The Starter Kit licenses you for the minimum cluster configuration: two client nodes, each hosting a management server component, and two data nodes. Expansion Kits are available to scale both the back end and the front end of the cluster.
Scaling the Front End of the Cluster
You scale the front end of your SSR cluster by adding licenses for additional client nodes. Optionally, you can add a Management Server Expansion Kit, which allows you to add a third management server component on a client node. Each management server component must run on a separate client node.
The service capacity of the SSR cluster grows as you add client nodes to the front end. Each additional client node can host an SSR client component, which increases the resiliency of the cluster and speeds the processing of a particular transaction because wait time is reduced. Up to twenty-four client nodes are supported. At least one of the client nodes must be configured to host the management server component; for redundancy, at least two client nodes must host it.
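These sizing rules lend themselves to a quick sanity check. The following is a minimal sketch, not a supported SSR tool; the function name is hypothetical, but the limits (2 to 24 client nodes, one to three management server components, at least two for redundancy) come from this section.

```python
def validate_front_end(client_nodes: int, management_servers: int) -> None:
    """Check a planned SSR front end against the documented limits."""
    if not 2 <= client_nodes <= 24:
        raise ValueError("SSR supports 2 to 24 client nodes")
    if not 1 <= management_servers <= 3:
        raise ValueError("a cluster runs one to three management server components")
    if management_servers > client_nodes:
        # Each management server component must run on a separate client node.
        raise ValueError("each management server component needs its own client node")
    if management_servers < 2:
        print("warning: two management servers are required for redundancy")

validate_front_end(client_nodes=4, management_servers=2)  # passes silently
```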
The client nodes do not require identical configurations; they can be configured with different components or communications interfaces. For example, one client node might host the Subscriber Information Collector (SIC) component used for the PTSP feature, while another client node hosts a different SRC component, such as the SAE or NIC. However, to ensure there is no single point of failure, we recommend that you configure your cluster with enough client nodes to provide redundancy for each component. For example, for redundancy in a cluster running the SIC component, you would want to have at least the following (a check for this layout is sketched after the list):
- Two client nodes, each hosting the SIC component and the management server component
- Two data nodes
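A placement plan like the one just listed can be checked mechanically. This is a minimal sketch under assumed names (the placement map and component labels are hypothetical): it flags any component that runs on fewer than two client nodes.

```python
from collections import Counter

# Hypothetical placement plan: client node -> components it hosts.
placement = {
    "client-1": {"sic", "management-server"},
    "client-2": {"sic", "management-server"},
}

# Count how many client nodes host each component.
hosts = Counter(c for components in placement.values() for c in components)
for component, count in hosts.items():
    if count < 2:
        print(f"warning: {component} runs on only {count} node (single point of failure)")
```

With the two-node layout above, every component has a redundant host, so no warning is printed.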
Scaling the Back End of the Cluster
You scale the back end of the cluster by adding a data node Expansion Kit, which licenses you for two additional data nodes, bringing the total number of data nodes to four, the maximum allowed. The additional data nodes form a second node group, as shown in Figure 58, which provides more working memory for the SSR shared database. Each node group manages a partition of the primary SSR database along with its replicas. The data in each partition is synchronously replicated between the group's data nodes, so if one data node fails, the remaining node can still access all the data. This configuration also provides very fast failover when a node fails.
Node groupings are managed by the management server, and node groups are not necessarily formed as shown in Figure 58. For example, a new node and an existing node might form one group, with the remaining two nodes forming the other.
Figure 58: SSR Cluster with Four Data Nodes Forming Two Node Groups
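To make the failover behavior concrete, here is a minimal sketch with hypothetical node names. Each node group holds one partition, synchronously replicated on both of its members, so the database remains available as long as every group keeps at least one live node.

```python
# Two node groups of two data nodes each; every partition lives on both
# members of its group. The management server decides membership, so a new
# node could instead be grouped with an existing one.
node_groups = [
    {"data-1", "data-2"},  # node group 0: partition 0 and its replica
    {"data-3", "data-4"},  # node group 1: partition 1 and its replica
]

def cluster_survives(failed_nodes: set[str]) -> bool:
    """Data stays available while each group keeps at least one live node."""
    return all(group - failed_nodes for group in node_groups)

print(cluster_survives({"data-2"}))            # True: data-1 still serves partition 0
print(cluster_survives({"data-1", "data-3"}))  # True: one survivor per group
print(cluster_survives({"data-1", "data-2"}))  # False: partition 0 is lost entirely
```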
