External Authentication and Accounting
Much of the performance of a RADIUS system using external authentication and accounting depends on the latency and throughput of the back end. There is a fixed CPU cost per transaction, which generally scales linearly with the total CPU capacity of the system. The cost of back-end latency is additional memory utilization to keep a copy of the state of each in-process transaction, along with the overhead of the threads that manage that state.
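The relationship between back-end latency, concurrency, and throughput can be sketched with Little's Law (in-flight transactions = throughput × latency). The sketch below is illustrative only; the function name and the numbers are not part of any SBRC interface.

```python
import math

# Illustrative only: estimate the number of concurrent in-flight
# transactions (and hence worker threads and in-memory transaction state)
# needed to sustain a target throughput against a back end with a given
# round-trip latency, using Little's Law: concurrency = throughput * latency.

def required_concurrency(target_tps: float, backend_latency_s: float) -> int:
    """Concurrent in-flight transactions needed to sustain target_tps."""
    return math.ceil(target_tps * backend_latency_s)

# A 20 ms back end at 1,000 transactions/second needs ~20 in-flight
# transactions; the same load against a 100 ms back end needs ~100.
print(required_concurrency(1000, 0.020))  # → 20
print(required_concurrency(1000, 0.100))  # → 100
```

This is why a slower back end costs memory and threads rather than CPU: each additional unit of latency holds more transaction state open at once.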
The amount of processing increases if multiple external authentication servers must be tried in order. Ordering the methods so that the most heavily used one is attempted first is the easiest way to improve performance.
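The effect of ordering can be quantified as the expected number of back-end attempts per request. A small illustration, with a hypothetical traffic split:

```python
# Illustrative: average number of back-end attempts per request for a
# given method ordering, where usage_in_order[i] is the fraction of
# requests that the i-th method ultimately authenticates.

def expected_attempts(usage_in_order):
    """Average number of authentication methods tried per request."""
    return sum((i + 1) * frac for i, frac in enumerate(usage_in_order))

# Hypothetical split: 80% of users authenticate via the first method,
# 20% via the second.
print(expected_attempts([0.8, 0.2]))  # most-used first: ~1.2 attempts
print(expected_attempts([0.2, 0.8]))  # most-used last:  ~1.8 attempts
```

With an 80/20 split, simply reversing the order cuts the average back-end work per request by a third.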
Most external authentication and accounting systems accept multiple concurrent connections. Increasing values such as MaxConcurrent in radsql.aut and radsql.acc raises the throughput of the system toward the capacity of the back-end system. This value is closely related to the number of authentication or accounting threads in use: there must be more threads than MaxConcurrent, often double or more depending on latency, so that while concurrent transactions are processing on the back-end server, SBRC can still perform RADIUS-level processing for additional transactions. Beyond that point, manage traffic spikes with flood queues (see Flood Control for more information) to avoid attempting so many concurrent transactions that the downstream server stops processing efficiently.
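As a sketch only, a MaxConcurrent setting might look like the fragment below. The section layout and the value shown are illustrative, not authoritative; consult the SBR Carrier Reference Guide for the actual radsql.aut and radsql.acc file format.

```ini
; Illustrative fragment only -- see the SBR Carrier Reference Guide for
; the authoritative radsql.aut layout. With a 20 ms back end and a
; 1,000 TPS target, roughly 20 concurrent statements are needed, and
; the thread count should exceed this value.
[Settings]
MaxConcurrent = 20   ; concurrent statements against the back-end database
```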
LDAP, even more than SQL, can perform poorly with badly constructed searches (which might cause an unnecessary walk of the object tree) and filters, or with functionality such as BindName (although the latter is sometimes necessary in use cases combining non-reversibly encrypted passwords with certain types of authentication protocols). Opening multiple connections to an LDAP repository allows more requests to be sent simultaneously.
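The difference between a well-scoped and a badly scoped search can be shown with plain filter strings (the DNs and attribute names below are hypothetical, and whether an attribute is indexed depends on the directory's configuration):

```python
# Illustrative LDAP search construction; DNs and attributes are hypothetical.

# Well scoped: anchored at the user container, with an equality filter on
# an attribute the directory typically indexes, so the server can answer
# from an index rather than walking the tree.
good_base   = "ou=people,dc=example,dc=com"
good_filter = "(uid=jsmith)"

# Badly scoped: searching from the directory root with substring filters,
# which typically cannot use an equality index and may force the server
# to walk the object tree.
bad_base   = "dc=example,dc=com"
bad_filter = "(|(cn=*smith*)(mail=*smith*))"

print(good_base, good_filter)
print(bad_base, bad_filter)
```

The general rule: anchor the search base as deep in the tree as the data model allows, and prefer equality filters on indexed attributes over substring or presence filters.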
Storing less data is usually better than storing more data. Selects, updates, and deletes by primary key are generally faster than by secondary indexes, which in turn are much faster than by non-indexed fields. Inserts into tables with many indexes beyond the primary key are slower than inserts into tables with one or few indexes. Optimizing your back end to avoid extraneous indexes, to query only the fields that are actually needed, and to use well-optimized primary keys can produce a large increase in throughput and decrease in latency.
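The primary-key-versus-unindexed difference is easy to see in a query plan. A minimal sketch, using SQLite as a stand-in for any SQL back end (the table and column names are hypothetical):

```python
import sqlite3

# Demonstrates that a lookup by primary key is answered by an index
# search, while a lookup on a non-indexed column forces a full table scan.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sessions (
    session_id INTEGER PRIMARY KEY,   -- fast: lookups use the key directly
    user_name  TEXT,                  -- slow to query: no index
    nas_ip     TEXT)""")

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)  # last column holds the detail text

print(plan("SELECT * FROM sessions WHERE session_id = 42"))   # index SEARCH
print(plan("SELECT * FROM sessions WHERE user_name = 'x'"))   # full SCAN
```

The same EXPLAIN-style inspection on the production database is a quick way to catch queries that silently scan instead of seeking.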
Other SQL items worth investigating include validating that stored procedures are optimized and contain no hidden table scans or other iterative behavior that could be eliminated with a different data model. In general terms, raw inserts into one table that are subsequently batch-processed outside the transaction path (by an external job) are faster than manipulated inserts into many tables within a stored procedure; however, whether batch processing is possible depends greatly on the use case.
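The raw-insert-then-batch pattern can be sketched as follows, again with SQLite as a stand-in and hypothetical table names: the transaction path does a single plain insert into one staging table, and a periodic job outside the RADIUS transaction path normalizes the rows later.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct_raw (user_name TEXT, octets INTEGER)")
con.execute("CREATE TABLE usage_totals (user_name TEXT PRIMARY KEY, octets INTEGER)")

def fast_path_insert(user, octets):
    """Transaction path: one raw insert, no joins, no extra indexes."""
    con.execute("INSERT INTO acct_raw VALUES (?, ?)", (user, octets))

def batch_job():
    """External job: aggregate staged rows into the normalized table."""
    totals = con.execute(
        "SELECT user_name, SUM(octets) FROM acct_raw GROUP BY user_name").fetchall()
    con.executemany(
        """INSERT INTO usage_totals VALUES (?, ?)
           ON CONFLICT(user_name) DO UPDATE SET octets = octets + excluded.octets""",
        totals)
    con.execute("DELETE FROM acct_raw")

fast_path_insert("jsmith", 1000)
fast_path_insert("jsmith", 500)
batch_job()
print(con.execute(
    "SELECT octets FROM usage_totals WHERE user_name = 'jsmith'").fetchone()[0])  # → 1500
```

The expensive aggregation work moves out of the latency-sensitive path; the trade-off is that the normalized view lags the raw data by one batch interval.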
Measuring the performance of common inserts and selects with SQL analyzers, and monitoring the CPU and I/O of back-end servers (as recommended for SBRC), helps identify scalability issues with the back end and can also reduce extraneous overhead on the front-end applications.
Finally, load balancing between multiple back-end servers, as well as enabling failover behavior, can reduce performance issues caused by back-end throughput in a relatively linear manner.
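Conceptually, the load-balancing and failover behavior amounts to rotating requests across the pool and retrying on the next server when one fails. A minimal sketch, with a hypothetical send function standing in for the real back-end call:

```python
import itertools

class BackendPool:
    """Round-robin dispatch across back-end servers, with failover."""

    def __init__(self, servers):
        self._servers = servers
        self._rr = itertools.cycle(range(len(servers)))

    def dispatch(self, request, send):
        """Try each server once, starting at the next round-robin position."""
        start = next(self._rr)
        n = len(self._servers)
        last_err = None
        for offset in range(n):
            server = self._servers[(start + offset) % n]
            try:
                return send(server, request)
            except ConnectionError as err:
                last_err = err          # fail over to the next server
        raise last_err                  # every server in the pool failed

pool = BackendPool(["sql-1", "sql-2"])

def send(server, request):              # stand-in for a real back-end call
    if server == "sql-1":
        raise ConnectionError("sql-1 down")
    return f"{server}:{request}"

print(pool.dispatch("auth jsmith", send))  # → sql-2:auth jsmith
```

With all servers healthy, load spreads roughly evenly, which is why throughput scales in a relatively linear manner with the number of back ends; when one fails, its traffic shifts to the survivors at the cost of one failed attempt per request.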
See the SBR Carrier Reference Guide for information on configuring the following parameters:
MaxConcurrent (radsql.aut and .acc)
SQL load balancing