AWS Elastic Load Balancing and Elastic Network Adapter

This section provides information about AWS Elastic Load Balancing (ELB) and Elastic Network Adapter (ENA) support.

AWS Elastic Load Balancing

This section provides information about AWS ELB with vSRX 3.0.

Benefits of AWS Elastic Load Balancing

  • Ensures high availability by automatically distributing the incoming traffic across multiple targets in multiple availability zones (AZs) so that only the healthy targets receive traffic.

  • Provides flexibility to virtualize your application targets by allowing you to host more applications on the same instance, centrally manage TLS settings, and offload CPU-intensive workloads from your applications.

  • Provides robust security features such as integrated certificate management, user-authentication, and SSL/TLS decryption.

  • Supports Auto Scaling so that enough application instances are available to meet varying levels of application load without manual intervention.

  • Allows you to monitor your applications and their performance in real time with Amazon CloudWatch metrics, logging, and request tracing.

  • Offers load balancing across AWS and on-premises resources using the same load balancer.

Overview of AWS Elastic Load Balancing

Elastic Load Balancing (ELB) is a load-balancing service for Amazon Web Services (AWS) deployments with vSRX 3.0.

ELB distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple availability zones. Elastic Load Balancing scales your load balancer as traffic to your application changes over time, and can scale to the vast majority of workloads automatically.

AWS ELB with Application Load Balancers enables automation by using other AWS services, such as Amazon CloudWatch, Amazon SNS, and AWS Lambda.

AWS Elastic Load Balancing Components

AWS Elastic Load Balancing (ELB) components include:

  • Load balancers—A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple availability zones (AZs), thereby increasing the availability of your application. You add one or more listeners to your load balancer.

  • Listeners or vSRX instances—A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups based on the rules that you define. Each rule specifies a target group, a condition, and a priority; when the condition is met, the traffic is forwarded to the target group. You must define a default rule for each vSRX instance, and you can add rules that specify different target groups based on the content of the request (also known as content-based routing).

  • Target groups or vSRX application workloads—Each vSRX application, as a target group, routes requests to one or more registered targets. When you create a listener rule for a vSRX instance, you specify a vSRX application and conditions. When a rule condition is met, traffic is forwarded to the corresponding vSRX application. You can create different vSRX applications for different types of requests. For example, create one vSRX application for general requests and other vSRX applications for requests to the microservices of your application.

Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers. You can select a load balancer based on your application needs. For more information on types of AWS load balancers, see AWS Elastic Load Balancing.
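
The listener rules and content-based routing described above can also be set up programmatically. The following is a minimal boto3 (Python) sketch of adding a path-based forwarding rule to an existing Application Load Balancer listener; the listener ARN, target group ARN, path pattern, and priority are placeholder assumptions, not values from this deployment.

```python
# Hypothetical boto3 sketch: add a content-based (path-based) routing rule
# to an existing Application Load Balancer listener.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-api/0123456789abcdef",  # placeholder
        },
    ],
)
print(response["Rules"][0]["RuleArn"])
```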

Overview of Application Load Balancer

Starting in Junos OS Release 18.4R1, vSRX 3.0 supports Elastic Load Balancing (ELB) with an Application Load Balancer to provide scalable security for Internet-facing traffic using native AWS services. An Application Load Balancer automatically distributes incoming application traffic and scales resources to meet traffic demands.

You can configure health checks, which monitor the health of the registered targets so that the load balancer sends requests only to healthy targets.
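
As one illustration of such a health check, the following minimal boto3 (Python) sketch creates a target group whose health check settings determine which targets are considered healthy; the target group name, VPC ID, health check path, and thresholds are placeholder assumptions.

```python
# Hypothetical boto3 sketch: create a target group with health check settings
# so the load balancer forwards requests only to healthy targets.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

target_group = elbv2.create_target_group(
    Name="example-web-targets",                 # placeholder name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthz",                 # placeholder health check path
    HealthCheckIntervalSeconds=30,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
print(target_group["TargetGroups"][0]["TargetGroupArn"])
```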

The key features of an Application Load Balancer are:

  • Layer-7 load balancing

  • HTTPS support

  • High availability

  • Security features

  • Containerized application support

  • HTTP/2 support

  • WebSockets support

  • Native IPv6 support

  • Sticky sessions

  • Health checks with operational monitoring, logging, request tracing

  • Web Application Firewall (WAF)

When the Application Load Balancer receives a request, it evaluates the listener rules of the vSRX instance in priority order to determine which rule to apply, and then selects a target from the vSRX application for the rule action. You can configure vSRX instance rules to route requests to different target groups based on the content of the application traffic. Routing is performed independently for each target group, even when a target is registered with multiple target groups.

You can add and remove targets from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.

You can view the Application Load Balancer launch sequence and the current console screen from the vSRX instance properties. When running vSRX as an AWS instance, logging in to the instance through SSH starts a Junos OS session, and you can use the standard Junos OS CLI to monitor the health and statistics of the vSRX instance. If the #load_balancer=true tag is sent in the user data, the boot-up messages indicate that the vSRX interfaces are configured for ELB and Auto Scaling support, and the eth0 and eth1 interfaces are then swapped.

If an unsupported Junos OS configuration is sent to the vSRX instance in the user data, the vSRX instance reverts to its factory-default configuration. If the #load_balancer=true tag is missing, the interfaces are not swapped.
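
The following is a minimal boto3 (Python) sketch of launching a vSRX instance with user data that carries the #load_balancer=true tag and a #junos-config section; the AMI ID, instance type, key pair, subnet, and the exact set commands are illustrative assumptions, not prescribed values.

```python
# Hypothetical boto3 sketch: launch a vSRX instance with cloud-init user data
# that carries the #load_balancer=true tag and a #junos-config section.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#load_balancer=true
#junos-config
set interfaces fxp0 unit 0 family inet dhcp
set interfaces ge-0/0/0 unit 0 family inet dhcp
"""  # the exact set commands are illustrative

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder vSRX 3.0 AMI
    InstanceType="c5.large",                # placeholder instance type
    KeyName="example-keypair",              # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",    # placeholder subnet
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```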

Deployment of AWS Application Load Balancer

AWS Application Load Balancer (ELB) can be deployed in two ways:

  • vSRX behind ELB

  • ELB sandwich

vSRX Behind AWS Application Load Balancer Deployment

In this type of deployment, the vSRX instances are attached to the Application Load Balancer in one or more availability zones (AZs), and the application workloads are behind the vSRX instances. The Application Load Balancer sends traffic only to the primary interface of the instance. For a vSRX instance, the primary interface is the management interface, fxp0.

To enable ELB in this deployment, you must swap the management interface and the first revenue interface.

Figure 1 shows the vSRX behind ELB deployment.

Figure 1: vSRX Behind AWS Application Load Balancer Deployment

Enabling AWS ELB with vSRX Behind AWS Application Load Balancer Deployment

The following are the prerequisites for enabling AWS ELB with the vSRX behind AWS Application Load Balancer deployment:

  • All incoming and outgoing traffic to the ELB is monitored through the ge-0/0/0 interface associated with the vSRX instance.

  • At launch, the vSRX instance has three interfaces, and the subnets containing these interfaces are connected to the Internet gateway (IGW).

  • The Source/Destination check is disabled on the eth1 interface of the vSRX instance (as shown in the sketch that follows this list).
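
The following minimal boto3 (Python) sketch shows one way to disable the Source/Destination check on the ENI behind the vSRX eth1 interface; the network interface ID is a placeholder assumption.

```python
# Hypothetical boto3 sketch: disable the Source/Destination check on the ENI
# that backs the vSRX eth1 interface.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder ENI ID for eth1
    SourceDestCheck={"Value": False},
)
```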

To deploy AWS ELB with vSRX behind an AWS Application Load Balancer:

The vSRX 3.0 instance contains:

  • Cloud initialization (cloud-init) user data with the ELB tag #load_balancer=true.

  • The user data configuration with the #junos-config tag includes fxp0 (DHCP) and ge-0/0/0 (DHCP), along with any security group configuration that needs to be defined; the interfaces must use DHCP.

  • Amazon CloudWatch triggers a Simple Notification Service (SNS) notification that in turn triggers a Lambda function, which creates an Elastic Network Interface (ENI) with an Elastic IP address (EIP) and attaches it to the vSRX instance (see the sketch that follows this list). Multiple new ENIs (a maximum of 8) can be attached to this instance.

  • The vSRX instance must be rebooted. Perform the reboot every subsequent time the vSRX instance launches with swapped interfaces.

    Note

    Chassis cluster support for swapping the ENI between instances and IP monitoring does not work.
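
The following is a minimal sketch of a Lambda handler of the kind described in this list: it creates an ENI, attaches it to the vSRX instance, and associates an EIP with it. The subnet, security group, instance ID, and device index are placeholder assumptions; a production function would take these values from the triggering SNS event instead.

```python
# Hypothetical Lambda handler sketch: create an ENI, attach it to the vSRX
# instance, and associate an Elastic IP address with it.
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Create a new ENI in the target subnet (placeholder subnet and security group).
    eni_id = ec2.create_network_interface(
        SubnetId="subnet-0123456789abcdef0",
        Groups=["sg-0123456789abcdef0"],
    )["NetworkInterface"]["NetworkInterfaceId"]

    # Attach the ENI to the vSRX instance (placeholder instance ID and index).
    ec2.attach_network_interface(
        NetworkInterfaceId=eni_id,
        InstanceId="i-0123456789abcdef0",
        DeviceIndex=2,
    )

    # Allocate an Elastic IP address and associate it with the new ENI.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=eip["AllocationId"],
        NetworkInterfaceId=eni_id,
    )
    return {"eni": eni_id, "eip": eip["PublicIp"]}
```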

Note

You can also launch the vSRX instance in an Auto Scaling group (ASG). This can be automated by using an AWS CloudFormation template (CFT).

Sandwich Deployment of AWS Application Load Balancer

In this deployment model, you can scale both security and the applications. The vSRX 3.0 instances and the applications are in different Auto Scaling groups, and each Auto Scaling group is attached to a different Application Load Balancer. This type of ELB deployment is an elegant and simplified way to manually scale vSRX 3.0 deployments to address planned or projected traffic increases, while also delivering multi-Availability-Zone high availability. It ensures inbound high availability and scaling for AWS deployments.

Because the load balancer scales dynamically, its virtual IP address (VIP) is a fully qualified domain name (FQDN). This FQDN resolves to multiple IP addresses according to the availability zone. To enable this resolution, the vSRX 3.0 instance should be able to send and receive traffic from the FQDN (or the multiple addresses that it resolves to).

You configure the FQDN by using the set security zones security-zone ELB-TRAFFIC address-book address ELB dns-name FQDN_OF_ELB command.

Figure 2 shows the ELB sandwich deployment for vSRX 3.0.

Figure 2: Sandwich Deployment of AWS Application Load Balancer

Enabling Sandwich Deployment of AWS Application Load Balancer for vSRX

To enable the sandwich deployment of the AWS Application Load Balancer for vSRX:

  • vSRX receives the #load_balancer=true tag in cloud-init user data.

  • In Junos OS, the initial boot process scans the mounted disk for the presence of the flag in the setup_vsrx file. If the flag is present, it indicates that the two interfaces must be configured with DHCP in two different virtual routers (VRs). This scan and configuration update is applied to the default configuration and on top of the user data.

    Note

    The boot time increases because of the second or third mgd process commit if user data is present.

  • You must reboot the vSRX instance. Perform the reboot every subsequent time the vSRX instance is launched with swapped interfaces.

    Note

    Chassis cluster support for swapping the Elastic Network Interfaces (ENI) between instances and IP monitoring does not work.

Note

You can also launch the vSRX instance in an Auto Scaling group (ASG) and automate the deployment by using an AWS CloudFormation template (CFT).
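
The following minimal boto3 (Python) sketch shows one way to place vSRX instances in an ASG that is attached to the Application Load Balancer's target group; the launch template name, subnet IDs, target group ARN, and group sizes are placeholder assumptions.

```python
# Hypothetical boto3 sketch: launch vSRX instances in an Auto Scaling group
# attached to the Application Load Balancer's target group.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="vsrx-asg",                   # placeholder group name
    LaunchTemplate={
        "LaunchTemplateName": "vsrx-launch-template",  # placeholder template
        "Version": "$Latest",
    },
    MinSize=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",  # placeholder subnets
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/vsrx-targets/0123456789abcdef",  # placeholder
    ],
)
```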

Overview of AWS Elastic Network Adapter (ENA) for vSRX Instances

Benefits of AWS Elastic Network Adapter (ENA) Support for vSRX Instances

  • Supports a multi-queue device interface. ENA makes use of multiple transmit and receive queues to reduce internal overhead and increase scalability. The presence of multiple queues simplifies and accelerates the process of mapping incoming and outgoing packets to a particular vCPU.

  • The ENA driver supports industry-standard TCP/IP offload features such as checksum offload and TCP transmit segmentation offload (TSO).

  • Supports receive-side scaling (RSS) for multicore scaling. Some ENA devices support a working mode called low-latency queue (LLQ), which saves several more microseconds.

Enhanced networking uses single-root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (pps) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

Amazon Elastic Compute Cloud (EC2) provides enhanced networking capabilities through the Elastic Network Adapter (ENA), the next-generation network interface and accompanying drivers that provide enhanced networking on EC2 vSRX instances.

ENA is a custom network interface optimized to deliver high throughput, high packet-per-second (pps) performance, and consistently low latencies on EC2 vSRX instances. Using ENA on vSRX C5 instances (with 2 vCPUs and 4 GB of memory), you can utilize up to 20 Gbps of network bandwidth. ENA-based enhanced networking is supported on vSRX instances.
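
The following minimal boto3 (Python) sketch checks whether ENA-based enhanced networking is enabled on an instance and enables it if it is not; the instance ID is a placeholder assumption, and the instance must be stopped before the attribute can be changed.

```python
# Hypothetical boto3 sketch: verify that ENA-based enhanced networking is
# enabled on an instance, and enable it if it is not.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

instance = ec2.describe_instances(
    InstanceIds=["i-0123456789abcdef0"]          # placeholder instance ID
)["Reservations"][0]["Instances"][0]

if not instance.get("EnaSupport", False):
    # The instance must be stopped before this attribute can be modified.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        EnaSupport={"Value": True},
    )
```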

The ENA driver exposes a lightweight management interface with a minimal set of memory-mapped registers and an extendable command set through an admin queue.

The driver supports a wide range of ENA adapters, is link-speed independent (that is, the same driver is used for 10 Gbps, 25 Gbps, 40 Gbps, and so on), and negotiates and supports various features. ENA adapters allow high-speed and low-overhead Ethernet traffic processing by providing a dedicated Tx/Rx queue pair per CPU core.

The DPDK drivers for ENA are available at https://github.com/amzn/amzn-drivers/tree/master/userspace/dpdk.

Note

When AWS Application Load Balancers are used, the eth0 (first) and eth1 (second) interfaces are swapped on the vSRX instance. The AWS ENA driver detects the swap and rebinds the interface to its corresponding kernel driver.