vSRX 3.0 Scaling for Internal and Outbound Traffic Using Azure Load Balancer and Virtual Machine Scale Sets

This section provides details on vSRX 3.0 scaling and performance improvements for internal traffic (traffic within the Microsoft Azure virtual network) and outbound traffic (traffic between the virtual network and the internet) using Microsoft Azure Load Balancer (LB) and Microsoft Azure Virtual Machine Scale Sets (VMSS). It also describes the various ways you can deploy vSRX 3.0 with Azure Load Balancer and Virtual Machine Scale Sets to scale vSRX 3.0 out or in.

Overview

vSRX instances are inline firewalls that serve as security gateways in the Azure Cloud to protect traffic between the west and east subnets. A single vSRX instance sometimes cannot handle very high traffic throughput, and any throughput or connection-scaling limitation on these firewalls limits the performance and scaling of the entire virtual network.

To handle such high throughput, you can use multiple vSRX 3.0 instances for the traffic inside the virtual network and for the outbound traffic, as required. You can scale vSRX 3.0 out or in by adding and removing vSRX 3.0 instances using the Azure infrastructure.

Starting in Junos OS Release 20.3R1, vSRX 3.0 can automatically scale out or scale in for internal and outbound traffic using Azure LB and Azure Virtual Machine Scale Sets. You can use the suggested deployments with Azure Load Balancer and VMSS to achieve vSRX 3.0 scaling and better performance for your business needs.

With Azure Load Balancer, you can scale your applications and create high availability for your services. Azure Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications. Load Balancer distributes new inbound flows that arrive on the Load Balancer's front-end to back-end pool instances, according to rules and health probes.

Azure VMSS let you create and manage a group of identical, load balanced virtual machines (VMs). The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs. With VMSS, you can build large-scale services for areas such as compute, big data, and container workloads.

Azure Load Balancer also checks the health of each vSRX 3.0 instance by means of a health probe. If a vSRX 3.0 instance is not healthy according to the health probe, it is taken out of the load-balancing rotation.

Note:

Do not configure NAT for west-east traffic. If NAT is configured, traffic might be distributed to different vSRX 3.0 instances for each direction.

For information about Azure Load Balancer high availability port limitations, see Azure Load Balancer – HA ports.

The core architecture of the vSRX 3.0 scale-out and scale-in solution consists of the following components:

  • Azure Load Balancer—Provides traffic distribution toward vSRX 3.0 in the back-end pool.

  • Azure Virtual Machine Scale Sets (VMSS)—Creates and manages a group of vSRX 3.0 VMs as the back-end pool of the Load Balancer. Defines automatic scale-in and scale-out rules that trigger automatic scaling.

  • Initial Junos OS configuration—With the help of cloud-init, autoconfigures each vSRX 3.0 instance in the VMSS, as sketched after this list.
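
The following is a minimal sketch of the kind of identical bootstrap configuration that cloud-init could apply to every vSRX 3.0 instance in the VMSS. The file name, host name, zone name, and the assumptions that the revenue interface ge-0/0/0 sits in the SOUTH subnet, obtains its address through DHCP, and answers a TCP port 22 health probe are placeholders; adapt them to your deployment.

# Hypothetical sketch: one bootstrap configuration pushed to every vSRX 3.0 instance.
# The file name and all values are assumptions: ge-0/0/0 is assumed to be the data
# interface in the SOUTH subnet, addressed by DHCP, and the ssh host-inbound service
# is assumed to answer a TCP port 22 health probe from Azure Load Balancer.
cat > vsrx-bootstrap.conf <<'EOF'
set system host-name vsrx-scaleout
set system services ssh
set interfaces ge-0/0/0 unit 0 family inet dhcp
set security zones security-zone data interfaces ge-0/0/0.0
set security zones security-zone data host-inbound-traffic system-services dhcp
set security zones security-zone data host-inbound-traffic system-services ping
set security zones security-zone data host-inbound-traffic system-services ssh
set security policies from-zone data to-zone data policy allow-all match source-address any
set security policies from-zone data to-zone data policy allow-all match destination-address any
set security policies from-zone data to-zone data policy allow-all match application any
set security policies from-zone data to-zone data policy allow-all then permit
EOF

In the automated deployment later in this topic, the vsrx.scale.e-w.vsrx.conf and vsrx.scale.s-n.vsrx.conf files generated by the template scripts play a similar role.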

Benefits of vSRX 3.0 Scaling Using Azure Load Balancer and Virtual Machine Scale Sets

  • Build highly reliable applications—Improve application reliability through health checks. Azure Load Balancer probes the health of your application instances, automatically takes unhealthy instances out of rotation, and reinstates them when they become healthy again. Use Load Balancer to improve application uptime.

  • Instantly add scale to your applications—With built-in load balancing for cloud services and VMs, you can create highly available and scalable applications in minutes. Azure Load Balancer supports TCP/UDP-based protocols such as HTTP, HTTPS, and SMTP, and protocols used for real-time voice and video messaging applications.

  • High availability and robust performance for your applications—Azure Load Balancer automatically scales with increasing application traffic. Without you needing to reconfigure or manage the Load Balancer, your applications provide a better customer experience.

  • Load-balance Internet and private network traffic—Use the internal Azure load balancer for traffic between VMs inside your private virtual networks, or use it to create multitiered hybrid applications.

  • Secure your networks—Provides flexible NAT rules for better security. Control your inbound and outbound network traffic, and protect private networks using built-in Network Address Translation (NAT). Secure your network and integrate network security groups with Azure Load Balancer.

Understanding the vSRX Scale-Out and Scale-In Solution for East-West Traffic

You can manage the east-west (internal) traffic in the Azure Cloud by deploying vSRX 3.0 as demonstrated in Figure 1.

Figure 1: vSRX 3.0 Scaling Deployment for East-West Traffic

Components of this deployment are:

  • West and east Azure vnet or subnet

  • Internal Azure Load Balancer

  • vSRX 3.0 VMSS

The west and east segments in this illustration represent two user networks that need to access each other (west-east traffic) or the internet (outbound traffic). The standard Azure internal Load Balancer load-balances all traffic that arrives from the west and east segments. The load-balancing rule is configured on the Azure Load Balancer's high availability (HA) ports: both the front-end and back-end port are set to 0, and the protocol is set to All. The vSRX 3.0 VMSS builds a group of multiple identical vSRX 3.0 VMs, which acts as the back-end pool of the internal Load Balancer. The Azure internal Load Balancer only distributes traffic to the vSRX 3.0 VMSS per flow (5-tuple) according to the load-balancing algorithm. It does not modify packets, for example by performing destination NAT. Its front-end IP is only a route next hop for the west and east networks.
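
As a reference, the internal Standard Load Balancer described above could be created with the Azure CLI roughly as follows. All resource names, the VNet and subnet names, and the choice of a TCP port 22 health probe are assumptions; you can achieve the same result in the Azure Portal, as shown in the manual procedure below.

# Hypothetical names (vsrx-rg, vsrx-vnet, vsrx-internal-lb, and so on); adjust to your setup.
# Internal Standard Load Balancer with its front end in the subnet that faces the vSRX NICs.
az network lb create --resource-group vsrx-rg --name vsrx-internal-lb --sku Standard \
  --vnet-name vsrx-vnet --subnet SOUTH \
  --frontend-ip-name vsrx-fe --backend-pool-name vsrx-pool

# Health probe used to decide whether a vSRX 3.0 instance stays in rotation (TCP 22 assumed).
az network lb probe create --resource-group vsrx-rg --lb-name vsrx-internal-lb \
  --name vsrx-probe --protocol Tcp --port 22

# HA ports load-balancing rule: front-end port 0, back-end port 0, protocol All.
az network lb rule create --resource-group vsrx-rg --lb-name vsrx-internal-lb \
  --name ha-ports --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name vsrx-fe --backend-pool-name vsrx-pool --probe-name vsrx-probe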

Traffic flow and management are illustrated in Figure 2.

Figure 2: Packet Flow Between West-East Traffic

Manual Deployment of vSRX Scale-In and Scale-Out Solution for East-West Traffic

To implement this deployment manually:

  1. Sign in to the Azure Portal.
  2. Add an Azure network in a new resource group, with four subnets: MGMT, EAST, WEST, SOUTH.
    • Add a front-end IP

    • Add a back-end pool

    • Add a health probe

    • Add a load-balancing rule (enable high availability [HA])

    • Add a network security group, and add inbound security rules for enabling SSH and web service.

  3. Add an Azure Load Balancer with type as Internal and SKU as Standard.
  4. Create a VMSS with Image as vSRX and Size as Standard_DS3_v2.
    1. In the Networking section, create two network interfaces (NICs) in the MGMT and SOUTH subnets, respectively, and specify the internal Azure Load Balancer created above.

    2. In the Scaling section, specify the initial instance count as 2, and add custom scale-out and scale-in rules based on available CPU.

  5. Deploy a Linux host in the west or east subnet.
  6. Define a route table for the west and east subnets, and add a user-defined route (UDR) with the next hop set to the front-end IP of the internal Load Balancer (see the sketch after this procedure).
  7. Configure each vSRX 3.0 instance.
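
The route-table step (step 6) could look like the following Azure CLI sketch; the resource names, address prefix, and the internal Load Balancer front-end IP 10.0.4.10 are placeholders only.

# Hypothetical sketch for step 6: send traffic from the WEST subnet toward the EAST
# prefix through the internal Load Balancer front end (all names and IPs are assumptions).
az network route-table create --resource-group vsrx-rg --name west-rt

az network route-table route create --resource-group vsrx-rg --route-table-name west-rt \
  --name to-east --address-prefix 10.0.2.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.4.10

# Associate the route table with the WEST subnet; repeat the pattern for the EAST subnet.
az network vnet subnet update --resource-group vsrx-rg --vnet-name vsrx-vnet \
  --name WEST --route-table west-rt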

Understanding vSRX Scale-Out and Scale-In Deployment for South-North Traffic

You can manage the south-north traffic in Azure Cloud by deploying vSRX 3.0 as demonstrated in Figure 3.

Figure 3: vSRX 3.0 Scaling Deployment for South-North Traffic

Components of this deployment are:

  • West and East Azure vnet or subnet

  • Internal Microsoft Azure Load Balancer

  • vSRX 3.0 VMSS

Traffic flow and management are illustrated in Figure 4 and Figure 5.

Figure 4: Packet Flow Between Internet (as Request Starter) and West Network Segment
Figure 5: Packet Flow Between Internet and West Network Segment (as Request Starter)

Manual Deployment of vSRX Scale-Out and Scale-In Solution for South-North Traffic

To implement this deployment manually:

  1. Sign in to the Azure Portal.
  2. Add an Azure network in a new resource group, with four subnets: MGMT, EAST, WEST, SOUTH.
    • Add a front-end IP

    • Add a back-end pool

    • Add a health probe

    • Add a load-balancing rule (enable high availability [HA])

    • Add a network security group, and add inbound security rules for enabling SSH and web service.

  3. Add an Azure Load Balancer with type as Internal and SKU as Standard.
  4. Create a virtual machine scale set with Image as vSRX and Size as Standard_DS3_v2.
    1. In the Networking section, create two NICs in the MGMT and SOUTH subnets, respectively, and specify the internal Load Balancer created above.

    2. In the Scaling section, specify the initial instance count as 2, and add custom scale-out and scale-in rules based on available CPU.

  5. Deploy a Linux host in the west or east subnet.
  6. Define a route table for the west and east subnets, and add a UDR route with the next hop set to the front-end IP of the internal Load Balancer.
  7. Add an Azure Load Balancer with type as Public and SKU as Standard, and create a new public IP.
    • Add a front-end IP by using the above public IP

    • Add a back-end pool, and associate it with vSRX 3.0 VMSS

    • Add a health probe

    • Add a load-balancing rule with front-end port 80 and back-end port 80

  8. Configure a webserver in the west host.
  9. Configure each vSRX 3.0 instance (see the sketch after this procedure).
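
For step 9, each vSRX 3.0 instance typically needs NAT in addition to the base configuration: destination NAT so that flows arriving from the public Load Balancer reach the west web server, and source NAT so that the web server's replies return through the same vSRX instance. The sketch below appends such rules to the bootstrap configuration shown earlier. The web-server address 10.0.1.10, the zone and pool names, and the use of 1.1.1.1 as a placeholder for the public Load Balancer front-end IP (as in the automated deployment that follows) are assumptions; the exact NAT design depends on your topology.

# Hypothetical sketch for step 9 (all addresses, zone, pool, and rule names are assumptions).
# The source NAT rule is scoped to traffic toward the web server so that it does not
# apply to west-east traffic (see the note earlier in this topic).
cat >> vsrx-bootstrap.conf <<'EOF'
set security nat destination pool west-web address 10.0.1.10/32
set security nat destination rule-set from-internet from zone data
set security nat destination rule-set from-internet rule to-web match destination-address 1.1.1.1/32
set security nat destination rule-set from-internet rule to-web match destination-port 80
set security nat destination rule-set from-internet rule to-web then destination-nat pool west-web
set security nat source rule-set to-west from zone data
set security nat source rule-set to-west to zone data
set security nat source rule-set to-west rule web-return match destination-address 10.0.1.10/32
set security nat source rule-set to-west rule web-return then source-nat interface
EOF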

Automatic Deployment of Solutions for vSRX Scaling

This topic provides the steps to automatically deploy the vSRX 3.0 scaling solutions for east-west and south-north traffic.

  1. Download vSRX-Azure tool: https://github.com/Juniper/vSRX-Azure/archive/primary.zip.
  2. Change the directory using the cd vSRX-Azure/sample-templates/arm-templates-tool command.
  3. Deploy east-west solution:

    ./templates/vsrx-scale-out/vsrx.scale.e-w.vsrx.conf.sh > vsrx.scale.e-w.vsrx.conf
    ./deploy-azure-vsrx.sh -f templates/vsrx-scale-out/vsrx.scale.e-w.json -e templates/vsrx-scale-out/vsrx.scale.parameters.json -r vsrx.scale.e-w.vsrx.conf -g vsrx_scale_e_w
    ./templates/vsrx-scale-out/linux.deploy.sh vsrx_scale_e_w

  4. Deploy south-north solution:

    ./templates/vsrx-scale-out/vsrx.scale.s-n.vsrx.conf.sh > vsrx.scale.s-n.vsrx.conf
    ./deploy-azure-vsrx.sh -f templates/vsrx-scale-out/vsrx.scale.s-n.json -e templates/vsrx-scale-out/vsrx.scale.parameters.json -r vsrx.scale.s-n.vsrx.conf -g vsrx_scale_s_n
    ./templates/vsrx-scale-out/linux.deploy.sh --web vsrx_scale_s_n

    After you deploy the south-north solution by using deploy-azure-vsrx.sh, you cannot yet access the web server in the Azure west or east VNet through the front-end public IP of the public Load Balancer. You must replace the placeholder IP address 1.1.1.1 with the real front-end public IP on each vSRX instance in the Azure VMSS by using the replace pattern 1.1.1.1 with x.x.x.x configuration command, and then commit the configuration.
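
To find the front-end public IP to substitute for 1.1.1.1, you can, for example, list the public IP addresses in the deployment's resource group with the Azure CLI. This assumes the template creates the public Load Balancer's front-end public IP in the vsrx_scale_s_n resource group used above.

# List public IP addresses in the south-north resource group; the one attached to the
# public Load Balancer front end is the address that replaces 1.1.1.1.
az network public-ip list --resource-group vsrx_scale_s_n \
  --query "[].{name:name, address:ipAddress}" --output table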