vSRX Virtual Firewall Cluster Staging and Provisioning for KVM

You can provision the vSRX Virtual Firewall VMs and virtual networks to configure chassis clustering.

Staging and provisioning the vSRX Virtual Firewall chassis cluster includes the tasks described in the following sections.

Chassis Cluster Provisioning on vSRX Virtual Firewall

Chassis cluster requires the following direct connections between the two vSRX Virtual Firewall instances:

  • Control link: a virtual network that operates in active/passive mode and carries control plane traffic between the two vSRX Virtual Firewall instances

  • Fabric link: a virtual network that operates in active/active mode and carries data traffic between the two vSRX Virtual Firewall instances

    Note:

    You can optionally create two fabric links for more redundancy.

The vSRX Virtual Firewall cluster uses the following interfaces:

  • Out-of-band Management interface (fxp0)

  • Cluster control interface (em0)

  • Cluster fabric interface (fab0 on node0, fab1 on node1)

Note:

The control interface must be the second vNIC. You can optionally configure a second fabric link for increased redundancy.

Figure 1: vSRX Virtual Firewall Chassis Cluster

vSRX Virtual Firewall supports chassis cluster using the virtio driver and interfaces, with the following considerations:

  • When you enable chassis cluster, you must also enable jumbo frames (MTU size = 9000) to support the fabric link on the virtio network interface.

  • If you configure a chassis cluster across two physical hosts, disable igmp-snooping on each host physical interface that the vSRX Virtual Firewall control link uses to ensure that the control link heartbeat is received by both nodes in the chassis cluster.

  • After you enable chassis cluster, the vSRX Virtual Firewall instance maps the second vNIC to the control link, em0. You can map any other vNICs to the fabric link.

Note:

For virtio interfaces, link status update is not supported. The link status of virtio interfaces is always reported as Up. For this reason, a vSRX Virtual Firewall instance using virtio and chassis cluster cannot receive link up and link down messages from virtio interfaces.

The virtual network MAC aging time determines the amount of time that an entry remains in the MAC table. We recommend that you reduce the MAC aging time on the virtual networks to minimize the downtime during failover.

For example, you can use the brctl setageing bridge 1 command to set aging to 1 second for the Linux bridge.
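As a sketch, assuming the control and fabric virtual networks are backed by Linux bridges named br-control and br-fabric (placeholder names; substitute your actual bridge names), the IGMP snooping and MAC aging recommendations above can be applied on the host as follows:

```shell
# Disable IGMP snooping on the bridge carrying the control link so that
# control link heartbeats reach both nodes (br-control is a placeholder).
hostOS# echo 0 > /sys/class/net/br-control/bridge/multicast_snooping

# Reduce MAC aging to 1 second on both bridges to minimize failover downtime.
hostOS# brctl setageing br-control 1
hostOS# brctl setageing br-fabric 1
```

The sysfs setting does not persist across host reboots; reapply it from a startup script if needed.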

You configure the virtual networks for the control and fabric links, then create and connect the control interface to the control virtual network and the fabric interface to the fabric virtual network.

Creating the Chassis Cluster Virtual Networks with virt-manager

In KVM, you create two virtual networks (control and fabric) to which you can connect each vSRX Virtual Firewall instance for chassis clustering.

To create a virtual network with virt-manager:

  1. Launch virt-manager and select Edit>Connection Details. The Connection details dialog box appears.
  2. Select Virtual Networks. The list of existing virtual networks appears.
  3. Click + to create a new virtual network for the control link. The Create a new virtual network wizard appears.
  4. Set the subnet for this virtual network and click Forward.
  5. Select Enable DHCP and click Forward.
  6. Select Isolated virtual network and click Forward.
  7. Verify the settings and click Finish to create the virtual network.

Creating the Chassis Cluster Virtual Networks with virsh

In KVM, you create two virtual networks (control and fabric) to which you can connect each vSRX Virtual Firewall for chassis clustering.

To create the control network with virsh:

  1. Use the virsh net-define command on the host OS to create an XML file that defines the new virtual network. Include the XML fields described in Table 1 to define this network.
    Note:

    See the official virsh documentation for a complete description of available options.

    Table 1: virsh net-define XML Fields

    Field

    Description

    <network>...</network>

    Use this XML wrapper element to define a virtual network.

    <name>net-name</name>

    Specify the virtual network name.

    <bridge name="bridge-name" />

    Specify the name of the host bridge used for this virtual network.

    <forward mode="forward-option" />

    Specify routed or nat. Do not use the <forward> element for isolated mode.

    <ip address="ip-address" netmask="net-mask"> <dhcp> <range start="start" end="end" /> </dhcp> </ip>

    Specify the IP address and subnet mask used by this virtual network, along with the DHCP address range.

    The following example shows a sample XML file that defines a control virtual network.
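A minimal sketch of such a definition, assuming an isolated network named control backed by a hypothetical bridge virbr1 with an arbitrary 192.168.100.0/24 subnet (no <forward> element, so the network is isolated):

```xml
<network>
  <name>control</name>
  <bridge name="virbr1" />
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.2" end="192.168.100.254" />
    </dhcp>
  </ip>
</network>
```

Assuming this definition is saved as control.xml (a placeholder filename), it is loaded with virsh net-define control.xml before you continue with the steps below.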

  2. Use the virsh net-start command to start the new virtual network.

    hostOS# virsh net-start control

  3. Use the virsh net-autostart command to automatically start the new virtual network when the host OS boots.

    hostOS# virsh net-autostart control

  4. Optionally, use the virsh net-list --all command on the host OS to verify the new virtual network.
  5. Repeat this procedure to create the fabric virtual network.

Configuring the Control and Fabric Interfaces with virt-manager

To configure the control and fabric interfaces for chassis clustering with virt-manager:

  1. In virt-manager, double-click the vSRX Virtual Firewall VM and select View>Details. The vSRX Virtual Firewall Virtual Machine details dialog box appears.
  2. Select the second vNIC and select the control virtual network from the Source device list.
  3. Select virtio from the Device model list and click Apply.
  4. Select a subsequent vNIC, and select the fabric virtual network from the Source device list.
  5. Select virtio from the Device model list and click Apply.
  6. For the fabric interface, use the ifconfig command on the host OS to set the MTU to 9000.

    hostOS# ifconfig vnet1 mtu 9000
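On hosts where ifconfig is deprecated or unavailable, the equivalent iproute2 command (assuming the same vnet1 tap interface) is:

```shell
hostOS# ip link set dev vnet1 mtu 9000
```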

Configuring the Control and Fabric Interfaces with virsh

To configure control and fabric interfaces to a vSRX Virtual Firewall VM with virsh:

  1. Type virsh attach-interface --domain vsrx-vm-name --type network --source control-vnetwork --target control --model virtio on the host OS.

    This command creates a virtual interface called control and connects it to the control virtual network.

  2. Type virsh attach-interface --domain vsrx-vm-name --type network --source fabric-vnetwork --target fabric --model virtio on the host OS.

    This command creates a virtual interface called fabric and connects it to the fabric virtual network.
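With hypothetical names filled in (a VM named vsrx-node0 attached to virtual networks named control and fabric), the two commands from the steps above look like:

```shell
hostOS# virsh attach-interface --domain vsrx-node0 --type network \
    --source control --target control --model virtio
hostOS# virsh attach-interface --domain vsrx-node0 --type network \
    --source fabric --target fabric --model virtio
```

Repeat both commands for the second VM (for example, vsrx-node1).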

  3. For the fabric interface, use the ifconfig command on the host OS to set the MTU to 9000.

    hostOS# ifconfig vnet1 mtu 9000

Configuring Chassis Cluster Fabric Ports

After the chassis cluster is formed, you must configure the interfaces that make up the fabric (data) ports.

Before you begin, ensure that you have:

  • Set the chassis cluster ID on both vSRX Virtual Firewall instances and rebooted both instances.

  • Configured the control and fabric links.

  1. On the vSRX Virtual Firewall node 0 console in configuration mode, configure the fabric (data) ports of the cluster that are used to pass real-time objects (RTOs). The configuration will be synchronized directly through the control port to vSRX Virtual Firewall node 1.
    Note:

    A fabric port can be any unused revenue interface.
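    A minimal sketch of the fabric port configuration on node 0, assuming ge-0/0/2 (node 0) and ge-7/0/2 (node 1) are unused revenue interfaces; substitute interface names that match your deployment:

    ```
    {primary:node0}[edit]
    set interfaces fab0 fabric-options member-interfaces ge-0/0/2
    set interfaces fab1 fabric-options member-interfaces ge-7/0/2
    commit
    ```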

  2. Reboot vSRX Virtual Firewall node 0.
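After both nodes are back up, cluster health can be verified from the Junos CLI in operational mode (the exact output varies by release):

```
vsrx> show chassis cluster status
```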