    Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances

    Overview: Support for TOR Switch and OVSDB

    Contrail Release 2.1 and later supports extending a cluster to include bare metal servers and other virtual instances connected to a top-of-rack (TOR) switch that supports the Open vSwitch Database Management (OVSDB) Protocol. The bare metal servers and other virtual instances can belong to any of the virtual networks configured in the Contrail cluster, facilitating communication with the virtual instances running in the cluster. Contrail policy configurations can be used to control this communication.

    The OVSDB protocol is used to configure the TOR switch and to import dynamically learned addresses from it. VXLAN encapsulation is used for data plane communication with the TOR switch.

    TOR Services Node (TSN)

    A new node, the TOR services node (TSN), is introduced and provisioned as a new role in the Contrail system. The TSN acts as the multicast controller for the TOR switches. The TSN also provides DHCP and DNS services to the bare metal servers or virtual instances running behind TOR ports.

    The TSN receives all the broadcast packets from the TOR, and replicates them to the required compute nodes in the cluster and to other EVPN nodes. Broadcast packets from the virtual machines in the cluster are sent directly from the respective compute nodes to the TOR switch.

    The TSN can also act as the DHCP server for the bare metal servers or virtual instances, leasing IP addresses to them, along with other DHCP options configured in the system. The TSN also provides a DNS service for the bare metal servers. Multiple TSN nodes can be configured in the system based on the scaling needs of the cluster.

    Contrail TOR Agent

    A TOR agent provisioned in the Contrail cluster acts as the OVSDB client for the TOR switch, and all of the OVSDB interactions with the TOR switch are performed by using the TOR agent. The TOR agent programs the different OVSDB tables onto the TOR switch and receives the local unicast table entries from the TOR switch.

    The typical practice is to run the TOR agent on the TSN node.
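
    To make the OVSDB interaction concrete, the following is a minimal sketch, not the contrail-tor-agent implementation, of the kind of JSON-RPC monitor request an OVSDB client sends to a TOR switch to be notified of locally learned MAC addresses (the Ucast_Macs_Local table of the hardware_vtep database, per RFC 7047). The TOR address and port are hypothetical placeholders.

    import json
    import socket

    TOR_IP = "192.0.2.10"    # hypothetical TOR management address
    TOR_OVSDB_PORT = 9999    # hypothetical OVSDB port (tor_ovs_port in the testbed)

    # Ask for the initial contents of Ucast_Macs_Local plus subsequent inserts,
    # deletes, and modifications; changes arrive as "update" notifications.
    monitor_request = {
        "method": "monitor",
        "params": [
            "hardware_vtep",          # OVSDB database used by VTEP-capable switches
            None,                     # opaque value echoed back in update notifications
            {"Ucast_Macs_Local": [{"select": {"initial": True, "insert": True,
                                              "delete": True, "modify": True}}]},
        ],
        "id": 0,
    }

    with socket.create_connection((TOR_IP, TOR_OVSDB_PORT)) as sock:
        sock.sendall(json.dumps(monitor_request).encode())
        # The reply and later notifications are concatenated JSON values; a real
        # client would use a streaming JSON parser instead of a single recv().
        print(sock.recv(65536).decode())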

    Configuration Model

    The following figure depicts the configuration model used in the system.

    Figure 1: Configuration Model

    The TOR agent receives the configuration information for the TOR switch. The TOR agent translates the Contrail configuration to OVSDB and populates the relevant OVSDB table entries in the TOR switch.

    The following table maps the Contrail configuration objects to the OVSDB tables.

    Contrail Object                  OVSDB Table
    Physical device                  Physical switch
    Physical interface               Physical port
    Virtual networks                 Logical switch
    Logical interface                <VLAN, physical port> binding to logical switch
    Layer 2 unicast route table      Unicast remote and local tables
    -                                Multicast remote table
    -                                Multicast local table
    -                                Physical locator table
    -                                Physical locator set table
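
    For quick reference, the same mapping can be written as a small Python dictionary. The table names on the right are an assumption based on the standard OVSDB hardware_vtep schema; the table above uses the generic names.

    # Mapping sketch only; the right-hand names assume the standard OVSDB
    # hardware_vtep schema and are not taken from the Contrail source.
    CONTRAIL_TO_OVSDB = {
        "physical device":             "Physical_Switch",
        "physical interface":          "Physical_Port",
        "virtual network":             "Logical_Switch",
        "logical interface":           "vlan_bindings on Physical_Port",  # <VLAN, port> bound to a logical switch
        "layer 2 unicast route table": ("Ucast_Macs_Remote", "Ucast_Macs_Local"),
    }
    # Populated without a direct Contrail counterpart:
    # Mcast_Macs_Remote, Mcast_Macs_Local, Physical_Locator, Physical_Locator_Set.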

    Control Plane

    The TOR agent receives the EVPN route entries for the virtual networks in which the TOR switch ports are members, and adds the entries to the unicast remote table in the OVSDB.

    MAC addresses learned in the TOR switch for different logical switches (entries from the local table in OVSDB) are propagated to the TOR agent. The TOR agent exports the addresses to the control node in the corresponding EVPN tables, which are further distributed to other controllers and subsequently to compute nodes and other EVPN nodes in the cluster.

    The TSN node receives the replication tree for each virtual network from the control node. It adds the required TOR addresses to the received tree to form its complete replication tree. The other compute nodes receive their replication trees from the control node; these trees include the TSN node.
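
    As an illustration only, and not the Contrail data structures, the replication behavior described above can be sketched as simple sets; all addresses are hypothetical placeholders.

    # Members of the replication tree for one virtual network, as seen by the TSN.
    from_control_node = {"10.0.0.11", "10.0.0.12"}   # compute / EVPN nodes received from the control node
    local_tor_vteps = {"10.1.1.1", "10.1.1.2"}       # VTEPs of the TORs served by this TSN

    # The TSN's complete replication tree is the received tree plus its TORs.
    tsn_replication_tree = from_control_node | local_tor_vteps

    # A broadcast frame from a bare metal server behind 10.1.1.1 is replicated
    # to every other member of the tree.
    source_tor = "10.1.1.1"
    targets = tsn_replication_tree - {source_tor}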

    Data Plane

    The data plane encapsulation method is VXLAN. The virtual tunnel endpoint (VTEP) for the bare metal end is on the TOR switch.

    Unicast traffic from bare metal servers is VXLAN-encapsulated by the TOR switch and forwarded, if the destination MAC address is known within the virtual switch.

    Unicast traffic from the virtual instances in the Contrail cluster is forwarded to the TOR switch, where VXLAN is terminated and the packet is forwarded to the bare metal server.

    Broadcast traffic from bare metal servers is received by the TSN node. The TSN node uses the replication tree to flood the broadcast packets in the virtual network.

    Broadcast traffic from the virtual instances in the Contrail cluster is sent to the TSN node, which replicates the packets to the TORs.
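
    As an illustration of the encapsulation only, the following sketch builds the 8-byte VXLAN header defined in RFC 7348 that wraps the original Layer 2 frame on the data plane; the VNI and inner frame are hypothetical placeholders.

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned VXLAN UDP port

    def vxlan_encap(inner_l2_frame: bytes, vni: int) -> bytes:
        # 8-byte VXLAN header: flags (I bit set), 24-bit VNI, reserved fields.
        # The result is carried in a UDP datagram to the remote VTEP
        # (the TOR switch, for traffic toward a bare metal server).
        flags = 0x08 << 24                             # I flag: VNI field is valid
        header = struct.pack("!II", flags, vni << 8)   # VNI occupies the top 24 bits
        return header + inner_l2_frame

    encapsulated = vxlan_encap(b"\x00" * 64, vni=100)  # dummy frame, example VNI
    assert len(encapsulated) == 8 + 64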

    Using the Web Interface to Configure TOR Switch and Interfaces

    The Contrail Web user interface can be used to configure a TOR switch and the interfaces on the switch.

    Select Configure > Physical Devices > Physical Routers and create an entry for the TOR switch, providing the TOR's IP address and VTEP address. Also configure the TSN and TOR agent addresses for the TOR.

    Figure 2: Add Physical Router Window

    Select Configure > Physical Devices > Interfaces and add the logical interfaces to be configured on the TOR. The name of the logical interface must match the name on the TOR, for example, ge-0/0/0.10. Also enter other logical interface configurations, such as VLAN ID, MAC address, and IP address of the bare metal server and the virtual network to which it belongs.

    Figure 3: Add Interface Window

    Provisioning with Fab Commands

    The TSN can be provisioned using fab commands.

    To provision with fab commands, the following changes are required in the testbed.py file.

    1. In env.roledefs, add hosts for the roles tsn and toragent.
    2. Configure the TSN node in the compute role.
    3. Use the following example to configure the TOR agent. The TOR agent node should also be configured in the compute role.
      env.tor_agent = {
          host2: [{
              'tor_ip': '<ip address>',          # IP address of the TOR
              'tor_id': '<1>',                   # Numeric value to uniquely identify the TOR
              'tor_type': 'ovs',                 # Always ovs
              'tor_ovs_port': '9999',            # TCP port to connect to on the TOR
              'tor_ovs_protocol': 'tcp',         # Always tcp, for now
              'tor_tsn_ip': '<ip address>',      # IP address of the TSN for this TOR
              'tor_tsn_name': '<name>',          # Name of the TSN node
              'tor_name': '<switch name>',       # Name of the TOR switch
              'tor_tunnel_ip': '<ip address>',   # IP address of the data tunnel endpoint
              'tor_vendor_name': '<name>',       # Vendor name of the TOR switch
              'tor_http_server_port': '<port>',  # HTTP port for the TOR agent introspect
          }]
      }
    4. Two TOR agents provisioned on different hosts are considered redundant to each other if the tor_name and tor_ovs_port in their respective configurations are the same. Note that this means both TOR agents listen on the same port for SSL connections.

      Use the tasks add_tsn and add_tor_agent to provision the TSN and TOR agents.

      The following fab tasks are available for provisioning and removing TSNs and TOR agents:

      add_tsn : Provision all the TSNs given in the testbed.
      add_tor_agent : Provision all the TOR agents given in the testbed.
      add_tor_agent_node : Provision all TOR agents on the specified node
      (e.g., fab add_tor_agent_node:root@<ip>).
      add_tor_agent_by_id : Provision the specified TOR agent, identified by tor_agent_id
      (e.g., fab add_tor_agent_by_id:1,root@<ip>).
      add_tor_agent_by_index : Provision the specified TOR agent, identified by its index/position in the testbed
      (e.g., fab add_tor_agent_by_index:0,root@<ip>).
      add_tor_agent_by_index_range : Provision a group of TOR agents, identified by their indices in the testbed
      (e.g., fab add_tor_agent_by_index_range:0-2,root@<ip>).
      delete_tor_agent : Remove all TOR agents on all nodes.
      delete_tor_agent_node : Remove all TOR agents on the specified node
      (e.g., fab delete_tor_agent_node:root@<ip>).
      delete_tor_agent_by_id : Remove the specified TOR agent, identified by tor_id
      (e.g., fab delete_tor_agent_by_id:2,root@<ip>).
      delete_tor_agent_by_index : Remove the specified TOR agent, identified by its index/position in the testbed
      (e.g., fab delete_tor_agent_by_index:0,root@<ip>).
      delete_tor_agent_by_index_range : Remove a group of TOR agents, identified by their indices in the testbed
      (e.g., fab delete_tor_agent_by_index_range:0-2,root@<ip>).
      setup_haproxy_config : Provision HAProxy.

      To configure an existing compute node as a TSN or a TOR agent, use the following fab tasks:
      fab add_tsn_node:True,user@<ip>
      fab add_tor_agent_node:True,user@<ip>
      Note that fab setup_all provisions these roles appropriately when run with an updated testbed file.
      
      
    5. Vrouter limits on the TSN node must be configured to suit the scaling requirements of the setup. Update the following in the testbed file before setup so that the fab task configures the appropriate vrouter options.
      env.vrouter_module_params = {
           host4:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536', 'macs':'1000000'},
           host5:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536', 'macs':'1000000'}
      }
      

      The following applies:

      • mpls_labels = (maximum number of VNs * 3) + 4000
      • nexthops = (maximum number of VNs * 4) + number of TORs + number of compute nodes + 100
      • vrfs = maximum number of VNs
      • macs = maximum number of MACs in a VN
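
      For example, the following back-of-the-envelope sketch (with hypothetical cluster dimensions) applies these formulas and produces the values to place in env.vrouter_module_params:

      # Hypothetical cluster dimensions; replace with the values for your deployment.
      max_vns = 4000             # maximum number of virtual networks
      num_tors = 16              # number of TOR switches
      num_computes = 100         # number of compute nodes
      max_macs_per_vn = 1000000  # maximum number of MACs in a VN

      params = {
          'mpls_labels': str(max_vns * 3 + 4000),                          # '16000'
          'nexthops':    str(max_vns * 4 + num_tors + num_computes + 100), # '16216'
          'vrfs':        str(max_vns),                                     # '4000'
          'macs':        str(max_macs_per_vn),                             # '1000000'
      }
      # In testbed.py this would become, for each TSN host:
      # env.vrouter_module_params = {host4: params, host5: params}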

    On a TSN node or a compute node, the currently configured limits can be displayed with vrouter --info. To change them, edit the /etc/modprobe.d/vrouter.conf file with the following options (with the numbers set to the desired values) and restart the node.

    options vrouter vr_mpls_labels=196000 vr_nexthops=521000 vr_vrfs=65536 vr_bridge_entries=1000000

    Prerequisite Configuration for QFX5100 Series Switch

    When using Juniper Networks QFX5100 Series switches, ensure the following configuration is present on the switch before extending the Contrail cluster to it.

    1. Enable OVSDB.
    2. Set the connection protocol.
    3. Indicate the interfaces that will be managed via OVSDB.
    4. Configure the controller when pssl is used. If HAProxy is used, use the address of the HAProxy node; use the virtual IP (vIP) when VRRP is used between multiple nodes running HAProxy.
      set interfaces lo0 unit 0 family inet address
      set switch-options ovsdb-managed
      set switch-options vtep-source-interface lo0.0
      set protocols ovsdb interfaces
      set protocols ovsdb passive-connection protocol tcp port
      set protocols ovsdb controller <tor-agent-ip> inactivity-probe-duration 10000 protocol ssl port <tor-agent-port>
      
    5. When using SSL to connect, CA-signed certificates must be copied to the /var/db/certs directory on the QFX device. One way to generate them is with the following commands, which can be run on any server.
      apt-get install openvswitch-common
      ovs-pki init
      ovs-pki req+sign vtep
      scp vtep-cert.pem root@<qfx>:/var/db/certs
      scp vtep-privkey.pem root@<qfx>:/var/db/certs

      When these commands are done, the cacert.pem file is available in /var/lib/openvswitch/pki/switchca. This is the file to provide in the testbed file (in env.ca_cert_file).
      

    Debug QFX5100 Configuration

    On the QFX, use the following commands to show the OVSDB configuration.

    show ovsdb logical-switch
    show ovsdb interface
    show ovsdb mac
    show ovsdb controller
    show vlans

    Using the agent introspect on the TOR agent and TSN nodes shows the configuration and operational state of these modules.

    The TSN module is like any other contrail-vrouter-agent on a compute node, with introspect access available on port 8085 by default. Use the introspect on port 8085 to view operational data such as interfaces, virtual network, and VRF information, along with their routes.

    The port on which the TOR agent introspect is available is specified in the configuration file provided to contrail-tor-agent. The TOR agent introspect provides the OVSDB data received through the client interface, in addition to the other data available in a Contrail agent.
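
    Because the introspect interfaces are plain HTTP servers, a quick reachability check can be scripted as in the sketch below; the IP address is a hypothetical placeholder, 8085 is the default agent introspect port, and 9010 matches the http_server_port in the example TOR agent configuration that follows.

    from urllib.request import urlopen

    # Fetch the introspect index pages of a TSN (port 8085) and a TOR agent (port 9010).
    for url in ("http://192.0.2.20:8085/", "http://192.0.2.20:9010/"):
        with urlopen(url, timeout=5) as resp:
            print(url, resp.status, resp.headers.get("Content-Type"))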

    Changes to Agent Configuration File

    New options are added to the agent configuration files for these features. In the /etc/contrail/contrail-vrouter-agent.conf file for the TSN, the agent_mode option is now available in the DEBUG section to configure the agent to be in TSN mode.

    agent_mode = tsn

    The following are typical configuration items in a TOR agent configuration file:

    [DEFAULT]
    agent_name = noded2-1       # Name (formed from the hostname and the TOR id below)
    agent_mode = tor            # Agent mode
    http_server_port = 9010     # Port on which introspect access is available

    [DISCOVERY]
    server = <ip>               # IP address of the discovery server

    [TOR]
    tor_ip = <ip>               # IP address of the TOR to manage
    tor_id = 1                  # Identifier for the TOR agent
    tor_type = ovs              # TOR management scheme - only "ovs" is supported
    tor_ovs_protocol = tcp      # Transport protocol used to connect to the TOR, can be tcp or pssl
    tor_ovs_port = <port>       # OVSDB server port number on the TOR
    tsn_ip = <ip>               # IP address of the TSN
    tor_keepalive_interval = 10000   # Keepalive timer in ms
    ssl_cert = /etc/contrail/ssl/certs/tor.1.cert.pem           # Path to the SSL certificate on the TOR agent, needed for pssl
    ssl_privkey = /etc/contrail/ssl/private/tor.1.privkey.pem   # Path to the SSL private key on the TOR agent, needed for pssl
    ssl_cacert = /etc/contrail/ssl/certs/cacert.pem             # Path to the SSL CA certificate on the node, needed for pssl

    REST APIs

    For information about the REST APIs for physical routers and for physical and logical interfaces, see REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces.

    Modified: 2015-09-02