
    Configuring an IP Fabric using Junos Space Network Director or OpenClos

    Juniper Networks provides tools to help automate the creation of spine-and-leaf IP fabrics for SaaS environments. This example covers two of these tools: OpenClos and Junos Space Network Director.

    OpenClos is a Python script library that enables you to automate the design, deployment, and maintenance of a Layer 3 fabric built on BGP. To create an IP fabric that uses a spine-and-leaf architecture, the library generates device configuration files and uses zero touch provisioning (ZTP) to push them to the devices.

    OpenClos functionality has also been built into Network Director 2.0 (and later), which allows you to provision spine-and-leaf Layer 3 fabrics using a GUI-based wizard.

    Leaf Device Configuration

    The Layer 3 Fabric wizard and OpenClos tools autogenerate the following configuration elements for leaf devices (a representative sketch follows the list):

    • System configuration (hostname, root password, services, syslog, and so on)
    • Upstream interfaces to each spine device
    • Downstream Layer 2 interfaces for server access
    • Loopback interface
    • IRB interface to provide the gateway address for servers
    • VLAN to aggregate server-facing interfaces
    • Static routes and other routing options
    • EBGP sessions to each spine device
    • Routing policy
    • LLDP
    • SNMP and event options
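
    For reference, the generated leaf configuration takes roughly the following shape. This is an abbreviated sketch only: the interface names, addresses, and AS numbers are illustrative (loosely modeled on the closDefinition.yaml values shown later in this example), and the authoritative output appears in Example: Configuring the Software as a Service Solution.

      interfaces {
          et-0/0/48 {
              description "uplink to Spine-00";
              unit 0 {
                  family inet {
                      address 192.168.11.1/31;
                  }
              }
          }
          lo0 {
              unit 0 {
                  family inet {
                      address 10.0.16.4/32;
                  }
              }
          }
          irb {
              unit 1 {
                  family inet {
                      address 172.16.64.1/24;
                  }
              }
          }
      }
      protocols {
          bgp {
              group spine {
                  type external;
                  neighbor 192.168.11.0 {
                      peer-as 420005001;
                  }
              }
          }
          lldp {
              interface all;
          }
      }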

    Spine Device Configuration

    The Layer 3 Fabric wizard and OpenClos tools autogenerate the following configuration elements for spine devices (a representative sketch follows the list):

    • System configuration (hostname, root password, services, syslog, and so on)
    • Downstream interfaces to each leaf device
    • Loopback and management interfaces
    • Static routes and other routing options
    • EBGP sessions to each leaf device
    • Routing policy
    • LLDP
    • SNMP and event options
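
    On the spine side the pattern is mirrored: downstream interfaces and one EBGP session per leaf, plus a routing policy that advertises the loopback and fabric routes. Again, this is an abbreviated sketch with illustrative values — the policy name is hypothetical — and the authoritative output appears in Example: Configuring the Software as a Service Solution.

      protocols {
          bgp {
              group leaf {
                  type external;
                  export bgp-export;
                  neighbor 192.168.11.1 {
                      peer-as 420006001;
                  }
              }
          }
      }
      policy-options {
          policy-statement bgp-export {
              term loopbacks {
                  from {
                      protocol direct;
                      route-filter 10.0.16.0/24 orlonger;
                  }
                  then accept;
              }
          }
      }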

    The following sections describe how to use these automation tools to provision the leaf and spine layers of an IP fabric.

    Using Network Director to Provision Leaf and Spine Devices

    This section describes how to use the Layer 3 Fabric wizard in Junos Space Network Director 2.5 to provision the leaf and spine layers of an IP fabric.

    Note: Network Director 2.0 provided initial support for Layer 3 fabrics, using QFX5100 switches. Network Director 2.5 adds support for QFX10002 switches.

    The procedure below creates the leaf and spine configurations shown in the section Example: Configuring the Software as a Service Solution.

    Note: More detailed information on creating Layer 3 fabrics using Network Director can be found at Creating Layer 3 Fabrics.

    To autoprovision the main spine and leaf configuration elements using Network Director:

    1. In the Views drop-down menu, select Logical View; then in the Tasks section, select Device Management > Manage Layer 3 Fabrics.
    2. In the Manage Layer 3 Fabrics section, select the Create option.
    3. On the Fabric Requirement page:
      1. Enter a name in the Fabric Name field.
      2. In the Spines section, ensure the Model field is set to QFX10002-72Q, and set the Initial Capacity and Max. Capacity fields to 4.

        Note: This procedure was originally created using QFX5100-24Q-2P switches as spine devices. To meet the SaaS solution’s current specifications, select QFX10002-72Q as the model for the spine switches.

      3. In the Leaves section, add two QFX5100-48T-6Q and four QFX5100-48S-6Q devices, and set the Max. Capacity field to 6.
      4. Click Next.
    4. On the Devices page, confirm that the device listing is correct and click Next.
    5. On the Configurations page, fill in the configuration fields as appropriate for your fabric and click Next.
    6. On the Cabling page, review the cabling plan for the devices in the Layer 3 fabric and click Next.

      Note: The cabling plan displays the exact port numbers that you must use to connect the spine and leaf devices. Configurations created and deployed by Network Director will use these interface names.

    7. On the ZTP Settings page, enter the appropriate data in the various fields, including the serial number or management interface MAC address for all spine devices, and click Next.

      Note: Using these details, the spine devices are autodiscovered through LLDP once cabling is complete.

    8. On the Review page, review the configuration settings for the Layer 3 fabric, and when you are ready to deploy the configuration files to the devices, click Deploy.
    9. Monitor the progress of the deployment by using the ZTP Provisioning dialog box.
    10. To verify the initial configuration and connectivity of the leaf and spine devices, go to the Views drop-down menu and select Logical View; then in the Tasks section, select View Inventory.

      The leaf and spine devices should appear in the Device Inventory window with a connection state of UP and configuration state of In Sync.

      Note: As noted earlier, this procedure was originally created using QFX5100-24Q-2P switches as spine devices. If you follow the steps in this procedure, the platform displayed for the spine devices should be QFX10002-72Q.
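
      You can also spot-check any fabric device from the Junos OS CLI. For example (the hostname is illustrative), each spine should list every leaf as an LLDP neighbor and show one established EBGP session per leaf:

      user@Spine-00> show lldp neighbors
      user@Spine-00> show bgp summary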

    For More Information

    The resulting leaf and spine configurations from this procedure can be found at Example: Configuring the Software as a Service Solution.

    More detailed information on creating Layer 3 fabrics using Network Director can be found at Creating Layer 3 Fabrics.

    Using OpenClos to Provision Leaf and Spine Devices

    This section describes how to use OpenClos 3.0 to provision the leaf and spine layers of an IP fabric.

    The procedure below creates the leaf and spine configurations shown in the section Example: Configuring the Software as a Service Solution.

    Note: More detailed information on creating Layer 3 fabrics using OpenClos can be found at https://github.com/Juniper/OpenClos.

    Before you begin, install OpenClos on a Linux server, as follows:

    1. Download OpenClos from https://github.com/Juniper/OpenClos/tree/devR3.0.
    2. Install OpenClos in /var/tmp, so that the directory listing looks like this (one possible installation sequence is sketched after the listing):
      user@ubuntu-Openclos:/var/tmp$ ls
      OpenClos-devR3.0
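
      For example, assuming the server has Git and pip installed (this is a sketch only; see the OpenClos README for the supported installation method):

      user@ubuntu-Openclos:~$ cd /var/tmp
      user@ubuntu-Openclos:/var/tmp$ git clone -b devR3.0 https://github.com/Juniper/OpenClos.git OpenClos-devR3.0
      user@ubuntu-Openclos:/var/tmp$ sudo pip install ./OpenClos-devR3.0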
      
      

    To autoprovision the main spine and leaf configuration elements using OpenClos:

    1. Navigate to the closDefinition.yaml file.

      This file defines the IP fabric requirements.

      user@ubuntu-Openclos:/var/tmp$ cd OpenClos-devR3.0/jnpr/openclos/conf
      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$
    2. Open the closDefinition.yaml file and edit the ZTP settings, the number of pods, and the Junos OS image locations.
      solution@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$ more closDefinition.yaml
      ztp:
          # the image file should be placed under <install-dir>/jnpr/openclos/conf/ztp     
          # if not placed under this dir, the file would not be accessible from http server     
          # and ZTP process will be broken, can be overridden at each pod for Spine/Leaf     
          # this field is optional 
      #    junosImage : jinstall-qfx-5-14.1X53-D10.4-domestic-signed.tgz 
      
          dhcpSubnet : 10.94.63.150/24 
          # dhcpOptionRoute is the Gateway address for any out-of-band network including 
          # management network, this will get configured using static route.
          # by default openclos would run on same subnet as devices.
          dhcpOptionRoute  : 10.94.63.254
      
          # Following two fields are optional, if not provided start and end
          # includes complete dhcp subnet, example for 10.0.2.0/24
          # dhcpOptionRangeStart: 10.0.2.1, dhcpOptionRangeEnd: 10.0.2.255
          dhcpOptionRangeStart : 10.94.63.150
          dhcpOptionRangeEnd : 10.94.63.170
      
      pods:
          # pod name or pod identifier
          labLeafSpine:
              spineCount : 4
              # possible options for spine deviceType are qfx5100-24q-2p, qfx10002-***, qfx10008-***
              # the image file should be placed under <install-dir>/jnpr/openclos/conf/ztp         
              # if not placed under this dir, the file would not be accessible from http server         
              # and ZTP process will be broken, these are optional, overrides global setting ztp.junosImage
              spineSettings :
                  - deviceType : qfx10002-72q
                    junosImage : jinstall-qfx-10-f-15.1X53-D32.2-domestic-signed.tgz
      
              leafCount : 6
              # possible options for leafDeviceType are qfx5100-96S, qfx5100-48s-6q
              # for complete list refer to openclos.yaml
              # the image file should be placed under <install-dir>/jnpr/openclos/conf/ztp
              leafSettings :
                  - deviceType : qfx5100-48s-6q
                    junosImage : jinstall-qfx-5-flex-15.1R3.6-domestic-signed.tgz
                  - deviceType : OCX1100-48SX
                    junosImage : jinstall-ocx-11-flex-14.1X53-D35.3-domestic.tgz
              # Number of uplink from leaf must be properly connected and up to indicate
              # the leaf device as "good". If the leaf device is not in "good" state
              # it would not go through 2-stage ZTP/configuration process.
              # Possible value is in between 2 and spineCount, inclusive both end.
              # This field is optional, default value is max(2, math.ceil(spineCount/2))
              leafUplinkcountMustBeUp : 3 
              hostOrVmCountPerLeaf : 25
              interConnectPrefix : 192.168.11.0/24
              vlanPrefix : 172.16.64.0/24
              loopbackPrefix : 10.0.16.0/24
              # either managementPrefix or (managementStartingIP, managementMask) is mandatory. Here is how it works:
              # case 1: managementPrefix : 1.2.3.7/24
              #         from cidr notation of managementPrefix we know available block is [1.2.3.0 - 1.2.3.255]
              #         from ip portion of managementPrefix we know starting ip is 1.2.3.7
              #         so the effective range is [1.2.3.7 - 1.2.3.7+spineCount+leafCount]
              # case 2: managementStartingIP : 1.2.3.7
              #         managementMask : 24
              #         from cidr notation of 'managementStartingIP/managementMask' we know available block is [1.2.3.0 - 1.2.3.255]
              #         from managementStartingIP we know starting ip is 1.2.3.7
              #         so the effective range is [1.2.3.7 - 1.2.3.7+spineCount+leafCount]
              managementPrefix : 10.94.63.150/24
              # managementStartingIP  : 10.94.63.150
              # managementMask  : 24
              spineAS : 420005000
              leafAS : 420006000
              # possible options for topologyType are threeStage, fiveStageRealEstate, fiveStagePerformance
              topologyType : threeStage
              inventory : inventoryLabLeafSpine.json
              # List of out of band networks, example - devices in management network
              outOfBandAddressList :
                  - 0.0.0.0/0
              #    - 10.94.185.18/32
              #    - 10.94.185.19/32
              #    - 172.16.0.0/12
              # Management network gateway address
              # It overrides ztp:dhcpOptionRoute setting
              outOfBandGateway : 10.94.63.254
              
              # device default root password
              devicePassword: <password>
              # possible options for leafDeviceType are qfx5100-96S, qfx5100-48s-6q
              # but we can use qfx5100-24q-2p with customized SKU 
              # following example assumes device has no expansion module, so there
              # are 24 ports. First 16 ports are used as access port and 
              # remaining 8 ports are used as uplink ports
              #leafSettings :
              #    - deviceType : qfx5100-24q-2p
              #      uplinkPorts : ['et-0/0/[16-23]']
              #      downlinkPorts : ['et-0/0/[0-15]']
      
      
    3. Confirm that the devices to be used in the IP fabric are included in the deviceFamily.yaml file.

      This file includes details about devices to be used in the IP fabric. OpenClos 3.0 includes native support for several devices, including the QFX5100, QFX10002, and OCX1100 switches.

      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$ more deviceFamily.yaml
      # Device port usage based on device family and topology
      # qfx5100-24q-2p ports could have 32 ports with two four-port expansion modules
      # When used as Fabric, all ports are downlink ports
      # When used as Spine in 3-Stage topology, all ports are used as downlink
      # When used as Spine in 5-Stage topology, ports are split between uplink and downlink 
      
      deviceFamily:
          qfx5100-24q-2p:
              fabric:
                  uplinkPorts: 
                  downlinkPorts: ['et-0/0/[0-23]', 'et-0/1/[0-3]', 'et-0/2/[0-3]']
              spine:
                  uplinkPorts: ['et-0/0/[16-23]', 'et-0/1/[0-3]', 'et-0/2/[0-3]']
                  downlinkPorts: 'et-0/0/[0-15]'
          qfx10002-36q:
              fabric:
                  uplinkPorts: 
                  downlinkPorts: 'et-0/0/[0-35]'
              spine:
                  uplinkPorts: 'et-0/0/[18-35]'
                  downlinkPorts: 'et-0/0/[0-17]'
          qfx10002-72q:
              fabric:
                  uplinkPorts: 
                  downlinkPorts: 'et-0/0/[0-71]'
              spine:
                  uplinkPorts: 'et-0/0/[36-71]'
                  downlinkPorts: 'et-0/0/[0-35]'
          qfx10008:
              fabric:
                  uplinkPorts: 
                  downlinkPorts: 
              spine:
                  uplinkPorts: 
                  downlinkPorts: 
              leaf:
                  uplinkPorts: 
                  downlinkPorts: 
                   
          qfx5100-48s-6q:
              leaf:
                  uplinkPorts: 'et-0/0/[48-53]'
                  downlinkPorts: ['xe-0/0/[0-47]', 'ge-0/0/[0-47]']
          qfx5100-48t-6q:
              leaf:
                  uplinkPorts: 'et-0/0/[48-53]'
                  downlinkPorts: 'xe-0/0/[0-47]'
          OCX1100-48SX:
              leaf:
                  uplinkPorts: 'et-0/0/[48-53]' 
                  downlinkPorts: 'xe-0/0/[0-47]'
          qfx5100-96s-8q:
              leaf:
                  uplinkPorts: 'et-0/0/[96-103]'
                  downlinkPorts: ['xe-0/0/[0-95]', 'ge-0/0/[0-95]']
          qfx5200-32c-32q:
              spine:
                  uplinkPorts: 
                  downlinkPorts: 'et-0/0/[00-31]'
              leaf:
                  uplinkPorts: 'et-0/0/[00-07]'
                  downlinkPorts: 'et-0/0/[08-35]'
          ex4300-24p:
              leaf:
                  uplinkPorts: 'et-0/1/[0-3]'
                  downlinkPorts: 'ge-0/0/[0-23]'
          ex4300-24t:
              leaf:
                  uplinkPorts: 'et-0/1/[0-3]'
                  downlinkPorts: 'ge-0/0/[0-23]'
          ex4300-32f:
              leaf:
                  uplinkPorts: ['et-0/1/[0-1]', 'et-0/2/[0-1]']
                  downlinkPorts: 'ge-0/0/[0-31]'
          ex4300-48p:
              leaf:
                  uplinkPorts: 'et-0/1/[0-3]'
                  downlinkPorts: 'ge-0/0/[0-47]'
          ex4300-48t:
              leaf:
                  uplinkPorts: 'et-0/1/[0-3]'
                  downlinkPorts: 'ge-0/0/[0-47]'
      
      # additional customization of port allocation based on topology
      3Stage:
          qfx5100-24q-2p:
              spine:
                  uplinkPorts: 
                  downlinkPorts: ['et-0/0/[0-23]', 'et-0/1/[0-3]', 'et-0/2/[0-3]']
          qfx10002-36q:
              spine:
                  uplinkPorts:
                  downlinkPorts: 'et-0/0/[0-35]'
          qfx10002-72q:
              spine:
                  uplinkPorts:
                  downlinkPorts: 'et-0/0/[0-71]'
          qfx10008:
              spine:
                  # assuming 8 x ULC-36Q-12Q28 used
                  # if ULC-30Q28 is used, the port range would change to 'et-*/0/[0-29]'
                  uplinkPorts: 
                  downlinkPorts: ['et-0/0/[0-35]', 'et-1/0/[0-35]', 'et-2/0/[0-35]', 'et-3/0/[0-35]', 'et-4/0/[0-35]', 'et-5/0/[0-35]', 'et-6/0/[0-35]', 'et-7/0/[0-35]']
              leaf:
                  # assuming 8 x ULC-60S-6Q used
                  uplinkPorts: ['et-0/0/[60-65]', 'et-1/0/[60-65]', 'et-2/0/[60-65]', 'et-3/0/[60-65]', 'et-4/0/[60-65]', 'et-5/0/[60-65]', 'et-6/0/[60-65]', 'et-7/0/[60-65]']
                  downlinkPorts: ['et-0/0/[0-59]', 'et-1/0/[0-59]', 'et-2/0/[0-59]', 'et-3/0/[0-59]', 'et-4/0/[0-59]', 'et-5/0/[0-59]', 'et-6/0/[0-59]', 'et-7/0/[0-59]']
      
      5Stage:
              
      lineCard:
          ULC-30Q28:
              uplinkPorts: 
              downlinkPorts: 'et-0/0/[0-29]'
          ULC-36Q-12Q28:
              uplinkPorts:
              downlinkPorts: 'et-0/0/[0-35]'
          ULC-60S-6Q:
              uplinkPorts: 'et-0/0/[60-65]'
              downlinkPorts: 'et-0/0/[0-59]'
      
      
    4. Edit the openclos.yaml file to configure OpenClos application settings.
      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$ more openclos.yaml
      # Deployment mode
      # ztpStaged: true/false, true indicates ZTP process goes through 2-stage
      # device configuration. During leaf device boot-strap, it gets generic config,
      # then OpenClos finds the topology and applies new topology.
      # False indicates all leaf configs are generated based on cabling-plan and 
      # deployed to the device using ZTP process. 
      # ztpStagedAttempt: How many times OpenClos tries to connect to leaf
      # to collect lldp data when it receives trap from that leaf. 
      # default is 5 times. 0 means no-op so it basically disables the 2-stage.
      # ztpStagedInterval: How long OpenClos waits in between retries. 
      # default is 60 seconds. 0 means do not wait. 
      # ztpVcpLldpDelay: How long OpenClos waits between delete VCP on EX4300 and LLDP collection
      # ztpStagedAttempt and ztpStagedInterval only take effect
      # when ztpStaged is set to true.
      deploymentMode :
          ztpStaged : true
          ztpStagedAttempt: 5
          ztpStagedInterval: 30
          ztpVcpLldpDelay: 40
          
      # Generated file/configuration location
      # default value 'out' relative to current dir 
      # can take absolute path as '/tmp/out/'
      outputDir : /tmp/out
      
      # Database URL
      # Please NOTE dbUrl is used by sqlite only. For all other databases, please see
      # MySQL parameters below as an example.
      
      # sqlite parameters
      # for relative file location ./data/sqllite3.db, url is sqlite:///data/sqllite3.db
      # absolute file location /tmp/sqllite3.db, url is sqlite:////tmp/sqllite3.db
      dbUrl : sqlite:///data/sqllite3.db
      
      # MySQL  parameters
      #dbDialect : mysql
      #dbHost : localhost
      #dbUser : root
      #dbPassword : password
      #dbName : openclos
      
      # debug SQL and ORM
      # "true" will enable logging all SQL statements to underlying DB
      debugSql : false
      debugRest : true
      
      #device configuration will be stored by default in DB
      #"file" will allow device configuration to store in DB and File
      writeConfigInFile : true
      
      
      # List of colors used in the DOT file to represent interconnects 
      DOT :
         colors :
             - blue
             - green
             - violet
             - brown
             - aquamarine
             - pink
             - cadetblue
         ranksep : 5 equally
      
      # HttpServer for REST and ZTP.
      # To make ZTP work the port has to be 80. IpAddr specified here
      # is used to populate dhcpd.conf for ZTP. If no address is provided
      # REST will start at localhost
      # If protocol is http: 
      #     - certificate is ignored.
      #     - basic authentication is supported but disabled by default.
      # If protocol is https: 
      #     - basic authentication is enabled by non-empty username and password.
      #     - this openclos.yaml comes with a predefined username 'juniper' and password is 'juniper' ($9$R9McrvxNboJDWLJDikTQEcy)
      #     - if you need to use a different password, run "python crypt.py <cleartext_password>" to generate a 2-way encrypted password.
      #       and copy it to 'password' attribute
      #     - certificate must be full path to the server cert file. it can be generated by running "openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes"
      #     - you MUST change 'ipAddr' to the IP address of the REST server. REST server won't run if ipAddr is 0.0.0.0 in https mode 
      #restServer :
      #    version : 1
      #    protocol : https
      #    ipAddr : 0.0.0.0
      #    port : 20443
      #    username : juniper 
      #    password : $9$R9McrvxNboJDWLJDikTQEcy
      #    certificate : ~/openclos.pem
      restServer :
          version : 1
          protocol : http
          ipAddr : 10.94.63.190
          port : 20080
          
      # Number of threads used to communicate with devices
      report :
          threadCount : 20
               
      # SNMP trap settings for OpenClos
      # OpenClos uses traps to perform staged ZTP process
      # target address is where OpenClos is running (same as httpServer:ipAddr)
      # threadCount: Number of threads used to start 2-stage configuration for devices
      snmpTrap :
          openclos_trap_group :
              port : 20162
              target : 10.94.63.190
          threadCount : 10
      
      # various scripts
      # Note for release 1.0, the backup database script is engine specific
      script : 
          database: 
              backup : script/backup_sqlite.sh
      
      # CLI configuration
      cli:
          # This is the text that would appear at each prompt
          prompt_text: "openclos"
          # prompt_style follows prompt_text and these together make the command-
          #              prompt of the CLI
          #              The cli code will add <space> after the prompt_style
          prompt_style: "#"
          # header is the text that appears when CLI is invoked, and CLI prompt-
          #        is issued
          header: "Welcome to openclos - by Juniper Networks"
          # on_exit is the message that would appear when CLI session is terminated
          on_exit: "goodbye"
      
      # Optional callback to control 2-stage configuration processing.
      # callback can be a shell command or a shell script. 
      # if the callback exit code is 0, 2-stage configuration for the current leaf continues, 
      # if the callback exit code is not 0, 2-stage configuration for the current leaf aborts 
      #twoStageConfigurationCallback:
      
      # generic plugin configuration
      plugin:
          -
              name: overlay
              package: jnpr.openclos.overlay
              # Number of threads in the thread pool for committing configuration on device
              threadCount: 10
              # Number of seconds. Controls how frequent to scan "commit job queue"
              dispatchInterval: 10
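
      Note: With the restServer settings above, you can confirm that the OpenClos REST service is reachable once the application is running. The URL path shown here is illustrative only; consult the REST documentation in the OpenClos repository for the actual endpoints.

      user@ubuntu-Openclos:~$ curl http://10.94.63.190:20080/openclos/v1/underlays/pods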
      
      
    5. After editing the above files, navigate to the sampleApplication.py script and run it to generate the device configurations and push them to the devices.

      These configurations will be pushed to the devices using ZTP (if configured above).

      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$ cd ../tests
      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/tests$ python sampleApplication.py
      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/tests$
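
      For orientation, a driver script like sampleApplication.py exercises the OpenClos library roughly along the following lines. This is a hedged sketch: the class and method names are assumed from the devR3.0 source and may differ between releases, so treat the shipped sampleApplication.py as authoritative.

      # Sketch of an OpenClos driver script (names assumed; verify against
      # the shipped sampleApplication.py before use).
      from jnpr.openclos.l3Clos import L3ClosMediation

      l3Clos = L3ClosMediation()

      # Parse the pod definitions from conf/closDefinition.yaml.
      pods = l3Clos.loadClosDefinition()

      # Create the pod, then generate the cabling plan and the per-device
      # configuration files under the configured output directory.
      pod = l3Clos.createPod('labLeafSpine', pods['labLeafSpine'])
      l3Clos.createCablingPlan(pod.id)
      l3Clos.createDeviceConfig(pod.id)
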
    6. To review the generated configuration files, navigate to /tmp/out and then into the directory holding the configuration files you just created.

      The directory name is autogenerated using the format <pod-ID>-<pod-name>. In this example, the directory name is aeb1ba8e-207a-4317-bfb3-08906333bde6-labLeafSpine.

      Note: When using the scripts to create multiple sets of configuration files, it can become difficult to determine which directory holds which files, as the POD ID does not use an intuitive format. One way to distinguish one directory from another is to alter the POD name (in the closDefinition.yaml file) each time you create configuration files. Another method is to look at the timestamps to determine which directory was created most recently.

      user@ubuntu-Openclos:/var/tmp/OpenClos-devR3.0/jnpr/openclos/conf$ cd /tmp/out
      user@ubuntu-Openclos:/tmp/out$ cd aeb1ba8e-207a-4317-bfb3-08906333bde6-labLeafSpine
      user@ubuntu-Openclos:/tmp/out/aeb1ba8e-207a-4317-bfb3-08906333bde6-labLeafSpine$ ls
      2304a28c-6d42-4893-ad60-1a378ce65e30__Spine-03.conf
      24d27dbe-e1bb-475e-a426-165a26f9d53a__Leaf-02.conf
      3cadc8f1-82a4-433c-acde-3dc6a5c82c06__Leaf-03.conf
      46ffc4da-3271-4c39-a9fb-2ef800f2ae0b__Leaf-05.conf
      6d0b5cc1-fa72-4eff-83a3-9c6f35f91bd2__Spine-00.conf
      89fd49f5-e878-4a29-817d-a0fec3949354__Spine-01.conf
      a69d50ef-b749-4eb8-aaea-e7da3a18036d__Spine-02.conf
      ed6dbd70-dbcf-4bc7-8fba-5b45e631ab62__Leaf-01.conf
      f1eea9a7-cfb5-43af-af44-56a170598656__Leaf-04.conf
      f468853c-8781-4863-ad9a-66b2ba3d15b5__Leaf-00.conf
      cablingPlan.dot
      cablingPlan.json
      dhcpd.conf
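
      Note: The cablingPlan.dot file can be rendered as a diagram with Graphviz (assuming Graphviz is installed on the server):

      user@ubuntu-Openclos:/tmp/out/aeb1ba8e-207a-4317-bfb3-08906333bde6-labLeafSpine$ dot -Tpng cablingPlan.dot -o cablingPlan.png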
      
      

    For More Information

    The resulting leaf and spine configurations from this procedure can be found at Example: Configuring the Software as a Service Solution.

    More detailed information on creating Layer 3 fabrics using OpenClos can be found at https://github.com/Juniper/OpenClos.

    Modified: 2016-07-28