Use Junos Snapshot Administrator in Python (JSNAPy) in Ansible Playbooks
SUMMARY Execute JSNAPy tests as part of an Ansible playbook to capture and audit runtime environment snapshots of Junos devices.
Junos® Snapshot Administrator in Python (JSNAPy) enables you to capture and audit runtime environment snapshots of your networked Junos devices. You can capture and verify the configuration and operational status of a device and verify changes to a device. Juniper Networks provides Ansible modules that enable you to execute JSNAPy tests against Junos devices as part of an Ansible playbook. Table 1 outlines the available modules.
Content Set | Module Name
---|---
juniper.device collection | jsnapy
Juniper.junos role (Release 2.0.0 and later) | juniper_junos_jsnapy
Juniper.junos role (releases prior to 2.0.0) | junos_jsnapy
You must install Junos Snapshot Administrator in Python on the Ansible control node in order to use the modules. For installation instructions and information about creating JSNAPy configuration and test files, see the Junos Snapshot Administrator in Python Documentation.
Starting in Juniper.junos Release 2.0.0, the juniper_junos_jsnapy module replaces the functionality of the junos_jsnapy module.
The following sections discuss how to use the modules in Ansible playbooks.
Module Overview
The jsnapy and juniper_junos_jsnapy modules enable you to execute many of the same JSNAPy functions from an Ansible playbook as you can execute using JSNAPy on the command line, including:

- capturing and saving a runtime environment snapshot
- comparing two snapshots
- capturing a snapshot and immediately evaluating it

The modules require specifying the action argument and either the config_file or the test_files argument. The action argument specifies the JSNAPy action to perform. Table 2 outlines the valid action values and the equivalent JSNAPy commands.
action Value | Description | Equivalent JSNAPy Command
---|---|---
check | Compare two existing snapshots based on the given test cases, or if no test cases are supplied, compare the snapshots node by node. | jsnapy --check
snap_post | Take snapshots for the commands or RPCs specified in the test files after making changes on the given devices. | jsnapy --snap
snap_pre | Take snapshots for the commands or RPCs specified in the test files prior to making changes on the given devices. | jsnapy --snap
snapcheck | Take snapshots of the commands or RPCs specified in the test files and immediately evaluate the snapshots against pre-defined criteria in the test cases. | jsnapy --snapcheck
When you execute JSNAPy on the command line, JSNAPy performs the requested action on the hosts specified in the hosts section of the configuration file. In contrast, the Ansible modules execute the requested action on the hosts in the Ansible inventory group defined in the playbook. As a result, the module can either reference a configuration file, ignoring the hosts section, or it can directly reference one or more test files.

Thus, in addition to the action argument, the jsnapy and juniper_junos_jsnapy modules also require either the config_file or the test_files argument to specify the JSNAPy configuration file or the JSNAPy test files to use for the given action. Table 3 outlines the config_file and test_files arguments.
Module argument | Value | Additional Information
---|---|---
config_file | Absolute or relative file path to a JSNAPy configuration file. | If the path is relative, the module checks for the configuration file first in the playbook directory, and then in the directory specified by the dir argument, if present, or in the default JSNAPy test file directory otherwise. If the configuration file references test files by using a relative file path, the module first checks for the test files in the playbook directory and then checks for the test files in the default JSNAPy test file directory.
test_files | Absolute or relative file path to a JSNAPy test file. This can be a single file path or a list of file paths. | For each test file that specifies a relative path, the module checks for the file first in the playbook directory, and then in the directory specified by the dir argument, if present, or in the default JSNAPy test file directory otherwise.
The config_file and test_files arguments can take an absolute or relative file path. When using a relative file path, you can optionally include the dir module argument to specify the directory in which the files reside. If a config_file or test_files argument uses a relative file path, the module first checks for the file under the Ansible playbook directory, even if the dir argument is present. If the file does not exist under the playbook directory, the module checks under the dir argument directory, if it is specified, or under the /etc/jsnapy/testfiles directory, if the dir argument is omitted. The playbook generates an error message if the file is not found.
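The lookup order described above can be sketched as a small helper. This is an illustrative assumption for clarity (the function name resolve_jsnapy_file is hypothetical), not the modules' actual source code:

```python
from pathlib import Path

def resolve_jsnapy_file(filename, playbook_dir, dir_arg=None,
                        default_dir="/etc/jsnapy/testfiles"):
    """Return the first existing candidate path, honoring the documented order.

    Illustrative sketch only: the playbook directory is always checked first,
    then the dir argument directory if given, else the default directory.
    """
    candidates = [Path(playbook_dir) / filename]  # playbook directory wins
    if dir_arg is not None:
        candidates.append(Path(dir_arg).expanduser() / filename)
    else:
        candidates.append(Path(default_dir) / filename)
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    # Mirrors the error the playbook reports when no candidate exists
    raise FileNotFoundError(f"{filename} not found in any of: {candidates}")
```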
The following sample playbook performs the snap_pre action using the configuration_interface_status.yaml configuration file. If the configuration file does not exist in the playbook directory, the module checks for the file in the user’s home directory under the jsnapy/testfiles subdirectory.
---
- name: Junos Snapshot Administrator tests
  hosts: dc1a
  connection: local
  gather_facts: no
  tasks:
    - name: Take a pre-maintenance snapshot of the interfaces
      juniper.device.jsnapy:
        action: "snap_pre"
        dir: "~/jsnapy/testfiles"
        config_file: "configuration_interface_status.yaml"
Starting in Junos Snapshot Administrator in Python Release 1.3.0, the default location for configuration and test files is ~/jsnapy/testfiles. However, the default location inside a virtual environment or for earlier releases is /etc/jsnapy/testfiles.
The module performs the requested action on the hosts specified in the Ansible playbook, even if the module references a configuration file that includes a hosts section. The module reports failed if it encounters an error and fails to execute the JSNAPy tests. It does not report failed if one or more of the JSNAPy tests fail. To check the JSNAPy test results, register the module’s response, and use the assert module to verify the expected result in the response.
Junos Snapshot Administrator in Python logs information regarding its operations to the /var/log/jsnapy/jsnapy.log file by default. The jsnapy and juniper_junos_jsnapy modules can optionally include the logfile argument, which specifies the path to a writable file on the Ansible control node where information for the particular task is logged. The level of information logged in the file is determined by Ansible’s verbosity level and debug options. By default, only messages of severity level WARNING or higher are logged. To log messages equal to or higher than severity level INFO or severity level DEBUG, execute the playbook with the -v or -vv command-line option, respectively.
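The verbosity-to-severity mapping above can be summarized in a small sketch. The helper jsnapy_log_level is hypothetical and only restates the documented behavior; the modules' internal logging code may differ:

```python
import logging

def jsnapy_log_level(verbosity: int) -> int:
    """Map Ansible's -v count to the documented logging threshold.

    0 (no flag) -> WARNING (default), 1 (-v) -> INFO, 2 or more (-vv) -> DEBUG.
    Illustrative assumption, not the modules' actual implementation.
    """
    if verbosity >= 2:
        return logging.DEBUG
    if verbosity == 1:
        return logging.INFO
    return logging.WARNING
```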
When you execute JSNAPy tests in an Ansible playbook, you can enable the jsnapy callback plugin to capture and summarize information for failed JSNAPy tests. To enable the callback plugin, add the callback_whitelist = jsnapy statement to the Ansible configuration file. For more information, see Enable the jsnapy Callback Plugin.
Take and Compare Snapshots
JSNAPy enables you to capture runtime environment snapshots of your networked Junos devices before and after a change and then compare the snapshots to verify the expected changes or identify unexpected issues. The jsnapy and juniper_junos_jsnapy Ansible modules enable you to take and compare JSNAPy snapshots as part of an Ansible playbook. The modules save each snapshot for each host in a separate file in the default JSNAPy snapshot directory using a predetermined filename. For more information about the output files, see Understanding the jsnapy and juniper_junos_jsnapy Module Output.

To take baseline snapshots of one or more devices prior to making changes, set the module’s action argument to snap_pre, and specify a configuration file or one or more test files.
The following playbook saves PRE snapshots for each device in the Ansible inventory group. The task references the configuration_interface_status.yaml configuration file in the ~/jsnapy/testfiles directory and logs messages to the jsnapy_tests.log file in the playbook directory.
---
- name: Junos Snapshot Administrator tests
  hosts: dc1
  connection: local
  gather_facts: no
  tasks:
    - name: Take a pre-maintenance snapshot of the interfaces
      juniper.device.jsnapy:
        action: "snap_pre"
        dir: "~/jsnapy/testfiles"
        config_file: "configuration_interface_status.yaml"
        logfile: "jsnapy_tests.log"
To take a snapshot of one or more devices after performing changes, set the module’s action argument to snap_post, and specify a configuration file or one or more test files.
The following playbook saves POST snapshots for each device in the Ansible inventory group. The task references the same configuration_interface_status.yaml configuration file in the ~/jsnapy/testfiles directory and logs messages to the jsnapy_tests.log file in the playbook directory.
---
- name: Junos Snapshot Administrator tests
  hosts: dc1
  connection: local
  gather_facts: no
  tasks:
    - name: Take a post-maintenance snapshot of the interfaces
      juniper.device.jsnapy:
        action: "snap_post"
        dir: "~/jsnapy/testfiles"
        config_file: "configuration_interface_status.yaml"
        logfile: "jsnapy_tests.log"
When the jsnapy or juniper_junos_jsnapy module performs a snap_pre action or a snap_post action, it saves each snapshot for each host in a separate file using auto-generated filenames that contain a PRE or POST tag, respectively. To compare the PRE and POST snapshots to quickly verify the updates or identify any issues that might have resulted from the changes, set the module’s action argument to check, and specify the same configuration file or test files that were used to take the snapshots.

When the module performs a check action, the preexisting PRE and POST snapshots for each test on each device are compared and evaluated against the criteria defined in the tests: section of the test files. If the test files do not define any test cases, JSNAPy instead compares the snapshots node by node. To check the test results, register the module’s response, and use the assert module to verify the expected result in the response.
The following playbook compares the snapshots taken for previously executed snap_pre and snap_post actions for every device in the Ansible inventory group. The results are evaluated using the criteria in the test files that are referenced in the configuration file. The playbook registers the module’s response as test_result and uses the assert module to verify that all tests passed on the given device.
---
- name: Junos Snapshot Administrator tests
  hosts: dc1
  connection: local
  gather_facts: no
  tasks:
    - name: Compare PRE and POST snapshots
      juniper.device.jsnapy:
        action: "check"
        dir: "~/jsnapy/testfiles"
        config_file: "configuration_interface_status.yaml"
        logfile: "jsnapy_tests.log"
      register: test_result

    - name: Verify JSNAPy tests passed
      assert:
        that:
          - "test_result.passPercentage == 100"
When you run the playbook, the assertions quickly identify which devices failed the tests.
user@host:~$ ansible-playbook jsnapy-interface-check.yaml

PLAY [Junos Snapshot Administrator tests] *************************************

TASK [Compare PRE and POST snapshots] *****************************************
ok: [dc1a.example.net]
ok: [dc1b.example.net]

TASK [Verify JSNAPy tests passed] *********************************************
ok: [dc1b.example.net] => {
    "changed": false,
    "msg": "All assertions passed"
}
fatal: [dc1a.example.net]: FAILED! => {
    "assertion": "test_result.passPercentage == 100",
    "changed": false,
    "evaluated_to": false,
    "msg": "Assertion failed"
}
	to retry, use: --limit @/home/user/jsnapy-interface-check.retry

PLAY RECAP ********************************************************************
dc1b.example.net           : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dc1a.example.net           : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
Perform Snapcheck Operations
JSNAPy enables you to take snapshots for the commands or RPCs specified in JSNAPy test files and immediately evaluate the snapshots against pre-defined criteria in the test cases. The jsnapy and juniper_junos_jsnapy Ansible modules enable you to perform a JSNAPy snapcheck operation as part of an Ansible playbook.

To take a snapshot and immediately evaluate it based on the pre-defined set of criteria in the tests: section of the test files, set the module’s action argument to snapcheck, and specify a configuration file or one or more test files. To check the test results, register the module’s response, and use the assert module to verify the expected result in the response.
For example, for each device in the Ansible inventory group, the following playbook saves a separate snapshot for each command or RPC in the test files, registers the module’s response, and uses the assert module to verify that all tests defined in the test files passed on that device.
---
- name: Junos Snapshot Administrator tests
  hosts: dc1
  connection: local
  gather_facts: no
  tasks:
    - name: Take a snapshot and immediately evaluate it
      juniper.device.jsnapy:
        action: "snapcheck"
        dir: "~/jsnapy/testfiles/"
        test_files:
          - "test_interface_status.yaml"
          - "test_bgp_neighbor.yaml"
        logfile: "jsnapy_tests.log"
      register: test_result

    - name: Verify JSNAPy tests passed
      assert:
        that:
          - "test_result.passPercentage == 100"
Understanding the jsnapy and juniper_junos_jsnapy Module Output
When the jsnapy or juniper_junos_jsnapy module performs a snap_pre, snap_post, or snapcheck action, it automatically saves the snapshots in the JSNAPy snapshots directory. The modules use the default JSNAPy directories unless you modify the JSNAPy configuration file to specify a different location. The module creates a separate file for each command or RPC executed on each device in the Ansible inventory group. Table 4 outlines the filenames of the snapshot files for each value of the action argument.
Starting in Junos Snapshot Administrator in Python Release 1.3.0, the default directories for the JSNAPy test files and snapshots are ~/jsnapy/testfiles and ~/jsnapy/snapshots, respectively. However, the default directories inside a virtual environment or for earlier releases are /etc/jsnapy/testfiles and /etc/jsnapy/snapshots.
action Value | Output Files
---|---
snap_pre | hostname_PRE_hash_command.format
snap_post | hostname_POST_hash_command.format
snapcheck | hostname_snap_temp_hash_command.format
where:

- hostname—Hostname of the device on which the command or RPC is executed.
- (PRE | POST | snap_temp)—Tag identifying the action. The snapcheck operation uses the PRE tag in current releases; in earlier releases the operation uses the snap_temp tag.
- hash—Hash generated from kwargs for test files that include the rpc and kwargs keys. If test files use the same RPC but include different arguments, and the RPCs are executed on the same host, the hash ensures unique output filenames in those cases. If a test file defines the command key, or if a test file defines the rpc key but does not include the kwargs key, the hash is omitted.
- command—Command or RPC executed on the managed device. The module replaces whitespace and special characters in the command or RPC name with underscores ( _ ).
- format—Format of the output, for example, xml.
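The naming scheme above can be sketched as follows. This is an illustrative reconstruction of the documented pattern; the helper snapshot_filename and the hashing details (algorithm and digest length) are assumptions, not JSNAPy's actual code:

```python
import hashlib
import re

def snapshot_filename(hostname, tag, command, fmt, kwargs=None):
    """Build hostname_TAG[_hash]_command.format per the documented pattern.

    Illustrative sketch: JSNAPy's real hash and sanitization logic may differ.
    """
    # Replace whitespace and special characters in the command with '_'
    safe_command = re.sub(r"[^A-Za-z0-9]+", "_", command).strip("_")
    parts = [hostname, tag]
    if kwargs:
        # Hash is included only for test files with both rpc and kwargs keys,
        # so the same RPC with different arguments gets a unique filename.
        digest_input = str(sorted(kwargs.items())).encode()
        parts.append(hashlib.md5(digest_input).hexdigest()[:10])
    parts.append(safe_command)
    return "_".join(parts) + "." + fmt
```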
The jsnapy and juniper_junos_jsnapy modules only differentiate the snapshot filenames for a given action based on hostname and command or RPC. As a result, if the module takes snapshots on the same device for the same action using test files that define the same command or RPC, the module generates snapshots with the same filename, and the new file overwrites the old file.
For example, if the module includes action: "snap_pre" and references test files that execute the show chassis fpc and show interfaces terse commands on devices dc1a.example.net and dc1b.example.net, the resulting files are:
user@ansible-cn:~$ ls jsnapy/snapshots
dc1a.example.net_PRE_show_chassis_fpc.xml
dc1a.example.net_PRE_show_interfaces_terse.xml
dc1b.example.net_PRE_show_chassis_fpc.xml
dc1b.example.net_PRE_show_interfaces_terse.xml
If the module includes action: "snap_post" and references a test file that executes the get-interface-information RPC with kwargs item interface_name: lo0 on device dc1a.example.net, the resulting file is:
dc1a.example.net_POST_r1w59I99HXxC3u0VXXshbw==_get_interface_information.xml
In addition to generating the snapshot files, the jsnapy and juniper_junos_jsnapy modules can also return the following keys in the module response:

- action—JSNAPy action performed by the module.
- changed—Indicates if the device’s state changed. Because JSNAPy only reports on state, the value is always false.
- failed—Indicates if the playbook task failed.
- msg—JSNAPy test results.
Enable the jsnapy Callback Plugin
When you execute JSNAPy tests against Junos devices and one or more tests fail, it can be difficult to identify and extract the failed tests if the output is extensive. The jsnapy callback plugin enables you to easily extract and summarize the information for failed JSNAPy tests. When you enable the jsnapy callback plugin and execute a playbook that includes JSNAPy tests, the plugin summarizes the information for the failed JSNAPy tests after the playbook PLAY RECAP.

The jsnapy callback plugin is not enabled by default. To enable it, add the callback_whitelist = jsnapy statement to the Ansible configuration file.
[defaults]
callback_whitelist = jsnapy
When you enable the jsnapy callback plugin and run a playbook, the plugin summarizes the failed JSNAPy tests in a human-readable format. For example:
...
PLAY RECAP ****************************************************************
qfx10002-01                : ok=3    changed=0    unreachable=0    failed=1
qfx10002-02                : ok=3    changed=0    unreachable=0    failed=1
qfx5100-01                 : ok=1    changed=0    unreachable=0    failed=1

JSNAPy Results for: qfx10002-01 *******************************************
Value of 'peer-state' not 'is-equal' at '//bgp-information/bgp-peer' with {"peer-as": "64502", "peer-state": "Active", "peer-address": "198.51.100.21"}
Value of 'peer-state' not 'is-equal' at '//bgp-information/bgp-peer' with {"peer-as": "64510", "peer-state": "Idle", "peer-address": "192.168.0.1"}
Value of 'oper-status' not 'is-equal' at '//interface-information/physical-interface[normalize-space(admin-status)='up' and logical-interface/address-family/address-family-name ]' with {"oper-status": "down", "name": "et-0/0/18"}

JSNAPy Results for: qfx10002-02 *******************************************
Value of 'peer-state' not 'is-equal' at '//bgp-information/bgp-peer' with {"peer-as": "64502", "peer-state": "Active", "peer-address": "198.51.100.21"}
Example: Use Ansible to Perform a JSNAPy Snapcheck Operation
The jsnapy module enables you to execute JSNAPy tests against Junos devices as part of an Ansible playbook. This example uses the jsnapy module to perform a snapcheck action to verify the operational state of Junos devices after applying specific configuration changes.
- Requirements
- Overview
- Configuration
- Execute the Playbook
- Verification
- Troubleshoot Ansible Playbook Errors
Requirements
This example uses the following hardware and software components:
- Ansible control node running:
  - Python 3.7 or later
  - Ansible 2.10 or later with the juniper.device collection installed
  - Junos PyEZ Release 2.6.0 or later
  - Junos Snapshot Administrator in Python Release 1.3.6 or later

Before executing the Ansible playbook, be sure you have:

- Junos devices with NETCONF over SSH enabled and a user account configured with appropriate permissions
- SSH public/private key pair configured for the appropriate user on the Ansible control node and Junos device
- Existing Ansible inventory file with required hosts defined
Overview
In this example, the Ansible playbook configures BGP peering sessions on three Junos devices and uses the jsnapy module to verify that the BGP session is established for each neighbor address. If the playbook verifies that the sessions are established on a device, it confirms the commit for the new configuration. If the playbook does not confirm the commit, the Junos device automatically rolls back to the previously committed configuration. The Ansible project defines the group and host variables for the playbook under the group_vars and host_vars directories, respectively.

The playbook has two plays. The first play, Load and commit BGP configuration, generates and assembles the configuration, loads the configuration on the device, and commits it using a commit confirmed operation. If the configuration is updated, one handler is notified. The play executes the following tasks:
Task | Description
---|---
Remove build directory | Deletes the existing build directory for the given device, if present.
Create build directory | Creates a new, empty build directory for the given device.
Build BGP configuration | Uses the template module to render the BGP configuration from the Jinja2 template and the host variables for the device.
Assemble configuration parts | Uses the assemble module to combine the configuration files in the build directory into a single configuration file. In this example, only the BGP configuration file will be present, and thus the resulting configuration file is identical to the BGP configuration file rendered in the previous task. If you later add new tasks to generate additional configuration files from other templates, the assemble module combines them into the final configuration file.
Load and commit config, require confirmation | Loads the configuration onto the Junos device and commits the configuration using a commit confirmed operation, and notifies the handler if the configuration changes. If the requested configuration is already present on the device, the module reports no change and the handler is not notified.
The second play, Verify BGP, performs a JSNAPy snapcheck operation on each device using the tests in the JSNAPy test files and confirms the commit, provided that all the tests pass. The play executes the following tasks:
Task | Description
---|---
Execute snapcheck | Performs a JSNAPy snapcheck operation using the tests defined in the JSNAPy test files, and registers the module’s response. In this example, the playbook directly references the JSNAPy test files by setting the test_files argument instead of referencing a JSNAPy configuration file.
Confirm commit | Executes a commit check operation, which confirms the previous commit operation, provided that the first playbook play updated the configuration and that all of the JSNAPy tests passed. If the playbook updates the configuration but does not confirm the commit, the Junos device automatically rolls the configuration back to the previously committed configuration. Note: You can confirm the previous commit operation with either a commit check operation or another commit operation.
Verify BGP configuration | (Optional) Explicitly indicates whether the JSNAPy tests passed or failed on the given device. This task is not strictly required, but it makes it easier to identify when the JSNAPy tests fail and on which devices.
Configuration
- Define the Group Variables
- Define the Jinja2 Template and Host Variables
- Create the JSNAPy Test Files
- Create the Ansible Playbook
- Results
Define the Group Variables
Step-by-Step Procedure
To define the group variables:
- In the group_vars/all file, define variables for the build directory and for the filenames of the configuration and log files.

build_dir: "{{ playbook_dir }}/build_conf/{{ inventory_hostname }}"
junos_conf: "{{ build_dir }}/junos.conf"
logfile: "junos.log"
Define the Jinja2 Template and Host Variables
Define the Jinja2 Template
To create the Jinja2 template that is used to generate the BGP configuration:
Create a file named bgp-template.j2 in the project’s playbook directory.
Add the BGP configuration template to the file.
interfaces {
{% for neighbor in neighbors %}
    {{ neighbor.interface }} {
        unit 0 {
            description "{{ neighbor.name }}";
            family inet {
                address {{ neighbor.local_ip }}/30;
            }
        }
    }
{% endfor %}
    lo0 {
        unit 0 {
            family inet {
                address {{ loopback }}/32;
            }
        }
    }
}
protocols {
    bgp {
        group underlay {
            import bgp-in;
            export bgp-out;
            type external;
            local-as {{ local_asn }};
            multipath multiple-as;
{% for neighbor in neighbors %}
            neighbor {{ neighbor.peer_ip }} {
                peer-as {{ neighbor.asn }};
            }
{% endfor %}
        }
    }
    lldp {
{% for neighbor in neighbors %}
        interface "{{ neighbor.interface }}";
{% endfor %}
    }
}
routing-options {
    router-id {{ loopback }};
    forwarding-table {
        export bgp-ecmp;
    }
}
policy-options {
    policy-statement bgp-ecmp {
        then {
            load-balance per-packet;
        }
    }
    policy-statement bgp-in {
        then accept;
    }
    policy-statement bgp-out {
        then {
            next-hop self;
            accept;
        }
    }
}
Define the Host Variables
To define the host variables that are used with the Jinja2 template to generate the BGP configuration:
In the project’s host_vars directory, create a separate file named hostname.yaml for each host.
Define the variables for host r1 in the r1.yaml file.
---
loopback: 192.168.0.1
local_asn: 64521
neighbors:
  - interface: ge-0/0/0
    name: to-r2
    asn: 64522
    peer_ip: 198.51.100.2
    local_ip: 198.51.100.1
    peer_loopback: 192.168.0.2
  - interface: ge-0/0/1
    name: to-r3
    asn: 64523
    peer_ip: 198.51.100.6
    local_ip: 198.51.100.5
    peer_loopback: 192.168.0.3
Define the variables for host r2 in the r2.yaml file.
---
loopback: 192.168.0.2
local_asn: 64522
neighbors:
  - interface: ge-0/0/0
    name: to-r1
    asn: 64521
    peer_ip: 198.51.100.1
    local_ip: 198.51.100.2
    peer_loopback: 192.168.0.1
  - interface: ge-0/0/1
    name: to-r3
    asn: 64523
    peer_ip: 198.51.100.10
    local_ip: 198.51.100.9
    peer_loopback: 192.168.0.3
Define the variables for host r3 in the r3.yaml file.
---
loopback: 192.168.0.3
local_asn: 64523
neighbors:
  - interface: ge-0/0/0
    name: to-r1
    asn: 64521
    peer_ip: 198.51.100.5
    local_ip: 198.51.100.6
    peer_loopback: 192.168.0.1
  - interface: ge-0/0/1
    name: to-r2
    asn: 64522
    peer_ip: 198.51.100.9
    local_ip: 198.51.100.10
    peer_loopback: 192.168.0.2
Create the JSNAPy Test Files
Step-by-Step Procedure
The jsnapy module references JSNAPy test files in the ~/jsnapy/testfiles directory. To create the JSNAPy test files:
Create the jsnapy_test_file_bgp_states.yaml file, which executes the show bgp neighbor command and tests that the BGP peer state is Established.

bgp_neighbor:
  - command: show bgp neighbor
  - ignore-null: True
  - iterate:
      xpath: '//bgp-peer'
      id: './peer-address'
      tests:
        # Check if peers are in the established state
        - is-equal: peer-state, Established
          err: "Test Failed!! peer <{{post['peer-address']}}> state is not Established, it is <{{post['peer-state']}}>"
          info: "Test succeeded!! peer <{{post['peer-address']}}> state is <{{post['peer-state']}}>"
Create the jsnapy_test_file_bgp_summary.yaml file, which executes the show bgp summary command and tests that the BGP down peers count is 0.

bgp_summary:
  - command: show bgp summary
  - item:
      xpath: '/bgp-information'
      tests:
        - is-equal: down-peer-count, 0
          err: "Test Failed!! down-peer-count is not equal to 0. It is equal to <{{post['down-peer-count']}}>"
          info: "Test succeeded!! down-peer-count is equal to <{{post['down-peer-count']}}>"
Create the Ansible Playbook
Define the First Play to Configure the Device
To create the first play, which renders the configuration, loads it on the device, and commits the configuration as a commit confirmed operation:
Include the boilerplate for the playbook and the first play, which executes the modules locally.
---
- name: Load and commit BGP configuration
  hosts: bgp_routers
  connection: local
  gather_facts: no
Create the tasks that replace the existing build directory with an empty directory, which will store the new configuration files.
  tasks:
    - name: Remove build directory
      file:
        path: "{{ build_dir }}"
        state: absent

    - name: Create build directory
      file:
        path: "{{ build_dir }}"
        state: directory
Create the task that renders the BGP configuration from the Jinja2 template file and host variables and stores it in the bgp.conf file in the build directory for that host.
    - name: Build BGP configuration
      template:
        src: "{{ playbook_dir }}/bgp-template.j2"
        dest: "{{ build_dir }}/bgp.conf"
Create a task to assemble the configuration files in the build directory into the final junos.conf configuration file.
    - name: Assemble configuration parts
      assemble:
        src: "{{ build_dir }}"
        dest: "{{ junos_conf }}"
Create the task that loads the configuration on the device, performs a commit operation that requires confirmation, and notifies the given handler, provided the configuration was changed.
    - name: Load and commit config, require confirmation
      juniper.device.config:
        load: "merge"
        format: "text"
        src: "{{ junos_conf }}"
        confirm: 5
        comment: "config by Ansible"
        logfile: "{{ logfile }}"
      register: config_result
      # Notify handler, only if configuration changes.
      notify:
        - Waiting for BGP peers to establish connections
Create a handler that pauses playbook execution if the device configuration is updated, and set the pause time to an appropriate value for your environment.
  handlers:
    - name: Waiting for BGP peers to establish connections
      pause: seconds=60
Define the Second Play to Perform JSNAPy Operations
To create the second play, which performs a JSNAPy snapcheck operation and confirms the committed configuration, provided that the configuration changed and the JSNAPy tests passed:
Include the boilerplate for the second play, which executes the modules locally.
- name: Verify BGP
  hosts: bgp_routers
  connection: local
  gather_facts: no
Create a task to perform a JSNAPy snapcheck operation based on the tests in the given JSNAPy test files, and register the module’s response.
  tasks:
    - name: Execute snapcheck
      juniper.device.jsnapy:
        action: "snapcheck"
        dir: "~/jsnapy/testfiles"
        test_files:
          - "jsnapy_test_file_bgp_states.yaml"
          - "jsnapy_test_file_bgp_summary.yaml"
        logfile: "{{ logfile }}"
      register: snapcheck_result
Create the task to confirm the commit provided that the given conditions are met.
    # Confirm commit only if configuration changed and JSNAPy tests pass
    - name: Confirm commit
      juniper.device.config:
        check: true
        commit: false
        diff: false
        logfile: "{{ logfile }}"
      when:
        - config_result.changed
        - "snapcheck_result.passPercentage == 100"
(Optional) Create a task that uses the assert module to assert that the JSNAPy tests passed.

    - name: Verify BGP configuration
      assert:
        that:
          - "snapcheck_result.passPercentage == 100"
        msg: "JSNAPy test on {{ inventory_hostname }} failed"
Results
On the Ansible control node, review the completed playbook. If the playbook does not display the intended code, repeat the instructions in this section to correct the playbook.
---
- name: Load and commit BGP configuration
  hosts: bgp_routers
  connection: local
  gather_facts: no

  tasks:
    - name: Remove build directory
      file:
        path: "{{ build_dir }}"
        state: absent

    - name: Create build directory
      file:
        path: "{{ build_dir }}"
        state: directory

    - name: Build BGP configuration
      template:
        src: "{{ playbook_dir }}/bgp-template.j2"
        dest: "{{ build_dir }}/bgp.conf"

    - name: Assemble configuration parts
      assemble:
        src: "{{ build_dir }}"
        dest: "{{ junos_conf }}"

    - name: Load and commit config, require confirmation
      juniper.device.config:
        load: "merge"
        format: "text"
        src: "{{ junos_conf }}"
        confirm: 5
        comment: "config by Ansible"
        logfile: "{{ logfile }}"
      register: config_result
      # Notify handler, only if configuration changes.
      notify:
        - Waiting for BGP peers to establish connections

  handlers:
    - name: Waiting for BGP peers to establish connections
      pause: seconds=60

- name: Verify BGP
  hosts: bgp_routers
  connection: local
  gather_facts: no

  tasks:
    - name: Execute snapcheck
      juniper.device.jsnapy:
        action: "snapcheck"
        dir: "~/jsnapy/testfiles"
        test_files:
          - "jsnapy_test_file_bgp_states.yaml"
          - "jsnapy_test_file_bgp_summary.yaml"
        logfile: "{{ logfile }}"
      register: snapcheck_result

    # Confirm commit only if configuration changed and JSNAPy tests pass
    - name: Confirm commit
      juniper.device.config:
        check: true
        commit: false
        diff: false
        logfile: "{{ logfile }}"
      when:
        - config_result.changed
        - "snapcheck_result.passPercentage == 100"

    - name: Verify BGP configuration
      assert:
        that:
          - "snapcheck_result.passPercentage == 100"
        msg: "JSNAPy test on {{ inventory_hostname }} failed"
Execute the Playbook
To execute the playbook:
- Issue the ansible-playbook command on the control node, and provide the playbook path and any desired options.

user@ansible-cn:~/ansible$ ansible-playbook ansible-pb-bgp-configuration.yaml

PLAY [Load and commit BGP configuration] *************************************

TASK [Remove build directory] ************************************************
changed: [r1]
changed: [r2]
changed: [r3]

TASK [Create build directory] ************************************************
changed: [r1]
changed: [r2]
changed: [r3]

TASK [Build BGP configuration] ***********************************************
changed: [r2]
changed: [r1]
changed: [r3]

TASK [Assemble configuration parts] ******************************************
changed: [r3]
changed: [r2]
changed: [r1]

TASK [Load and commit config, require confirmation] **************************
changed: [r2]
changed: [r1]
changed: [r3]

RUNNING HANDLER [Waiting for BGP peers to establish connections] *************
Pausing for 60 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [r3]
PLAY [Verify BGP] ************************************************************

TASK [Execute snapcheck] *****************************************************
ok: [r2]
ok: [r1]
ok: [r3]

TASK [Confirm commit] ********************************************************
ok: [r2]
ok: [r1]
ok: [r3]

TASK [Verify BGP configuration] **********************************************
ok: [r1] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [r2] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [r3] => {
    "changed": false,
    "msg": "All assertions passed"
}

PLAY RECAP *******************************************************************
r1    : ok=8    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
r2    : ok=8    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
r3    : ok=9    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
Verification
Verify the BGP Neighbors
Purpose
Verify that the BGP session is established for each neighbor address.
The JSNAPy test files test that the BGP session is established for each neighbor address and that there are no down peers. The Verify BGP configuration task output enables you to quickly verify that the given device passed all JSNAPy tests. If the JSNAPy passPercentage is equal to 100 percent, the task includes "msg": "All assertions passed" in the task output.
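For reference, a JSNAPy test that verifies the BGP session state might look like the following sketch. This is an illustrative assumption, not the actual contents of jsnapy_test_file_bgp_states.yaml; the RPC, XPath, and test names simply follow the JSNAPy test file format.

```yaml
# Illustrative sketch only -- not the actual jsnapy_test_file_bgp_states.yaml
tests_include:
  - check_bgp_peer_state

check_bgp_peer_state:
  - rpc: get-bgp-neighbor-information
  - iterate:
      xpath: bgp-peer
      id: peer-address
      tests:
        - is-equal: peer-state, Established
          info: "BGP peer {{id_0}} is Established"
          err: "BGP peer {{id_0}} is not Established"
```

A test like this fails for any peer whose peer-state is not Established, which lowers the passPercentage that the playbook's assert task checks.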
Action
Review the Verify BGP configuration task output, and verify that each device returns the All assertions passed message.
TASK [Verify BGP configuration] **********************************************
ok: [r1] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [r2] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [r3] => {
    "changed": false,
    "msg": "All assertions passed"
}
Meaning
The All assertions passed
message indicates that the BGP
sessions are successfully established on the devices.
Troubleshoot Ansible Playbook Errors
- Troubleshoot Configuration Load Errors
- Troubleshoot Failed JSNAPy Tests
- Troubleshoot Failed Commit Confirmations
Troubleshoot Configuration Load Errors
Problem
The Ansible playbook generates a ConfigLoadError indicating that it failed to load the configuration on the device because of a syntax error.
fatal: [r1]: FAILED! => {"changed": false, "msg": "Failure loading the configuraton: ConfigLoadError(severity: error, bad_element: protocol, message: error: syntax error\nerror: error recovery ignores input until this point)"}
Solution
The playbook renders the Junos OS configuration by using the Jinja2
template and the host variables defined for that device in the
host_vars directory. The playbook generates a
syntax error when the Jinja2 template produces an invalid configuration.
To correct this error, update the Jinja2 template to correct the element
identified by the bad_element
key in the error
message.
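For example, the bad_element: protocol error shown above could be produced by a template that emits protocol instead of the valid protocols hierarchy. A corrected fragment of a hypothetical bgp-template.j2 might look like the following sketch; the variable names are assumptions, not the actual host variables used in this example.

```jinja2
{# Hypothetical fragment of bgp-template.j2; variable names are illustrative #}
protocols {
    bgp {
        group external-peers {
            type external;
{%- for neighbor in bgp_neighbors %}
            neighbor {{ neighbor.ip }} {
                peer-as {{ neighbor.asn }};
            }
{%- endfor %}
        }
    }
}
```

After correcting the template, rerun the playbook and verify that the Load and commit config task completes without a ConfigLoadError.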
Troubleshoot Failed JSNAPy Tests
Problem
The Verify BGP configuration task output indicates that the assertion failed because the JSNAPy passPercentage was not equal to 100 percent.
TASK [Verify BGP configuration] *************************************************************
fatal: [r1]: FAILED! => {
    "assertion": "snapcheck_result.passPercentage == 100",
    "changed": false,
    "evaluated_to": false,
    "msg": "JSNAPy test on r1 failed"
}
The assertion fails when the device has not established the BGP session with its neighbor or the session goes down. If the assertion fails, and the configuration for that device was updated in the first play, the playbook does not confirm the commit for the new configuration on the device, and the device rolls the configuration back to the previously committed configuration.
Solution
The JSNAPy tests might fail if the snapcheck operation is performed before the peers can establish the session, or because the BGP neighbors are not configured correctly. If the playbook output indicates that the configuration was successfully loaded and committed on the device, try increasing the handler’s pause interval to a value suitable for your environment, and rerun the playbook.
handlers:
  - name: Waiting for BGP peers to establish connections
    pause: seconds=75
If the tests still fail, verify that the Jinja2 template and the host variables for each device contain the correct data and that the resulting configuration for each device is correct.
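While debugging, it can also help to rerun only the JSNAPy tests without reloading the configuration. The following standalone play is a sketch: it reuses the snapcheck task from the example playbook and adds a debug task (not part of the original playbook) so you can inspect which individual tests failed.

```yaml
---
- name: Rerun JSNAPy tests only
  hosts: bgp_routers
  connection: local
  gather_facts: no
  tasks:
    - name: Execute snapcheck
      juniper.device.jsnapy:
        action: "snapcheck"
        dir: "~/jsnapy/testfiles"
        test_files:
          - "jsnapy_test_file_bgp_states.yaml"
          - "jsnapy_test_file_bgp_summary.yaml"
      register: snapcheck_result
      ignore_errors: yes

    # Added for debugging: show the full result, including per-test details
    - name: Show the full JSNAPy result
      debug:
        var: snapcheck_result
```

Because this play does not change the configuration, you can run it repeatedly while you narrow down whether the failure is a timing issue or a configuration issue.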
Troubleshoot Failed Commit Confirmations
Problem
The configuration was not confirmed on one or more devices.
TASK [Confirm commit] ***********************************************************************
skipping: [r1]
skipping: [r2]
skipping: [r3]
Solution
The playbook confirms the configuration only if the configuration changed and the JSNAPy tests passed. If the Load and commit config, require confirmation task output indicates that the configuration did not change, the playbook does not execute the task to confirm the commit. If the configuration changed but was not confirmed, then the JSNAPy tests failed. The JSNAPy tests might fail if the BGP neighbors are not configured correctly or if the playbook does not provide enough time between the plays for the devices to establish the BGP session. For more information, see Troubleshoot Failed JSNAPy Tests.
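To see which of the two when conditions caused the Confirm commit task to be skipped, you can temporarily add a debug task such as the following sketch to the second play. It reads the results registered by the example playbook; the default filters are an addition that guards against an undefined variable if an earlier task failed.

```yaml
# Temporary debugging task -- not part of the original playbook
- name: Show the values behind the Confirm commit conditions
  debug:
    msg: >-
      changed={{ config_result.changed | default('undefined') }}
      passPercentage={{ snapcheck_result.passPercentage | default('undefined') }}
```

If changed is false, the configuration was already in place; if passPercentage is below 100, review the JSNAPy test results.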