Application Experience Correlation
Get started using the application experience correlation data to assess user experiences with Zoom or Microsoft Teams, evaluate audio and video quality, and identify the root causes of issues.
Juniper Mist collects information such as the latency, packet loss, and jitter experienced by every user during a Teams or Zoom session. It correlates this information against network parameters to identify the root cause of a bad user experience. Juniper Mist then aggregates this individual user information to provide insights into the quality of Teams or Zoom application user experiences at a site or an organization level.
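As a rough illustration of the classification described above, per-user session metrics can be reduced to good or bad user minutes by thresholding latency, loss, and jitter. The thresholds and field names in this sketch are assumptions for illustration only; Juniper Mist's actual classification logic and cutoffs are not published here.

```python
from dataclasses import dataclass

# Hypothetical thresholds for marking a minute "bad" (illustrative
# values, not Juniper Mist's real cutoffs).
MAX_LATENCY_MS = 150
MAX_LOSS_PCT = 2.0
MAX_JITTER_MS = 30

@dataclass
class UserMinute:
    latency_ms: float  # latency reported by Zoom/Teams for this minute
    loss_pct: float    # packet loss percentage
    jitter_ms: float   # jitter in milliseconds

def is_bad(minute: UserMinute) -> bool:
    """Treat a minute as 'bad' if any metric exceeds its threshold."""
    return (minute.latency_ms > MAX_LATENCY_MS
            or minute.loss_pct > MAX_LOSS_PCT
            or minute.jitter_ms > MAX_JITTER_MS)
```

Under this model, a minute with 200 ms latency is bad even when loss and jitter are healthy, which mirrors how a single degraded metric can ruin call quality.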
Experience Correlation
Experience correlation provides visibility into the performance of the Teams and Zoom applications at a site or an organization level. With detailed insights into the factors impacting application quality, the correlation data helps network administrators quickly identify the issues causing bad user experiences across a site or an entire organization.
Use the feature ranking graph to identify which features contributed the most to an issue. You can also view the insights for the impacted clients and the APs that they're connected to. If clients experience degraded Zoom or Teams call quality, use the experience correlation at the site level to easily identify which APs are involved.
Because Juniper Mist bases the correlation on the latency, loss, and jitter data it fetches from the third-party applications (Zoom and Teams), a low count of bad user minutes also serves as third-party validation of your network.
The feature ranking (Shapley) helps you troubleshoot Zoom or Teams sessions by ranking the impact of each network feature on the sessions. You can read more about the Shapley feature ranking in Troubleshoot Zoom Sessions Using Shapley Feature Ranking.
To understand how you can integrate the Teams and Zoom applications with Juniper Mist, see:
Site-Level Application Experience Correlation
The Experience Correlation section provides an aggregate of the total good and bad user minutes experienced by all users in a site for a specific duration. It also provides granularity through a breakdown of the bad user minutes by the factor that contributed to them—WAN, wireless, or client.
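Conceptually, the site-level aggregate amounts to counting good and bad minutes across all users and attributing each bad minute to a category. The following is a minimal sketch under the assumption that each bad minute carries a single attributed category (WAN, Wireless, or Client); the actual Mist attribution pipeline is not public.

```python
from collections import Counter

def summarize_site(minutes):
    """minutes: list of (is_bad: bool, category: str | None) pairs,
    one per user minute. Returns the good-minute count, the
    bad-minute count, and a per-category breakdown of bad minutes."""
    good = sum(1 for bad, _ in minutes if not bad)
    bad_by_cat = Counter(cat for bad, cat in minutes if bad)
    return good, sum(bad_by_cat.values()), dict(bad_by_cat)
```

For example, one good minute plus two WAN-attributed and one Client-attributed bad minutes yields a breakdown in which WAN contributes the most, matching the kind of distribution shown on the site page.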
To view the site-level application correlation, select the site and the duration. Here’s an example. You can see the good and bad user minutes listed for the site. You can also see the distribution of the bad user minutes across the WAN, Wireless, and Client categories, with WAN contributing the most.
You can further expand User Minutes to view the following information:
- Feature ranking—Provides a Shapley feature ranking graph for the audio (in and out) and video (in and out) latency. As shown in the following example, you can expand the Client, Wireless, and WAN categories to drill down to the network feature that is contributing the most to the issue.
Each contributing feature for bad user minutes is ranked in terms of the additional latency that it adds to the Zoom or Teams call. The increase in latency for each contributing feature is measured against the site average latency.
In the following example, you’ll notice that WAN is contributing the most to the issue.
When you expand the categories, you see that the site latency is the major contributing factor to the increased call latency. Based on this information, you can look at the site WAN uplink metrics to confirm the issue and take necessary action.
- Clients—The Clients tab displays the users that experienced bad user minutes and lists the number of bad call occurrences. Click a MAC address to open the individual client insights page, where you can view the meeting details, the Shapley feature ranking, and the pre- and post-connection metrics for the client. By looking through the list of affected clients for a specific duration (for example, the last 24 hours, yesterday, or 7 days), you can identify clients that consistently faced a bad user experience. You can also obtain this information by entering ‘list bad zoom calls for last 7 days’ or ‘list bad Teams calls for last 7 days’ in the Marvis conversational assistant, and you can view the details for a site or a specific user—for example, ‘list bad zoom calls for host-abc for last 7 days’.
- Access Points—Shows the APs that the users were connected to when issues occurred. You can click the MAC address of an individual AP to view the insights.
You can also select the individual AP from the drop-down list on the top. Juniper Mist will list the feature ranking specific to the selected AP and its connected clients that experienced the issue.
If none of the users connected to the selected AP experienced bad user minutes, you'll see a page like the following example.
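The feature ranking described above uses Shapley values to attribute the latency increase over the site average to individual network features. The toy sketch below computes exact Shapley values for a small feature set; the additive latency model and the feature names are assumptions for illustration, not Mist's actual model.

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: the average marginal contribution of
    each feature across all feature orderings. `value` maps a
    frozenset of active features to the predicted extra call
    latency (ms) above the site average."""
    totals = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = frozenset()
        for f in order:
            totals[f] += value(coalition | {f}) - value(coalition)
            coalition = coalition | {f}
    return {f: t / len(orderings) for f, t in totals.items()}

# Toy additive model: each hypothetical feature adds a fixed amount
# of latency above the site average (purely illustrative numbers).
CONTRIB = {"wan_link": 40.0, "wifi_retries": 10.0, "client_cpu": 5.0}
latency = lambda coalition: sum(CONTRIB[f] for f in coalition)

ranking = shapley_values(list(CONTRIB), latency)
```

With an additive model, each feature's Shapley value equals its own contribution, so the ranking places the WAN link first, analogous to the WAN-dominated example above. Real contributions interact, which is why the Shapley averaging over orderings matters.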
Organization-Level Application Experience Correlation
Juniper Mist also provides an aggregated view of all affected sites at an organization level. Using this data, you can identify sites where users are facing issues consistently for a specific duration. You can also determine the dominant feature for the bad experience along with the total number of clients and APs involved.
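Rolling the per-site summaries up to the organization level reduces to ranking sites by total bad minutes and picking each site's dominant category. A sketch, assuming each site reports its bad minutes per category (the data shape here is hypothetical):

```python
def dominant_category(bad_by_cat):
    """bad_by_cat: per-category bad-minute counts for one site,
    e.g. {"WAN": 120, "Wireless": 30, "Client": 5}. Returns the
    category contributing the most bad user minutes."""
    return max(bad_by_cat, key=bad_by_cat.get)

def rank_sites(org):
    """org: {site_name: per-category bad-minute dict}. Returns sites
    sorted worst-first by total bad minutes, each with its total and
    dominant category."""
    return sorted(
        ((site, sum(cats.values()), dominant_category(cats))
         for site, cats in org.items()),
        key=lambda row: row[1],
        reverse=True)
```

A site that tops this ranking for several consecutive durations is a candidate for the consistent-issue investigation the page supports.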
On the Application Experience Correlation page, select the organization and the time duration for which you want to view the data.
You’ll see a list of all sites that experienced bad user minutes. You can click a site to view the details.
In the following example, you see the sites with a bad user experience and the total bad minutes for the WAN, Wireless, and Client categories. At an organization level, this type of data helps networking teams enhance and optimize the network and WAN links, and investigate any increased client CPU or memory utilization that could be causing the problem.
Zoom and Teams applications are sensitive to any network changes. Viewing application performance at an organization level is essential for assessing and improving the user experience.