Data Center Glossary
A
Flow monitoring carried out on the same router that forwards the packets being monitored. In contrast, a passive monitoring router does not forward the packets being monitored—it receives mirrored packets from a router that is performing the forwarding. See also flow monitoring.
Logical bundle of physical interfaces managed as a single interface with one IP address. Network traffic is dynamically distributed across ports, so administration of data flowing across a given port is done automatically within the aggregated link. Using multiple ports in parallel provides redundancy and increases the link speed beyond the limits of any single port.
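The dynamic distribution described above is typically hash-based: selected header fields of each packet are hashed, and the result picks a member link, so all packets of one flow take the same link and ordering is preserved. A minimal sketch of that idea (the hashed fields, hash function, and link count are illustrative assumptions, not any vendor's actual algorithm):

```python
import zlib

def select_member_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Pick a LAG member link for a flow by hashing its 4-tuple.

    Every packet of the same flow hashes to the same link, which
    preserves packet ordering while spreading distinct flows
    across all member links.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links

# The same flow always maps to the same member link:
a = select_member_link("10.0.0.1", "10.0.0.2", 49152, 80, 4)
b = select_member_link("10.0.0.1", "10.0.0.2", 49152, 80, 4)
assert a == b
```

Because the selection is per flow rather than per packet, a single flow cannot exceed the speed of one member link even though the bundle's aggregate capacity is higher.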
Automatic configuration of a device over the network from a preexisting configuration file created and stored on a configuration server—typically a Trivial File Transfer Protocol (TFTP) server. Autoinstallation occurs on a device that is powered on without a valid configuration (boot) file or that is configured specifically for autoinstallation. Autoinstallation is useful for deploying multiple devices on a network.
B
Bidirectional Forwarding Detection. Protocol that uses control packets and shorter detection time limits to more rapidly detect failures in a network.
Border Gateway Protocol. Exterior gateway protocol (EGP) used to exchange routing information among routers in different autonomous systems. Can act as a label distribution protocol for MPLS.
BFD. Protocol that uses control packets and shorter detection time limits to more rapidly detect failures in a network.
BGP. Exterior gateway protocol (EGP) used to exchange routing information among routers in different autonomous systems. Can act as a label distribution protocol for MPLS.
Broadcast, unknown unicast, and multicast traffic. Essentially multi-destination traffic.
C
Physically connected and configured devices that provide redundancy and ensure service continuity in the event of partial or complete device failure. Chassis clusters provide a resilient system architecture, synchronizing session and kernel states across control and data planes to prevent a single point of failure from disabling the network.
CoS. Method of classifying traffic on a packet-by-packet basis using information in the type-of-service (ToS) byte to provide different service levels to different traffic. See also QoS.
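ToS-byte classification typically keys on the DSCP, the top six bits of the byte. A toy classifier sketch (the class names and code-point mapping are illustrative, not an actual Junos CoS configuration):

```python
def classify(tos_byte: int) -> str:
    """Map the DSCP (top 6 bits of the ToS byte) to a forwarding
    class. Class names and code points here are illustrative."""
    dscp = tos_byte >> 2          # drop the 2 ECN bits
    if dscp == 46:                # EF (expedited forwarding)
        return "voice"
    if dscp in (10, 12, 14):      # AF11/AF12/AF13
        return "business"
    return "best-effort"
```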
Multistage switching network in which switch elements in the middle stages are connected to all switch elements in the ingress and egress stages. Clos networks are well-known for their nonblocking properties—a connection can be made from any idle input port to any idle output port, regardless of the traffic load in the rest of the system.
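The nonblocking property has a classic quantitative form due to Clos (1953): with n inputs per ingress-stage switch, a three-stage network is strictly nonblocking when it has at least 2n - 1 middle-stage switches. A small sketch of the check:

```python
def strictly_nonblocking(n, m):
    """Clos condition for a 3-stage network: with n inputs per
    ingress switch and m middle-stage switches, the fabric is
    strictly nonblocking when m >= 2*n - 1, because a new
    connection can always find a middle switch unused by both
    its ingress and egress switches."""
    return m >= 2 * n - 1

assert strictly_nonblocking(n=4, m=7)       # 7 >= 2*4 - 1
assert not strictly_nonblocking(n=4, m=6)   # 6 <  2*4 - 1
```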
Internet‐based environment of virtualized computing resources, including servers, software, and applications that can be accessed by individuals or businesses with Internet connectivity. Cloud types include public, private, and hybrid.
Cloud computing represents a paradigm shift in the way companies allocate IT resources. Fundamentally, a cloud is an Internet-based environment of computing resources composed of servers, software, and applications that can be accessed by any individual or business with Internet connectivity. Customers, referred to as tenants, can access the resources they need to run their business. Clouds offer customers a pay-as-you-go, lease-style investment with little to no upfront costs, versus buying all of the required hardware and software separately. Clouds allow businesses to scale easily and add services and functionality on an as-needed basis. Cloud computing is the basis for Infrastructure as a Service (IaaS) and Software as a Service (SaaS).
Virtual network path used to set up, maintain, and terminate data plane connections. See also data plane.
class of service. Method of classifying traffic on a packet-by-packet basis using information in the type-of-service (ToS) byte to provide different service levels to different traffic. See also QoS.
D
DCB. Set of IEEE specifications that enhances the Ethernet standard to allow it to support converged Ethernet (LAN) and Fibre Channel (SAN) traffic on one Ethernet network. DCB features include priority-based flow control (PFC), enhanced transmission selection (ETS), Data Center Bridging Capability Exchange protocol (DCBX), quantized congestion notification (QCN), and full-duplex 10-Gigabit Ethernet ports.
DCBX. Discovery and exchange protocol for conveying configuration and capabilities among neighbors to ensure consistent configuration across the network. It is an extension of the Link Layer Discovery Protocol (LLDP, described in IEEE 802.1AB, Station and Media Access Control Connectivity Discovery).
Virtual network path used to distribute data between nodes. Also known as transport plane. See also control plane.
data center bridging. Set of IEEE specifications that enhances the Ethernet standard to allow it to support converged Ethernet (LAN) and Fibre Channel (SAN) traffic on one Ethernet network. DCB features include priority-based flow control (PFC), enhanced transmission selection (ETS), Data Center Bridging Capability Exchange protocol (DCBX), quantized congestion notification (QCN), and full-duplex 10-Gigabit Ethernet ports.
Data Center Bridging Capability Exchange protocol. Discovery and exchange protocol for conveying configuration and capabilities among neighbors to ensure consistent configuration across the network. It is an extension of the Link Layer Discovery Protocol (LLDP, described in IEEE 802.1AB, Station and Media Access Control Connectivity Discovery).
The EVPN PE responsible for forwarding BUM traffic from the core to the CE.
There are two Director devices (DG0 and DG1) in both QFabric-G and QFabric-M implementations. These Director devices are the brains of the whole QFabric system and host the necessary virtual components (VMs) that are critical to the health of the system. The two Director devices operate in a master/slave relationship. Note that all the protocol/route/inventory states are always synced between the two.
E
ETS. Mechanism that provides finer granularity of bandwidth management within a link.
Enterprise-level software hypervisors from VMware that do not need an additional operating system to run on host server hardware.
Process that enables grouping of Ethernet interfaces at the Physical Layer to form a single Link Layer interface. Also known as 802.3ad link aggregation, link aggregation group (LAG), LAG bundle.
The Ethernet link(s) between a CE device and one or more PE devices. In a multi-homed topology the set of links between the CE and PEs is considered a single “Ethernet Segment.” Each ES is assigned an identifier.
A 10-octet value, ranging from 0x00 to 0xFFFFFFFFFFFFFFFFFFFF, that represents the ES. An ESI must be set to a network-wide unique, non-reserved value when a CE device is multi-homed to two or more PEs. For a single-homed CE, the reserved ESI value 0 is used. The ESI value of "all FFs" is also reserved.
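The rules above reduce to a short validation routine. A sketch (the function name and shape are illustrative, not from any EVPN implementation):

```python
RESERVED_ZERO = bytes(10)            # used by single-homed CEs
RESERVED_MAX = bytes([0xFF] * 10)    # reserved "all FFs" value

def validate_esi(esi: bytes, multihomed: bool) -> bool:
    """Check an Ethernet Segment Identifier against the rules above:
    it must be 10 octets; a multi-homed segment needs a non-reserved
    value; a single-homed CE uses the reserved all-zero ESI."""
    if len(esi) != 10 or esi == RESERVED_MAX:
        return False
    if multihomed:
        return esi != RESERVED_ZERO
    return esi == RESERVED_ZERO
```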
Identifies the broadcast domain in an EVPN instance. For our purposes the broadcast domain is a VLAN and the Ethernet Tag Identifier is the VLAN ID.
EVPN. Type of VPN that enables you to connect a group of dispersed customer sites by using a Layer 2 virtual bridge. As with other types of VPNs, an EVPN comprises customer edge (CE) devices (routers or switches) connected to provider edge (PE) devices. The PE devices can include an MPLS edge switch that acts at the edge of the MPLS infrastructure.
EVPN Instance, defined on PEs to create the EVPN service.
F
Interconnection of network nodes using one or more network switches that function as a single logical entity.
Identifies a packet as high or low priority based on its forwarding class and associates schedulers with the fabric priorities.
Fibre Channel. High-speed network technology used for storage area networks (SANs).
FCF. The two types of forwarders are:
- Fibre Channel switch that has all physical Fibre Channel ports and the necessary set of services as defined in the T11 Organization Fibre Channel Switched Fabric (FC-SW) standards.
- Device that has the necessary set of services defined in the T11 Organization Fibre Channel Switched Fabric (FC-SW) standards and which has the FCoE capabilities to act as an FCoE-based Fibre Channel switch, as defined by the Fibre Channel Backbone – 5 (FC-BB-5) Rev. 2.00 specification.
FC forwarder, FCoE forwarder. The two types of forwarders are:
- FC forwarder. Fibre Channel switch that has all physical Fibre Channel ports and the necessary set of services as defined in the T11 Organization Fibre Channel Switched Fabric (FC-SW) standards.
- FCoE forwarder. Device that has the necessary set of services defined in the T11 Organization Fibre Channel Switched Fabric (FC-SW) standards and which has the FCoE capabilities to act as an FCoE-based Fibre Channel switch, as defined by the Fibre Channel Backbone – 5 (FC-BB-5) Rev. 2.00 specification.
Fibre Channel over Ethernet. Standard for transporting FC frames over Ethernet networks. FCoE encapsulates Fibre Channel frames in Ethernet so that the same high-speed Ethernet physical infrastructure can transport both data and storage traffic while preserving the lossless CoS that FC requires. FCoE servers connect to a switch that supports both FCoE and native FC protocols. This allows FCoE servers on the Ethernet network to access FC storage devices in the SAN fabric on one converged network.
FCF. Device that has the necessary set of services defined in the T11 Organization Fibre Channel Switched Fabric (FC-SW) standards and which has the FCoE capabilities to act as an FCoE-based Fibre Channel switch, as defined by the Fibre Channel Backbone – 5 (FC-BB-5) Rev. 2.00 specification.
FIP. Layer 2 protocol that establishes and maintains Fibre Channel (FC) virtual links between pairs of FCoE devices such as server FCoE nodes (ENodes) and FC switches.
FIP snooping. Security feature enabled for FCoE VLANs on an Ethernet switch that connects FCoE nodes to Fibre Channel switches or FCFs. The two types of FIP snooping inspect data in FIP frames and use that data to create firewall filters that are installed on the ports in the FCoE VLAN. The filters permit only traffic from sources that perform a successful fabric login to the Fibre Channel switch. All other traffic on the VLAN is denied. FIP snooping can also provide additional visibility into FCoE Layer 2 operation.
Switch with a minimum set of features designed to support FCoE Layer 2 forwarding and FCoE security. The switch can also have optional additional features. Minimum feature support is:
- Priority-based flow control (PFC)
- Enhanced transmission selection (ETS)
- Data Center Bridging Capability Exchange (DCBX) protocol, including the FCoE application TLV
- FIP snooping (minimum support is FIP automated filter programming at the ENode edge)
A transit switch does not have a Fibre Channel stack; it is not a Fibre Channel switch or an FC forwarder.
Fibre Channel over Ethernet VLAN. VLAN dedicated to carrying only FCoE traffic. FCoE traffic must travel in a VLAN. Only FCoE interfaces should be members of an FCoE VLAN. Ethernet traffic that is not FCoE traffic must travel in a different VLAN.
FC. High-speed network technology used for storage area networks (SANs).
Network of Fibre Channel devices that provides communication among devices, device name lookup, security, and redundancy.
FCoE. Standard for transporting FC frames over Ethernet networks. FCoE encapsulates Fibre Channel frames in Ethernet so that the same high-speed Ethernet physical infrastructure can transport both data and storage traffic while preserving the lossless CoS that FC requires. FCoE servers connect to a switch that supports both FCoE and native FC protocols. This allows FCoE servers on the Ethernet network to access FC storage devices in the SAN fabric on one converged network.
FCoE VLAN. VLAN dedicated to carrying only FCoE traffic. FCoE traffic must travel in a VLAN. Only FCoE interfaces should be members of an FCoE VLAN. Ethernet traffic that is not FCoE traffic must travel in a different VLAN.
FCoE Initialization Protocol. Layer 2 protocol that establishes and maintains Fibre Channel (FC) virtual links between pairs of FCoE devices such as server FCoE nodes (ENodes) and FC switches.
FCoE Initialization Protocol snooping. Security feature enabled for FCoE VLANs on an Ethernet switch that connects FCoE nodes to Fibre Channel switches or FCFs. The two types of FIP snooping inspect data in FIP frames and use that data to create firewall filters that are installed on the ports in the FCoE VLAN. The filters permit only traffic from sources that perform a successful fabric login to the Fibre Channel switch. All other traffic on the VLAN is denied. FIP snooping can also provide additional visibility into FCoE Layer 2 operation.
G
Process that allows a router whose control plane is undergoing a restart to continue to forward traffic while recovering its state from neighboring routers. Without graceful restart, a control plane restart disrupts services provided by the router. Implementation varies by protocol. Also known as nonstop forwarding. See also cold restart, warm restart.
GRES. In a router that contains a master and a backup Routing Engine, allows the backup Routing Engine to assume mastership automatically, with no disruption of packet forwarding. Also known as Stateful Switchover (SSO).
Junos OS feature that allows a change from the primary device, such as a Routing Engine, to the backup device without interruption of packet forwarding.
H
high availability. Configuring devices to ensure service continuity in the event of a network outage or device failure. Used to provide fault detection and correction procedures to maximize the availability of critical services and applications. High availability provides both hardware-specific and software-specific methods to ensure minimal downtime and ultimately improve the performance of your network. See also high availability mode, chassis cluster.
HA. Configuring devices to ensure service continuity in the event of a network outage or device failure. Used to provide fault detection and correction procedures to maximize the availability of critical services and applications. High availability provides both hardware-specific and software-specific methods to ensure minimal downtime and ultimately improve the performance of your network. See also high availability mode, chassis cluster.
Ensures rapid system module recovery following a switchover. High availability mode uses an initial bulk file transfer and subsequent transaction-based mirroring. High availability mode also keeps state and dynamic configuration data from memory synchronized between the primary and standby modules. Also known as stateful switchover.
In cloud computing, platform virtualization software that runs on a host computer, allowing multiple instances of operating systems, called guests, to run concurrently on the host within their own VMs and share virtualized hardware resources. A hypervisor is a virtualized software layer that manages the relationships between the VMs that run on its host and compete for its resources; it controls and manages resource allocation. A hypervisor is said to run on bare metal, that is, directly on the hardware whose resources it shares. The term hypervisor was created by IBM to refer to software that is conceptually one level higher than an operating system's supervisor.

The vGW Virtual Gateway inserts a vGW kernel module into the hypervisor of each ESX/ESXi host to be secured. From this vantage, the vGW Virtual Gateway can monitor the security of each VM and apply protections adaptively as VM security needs change. By processing inspections in the VMware hypervisor kernel, the vGW Virtual Gateway maintains throughput and continuous firewall protection as VMs are moved from one host to another through a process called vMotion. Unlike traditional firewalls, the vGW Virtual Gateway supports live migration by maintaining open connections and security throughout the event. Also known as Virtual Machine Manager (VMM).
I
Infrastructure as a Service
ISSU. General term for one of several different ways that Juniper Networks platforms upgrade software versions with minimal disruption to network traffic. Unified ISSU is used for routing platforms, which operate at Layer 2 and Layer 3. Nonstop software upgrade (NSSU) is used for switching platforms that operate at Layer 2 and Virtual Chassis configurations. Topology-independent in-service software upgrade (TISSU) is used for virtual environments, where devices are not linked by a hardware-based topology. See also NSSU, TISSU, and unified ISSU.
IaaS
A Layer 3 VPN service implemented using BGP/MPLS IP VPNs (RFC 4364).
J
Graphical Web browser interface to Junos OS on routing platforms. With the J-Web interface, you can monitor, configure, diagnose, and manage the routing platform from a PC or laptop that has Hypertext Transfer Protocol (HTTP) or HTTP over Secure Sockets Layer (HTTPS) enabled.
L
Link Aggregation Control Protocol. Mechanism for exchanging port and system information to create and maintain LAG bundles.
link aggregation group. Two or more network links bundled together to function as a single link. Distributes MAC clients across the Link Layer interface and collects traffic from the links to present to the MAC clients of the LAG. Also known as LAG bundle, 802.3ad link aggregation, EtherChannel.
link aggregation group bundle. Two or more network links bundled together to function as a single link. Distributes MAC clients across the Link Layer interface and collects traffic from the links to present to the MAC clients of the LAG. Also known as LAG bundle, 802.3ad link aggregation, EtherChannel.
Delay in the transmission of a packet through a network from beginning to end.
LAG. Two or more network links bundled together to function as a single link. Distributes MAC clients across the Link Layer interface and collects traffic from the links to present to the MAC clients of the LAG. Also known as LAG bundle, 802.3ad link aggregation, EtherChannel.
M
MAC address virtual routing and forwarding table. This is the Layer 2 forwarding table on a PE for an EVI.
The architecture for next-generation data centers that simplifies and accelerates the deployment and delivery of applications within and across multiple data center locations.
Multipoint to Multipoint.
N
Network as a Service
Network Equipment Building System. Set of guidelines originated by Bell Laboratories in the 1970s to assist equipment manufacturers in designing products that were compatible with the telecom environment.
NaaS
NFV. Standard IT virtualization technology that consolidates many network equipment types onto standard-architecture high-volume servers, switches, and storage. NFV involves designing, deploying, and managing network functions in software that can be moved to, or instantiated in, various locations in the network as required, without the need to install purpose-built hardware. Although NFV complements software-defined networking (SDN), NFV can be deployed without SDN and vice versa. See also SDN.
Each QFabric system has one Network Node Group, and up to eight physical Nodes can be configured to be part of the NWNG. The Routing Engines (REs) on the Nodes are disabled, and the RE functionality is handled by the NWNG VMs located on the Director devices.
Network Functions Virtualization. Standard IT virtualization technology that consolidates many network equipment types onto standard-architecture high-volume servers, switches, and storage. NFV involves designing, deploying, and managing network functions in software that can be moved to, or instantiated in, various locations in the network as required, without the need to install purpose-built hardware. Although NFV complements software-defined networking (SDN), NFV can be deployed without SDN and vice versa. See also SDN.
NSR. High availability feature that allows a routing platform with redundant Routing Engines to preserve routing information on the backup Routing Engine and switch over from the primary Routing Engine to the backup Routing Engine without alerting peer nodes that a change has occurred. NSR uses the graceful Routing Engine switchover (GRES) infrastructure to preserve interface, kernel, and routing information. Also known as nonstop routing (NSR).
Keeps the Layer 2 protocol state synchronized between the master and backup Routing Engines.
NSSU. Software upgrade for switching platforms with redundant Routing Engines and for most Virtual Chassis or Virtual Chassis Fabric from one Junos OS release to another with no disruption on the control plane and with minimal disruption to network traffic. A switching architecture requires a different approach than the one for a routing architecture to preserve control plane information. See also ISSU, TISSU, and unified ISSU.
nonstop software upgrade. Software upgrade for switching platforms with redundant Routing Engines and for most Virtual Chassis or Virtual Chassis Fabric from one Junos OS release to another with no disruption on the control plane and with minimal disruption to network traffic. A switching architecture requires a different approach than the one for a routing architecture to preserve control plane information. See also ISSU, TISSU, and unified ISSU.
Network Virtualization using Generic Routing Encapsulation. A network virtualization overlay protocol that uses Generic Routing Encapsulation (GRE) to tunnel Layer 2 packets over Layer 3 networks.
O
A logical, separate network that runs on top of an existing physical infrastructure. In a data center, an overlay network is typically used to create a virtual network by encapsulating traffic between virtual switches and tunneling the traffic over the physical network.
Open Virtualization Format. Platform-independent virtual machines (VMs) packaging and distribution method. The OVF supports industry-standard content verification and integrity checking and provides a basic scheme for managing software licensing. As described by the standard, the OVF defines an open, secure, portable, efficient, and extensible format for the packaging and distribution of software to be run in virtual machines. An OVF package consists of several files placed in one directory. The Open Virtualization Archive (OVA) is an alternative method that uses a TAR file containing the OVF directory.
P
Point to Multipoint.
Technique to intercept and observe specified data network traffic by using a routing platform such as a monitoring station that is not participating in the network.
- priority-based flow control. Link-level flow control mechanism defined by IEEE 802.1Qbb that allows independent flow control for each class of service to ensure that no frame loss from congestion occurs in data center bridging networks. PFC is an enhancement of the Ethernet PAUSE mechanism, but PFC controls classes of flows, whereas Ethernet PAUSE indiscriminately pauses all of the traffic on a link. Also known as priority flow control. See also Ethernet PAUSE.
- Protocol Field Compression. Normally, PPP-encapsulated packets are transmitted with a two-byte protocol field. For example, IPv4 packets are transmitted with the protocol field set to 0x0021, and MPLS packets are transmitted with the protocol field set to 0x0281. For all protocols with identifiers from 0x0000 through 0x00ff, PFC enables routers to compress the protocol field to one byte, as defined in RFC 1661, The Point-to-Point Protocol (PPP). PFC allows you to conserve bandwidth by transmitting less data. See also ACFC.
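The first sense above (priority-based flow control) differs from Ethernet PAUSE in that the pause decision is made per 802.1p priority rather than for the whole link. A toy sketch of that per-class decision (the queue depths and threshold are illustrative assumptions):

```python
def pfc_pause_flags(queue_depths, threshold):
    """Decide, for each of the eight 802.1p priorities, whether to
    send a PFC pause for that class. Unlike Ethernet PAUSE, which
    indiscriminately stops all traffic on the link, only the
    congested classes are paused."""
    return [depth >= threshold for depth in queue_depths]

# Only priority 3 (e.g., a lossless FCoE class) exceeds its
# threshold, so only that class is paused; the other seven classes
# keep flowing.
flags = pfc_pause_flags([0, 0, 0, 900, 0, 0, 0, 0], threshold=800)
assert flags == [False, False, False, True, False, False, False, False]
```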
Provider multicast service interface. A logical interface in a PE that is used to deliver multicast packets from a CE to remote PEs in the same VPN, destined to CEs.
Method by which a copy of an IPv4 or IPv6 packet is sent from the routing platform to an external host address or a packet analyzer for analysis. Also known as traffic mirroring, switch port analyzer (SPAN), and lawful intercept. See also packet mirroring.
A type of cloud implemented in a proprietary network or data center that uses cloud computing technologies to create a virtualized infrastructure operated solely for a single organization, whether it is managed internally or externally. See also public cloud.
A cloud type in which a hosting service provider makes resources such as applications, storage, and CPU usage available to the public. Public clouds must be based on a standard cloud computing model. See also private cloud.
Q
quality of service. Performance, such as transmission rates and error rates, of a communications channel or system. A suite of features that configure queuing and scheduling on the forwarding path of an E Series router. QoS provides a level of predictability and control beyond the best-effort delivery that the router provides by default. (Best-effort service provides packet transmission with no assurance of reliability, delay, jitter, or throughput.) See also CoS.
Quad (four-channel) small form-factor pluggable transceiver that provides support for optical or copper cables. QSFP transceivers are hot-insertable and hot-removable.
quad small form-factor pluggable plus. Enhanced quad (four-channel) small form-factor pluggable transceiver that provides support for fiber-optic or copper cables. QSFP+ transceivers are hot-insertable and hot-removable.
quantized congestion notification (QCN; IEEE 802.1Qau). Congestion management mechanism that sends a congestion notification message through the network to the ultimate source of the congestion. Instead of pausing transmission from the connected peer (as PFC does), or pushing a flow control message through the network one device at a time, QCN tries to stop congestion at its source: the network edge where the end host originates the congestion-causing flow.
R
An RSNG consists of two physical Nodes. The Routing Engines on the Nodes operate in an active/backup fashion (think of a Virtual Chassis with two member switches). You can configure multiple pairs of RSNGs within a QFabric system. These mostly connect to dual-NIC servers.
S
This is the default group and consists of one Node. Whenever a Node becomes part of a QFabric system, it comes up as an SNG. These mostly connect to servers that do not need any cross-Node redundancy; the most common examples are servers that have only one NIC.
small form-factor pluggable transceiver. Provides support for optical or copper cables. SFP transceivers are hot-insertable and hot-removable. See also XFP.
small form-factor pluggable plus. Enhanced SFP transceiver that provides support for data rates up to 10 Gbps for optical or copper interfaces. SFP+ transceivers are hot-insertable and hot-removable.
SFP+. Enhanced SFP transceiver that provides support for data rates up to 10 Gbps for optical or copper interfaces. SFP+ transceivers are hot-insertable and hot-removable.
SDN. Approach to computer networking that uses methods of network abstraction, such as virtualization, to simplify and scale network components and uses software to define and manage network components. SDN separates the data plane, which forwards traffic, from the control plane, which manages traffic flow, and enables users to program network layers. SDN is often used with Network Functions Virtualization (NFV) to allow agile placement of networking services when and where they are needed. By enabling this level of programmability, SDN enables users to optimize their network resources, increase network agility, provide service innovation, accelerate service time-to-market, extract business intelligence, and ultimately enable dynamic, service-driven virtual networks. See also NFV.
Spanning Tree Protocol. Defined in the IEEE standard 802.1D, the Spanning Tree Protocol is an OSI Layer 2 protocol that ensures a loop-free topology for any bridged LAN. This protocol creates a spanning tree within a mesh network of connected Layer 2 bridges (typically Ethernet switches), and disables the links that are not part of that tree, leaving a single active path between any two network nodes.
T
topology-independent in-service software upgrade. Software upgrade for virtual machine and top-of-rack environments from one software image to another with no disruption to traffic transiting the device. In topology-independent virtual environments, devices are not linked by a hardware-based topology and such environments require a different approach for software upgrade than the one for hardware-based environments, which include routers and switches. See also ISSU, NSSU, and unified ISSU.
TISSU. Software upgrade for virtual machine and top-of-rack environments from one software image to another with no disruption to traffic transiting the device. In topology-independent virtual environments, devices are not linked by a hardware-based topology and such environments require a different approach for software upgrade than the one for hardware-based environments, which include routers and switches. See also ISSU, NSSU, and unified ISSU.
U
unified ISSU. Software upgrade for routing platforms from one Junos OS release to another with no disruption of the control plane and with minimal disruption of traffic. Unified ISSU is supported only on platforms with dual Routing Engines. A routing architecture requires a unified approach to preserve routing tables and control plane information. See also ISSU, NSSU, and TISSU.
unified in-service software upgrade. Software upgrade for routing platforms from one Junos OS release to another with no disruption of the control plane and with minimal disruption of traffic. Unified ISSU is supported only on platforms with dual Routing Engines. A routing architecture requires a unified approach to preserve routing tables and control plane information. See also ISSU, NSSU, and TISSU.
V
The VMware® vCenter Server, formerly known as VMware VirtualCenter, centrally manages VMware vSphere environments, giving administrators control over the virtual environment. vCenter provides centralized control and visibility at every level of the virtual infrastructure. It manages clusters of ESX/ESXi hosts, including their VMs, hypervisors, and other parts of the virtualized environment. The vGW Virtual Gateway connects to vCenter for visibility into all VMs.
Virtual Chassis Fabric
A fault-tolerant, service-provider- and enterprise-grade security solution purpose-built for the virtualized environment. vGW Series delivers complete virtualization security for multitenant public and private clouds, and for clouds that are a hybrid of the two. It maintains the highest levels of VM host capacity and performance while protecting virtualized environments.
Interconnected devices functioning as one logical device. Similar to a Virtual Switching System or a stack.
VCF. Evolution of the Virtual Chassis feature, which enables you to interconnect multiple devices into a single logical device, inside of a fabric architecture.
https://www.juniper.net/techpubs/en_US/junos13.2/topics/concept/vcf-components.html
VMI. The vGW Virtual Gateway feature that gives a user a full view into all applications flowing between VMs and how they are used. VMI carries a complete VM and VM group inventory, including virtual network settings, and provides deep knowledge of each VM state, including installed applications, operating systems, patch levels, and registry values. The vGW Virtual Gateway incorporates VMI as part of its security policy definition and enforcement mechanism.
VRRP. On Fast Ethernet and Gigabit Ethernet interfaces, enables you to configure virtual default routers.
Technology that abstracts the physical characteristics of a machine, creating a logical version of it, including creating logical versions of entities such as operating systems and various network resources.
virtual machine. A simulation of a physical machine such as a workstation or a server that runs on a host that supports virtualization. Many VMs can run on the same host, sharing its resources. A VM has its own operating system that can be different from that of other VMs running on the same host.
Virtual Machine Introspection. The vGW Virtual Gateway feature that gives a user a full view into all applications flowing between VMs and how they are used. VMI carries a complete VM and VM group inventory, including virtual network settings, and provides deep knowledge of each VM state, including installed applications, operating systems, patch levels, and registry values. The vGW Virtual Gateway incorporates VMI as part of its security policy definition and enforcement mechanism.
VMware® technology that transitions active, or live, virtual machines from one physical server to another, undetectably to the user: vMotion migrates a running VM, with no downtime, from one ESXi host to a host on a different physical server. vMotion allows for system maintenance on hosts and offers improved performance if greater capacity is available on another host.
The vGW Virtual Gateway installation mode, formally referred to as VMSafe Firewall + Monitoring, that provides both firewall configuration support and virtual machine monitoring. In this mode, the vGW Virtual Gateway loads a kernel module into the VMware hypervisor on the ESX/ESXi host to be secured and manages it.
The vGW Virtual Gateway installation mode that provides both firewall configuration support and virtual machine monitoring. In this mode, the vGW Virtual Gateway loads a kernel module into the VMware® hypervisor on the ESX/ESXi host to be secured and manages it. This is the default and recommended installation mode. This mode is also referred to as VMSafe Firewall mode.
The vGW Virtual Gateway installation mode that is used for monitoring only. This mode is similar to the VMSafe Firewall + Monitoring mode except that no firewall policy is loaded on a VM. This mode allows you to deploy the vGW Virtual Gateway with the assurance that security policies do not block traffic.
A network virtualization platform that reproduces the entire network model in software, enabling virtual networks that can be programmatically provisioned and managed independently of the underlying hardware.
A VMware cloud operating system that can manage large pools of virtualized computing infrastructure, including software and hardware.
An application or software that administers VMware vSphere.
A virtualized network interface card that connects a VM to a vSwitch. A VM can have multiple vNICs. A vNIC presents the same media access control (MAC) interface that a physical interface provides.
A virtualized switch that resides on a physical server and directs traffic among VMs and their virtualized applications. Network traffic between co-located VMs transits the vSwitch.
Virtual Extensible LAN. A network virtualization overlay protocol that encapsulates Ethernet frames in UDP packets.
VXLAN network identifier. In the VXLAN protocol, the 24-bit numeric ID that identifies a VXLAN segment.
VXLAN tunnel endpoint. In the VXLAN protocol, the entity that performs the encapsulation and decapsulation of VXLAN packets.
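The VXLAN, VNI, and VTEP entries fit together: a VTEP prepends an 8-byte VXLAN header carrying the 24-bit VNI to each Ethernet frame before tunneling it over UDP. A minimal sketch of the header layout defined in RFC 7348 (the helper names are illustrative):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): an 8-bit flags field
    with the I bit set (VNI is valid), 24 reserved bits, the 24-bit
    VNI, and a final reserved byte. A VTEP prepends this header (plus
    outer UDP/IP/Ethernet headers) to the original Ethernet frame."""
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vni_from_header(header: bytes) -> int:
    """Recover the VNI a VTEP reads when decapsulating."""
    return struct.unpack("!I", header[4:8])[0] >> 8
```

The 24-bit VNI is what lets VXLAN scale to about 16 million segments, versus the 4094 VLAN IDs available in the 12-bit 802.1Q tag.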