Packet Optical Network Use Cases and Technical Overview


Packet optical networking is useful in any network with a converged supercore that needs to transport traffic in as efficient and effective a manner as possible. Three key technologies make packet optical networking practical for these use cases.

Specifically, the key factors indicating use of packet optical networking are as follows:

  • Low latency is critical—Many applications, from streaming video and music to cloud-based data mining with many scatter-gather stages to constantly updated webpages built from elements stored in many data centers, need low latency (propagation and nodal processing delay) to be effective. Delays due to slow failovers are unacceptable to these users.

  • High bandwidth is required—In modern converged supercores, the same physical links could carry aggregated traffic from streaming applications, replicated database queries, and user views of information passed from a remote data center in content delivery networks. Yet enough bandwidth must be available at all times for these often variable workloads.

  • CoS/QoS is needed—Users often have service-level agreements in place that spell out the exact class of service (CoS) or quality of service (QoS) in terms of latency and reliability provided by the network. The more stringent the requirements, and the more burdensome the penalties, the more likely these needs can be met by a packet optical network.

  • Resends due to errors are not practical—Certain applications, such as streaming video and IP-based voice, cannot pause for resends. Errors show up as breaks in the bit stream or rebuffering hesitations. Packet optical networks include support for several forms of forward error correction (FEC) that help improve data quality when resends are not possible or practical.

When it comes to technology, packet optical networks and links depend on three key concepts:

  • Optical networking with wavelength multiplexing

  • Forward error correction (FEC)

  • MPLS fast reroute

Although all three of these technologies were developed independently, packet optical networks benefit greatly when they are all used together. When used in combination, these three technologies allow for unsurpassed path protection through a PTX Series core network using packet optical links and MPLS with fast reroute. This network configuration example shows how they all work together.

Multiplexing has been used since the early days of networking, and fiber-optic links use it as well. Many optical fibers are engineered to do more than carry one very fast serial bit stream. Just as certain forms of copper cable (especially coaxial cable) can carry more than one serial bit stream, so can the bandwidth available on these types of fiber-optic cable. In copper networks, the various channels are distinguished by frequency, but in optical networks it makes more sense to distinguish the channels by wavelength. Various wavelengths can be multiplexed onto a single strand of fiber, and demultiplexed at the opposite end of the link, with a process known as wavelength-division multiplexing (WDM). If the separation of wavelengths is narrow enough and the resulting channels are dense enough (and there are at least 8 channels on the fiber), the result is known as dense wavelength-division multiplexing (DWDM). In this scheme, "non-dense" WDM is known as coarse wavelength-division multiplexing (CWDM).
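To make the channel layout concrete, the short Python sketch below computes a few DWDM channel center frequencies and their wavelengths. It assumes the commonly used ITU grid anchored at 193.1 THz with 50 GHz channel spacing; the function names are illustrative, not from any real tool.

```python
# Illustrative sketch of a DWDM frequency grid: channels sit at fixed
# spacings around an anchor frequency of 193.1 THz (an assumption based
# on the common ITU grid).
C = 299_792_458  # speed of light, m/s

def dwdm_channel(n, spacing_ghz=50.0):
    """Center frequency (THz) of grid channel n (n may be negative)."""
    return 193.1 + n * spacing_ghz / 1000.0

def wavelength_nm(freq_thz):
    """Convert a center frequency in THz to wavelength in nm."""
    return C / (freq_thz * 1e12) * 1e9

# A few channels on a 50 GHz grid; 193.1 THz is about 1552.5 nm,
# in the middle of the fiber C-band.
for n in (-1, 0, 1):
    f = dwdm_channel(n)
    print(f"channel {n:+d}: {f:.2f} THz = {wavelength_nm(f):.2f} nm")
```

The narrower the spacing, the more channels fit into the usable band of the fiber, which is exactly the "density" that separates DWDM from CWDM.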

Today, DWDM is standardized for international use as part of the optical transport network (OTN) defined in Recommendation G.709 by the International Telecommunication Union (ITU). Operation of these advanced fiber-optic networks is tied to the use of FEC codes to improve bit error rate (BER) performance to unprecedented levels.

Why are FECs needed? Because with this increased DWDM bit-carrying capacity comes increased risk. A failed or marginally operating link threatens the loss of not just one stream of bits, but of the many bit streams that flow on the same physical link. A failed link carrying many gigabits of information can be catastrophic for a network unless some method to compensate for these losses is used. These methods include not only FECs but fast traffic reroute. First, however, consider FEC codes.

When first encountered, FEC codes seem like some mathematical magic that could not possibly work. Yet they do, and very well. Most of them do not even double the number of bits sent, as the simplest scheme ("just send everything twice") would. Of course, the mathematics of a FEC is much more complex, and the "code space" is carefully chosen so that certain patterns of bit errors, such as single-bit errors or limited bursts, can be not only detected, but corrected without a retransmission.

FECs have been around for a long time, often in some form of what are called Hamming codes. These codes are useful when sending bits is slow and expensive, as with a deep-space probe, and when pausing to acknowledge receipt of a message or to ask for a resend is impractical.

To mathematicians, of course, FEC codes are simply the product of numbers constructed and used in a certain way, like square roots. FECs would be a lot less mysterious if I said to you, before we communicated over a distance, "Okay, when I send you a number between 0 and 100, I'll send it as a pair of numbers that must add up to 200." If I then receive 97 followed by 102, the pair adds up to only 199, so I know there was an error in transit; assuming the first number arrived intact, the second should be corrected to 103. In practice, naturally, real FECs are much more sophisticated, spreading the redundancy so that the receiver can also tell which bits are wrong.
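A slightly more realistic toy is the classic Hamming(7,4) code mentioned above: 4 data bits are protected by 3 parity bits, and any single flipped bit can be located and corrected without a resend. The Python sketch below is for illustration only; the FECs used on real optical links are far stronger.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Bit positions (1-based): p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based position of
    a single flipped bit (0 means no detected error)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # the recovered data bits

# Flip one bit "in transit"; the receiver still recovers the data.
word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a single-bit error
print(hamming74_correct(word))        # → [1, 0, 1, 1]
```

Note the ratio: 3 extra bits protect 4 data bits, already better than sending everything twice, and larger codes amortize the overhead much further.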

The advantage of FEC is that its use allows for constant monitoring of the uncorrected BER on the link (that is, before the FEC is applied to correct errors). So if the uncorrected BER is steadily increasing, this could be a sign that the link might fail soon, possibly due to environmental conditions. Packet optical networks often allow a user to establish a threshold value for optical parameters such as the BER to signal link failure at the end devices before the link actually fails.
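The threshold idea can be sketched in a few lines. Everything here is hypothetical for illustration (the function name, the threshold value, the counters); real equipment exposes its own optical parameters and alarm configuration.

```python
# Hypothetical sketch: declare a link degraded when the uncorrected
# (pre-FEC) BER crosses a configured threshold, so traffic can be
# moved before the link actually fails.
def link_degraded(corrected_errors, total_bits, threshold=1e-5):
    """True if the uncorrected BER (errors the FEC had to fix, per bit
    transmitted) exceeds the alarm threshold."""
    ber = corrected_errors / total_bits
    return ber > threshold

# A link correcting 4,000 errors over 100 million bits (BER 4e-5)
# trips the alarm; a healthier link (BER 1e-6) does not.
print(link_degraded(4_000, 100_000_000))  # → True
print(link_degraded(100, 100_000_000))    # → False
```

The key point is that the FEC is doing double duty: it corrects errors, and its correction counters are themselves the early-warning signal.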

Why would a service provider want to fail a link before it actually breaks? Because this allows more control over the network, and lets operators establish a more structured process to invoke in case of a failure. But this also raises the question of what exactly should be done when a link fails. In packet optical networks, the answer is often tied in with MPLS and fast reroute.

MPLS is a technology that allows multiple hops between two routers to be seen at the IP layer as one hop because the label-switched path (LSP) tunnels created do not require any IP header processing between LSP source (ingress) and destination (egress). Also, MPLS labels are switched at intermediate nodes by a simple table lookup rather than requiring complete packet header processing and forwarding table lookup (and packet routing often requires multiple forwarding table lookups). In contrast to IP, MPLS is connection-oriented: a signaling protocol (or manual configuration) is needed to set up an MPLS tunnel between two routers.

The advantage of MPLS, besides this simple and fast one-hop connection, is that traffic on an LSP follows the same sequence of nodes. So, not only is sequential delivery guaranteed (barring physical link loss), but the LSP connection makes a convenient place to establish CoS/QoS parameters that are otherwise hard to enforce in traditional best-effort connectionless IP routing.
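The contrast between label switching and full IP forwarding can be sketched as a simple exact-match table lookup. The labels and interface names below are purely illustrative, not any real router's tables:

```python
# Minimal sketch of transit-LSR forwarding: an exact-match lookup on
# the incoming label, rather than a longest-prefix match on an IP
# destination address. Labels and interface names are hypothetical.
lfib = {
    # in_label: (out_interface, out_label) -> swap label and forward
    100: ("ge-0/0/1", 200),
    101: ("ge-0/0/2", 201),
}

def forward(in_label, payload):
    out_if, out_label = lfib[in_label]   # one dictionary lookup
    return out_if, out_label, payload    # label swapped, packet sent

print(forward(100, b"ip-packet"))  # → ('ge-0/0/1', 200, b'ip-packet')
```

Because the label is swapped at each hop, each LSR needs only its own local table; no hop between ingress and egress ever parses the IP header.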

But what if a link between two intermediate devices on an LSP fails? In most connection-oriented networks, the signaling protocol must start from scratch and establish a new path (LSP) to the destination for the traffic to follow. In the meantime, of course, all traffic from source to destination must be discarded (or thrown from the LSP network onto the best-effort routing network, where it is likely to be discarded).

In MPLS, fast reroute can be preemptive. In that case, other LSPs might be terminated to make way for the rerouted traffic, which has been given a higher priority on the network. Although possible, preemptive fast reroute does not play a role in the configuration in this network configuration example. In other words, the secondary LSP does not carry user traffic of low priority that needs to be preempted.
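The non-preemptive case used in this example can be sketched as follows: a backup path is signaled before any failure occurs, so the response to a link failure is a single local switchover rather than end-to-end re-signaling. Node names, link representation, and the data structure are all hypothetical.

```python
# Sketch of the fast-reroute idea: the backup path already exists, so
# failover is just flipping which path is active. All names are
# illustrative, not any real protocol state.
lsp = {
    "primary": ["A", "B", "C", "D"],
    "backup":  ["A", "B", "E", "D"],   # pre-established around link B-C
    "active":  "primary",
}

def on_link_failure(lsp, failed_link):
    a, b = failed_link
    path = lsp[lsp["active"]]
    # If the failed link lies on the active path, flip to the backup.
    if any(path[i:i + 2] == [a, b] for i in range(len(path) - 1)):
        lsp["active"] = "backup"
    return lsp["active"]

print(on_link_failure(lsp, ("B", "C")))  # → 'backup'
```

The contrast with the start-from-scratch signaling described above is the whole point: the expensive work happens before the failure, not after it.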