Overview
For decades, optical networks relied on Intensity Modulation Direct Detection (IMDD), in which bits are signaled by pulsing a laser on and off. This signaling method eventually hit a ceiling as data demands exploded. To maximize network capacity and reach, optical networking evolved from simple direct detection to coherent optics. This evolution makes it possible to manipulate light as an electromagnetic wave, using phase and amplitude to overcome the physical limitations of traditional on-off signaling.
Unlike traditional direct detection receivers that only measure the intensity of light, coherent detection introduces a Local Oscillator (LO) to mix with the incoming signal. This enables the receiver to capture a wave's amplitude, phase, and polarization rather than just its brightness. This richer view of the wave state lets coherent systems achieve the high data rates and extended reach required for modern network backbones.
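The difference between the two detection methods can be illustrated with a toy numerical sketch (not a model of an actual receiver; the ideal, phase-aligned LO is an assumption for clarity):

```python
import numpy as np

# Toy illustration: direct detection measures only intensity |E|^2, so
# phase is invisible; mixing with a local oscillator (LO) recovers the
# full complex field. The ideal, phase-aligned LO is an assumption.
tx = np.exp(1j * np.pi / 4)      # transmitted field: unit amplitude, 45 deg phase

# Direct detection: a photodiode outputs intensity only
intensity = np.abs(tx) ** 2      # 1.0 for ANY phase -> data in phase is lost

# Coherent detection: beat the signal against the LO to get complex baseband
lo = 1.0 + 0j
baseband = tx * np.conj(lo)
amplitude, phase = np.abs(baseband), np.angle(baseband)

assert np.isclose(intensity, 1.0)     # phase invisible to direct detection
assert np.isclose(phase, np.pi / 4)   # phase recovered by coherent detection
```

Because the phase survives the mixing step, a coherent receiver can use it (along with amplitude and polarization) as an additional dimension for carrying data.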
Foundations of Coherent Transmission
The performance of a coherent link depends on three core components: the Digital Signal Processor (DSP), advanced modulation formats, and Forward Error Correction (FEC). While modulation defines how data is encoded onto the light wave, the DSP and FEC act as the system’s computational engine, mathematically reconstructing the signal and correcting errors caused by physical optical fiber impairments.
Digital Signal Processor
The Digital Signal Processor (DSP) is the core of a coherent transceiver and largely determines the module's transmission distance, spectral efficiency, and power consumption. In modern coherent modules (Juniper Coherent Optics [JCO] series), the DSP performs massive computational tasks in real time, including:
- Dispersion compensation: As light travels through optical fiber, different wavelengths travel at different speeds (chromatic dispersion) causing pulses to spread and overlap. The DSP applies inverse mathematical filters to reverse this distortion and eliminate the need for physical Dispersion Compensation Modules (DCMs) on the optical fiber line.
- Carrier recovery: The DSP tracks and corrects for frequency offsets between the transmitter and the local oscillator.
- Forward Error Correction (FEC): The DSP analyzes the statistical probability of bit values to correct errors that would otherwise result in data loss. This provides a significant coding gain, extending the reach of the signal.
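The dispersion compensation step above can be sketched numerically. Chromatic dispersion acts as an all-pass filter that applies a quadratic phase across the signal spectrum, and the DSP undoes it by applying the inverse phase; the pulse shape and dispersion value below are illustrative assumptions, not parameters of any real module:

```python
import numpy as np

# Sketch (not a production DSP): dispersion multiplies the field spectrum
# by exp(j * D/2 * w^2); the receiver applies exp(-j * D/2 * w^2) to undo it.
n = 1024
t = np.arange(n)
pulse = np.exp(-((t - n / 2) ** 2) / (2 * 20.0 ** 2)).astype(complex)  # Gaussian pulse

disp = 500.0                          # lumped dispersion (arbitrary units, assumed)
w = 2 * np.pi * np.fft.fftfreq(n)     # angular frequency grid

# Fiber: each frequency acquires a quadratic phase, so the pulse spreads
dispersed = np.fft.ifft(np.fft.fft(pulse) * np.exp(1j * disp / 2 * w ** 2))

# DSP: multiply by the exact inverse phase to reverse the distortion
recovered = np.fft.ifft(np.fft.fft(dispersed) * np.exp(-1j * disp / 2 * w ** 2))

assert np.max(np.abs(dispersed - pulse)) > 0.1    # pulse was visibly distorted
assert np.max(np.abs(recovered - pulse)) < 1e-9   # distortion fully reversed
```

Because dispersion is a deterministic, linear effect, this inverse filtering is exact in principle, which is why the DSP can replace physical DCMs on the line.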
Modulation Techniques
Modern coherent optics provide flexibility by using software to change modulation formats, allowing operators to trade capacity for reach.
- Dual Polarization Quadrature Phase Shift Keying
- Quadrature Amplitude Modulation
- Probabilistic Constellation Shaping
Dual Polarization Quadrature Phase Shift Keying
Dual Polarization Quadrature Phase Shift Keying (DP-QPSK) is a modulation technique that encodes 2 bits per symbol on each of two orthogonal polarizations, for a total of 4 bits per symbol. It is extremely robust against noise and is used for ultra-long-haul links (thousands of kilometers).
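A minimal sketch of a Gray-coded QPSK mapper shows where the 2 bits per symbol come from: each pair of bits selects one of four phases on a constant-amplitude circle (the specific bit-to-level mapping below is one common convention, not a standard requirement):

```python
import numpy as np

# Sketch of a Gray-coded QPSK mapper: 2 bits pick one of four phases.
# DP-QPSK runs an independent QPSK stream on each of two orthogonal
# polarizations, doubling this to 4 bits per symbol.
def qpsk_map(bits):
    """Map pairs of bits to unit-energy QPSK symbols (Gray coding)."""
    b = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]              # bit 0 -> +1, bit 1 -> -1 (in-phase axis)
    q = 1 - 2 * b[:, 1]              # same mapping on the quadrature axis
    return (i + 1j * q) / np.sqrt(2)

syms = qpsk_map([0, 0, 0, 1, 1, 0, 1, 1])   # all four symbols

# Amplitude is constant; only phase differs -> robust against amplitude noise
assert np.allclose(np.abs(syms), 1.0)
assert len(set(np.round(np.angle(syms), 6))) == 4   # four distinct phases
```

The constant amplitude is what makes QPSK so tolerant of noise: the receiver only has to distinguish four widely spaced phases.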
Quadrature Amplitude Modulation
Quadrature Amplitude Modulation (QAM) adds amplitude variation to phase states, allowing higher data rates. For example, 16QAM carries 4 bits per symbol, doubling the data rate of QPSK at the same symbol rate, but it requires a higher Optical Signal-to-Noise Ratio (OSNR).
| Modulation | Bits per symbol | Symbol rate |
|---|---|---|
| 4QAM | 2 | 1/2 x bit rate |
| 8QAM | 3 | 1/3 x bit rate |
| 16QAM | 4 | 1/4 x bit rate |
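The relationship in the table can be reproduced directly: with m bits per symbol, the symbol (baud) rate is the bit rate divided by m. The 400 Gb/s figure below is an illustrative payload rate, and FEC and framing overhead are ignored for simplicity:

```python
# Symbol rate = bit rate / bits per symbol (FEC/framing overhead ignored).
def symbol_rate(bit_rate_gbps, bits_per_symbol):
    return bit_rate_gbps / bits_per_symbol

# Illustrative 400 Gb/s payload under the three formats in the table
for fmt, m in [("4QAM", 2), ("8QAM", 3), ("16QAM", 4)]:
    print(fmt, round(symbol_rate(400, m), 1), "GBd")
```

This is the capacity/reach trade-off in miniature: packing more bits per symbol lowers the required baud rate (or raises capacity at a fixed baud rate) but shrinks the spacing between constellation points, demanding a cleaner signal.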
Probabilistic Constellation Shaping
Probabilistic Constellation Shaping (PCS) is an advanced signal processing technique that optimizes high baud-rate transmission in 400G and 800G optical systems using probabilistic or geometric shaping.
- Probabilistic shaping adjusts the frequency of symbol use on a standard QAM grid to favor noise-resilient, low-energy points.
- Geometric shaping reconfigures the physical location of the constellation points to maximize the distance between symbols.
By favoring these noise-resilient symbols, the optical engine lowers the average signal power and minimizes the distortion caused by fiber non-linearities. This approach enhances OSNR tolerance and facilitates long-distance transmission across regional and long-haul spans that would otherwise require lower-capacity modulation. Consequently, PCS enables granular rate adaptation, allowing the hardware to maximize data throughput by tuning the information rate in small increments to match the specific link budget of a fiber span.
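A small sketch makes the probabilistic-shaping idea concrete. Assuming a standard 16QAM grid and a Maxwell-Boltzmann probability distribution over the points (a common choice for probabilistic shaping, with an illustrative shaping parameter), favoring the inner, low-energy points lowers the average power and yields a tunable, sub-integer information rate:

```python
import numpy as np

# Sketch of probabilistic shaping on a 16QAM grid: symbol probabilities
# follow p(x) ~ exp(-lam * |x|^2), so low-energy inner points are sent
# more often. lam is an illustrative shaping knob, not a standard value.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
grid = np.array([i + 1j * q for i in levels for q in levels])  # 16 points

def shaped_stats(lam):
    p = np.exp(-lam * np.abs(grid) ** 2)
    p /= p.sum()
    avg_power = np.sum(p * np.abs(grid) ** 2)
    entropy = -np.sum(p * np.log2(p))     # information rate, bits/symbol
    return avg_power, entropy

uniform_power, uniform_bits = shaped_stats(0.0)   # plain 16QAM: 10.0, 4.0 bits
shaped_power, shaped_bits = shaped_stats(0.05)

assert shaped_power < uniform_power   # lower average power -> less non-linearity
assert shaped_bits < uniform_bits     # rate trades smoothly against robustness
```

Because the entropy varies continuously with the shaping parameter, the information rate can be tuned in fine increments, which is the mechanism behind the granular rate adaptation described above.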
Forward Error Correction
Coherent signals use complex modulation to pack more bits into every symbol, which makes them more susceptible to noise. Forward Error Correction (FEC) ensures that these high-speed signals remain reliable.
FEC is a digital signal processing method used to detect and correct bit errors without the need for retransmission. The transmitter adds redundant data (overhead) to the signal before it is sent. The receiver then uses this extra information to mathematically reconstruct any bits that were corrupted during transit.
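The add-redundancy-then-reconstruct principle can be demonstrated with the classic Hamming(7,4) code, a deliberately simple example (the FECs used in coherent optics, covered below, are far stronger, but the mechanism is the same):

```python
import numpy as np

# Toy FEC: Hamming(7,4). The transmitter adds 3 parity bits to every 4
# data bits; the receiver can then locate and flip any single corrupted
# bit without retransmission.
G = np.array([[1,0,0,0,1,1,0],    # generator: codeword = data @ G (mod 2)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],    # parity-check: H @ codeword = 0 (mod 2)
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    """Tx side: append redundant parity bits."""
    return (np.array(data4) @ G) % 2

def correct(rx7):
    """Rx side: use the redundancy to find and fix a single bit error."""
    syndrome = (H @ rx7) % 2
    if syndrome.any():                 # non-zero syndrome -> an error occurred
        # each bit position produces a unique syndrome (its column of H)
        pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        rx7 = rx7.copy()
        rx7[pos] ^= 1                  # flip the corrupted bit
    return rx7[:4]                     # first 4 bits are the original data

tx = encode([1, 0, 1, 1])
rx = tx.copy()
rx[2] ^= 1                             # channel corrupts one bit in transit
assert np.array_equal(correct(rx), [1, 0, 1, 1])   # data reconstructed
```

The 3 extra bits per 4 data bits are the "overhead" referred to above: the receiver trades a small amount of capacity for the ability to repair corruption without a round trip.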
FEC Algorithms
FEC is implemented through FEC algorithms: specific mathematical coding schemes that detect and correct errors in transmitted data without requiring retransmission. The FEC process involves two steps:
- Encoding (at the Tx or Transmitter)—The FEC algorithm processes the original data and adds redundant bits or parity bits based on a specific mathematical rule. The encoded data is then transmitted over the communication channel.
- Decoding (at the Rx or Receiver)—The receiver uses the FEC algorithm to analyze the received data, including the redundant bits. If errors are detected, the algorithm attempts to correct them based on the redundancy.
The error correction capability of FEC depends on the specific algorithm used and the amount of redundancy added. Some of the commonly used FEC algorithms include:
- Reed-Solomon (RS) FEC
- Soft-Decision FEC (SD-FEC)
- Low-Density Parity-Check (LDPC) codes
- Bose-Chaudhuri-Hocquenghem (BCH) codes
- Concatenated FEC (CFEC)
The choice of FEC algorithm depends on the specific requirements of the communication system:
- Data rate—High-speed systems require more efficient algorithms, such as LDPC or turbo codes.
- Error characteristics—Burst errors are better handled by block codes such as Reed-Solomon.
- Latency—Real-time applications such as video streaming require low-latency algorithms.
- Power and complexity—Systems with limited computational resources may use simpler codes like Hamming or BCH.
Optical Signal-to-Noise Ratio Tolerance
Optical Signal-to-Noise Ratio (OSNR) is the ratio between the power of the light signal and the noise floor of the optical link. OSNR tolerance is the minimum OSNR at which a receiver can still correct errors. A stronger FEC (like OFEC) has a lower OSNR threshold and can retrieve a clean signal from a much noisier environment. The improvement in OSNR achieved using FEC is measured as Net Coding Gain (NCG).
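A small worked example ties these terms together. All the power levels and OSNR thresholds below are illustrative assumptions, not values from any datasheet:

```python
import math

# Illustrative numbers only: OSNR is quoted in dB, and NCG is (roughly)
# the dB reduction in required OSNR that FEC provides at a target
# post-FEC bit error rate.
def to_db(ratio):
    return 10 * math.log10(ratio)

signal_mw, noise_mw = 100.0, 0.5          # assumed signal and noise powers
link_osnr_db = to_db(signal_mw / noise_mw)        # ~23 dB on this link

required_without_fec_db = 24.0            # assumed receiver thresholds
required_with_fec_db = 13.0
ncg_db = required_without_fec_db - required_with_fec_db   # 11 dB of gain

# Without FEC this link would fail (23 dB < 24 dB required); with FEC the
# threshold drops well below the available OSNR, so the link closes.
assert required_with_fec_db < link_osnr_db < required_without_fec_db
```

In practice the NCG of a coherent FEC is what converts directly into extra reach: every dB of lower OSNR threshold tolerates more amplifier noise accumulated along the span.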
FEC Types
The pluggable coherent optics support two types of FEC methods:
- Concatenated forward error correction (CFEC): CFEC is the FEC method used in ZR optics. It concatenates two codes: an inner soft-decision Hamming (128,119) code and an outer hard-decision staircase BCH (255,239) code. The concatenation delivers the optical performance specified in the OIF 400ZR implementation agreement. CFEC decoding is fast, resulting in sub-microsecond latency.
- Open forward error correction (OFEC): OFEC is the FEC method used in OpenZR+ optics. It consists of a Turbo Product Code (TPC) built on an extended BCH (256,239) code with 3-iteration soft-decision decoding. The improved performance of OFEC compared to CFEC is critical to achieving the higher optical performance specified in the OpenZR+ MSA. With soft-decision iterative decoding, the DSP checks the data multiple times to ensure accuracy, which significantly increases the processing time (latency).
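The overhead each code contributes follows from the (n, k) parameters given above: a block code's rate is k/n, and for a concatenation the component rates multiply. A quick sketch for the CFEC components:

```python
# Code rate of a concatenation = product of the component (k/n) rates,
# using the CFEC component codes named above.
inner_rate = 119 / 128          # inner soft-decision Hamming (128,119)
outer_rate = 239 / 255          # outer hard-decision staircase BCH (255,239)
cfec_rate = inner_rate * outer_rate

overhead_pct = (1 / cfec_rate - 1) * 100

assert 0.87 < cfec_rate < 0.88    # ~87% of transmitted bits carry payload
assert 14 < overhead_pct < 15     # ~14.8% redundancy added by the code
```

This redundancy is the price of the coding gain: the line rate must run roughly 15% faster than the payload rate so the receiver has enough parity information to correct errors.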