
Application Trends of DSP in Data Centers

Release time: 2021-10-21

100G optics are already deployed at scale in data centers, and the next generation, 400G, began gradual commercial deployment in 2020. For 400G applications, the biggest change is the introduction of a new modulation format, PAM-4, which doubles the transmission rate at the same baud rate (i.e., the same device bandwidth). For example, DR4 for transmission below 500 meters requires a single-wavelength rate of 100 Gbps. To reach such rates, data centers have begun to introduce DSP chips based on digital signal processing into optical transceivers, replacing the clock recovery (CDR) chips used in the past, in order to solve the sensitivity problems caused by the insufficient bandwidth of optical devices. Can DSP become the broad solution for future data center applications that the industry expects? To answer this question, we must understand what problems DSP solves, its architecture, and the future trends in its cost and power consumption.
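As a minimal illustration of why PAM-4 doubles the bit rate at the same baud rate, the sketch below maps bit pairs onto four Gray-coded amplitude levels. The 50 GBd symbol rate is an illustrative assumption, not a figure from this article:

```python
# Minimal sketch: PAM-4 carries 2 bits per symbol, so at the same baud rate
# (and hence the same device bandwidth) it doubles the bit rate of NRZ.
import numpy as np

baud_rate = 50e9  # assumed example symbol rate (symbols per second)

bits = np.random.randint(0, 2, size=20)

# Gray-coded PAM-4 mapping: each pair of bits -> one of four amplitude levels.
gray_map = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
symbols = [gray_map[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(f"NRZ  line rate: {baud_rate * 1 / 1e9:.0f} Gbps")   # 1 bit/symbol
print(f"PAM4 line rate: {baud_rate * 2 / 1e9:.0f} Gbps")   # 2 bits/symbol
print("PAM-4 symbols:", symbols)
```

Gray coding is used so that a decision error between adjacent levels corrupts only one bit, which matters because the four-level eye has one third of the NRZ eye opening.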

Problems Solved by DSP

In physical-layer transmission, DSP was first applied in wireless communication, for three reasons. First, wireless spectrum is a scarce resource while demand for transmission rate keeps growing, so improving spectral efficiency is a fundamental requirement of wireless communication; this demands a DSP that supports various complex, high-efficiency modulation formats. Second, the transfer function of the wireless channel is very complicated, with multipath effects and, under high-speed movement, Doppler effects. Traditional analog compensation cannot meet the channel's compensation requirements, while DSP can use various mathematical models to compensate the channel transfer function well. Third, the SNR of the wireless channel is often relatively low, and error-correction codes are needed to improve receiver sensitivity.

In optical communication, DSP was first commercialized in long-haul coherent transmission systems at 100G and above, for reasons similar to wireless. First, in long-haul transmission the cost of optical fiber is very high, so operators inevitably demand higher spectral efficiency to carry more traffic on a single fiber; after WDM was deployed, adopting DSP-supported coherent technology became the inevitable choice. Second, a DSP chip can easily compensate the chromatic dispersion and the nonlinear effects introduced by the fiber and the transmit/receive devices, as well as the phase noise introduced by those devices, with no need to place dispersion-compensating fiber (DCF) in the link as in the past. Finally, because of fiber attenuation, long-haul links typically need an erbium-doped fiber amplifier (EDFA) roughly every 80 kilometers to reach transmission distances beyond 1000 kilometers. Each amplification stage adds noise and degrades the signal-to-noise ratio, so forward error correction (FEC) must be introduced to improve the receiver's sensitivity.
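To make the noise-accumulation point concrete, here is a rough sketch of how OSNR falls as EDFA spans accumulate. The span length and reach come from the text; the launch power, fiber attenuation, and noise figure are assumed textbook values, not figures from this article:

```python
# Illustrative sketch: ASE noise from cascaded EDFAs erodes OSNR on a
# long-haul link, which is what motivates FEC at the coherent receiver.
import math

span_length_km = 80        # one EDFA every 80 km, as in the text
total_distance_km = 1000   # target reach from the text
n_spans = math.ceil(total_distance_km / span_length_km)

# Assumed link parameters (typical values, not from the article):
p_launch_dbm = 0.0                   # per-channel launch power
span_loss_db = 0.2 * span_length_km  # ~0.2 dB/km fiber attenuation
nf_db = 5.0                          # EDFA noise figure

# Standard approximation for OSNR in a 0.1 nm reference bandwidth:
# OSNR ≈ 58 + P_launch - span_loss - NF - 10*log10(N_spans)
osnr_db = 58 + p_launch_dbm - span_loss_db - nf_db - 10 * math.log10(n_spans)
print(f"{n_spans} spans -> estimated OSNR ≈ {osnr_db:.1f} dB")
# Every doubling of the span count costs ~3 dB of OSNR; FEC coding gain
# (several dB, more for soft-decision codes) buys that margin back.
```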

In summary, DSP solves three problems: 1. it supports high-order modulation formats to improve spectral efficiency; 2. it compensates device and channel transmission effects; 3. it addresses the signal-to-noise ratio problem through FEC. Whether data centers have similar requirements therefore becomes an important reference for judging whether DSP should be introduced there.

On spectral efficiency: do data centers need to improve it? The answer is yes, but the reason differs from the scarce spectrum of wireless and the scarce fiber of transport networks. In data centers, the driver is the limited bandwidth of electrical/optical devices and the limited number of WDM/parallel channels (constrained by the package volume of optical transceivers), so single-wavelength rates must increase to meet the needs of future applications at 400G and beyond. For single-wavelength applications at 100G and above, today's electrical driver chips and optical devices cannot reach bandwidths above 50 GHz, which is why a digital signal processing unit is introduced at the transmitter.

In data center applications the digital signal processing unit is still relatively simple. For 100G PAM-4, for example, the transmitter mainly performs spectral shaping of the transmitted signal, nonlinear compensation, and optional FEC encoding, while the receiver adaptively filters and compensates the signal and performs clock recovery in the digital domain (which requires an independent external crystal oscillator). Within the digital signal processing unit, an FIR filter is generally used to compensate the signal; the number of FIR taps and the design of the decision function directly determine the performance and power consumption of the DSP. It should be pointed out that DSP in optical communications faces a massive parallel-computing problem, mainly because of the huge gap between the ADC sampling rate (tens or even hundreds of GS/s) and the working frequency of the digital circuitry (a few hundred MHz). To support an ADC sampling at 100 GS/s, the digital circuit must convert the serial 100 GS/s sample stream into hundreds of parallel digital signals before processing; adding even one tap to the FIR filter in the serial view therefore adds hundreds of parallel multipliers in hardware. How to balance performance against power consumption in the digital signal processing unit is thus a decisive factor in the quality of a DSP design.

In addition, optical transceivers in data centers must interoperate. In practice, the transmission performance of a link depends on the combined performance of the Tx DSP plus its analog/optical devices and the Rx DSP plus its analog/optical devices, so designing reasonable standards that correctly evaluate Tx and Rx performance separately is also difficult. When the DSP implements physical-layer FEC, synchronizing the FEC between transmitting and receiving transceivers further increases testing difficulty in data centers. This is why, to date, coherent transmission systems only interoperate between a single manufacturer's own equipment and do not require interoperation between different manufacturers. For PAM-4, IEEE 802.3 proposes TDECQ as the performance evaluation method.
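The sketch below illustrates the two ideas from the preceding paragraphs: a feed-forward FIR equalizer slicing PAM-4 samples, and the serial-to-parallel conversion forced by the ADC-versus-logic clock gap. All sample values and tap weights are illustrative assumptions; a real design would adapt the taps (e.g., with LMS) rather than fix them:

```python
# Minimal sketch of FIR equalization plus decision slicing for PAM-4,
# and of the parallelization factor a 100 GS/s ADC imposes on the logic.
import numpy as np

# --- FIR equalization (assumed, fixed taps; real designs adapt them) ----
rx_samples = np.array([-3.0, -0.8, 1.1, 2.9, 0.9, -1.2, -2.8, 3.1])
taps = np.array([-0.1, 1.2, -0.1])   # 3-tap example; real filters use more
equalized = np.convolve(rx_samples, taps, mode="same")

# Decision function: slice each equalized sample to the nearest PAM-4 level.
levels = np.array([-3, -1, 1, 3])
decisions = levels[np.abs(equalized[:, None] - levels).argmin(axis=1)]
print("decisions:", decisions)

# --- Serial-to-parallel conversion --------------------------------------
adc_rate = 100e9      # 100 GS/s ADC, as in the text
logic_clock = 500e6   # ~several hundred MHz digital clock, as in the text
lanes = int(adc_rate / logic_clock)
print(f"parallel lanes needed: {lanes}")  # -> 200 samples per logic cycle
# One extra serial tap thus costs ~200 extra multiply-accumulate units in
# silicon, which is why tap count directly drives DSP power consumption.
```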

On power consumption: because DSP introduces DAC/ADC converters and algorithmic processing, its power consumption is inevitably higher than that of traditional analog CDR chips, and the ways to reduce it are limited, relying mainly on improvements in fabrication process. For example, upgrading from the current 16nm to 7nm technology can reduce power consumption by about 65%. A 400G QSFP/QSFP-DD design based on a 16nm DSP solution currently consumes about 12 W, which is a huge challenge for the thermal design of the transceiver itself and of the switch front panel. The 400G DSP power problem may therefore be solved by 7nm technology.
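As a quick arithmetic check on the figures quoted above, the sketch below applies the quoted 65% process saving to the 12 W module figure. It assumes, purely for illustration, that the DSP dominates the module's power budget; the real split between DSP, driver/TIA, and optics varies by design:

```python
# Rough projection from the article's figures (assumed DSP-dominated budget).
power_16nm_w = 12.0        # 400G QSFP/QSFP-DD module, 16nm DSP solution
process_reduction = 0.65   # quoted power saving from a 16nm -> 7nm shrink

power_7nm_w = power_16nm_w * (1 - process_reduction)
print(f"projected module power at 7nm: ~{power_7nm_w:.1f} W")  # ~4.2 W
```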
Price is always a concern for data centers. Unlike traditional optical devices, DSP chips can expect larger cost-reduction headroom under mass deployment, thanks to mature semiconductor processes. Another advantage of DSP in future data center applications is flexibility: by adjusting the DSP configuration, the same optical device configuration can meet the requirements of different rates and scenarios.

