Optical Interconnect Solution for Cloud Data Center

Cloud data centers are the infrastructure of cloud computing networks, and the continuing spread of cloud computing services has spurred the construction of hyperscale data centers. Cloud infrastructure consists mainly of switches and servers, connected either by fiber optic cables with optical transceivers or by active optical cables (AOCs) and direct attach cables (DACs). The large scale of cloud data centers greatly increases both the number of optical transceivers used and the transmission distances required, which in turn raises the proportion of single-mode optical transceivers deployed.

In cloud data centers, explosive traffic growth is driving optical transceiver data rates to escalate at an accelerating pace: the step from 10G to 40G took 5 years, the step from 40G to 100G took 4 years, and the step from 100G to 400G may take only 3 years. All traffic leaving the data center must first pass through massive internal processing (especially the growing internal and outbound traffic of AI, VR/AR, UHD video, and so on). East-west traffic inside the data center is heavy, and the flat data center architecture keeps the 100G optical transceiver market growing at high speed.
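
To put the acceleration in concrete terms, the implied annual growth rate of port speed can be computed from the generation-to-generation timelines quoted above. The following minimal Python sketch uses only the figures in this paragraph:

    # Implied compound annual growth rate (CAGR) of port speed per
    # generation, using the transition times quoted above.
    transitions = [
        ("10G -> 40G", 10, 40, 5),      # 5 years
        ("40G -> 100G", 40, 100, 4),    # 4 years
        ("100G -> 400G", 100, 400, 3),  # projected 3 years
    ]

    for name, start, end, years in transitions:
        cagr = (end / start) ** (1 / years) - 1
        print(f"{name}: {cagr:.0%} per year over {years} years")
    # 10G -> 40G:   32% per year
    # 40G -> 100G:  26% per year
    # 100G -> 400G: 59% per year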

According to a third-party report, the number of hyperscale data centers worldwide was expected to exceed 500 by the end of 2019. By then, 83% of public cloud servers and 86% of public cloud workloads would be hosted in hyperscale data centers; the share of servers deployed in hyperscale facilities would rise from 21% to 47%, their share of processing capacity from 39% to 68%, and their share of traffic from 34% to 53%.


The main data flow of a traditional three-tier data center runs from top to bottom, that is, north-south, while the main data flow of a flat spine-leaf data center runs east-west.

Here is a data center optical interconnect application case using optical transceivers and AOCs. The network architecture of this cloud data center is divided into Spine Core, Edge Core, and ToR (Top of Rack) layers. 10G SFP+ AOCs connect the ToR access switches to the server NICs; 40G QSFP+ SR4 optical transceivers with MTP/MPO cables connect the ToR access switches to the Edge Core switches; and 100G QSFP28 CWDM4 optical transceivers with duplex LC cables connect the Edge Core switches to the Spine Core switches.
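
The transceiver counts implied by such an architecture scale directly with fabric size. The sketch below tallies cables and modules for a hypothetical fabric; the rack, server, and uplink counts are illustrative assumptions, not figures from this case:

    # Hypothetical cable/module tally for the Spine Core / Edge Core / ToR
    # design described above. All port counts are illustrative assumptions.
    RACKS = 40
    SERVERS_PER_RACK = 32        # one 10G SFP+ AOC per server NIC
    EDGE_UPLINKS_PER_TOR = 4     # 40G QSFP+ SR4 links, ToR -> Edge Core
    EDGE_SWITCHES = 8
    SPINE_UPLINKS_PER_EDGE = 16  # 100G QSFP28 CWDM4 links, Edge -> Spine

    aoc_10g = RACKS * SERVERS_PER_RACK                   # one AOC per link
    sr4_40g = RACKS * EDGE_UPLINKS_PER_TOR * 2           # two modules per link
    cwdm4_100g = EDGE_SWITCHES * SPINE_UPLINKS_PER_EDGE * 2

    print(f"10G SFP+ AOCs:             {aoc_10g}")       # 1280
    print(f"40G QSFP+ SR4 modules:     {sr4_40g}")       # 320
    print(f"100G QSFP28 CWDM4 modules: {cwdm4_100g}")    # 256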


The port bandwidth of cloud data centers is being upgraded from 10G to 25G, and then from 25G to 100G.


Upgrade Path         2008–2014             2013–2019              2017–2021    2019~
Data Center Campus   40G-LR4               40G-LR4 / 100G-CWDM4   100G-CWDM4   400G-FR4
Intra-Building       40G-eSR4 / 4x10G-SR   40G-eSR4               100G-SR4     400G-DR4
Intra-Rack           CAT6                  10G AOC                25G AOC      100G AOC
Server Data Rate     1G                    10G                    25G          100G



Compared with telecommunication networks, cloud data centers differ in traffic growth rate, network architecture, reliability requirements, and machine-room environment, so their demand for optical transceivers has the following characteristics: shorter iteration periods, higher speeds, higher density, lower power consumption, and consumption in mass volume.

§ Shorter iteration period. The rapid growth of data center traffic is accelerating the upgrade cycle of optical transceivers. The iteration period of data center hardware, including optical transceivers, is about 3 years, while the iteration period of telecommunication optical transceivers is usually 6-7 years or longer.

§ Higher speed. Because data center traffic grows explosively, optical transceiver technology iteration can barely keep up with demand, and almost all of the most advanced technologies are applied to data centers first. Demand from data centers for higher-speed optical transceivers is constant; the key question is whether the technology is mature.

§ Higher density. The core aim of higher density is to raise the transmission capacity of switches and server line cards, which in essence serves the same demand of rapidly growing traffic. At the same time, the higher the density, the fewer switches need to be deployed, saving machine-room resources (see the sketch after this list).

§ Lower power consumption. Data centers consume a great deal of power, so lower-consumption transceivers both save energy and ease heat dissipation. Because data center switch panels are fully populated with optical transceivers, an unsolved heat dissipation problem limits both the performance and the density of the optics.
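
The sketch below puts rough numbers on the density and power points above. The server count, switch radices, and per-module wattage are assumed illustrative values, not figures from the source:

    # Rough illustration of the density and power points above.
    # All figures (server count, radices, wattage) are assumed values.
    import math

    SERVERS = 1280
    WATTS_PER_MODULE = 3.5   # assumed typical 100G QSFP28 draw

    for ports_per_switch in (32, 64):
        switches = math.ceil(SERVERS / ports_per_switch)
        optics_kw = switches * ports_per_switch * WATTS_PER_MODULE / 1000
        print(f"{ports_per_switch}-port switches: {switches} units, "
              f"optics power {optics_kw:.1f} kW")
    # 32-port switches: 40 units, optics power 4.5 kW
    # 64-port switches: 20 units, optics power 4.5 kW
    # Doubling density halves the switch count while total optics power
    # stays the same, so per-module watts drive the overall budget.

    saving_kw = SERVERS * 0.5 / 1000   # a 0.5 W/module reduction, fleet-wide
    print(f"Saved by -0.5 W per module: {saving_kw:.2f} kW")  # 0.64 kW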

Coptolink's cloud data center solution includes optical transceivers and active optical cables for 10G/25G/40G/100G/200G/400G networks.

Optical Transceivers
Solution                  Applications                       Maximum Connection Distance
400G QSFP-DD SR8          400GE, 2x200GE                     100m
200G QSFP-DD SR8          2x100GE                            100m
200G QSFP-DD PSM8         2x100GE                            2km-10km
200G QSFP56 SR4           200GE                              100m
100G QSFP28 SR4           100GE, OTU4, 128GFC/4x32GFC        100m-300m
100G QSFP28 PSM4          100GE                              2km-10km
100G QSFP28 CWDM4         100GE                              2km-10km
100G QSFP28 CLR4          100GE                              2km-10km
100G QSFP28 4WDM-10       100GE                              10km
100G QSFP28 LR4           100GE, OTU4                        10km-20km
100G QSFP28 4WDM-40       100GE                              30km-40km
100G QSFP28 ER4 Lite      100GE, OTU4                        30km-40km
50G SFP56 SR              50GE                               100m
40G QSFP+ SR4             40GE, OTU3                         400m
40G QSFP+ PSM4            40GE, OTU3                         2km-10km
40G QSFP+ LR4             40GE, OTU3                         2km-10km
40G QSFP+ ER4             40GE, OTU3                         40km

Active Optical Cables
Solution                  Applications                       Maximum Connection Distance
400G QSFP-DD AOC          400GE, 2x200GE                     100m
200G QSFP-DD/QSFP56 AOC   2x100GE, 200GE                     100m
100G QSFP28 AOC           100GE, 128GFC/4x32GFC              100m-300m
50G SFP56 AOC             50GE                               100m
40G/56G QSFP AOC          40GE, 4x16GFC, 2x25GE, 2x32GFC     150m-300m
25G SFP28 AOC             25GE, 32GFC                        100m-300m
10G SFP+ AOC              10GE                               300m
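
To show how a module might be chosen from the tables above, here is a minimal selection helper. The entries are transcribed from the tables (a subset, for brevity), while the function itself is an illustrative sketch, not part of the solution:

    # Minimal module-selection sketch over (a subset of) the tables above.
    # Reaches are each module's maximum connection distance in meters.
    MODULES = [
        ("400G QSFP-DD SR8", 400, 100),
        ("100G QSFP28 SR4", 100, 300),
        ("100G QSFP28 CWDM4", 100, 10_000),
        ("100G QSFP28 LR4", 100, 20_000),
        ("40G QSFP+ SR4", 40, 400),
        ("10G SFP+ AOC", 10, 300),
    ]

    def pick_module(rate_g: int, distance_m: int):
        """Return the shortest-reach module meeting the rate and distance."""
        fits = [m for m in MODULES if m[1] == rate_g and m[2] >= distance_m]
        return min(fits, key=lambda m: m[2])[0] if fits else None

    print(pick_module(100, 2_000))  # -> 100G QSFP28 CWDM4
    print(pick_module(40, 150))     # -> 40G QSFP+ SR4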


