By Casimer DeCusatis
Is it possible to think of something without knowing a word to describe it? According to some studies, humans understand the concept of numbers and counting before knowing the words for those numbers. Perhaps that means this month’s blog will be easier for everyone to understand, since I’ve found it’s hard to describe the emerging market for 40 – 100 Gbit/s optical links without using an awful lot of numbers (I needed to use 65 numbers in this post alone, including that last one).
Although most of the Ethernet market is still running at around 1 – 10 Gbit/s, there is a small but growing interest (currently about 10% of the market) in significantly higher data rates (40 – 100 Gbit/s). Early applications for higher data rate Ethernet are focused on high performance computing and high traffic areas such as the network core, or on disaster recovery in the WAN. Historically, however, a new incremental Ethernet data rate begins to see high volume deployments within about 5-6 years of standardization; in other words, given the approval date of IEEE 802.3ba, 40G should be going mainstream any time now.
Parallel optical links are promising
While there are serial optical links available at these data rates (particularly for multi-data center connectivity or telecommunication systems), parallel optical links are a promising candidate for cost effective, higher data rate links within the data center. At shorter distances, such as within a data center rack or between adjacent racks (perhaps up to 100 meters), parallel optics needs to be cost competitive with copper links. Thus, rather than using single-mode fiber, some version of laser optimized multi-mode fiber is often employed. Since an active optical cable allows for tradeoffs between the transmitter, receiver, and fiber parameters, in principle we can reduce manufacturing costs by improving yield (for example, pairing slightly weaker laser transmitters with slightly more sensitive receivers). We can also leverage existing 10 Gbit/s serial link technology and volumes by combining either 4 links to create a 40G channel, or 10 links to create a 100G channel.
The connector most often used for multi-fiber links is the so-called MPO (multi-fiber push-on connector), also known by its most common vendor-branded version, the MTP connector (a registered trademark of US Conec corporation). For various historical reasons, these connectors were standardized in rows of 12 fibers each; this isn’t a good match for data communication systems, which are typically based on 8 or 10 bits of data per byte, or 16 to 20 bits for a 2-byte wide interface. The 40G interface is based on parallel striping of 10 Gbit/s serial channels. If we hold up a standard 1 x 12 fiber MPO connector and look back into the cable, the 4 leftmost fibers are used to transmit data, the middle 4 fibers are left unused, and the 4 rightmost fibers are used to receive data. Thus, we have a bidirectional interface with 4 x 10G in each direction. We can do something similar to concatenate 10 Gbit/s links into a duplex 100 Gbit/s channel, but we need an MPO connector with 2 rows of 12 fibers each. We leave the outermost fibers on either end of the rows vacant, and use the remaining 10 fibers in the upper row to transmit data and the remaining 10 fibers in the lower row to receive data.
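The fiber-to-lane assignments described above can be sketched in a few lines of code. This is an illustrative model of the layout as described in this post (positions numbered 1-12 left to right, looking into the cable), not a substitute for the connector pinout tables in the actual standards:

```python
# Hypothetical sketch of the MPO fiber-to-lane mappings described above.

def mpo12_40g_lanes():
    """40G on a 1x12 MPO: 4 leftmost fibers transmit,
    middle 4 unused, 4 rightmost receive."""
    return {pos: ("Tx" if pos <= 4 else "unused" if pos <= 8 else "Rx")
            for pos in range(1, 13)}

def mpo24_100g_lanes():
    """100G on a 2x12 MPO: the outermost fiber on each end of both
    rows is vacant; the middle 10 fibers of the upper row transmit
    and the middle 10 of the lower row receive."""
    lanes = {}
    for row, direction in (("upper", "Tx"), ("lower", "Rx")):
        for pos in range(1, 13):
            lanes[(row, pos)] = direction if 2 <= pos <= 11 else "unused"
    return lanes
```

Counting the entries confirms the duplex arithmetic: 4 x 10G in each direction for the 40G interface, and 10 x 10G in each direction for the 100G interface.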
Since we are leveraging 10 Gbit/s building block technologies, both the 40G and 100G parallel interfaces described above should span about 150 meters using OM4 grade multimode fiber (or perhaps 100 meters using a lower grade multimode fiber such as OM3). As data rates increase, link loss budgets and insertion loss typically decrease, so if you have plans to re-connectorize your installed fiber cable plant, be sure the infrastructure is adequate for this application. You’ll also need adapters to fan out the MPO connection into simplex connections which are compatible with existing test and measurement equipment, to verify link performance. Researchers have already begun to contemplate how these parallel interfaces might extend to densely aligned multimode optical waveguides used for computer backplanes. There have been many papers at OFC on related topics, including a recent article on OSA Optics InfoBase which studies crosstalk due to mode coupling in parallel optical waveguides.
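To see why a re-connectorized cable plant can run out of headroom at higher data rates, here is a minimal loss-budget sketch. All of the numeric values below are illustrative assumptions chosen for the example, not figures from any standard; check your own link budget against the relevant specification:

```python
# Minimal link-budget sketch with illustrative (not standardized) numbers,
# showing how each added MPO connection eats into a shrinking loss budget.

def remaining_margin(budget_db, fiber_km, fiber_loss_db_per_km,
                     connector_losses_db):
    """Subtract fiber attenuation and each connector's insertion loss
    from the total channel loss budget; a positive result means
    the link still closes."""
    return budget_db - fiber_km * fiber_loss_db_per_km - sum(connector_losses_db)

# Example (assumed values): a 1.5 dB channel budget over 150 m of
# multimode fiber at 3.0 dB/km, with two MPO connections at 0.35 dB each.
margin = remaining_margin(1.5, 0.150, 3.0, [0.35, 0.35])  # 0.35 dB left
```

With numbers like these, adding even one more 0.35 dB connection would consume the remaining margin, which is why the number of mated connections in the channel matters so much at 40G and 100G.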
The need to extend optics to longer distances inside the data center in a cost effective manner was also recently discussed at the OSA Executive Forum, held each year in conjunction with OFC. This event features C-level executives in a panel discussion concerning the latest issues facing the industry in an informal, uncensored setting. The Director of Network Engineering at Facebook recently told this panel that his definition of a data center can now span 4 buildings on 10-20 acres of land; this requires an optical interface which can span at least 2 km distances over in-plant cables. The use of low cost VCSEL based laser transmitters is appealing for this application, but VCSELs require parallel fibers as discussed earlier. It’s not clear whether the lower laser cost would be offset by higher fiber costs. Conversely, a serial optical link using only 2 fibers for transmit and receive paths would require a more expensive DFB type laser. The distance-bandwidth product optimization for warehouse-scale data centers remains a hot topic for discussion at this coming year’s OFC meeting.
Serial Optics – Not An Easy Solution
Serial optics aren’t an easy solution to this problem, since there are so many competing link standards. For example, the IEEE has proposed a 40 Gbit/s Ethernet link with an actual line rate of 41.25 Gbit/s (the technical requirements for 40G serial optics are available online (pdf)). However, the ITU has defined several options near 40G as well, including ITU-T G.707 (39.81 Gbit/s), ITU-T G.709 (43.018 Gbit/s), and ITU-T Sup. 43 (44.58 Gbit/s). Many serial optical transceiver vendors have developed multi-rate modules capable of spanning the range 39-45 Gbit/s in order to provide a common part for these and other applications (including Ethernet, SONET/SDH, and OTN).
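The 41.25 Gbit/s figure isn’t arbitrary: 40 Gbit/s Ethernet uses 64b/66b line coding, which carries every 64 payload bits in a 66-bit code block. A quick sketch of the arithmetic:

```python
# Where the 41.25 Gbit/s line rate comes from: 64b/66b coding wraps
# every 64 payload bits in a 66-bit block, a 66/64 overhead factor.

def line_rate_gbps(payload_gbps, coded_bits=66, payload_bits=64):
    """Line rate after applying block-coding overhead."""
    return payload_gbps * coded_bits / payload_bits

ethernet_40g = line_rate_gbps(40)  # 41.25
ethernet_10g = line_rate_gbps(10)  # 10.3125, the familiar 10G serial rate
```

The ITU rates quoted above follow from different framing overheads (SDH and OTN), which is exactly why multi-rate modules need to span the whole 39-45 Gbit/s window.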
As the industry moves towards 100 Gbit/s adoption, we can expect similar issues related to multi-rate support and fiber vs transceiver cost tradeoffs. To get a feel for what early adopters are using for different applications, mark your calendar now for OFC 2015. If you have your own ideas about how these interfaces should be deployed, don’t forget to drop me a line on Twitter (@Dr_Casimer) so we can continue the discussion; maybe I’ll use your proposals in a future blog.
Posted: 25 June 2014