22 January 2014 10:59:55 AM
Optoelectronics is always a hot topic at OFC, and this year promises to be no exception. Where else can all the major suppliers, consumers, and start-up companies in the optical transceiver business come together to debate the next big thing in optics? As you listen to the latest industry roadmaps from the Optoelectronics Industry Association and the National Photonics Initiative, take a moment to think about the impact that high data rate transceivers and cable assemblies are having on other industry mega-trends like cloud computing. By any measure, this is a huge market opportunity; some analysts claim that public cloud is a $130 B market, with a 17% compound annual growth rate. Within the past year, over 300 new cloud services were launched, and nearly all the major telecommunication service providers worldwide now offer some form of cloud service. Within the data center, over 70% of large enterprises have deployed some form of private cloud. So any optical technology that impacts the design of a cloud data center is bound to attract attention and controversy.
Consider how optical link design fundamentally affects the layout of modern cloud data centers. Connections between servers and their first-hop switches are currently in the 10 Gbit/s to 40 Gbit/s range, with inter-switch links of 40 Gbit/s and higher becoming commonplace. At the same time, many cloud computing data centers are implementing architectures with switches placed strategically among their server racks, either at the end of row (EOR) or middle of row (MOR). This provides a high degree of flexibility and convenience, since the first-hop switch can interconnect an entire row of server racks; as each server is either leased by a new cloud tenant or returned from an old tenant, there is only one switch to reconfigure.
However, in recent years this architecture has run into a fundamental limit: the bandwidth-distance product of multi-mode optical fiber. Shorter, first-hop distances within a data center row traditionally use multi-mode fiber due to its lower cost. To accommodate EOR and MOR architectures, switch links need to span first-hop distances between one hundred and several hundred meters. This wasn't a problem when the predominant link data rate was 1 Gbit/s, since industry-standard links would reach up to 550 meters at that speed. As data rates increase to 10–40 Gbit/s, the maximum achievable distance shrinks to around 300 meters, and sometimes much less depending on the quality of the fiber and the number of connections or patch panels. Furthermore, some companies have begun to introduce 100 Gbit/s links, which reduce the working distance even further, to around 100 m or less. Thus, very high data rate multi-mode links can't necessarily span the distances required by EOR or MOR cloud data center architectures.
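The reach numbers above follow roughly from the bandwidth-distance product of the fiber: as the data rate goes up, the usable distance comes down in inverse proportion, until a standards-defined ceiling applies at low rates. A back-of-the-envelope sketch, with a bandwidth-distance product chosen purely for illustration so that the model reproduces the figures quoted above (550 m at 1 Gbit/s, ~300 m at 10 Gbit/s):

```python
# Rough reach model for multi-mode fiber links.
# Illustrative assumption: reach ~ (bandwidth-distance product) / (data rate),
# capped at the maximum distance the low-speed standard supports.
# Real link budgets (dispersion penalties, connectors, patch panels)
# reduce these figures further.

def estimate_reach_m(rate_gbps, bdp_gbps_m=3000.0, cap_m=550.0):
    """Approximate reach in meters for a given per-lane data rate in Gbit/s."""
    return min(bdp_gbps_m / rate_gbps, cap_m)

for rate in (1, 10, 25, 40):
    print(f"{rate:>2} Gbit/s -> ~{estimate_reach_m(rate):.0f} m")
```

With these illustrative parameters, 1 Gbit/s hits the 550 m cap, 10 Gbit/s lands near 300 m, and higher rates fall well short of the several hundred meters an EOR/MOR row can require.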
The solution to this problem is the subject of some debate. Most agree that re-architecting the cloud data center isn't practical, and that the benefits of a modular EOR/MOR design should be preserved. Simply converting to single-mode fiber and transceivers would solve the reach problem, but at a relatively high cost. Both the installed fiber and the transceivers are less expensive for multi-mode links than for single-mode links, but the big difference is in the transceiver cost: a single-mode transceiver can cost 2–3 times as much as a multi-mode transceiver at a comparable data rate. Combined with the inconvenience of re-cabling the data center, this has left many companies searching for alternatives.
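To see why the transceiver dominates the economics, consider the total cost of one duplex link: two transceivers plus the installed fiber. All dollar figures below are hypothetical, chosen only to reflect the ratios in the article (a single-mode transceiver at 2–3x the multi-mode price, with fiber a smaller factor):

```python
# Illustrative link-cost comparison; every price here is a made-up
# placeholder, not a real market quote.

def link_cost(transceiver_cost, fiber_cost_per_m, length_m):
    """Total cost of one duplex link: two transceivers plus installed fiber."""
    return 2 * transceiver_cost + fiber_cost_per_m * length_m

mm = link_cost(transceiver_cost=400,  fiber_cost_per_m=1.0, length_m=150)  # multi-mode
sm = link_cost(transceiver_cost=1000, fiber_cost_per_m=1.5, length_m=150)  # single-mode

print(f"multi-mode:  ${mm:,.0f}")   # $950
print(f"single-mode: ${sm:,.0f}")   # $2,225
```

Even though single-mode fiber itself is only modestly more expensive to install, the 2.5x transceiver premium in this sketch more than doubles the per-link cost, which is why the industry keeps looking for ways to stretch multi-mode reach or cut single-mode transceiver prices.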
Many companies are working on this problem, and several novel approaches to running higher data rates over multi-mode fiber were announced within the past year.
In November 2013, as part of its Application Centric Infrastructure (ACI) announcement, Cisco introduced a new proprietary bi-directional optical link that reuses installed multi-mode fiber while enabling higher data rates without reducing the link distance. Re-using the existing multi-mode fiber plant offers potential savings of thousands of dollars per link.
Another solution, announced the same week by Arista, is the “40G LRL4”, a low-cost QSFP (quad small form-factor pluggable) transceiver that is interoperable with standardized 40G LR4 links but supports distances up to 1 km on single-mode fiber. This can apparently be realized at a cost point competitive with multi-mode links such as the 40G XR4 and XSR4. Furthermore, the 40G LRL4 link uses only a pair of single-mode fibers, rather than a multi-mode fiber ribbon with up to 8 fibers as in most conventional QSFP designs. While this approach requires conversion to single-mode fiber, it offers a significant potential cost reduction in the optical transceiver, historically the most expensive component in a data center optical link.
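The fiber-count difference adds up quickly across a data center row. A parallel 4-lane QSFP design (such as 40G SR4) uses 8 fibers per link (4 transmit + 4 receive), while a duplex WDM design like the LR4/LRL4 family multiplexes four wavelengths onto a single fiber pair. A quick tally for a hypothetical row of 100 links:

```python
# Fiber-count arithmetic for a row of links (illustrative).
# Parallel 4-lane QSFP: 8 fibers per link (4 tx + 4 rx ribbon).
# Duplex WDM (LR4/LRL4-style): 2 fibers per link (one pair).

links = 100
parallel_fibers = 8 * links
duplex_fibers = 2 * links
print(f"parallel: {parallel_fibers} fibers, duplex: {duplex_fibers} fibers")
# parallel: 800 fibers, duplex: 200 fibers
```

A 4x reduction in fiber count also means fewer ribbon cables, smaller patch panels, and simpler cable management, which factors into the total cost of ownership alongside the transceiver price.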
Since these products were announced so recently, it remains to be seen whether any of them will capture a significant share of the data center market (after all, volume tends to drive the cost of an optical transceiver at least as much as new technology does). Although the fundamental bandwidth-distance product of optical links is an important consideration, many other factors besides transceiver and cable design matter to data center networks, including port density and energy consumption.
Many workshops at OFC will be devoted to optoelectronics packaging issues, including the types of fiber used for connectivity and the design of future low cost transceivers. What are the right tradeoffs between reliability, passive vs active component alignment, and integration with CMOS or other technologies? You can expect a lively debate as OFC tries to chart a course to cost-effective 100 Gbit/s links and beyond; make sure your voice is heard and join the discussion this March in San Francisco. Or, if you simply can’t wait, drop me a line on Twitter (@Dr_Casimer).