By Casimer DeCusatis
One of the hot topics at this year’s OFC conference is the current state of the optical transceiver industry. Over the past decade or so, the cost of high-speed optical transceivers has fallen significantly, dropping below $1 per gigabit. Despite this, optics has not yet displaced copper cables within the data center, and many servers still favor copper over optical interconnects (even though most data center switches have already migrated to fiber-based interfaces). The state of optics for server interconnects, and the push for solutions under $0.25 per gigabit, is the subject of the always lively rump session at this year’s OFC conference.
The single largest factor affecting optical transceiver cost is volume.
While bleeding-edge technologies like Terabit Ethernet may require the development of technically challenging hardware, new alignment and test methods, and improved quality control, a higher-data-rate transceiver will inevitably achieve lower cost once manufacturing volumes increase. Likewise, some have argued that active optical cables should be inherently less expensive than separate transceiver/cable combinations: they eliminate the technical issues posed by a disconnectable interface and significantly reduce testing costs by allowing the manufacturer to optimize the pairing of transceivers and cables end-to-end along the entire link. Yet solving these technical problems has not allowed active optical cables to displace separate transceivers so far. It seems that low cost comes from high volumes, and the difficulty of the underlying technology is a secondary factor at best.
At first glance, this shouldn’t be an obstacle for the server market, which offers high volume potential if most of the existing copper interconnects on servers could be converted to optical interfaces.
The cost of data center optics might then reach parity with copper links (at the elusive $0.25 per gigabit level), enabling a disruptive change in data center network architectures. However, these volumes are achievable only with a standardized optical form factor, and the industry currently isn’t moving in that direction. The low cost of 10 Gbit/s optics was largely made possible by standardizing on a few form factors for the SR and LR reach options.
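To make the cost metric used throughout this discussion concrete, here is a minimal sketch of the cost-per-gigabit calculation. All prices and data rates in it are hypothetical round numbers chosen for illustration, not quoted market figures:

```python
def cost_per_gigabit(transceiver_price_usd: float, data_rate_gbps: float) -> float:
    """Return transceiver cost in dollars per Gbit/s of link capacity."""
    return transceiver_price_usd / data_rate_gbps

# A hypothetical 100 Gbit/s transceiver priced at $100 sits right at the
# $1/gigabit level the industry has been approaching:
print(cost_per_gigabit(100.0, 100.0))  # 1.0

# Reaching the $0.25/gigabit parity target with copper would mean the same
# hypothetical 100 Gbit/s part selling for about $25:
print(cost_per_gigabit(25.0, 100.0))   # 0.25
```

The sketch also shows why higher data rates help: at a fixed package price, quadrupling the data rate quarters the cost per gigabit, provided manufacturing volumes materialize at the new rate.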
Currently, an emphasis on optimizing application performance is fragmenting the optics market by proliferating a wide range of different form factors, specifications, and multi-source agreements. Combined with technical issues such as achieving lower power dissipation for packages with increasing port density, this trend has allowed copper interconnects to continue as the preferred form of server interconnect much longer than some had predicted.
Interestingly, cost parity with copper may not be the limiting factor for optical links to displace copper cables.
The optics used in data center switch links, for example, are still having trouble reaching the $1 per gigabit level, and yet many switches have adopted optical interfaces almost exclusively. This may be due to the many benefits of optical links in the data center network backbone, particularly extended reach compared with copper at the same bandwidth. Yet this example demonstrates that in some cases, other application requirements take precedence over cost per gigabit. OFC will look at both the technologies required to achieve very-low-cost optical links and the balance of economics and application requirements needed to bring optics to the server.
Data center architectures are another complicating factor in the drive to make server interconnects predominantly optical.
There are a wide variety of switch-to-server data center architectures, including end-of-row (which moves servers farther from the switches), top- or bottom-of-rack (which moves them closer together), and even multiple servers and switches on a single card. Depending on the reach required for server-to-switch links, copper cables might become obsolete in a few technology generations simply because optics presents the only viable way to solve the interconnect problem. If the industry were to adopt a common reference architecture, with well-defined reach and application requirements, optics might displace copper even without exact cost parity. Lacking such a reference design, the application requirements for communication links remain fragmented, and it’s difficult for optics to muster the volume required to achieve dramatically lower cost.
Finally, any discussion of optics volumes has to include the cloud service providers, who have emerged as some of the highest volume customers in the industry.
Cloud providers take a different approach than traditional telecom or data center operators; because of the sheer scale of their operations, they need the highest possible data rates as soon as possible, and sometimes can’t wait for high volumes to drive down costs. Yet cloud providers also insist on the lowest possible pricing for optical links, precisely because they need to buy so many of them for hyperscale converged infrastructure. It’s not clear whether most cloud providers would choose to pay a premium for higher-performance optics, or would prefer to stay at lower data rates for a while in order to save cost. The answer can have a profound impact on the optical transceiver market, and may affect how quickly new technologies such as silicon photonics are adopted. Further, many cloud providers cite power consumption as their most critical requirement, yet low-power multimode optical links based on VCSELs are often bypassed in favor of higher-bandwidth single-mode fiber at the cloud network core. Is power consumption a possible path for optics to displace copper in the long run?
These and other issues will determine the course of the high speed optics industry in the years to come, and you won’t find a better place to discuss all of these issues than OFC. Drop me a line on Twitter (@Dr_Casimer) and let me know how much you’d pay for a gigabit of bandwidth, and why; I’ll consider using the best answers in a future column.
Posted: 1 March 2017