By Casimer DeCusatis
OFC provides a venue to discuss breakthrough technologies, disruptive business trends, and the future of the networking industry. It’s hardly a surprise to find the discussion turning to higher data rates and the elimination of copper switches, but can these technologies really provide value in an increasingly competitive marketplace? At last year’s OFC, we saw an early demonstration of 400 Gbit/s technologies, and this year optical cross-connects are on the agenda. Let’s take a closer look at these technologies and discuss some of the challenges they face.
While many long haul networks have adopted 100 Gbit/s technologies (commonly known as 100 G), the next higher data rate has met with a bit more uncertainty. There are some 200 G applications, but most companies seem to feel that the next logical bandwidth increment is 400 G. At these rates, coherent signaling becomes an even more viable option, leading many companies to consider new chipsets capable of high speed coherent digital signal processing. Many startups have also begun to look at 400 G chipsets.
There’s a proposed 400 G standard for short reach (10 km and below), which is expected to be ratified by the end of 2017. Meanwhile, anticipating strong interest in this market, many optics providers have begun to release pre-standard hardware. At shorter distances, it’s arguable that coherent processing isn’t a strict requirement. However, cloud and telecom providers would like to increase working distances up to 80 km, where coherent technology provides a demonstrable benefit. This performance comes at a cost premium, however, which may slow adoption of pre-standard hardware. Traditionally, telecom providers would be among the first to take advantage of this new technology, although some have been slow to commit to a 400 G path despite an interest in long haul submarine cables. As noted in a prior blog of mine, it’s not clear whether most cloud providers would choose to pay a premium for higher performance optics, or whether they prefer to stay at lower data rates in order to save cost. The same argument applies to coherent 400 G, which requires additional chips and sophisticated signal processing algorithms. Further, the additional hardware for 400 G increases power dissipation even more than one might expect from the raw bandwidth increase. Many cloud providers continue to cite power consumption as their most critical requirement, which will be a challenge for 400 G.
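To make the power argument concrete, here’s a minimal sketch of the watts-per-gigabit comparison. The module power figures are hypothetical placeholders chosen only to illustrate the shape of the argument — that coherent 400 G hardware can raise power faster than it raises bandwidth — not measured data for any real product.

```python
# Illustrative comparison of power efficiency for 100 G vs. coherent 400 G.
# Module power figures are hypothetical placeholders, not measured data.

def power_per_gbps(module_watts: float, gbps: float) -> float:
    """Watts consumed per Gbit/s of capacity."""
    return module_watts / gbps

p100 = power_per_gbps(module_watts=4.5, gbps=100)   # hypothetical 100 G module
p400 = power_per_gbps(module_watts=20.0, gbps=400)  # hypothetical coherent 400 G module

# With these placeholder numbers, 400 G delivers 4x the bandwidth
# but draws more than 4x the power, so efficiency per bit degrades.
print(f"100 G: {p100:.3f} W/Gbps, 400 G: {p400:.3f} W/Gbps")
```

If the extra DSP and coherent chips push the 400 G module's power past four times a 100 G module's, watts per bit get worse rather than better — which is exactly the metric power-constrained cloud providers say they care about most.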
While it’s apparent that 100 G will be around for a long time, speculation on 400 G continues to run high in the marketplace. Would higher bandwidths enable other disruptive technologies, such as all-optical interconnects? There’s been a good deal of speculation that major cloud service providers like Amazon, Google, and Facebook are considering optical cross-connects for their future infrastructure. Competing technologies have also been proposed, such as wavelength-based routing with a switchless architecture, which is being developed by some well-funded startups. However, such approaches are trying to displace relatively low cost, highly reliable technologies. For example, to cite an extreme case, we can reconfigure an optical data center network with a manual or automated patch panel at a cost of about $40-50 per port, independent of data rate. Optical cross-connects or wavelength routing solutions are about an order of magnitude more expensive. While high volumes might drive down these costs (as discussed in my prior blog on high speed optics), there are fewer than a handful of viable optical cross-connect companies in the market today, making them unlikely to compete on price alone. You can see many of them on display at OFC this year, and hear the perspective of Google and other cloud providers on these technologies. It’s also true that optical cross-connects or wavelength routing should offer higher reconfiguration speeds than manual patch panels. Despite this, an all-optical scheme is still likely to be slower than a packet-based router; making reconfigurable optical interconnects work at sub-microsecond rates remains a significant technical challenge. This has led some industry observers to classify all-optical switches as strictly research technology for the near future.
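The per-port economics above can be sketched in a few lines. The $50 patch panel figure comes from the $40-50 range cited in the text, and the cross-connect figure applies the "order of magnitude more expensive" estimate; the 512-port fabric size is a hypothetical example, not drawn from any particular deployment.

```python
# Port-cost comparison from the text: a manual/automated patch panel runs
# roughly $40-50 per port (independent of data rate), while optical
# cross-connects or wavelength-routing gear is about an order of magnitude more.

PATCH_PANEL_PER_PORT = 50      # upper end of the $40-50 range cited in the text
CROSS_CONNECT_PER_PORT = 500   # ~10x, per the "order of magnitude" estimate

def fabric_port_cost(ports: int, per_port: float) -> float:
    """Total port cost for a fabric of the given size."""
    return ports * per_port

ports = 512  # hypothetical fabric size for illustration
patch_cost = fabric_port_cost(ports, PATCH_PANEL_PER_PORT)
oxc_cost = fabric_port_cost(ports, CROSS_CONNECT_PER_PORT)
print(f"{ports} ports: patch panel ${patch_cost:,.0f} vs. cross-connect ${oxc_cost:,.0f}")
```

At this scale the gap is hundreds of thousands of dollars per fabric, which is why faster reconfiguration alone may not be enough to justify the switch.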
While a 400 G all-optical switch isn’t likely to displace more conventional 100 G solutions anytime soon, it’s important to follow these trends and have a roadmap for how your organization will take advantage of these services when they become available. Which of the leading edge technologies discussed at OFC do you think is going to be the next big thing? Drop me a line on Twitter (@Dr_Casimer) and let’s discuss.
Posted: 16 March 2017