
Faster, Higher, Farther – Network Speeds Go Nowhere But Up

By Casimer DeCusatis, Ph.D.


"I feel the need… the need for speed!" (a quote from Maverick, one of the leading thespians of our time, in the movie Top Gun)

Time seems to be going by faster and faster – it seems like we just got done celebrating the new year, and we’re already getting ready for OFC/NFOEC in March.  And it’s certain that faster speeds will also be the subject of much discussion at this year’s conference.  I’d like to slow things down a bit, though, and reflect on what’s been going on in the industry to drive the most recent push towards higher and higher data rates over longer and longer optical networks. 

Faster speeds are nothing new, of course; I still have the graphs I made over 10 years ago for another optical conference showing exponential, Moore's Law-style growth in I/O bandwidth and data rates for computer systems.  If anything, current growth seems to be outpacing those projections.  Way back in 2007, most people thought that 40G Ethernet wouldn't be a significant market force until 2017, and that perhaps by 2020 we'd begin to see 100G links in reasonable volumes.  Today, we already have 40G-enabled switches within the data center, even if most of these links are being used for inter-switch stacking (for example, www.redbooks.ibm.com/abstracts/tips0815.html), and many people now feel that 100G is likely to be a significant force closer to 2015.  The accelerated interest in higher data rates can be attributed to a number of factors, including the demands of warehouse-scale cloud data centers (which need optical links at these data rates to cover hundreds of meters or more).  Server virtualization means that we can now host tens to hundreds of VMs on a single physical computer, and more bandwidth is needed to migrate those virtual machines between physical devices.
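As a back-of-the-envelope illustration of why VM migration pushes bandwidth requirements up, here is a minimal sketch (in Python) of how long it takes to move a VM's memory image at different Ethernet rates; the VM size and link utilization figures are hypothetical assumptions, not measurements.

```python
# Rough estimate of VM memory-image transfer time vs. link speed.
# All inputs are illustrative assumptions, not measured values.

def migration_seconds(vm_memory_gb, link_gbps, utilization=0.7):
    """Time to move a VM's memory image once, ignoring dirty-page retransmits."""
    bits_to_move = vm_memory_gb * 8e9              # GB -> bits (decimal units)
    effective_rate = link_gbps * 1e9 * utilization # usable bits per second
    return bits_to_move / effective_rate

for link in (10, 40, 100):                         # Ethernet rates in Gbit/s
    t = migration_seconds(vm_memory_gb=64, link_gbps=link)
    print(f"64 GB VM over {link}G: ~{t:.1f} s")
```

With these placeholder numbers, stepping from 10G to 40G cuts the raw transfer time from over a minute to under 20 seconds, which is exactly the kind of improvement that matters when you are shuffling hundreds of VMs around a cloud data center.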

And of course there's the incessant growth of storage.  According to some fairly recent data from LightCounting (www.lightcounting.com/), the industry shipped a whopping 73 petabits/s worth of Ethernet bandwidth in 2011, but that's still less than the 84 petabits/s worth of Fibre Channel links for storage area networks.  Most of the SAN traffic was carried on 8G links, although the industry is shipping 16G links in rapidly growing volumes.  All of this is attempting to keep up with nearly 60% annual growth in the information we generate, according to the Gartner Group (www.gartner.com/technology/home.jsp).  We're running out of prefixes to describe this tidal wave of data; by 2015, the world will contain nearly 8 zettabytes of information.  A lot of that data is being handled by large, Fortune 1000 companies (which routinely handle hundreds of terabytes, and sometimes many petabytes, of data), though cloud services are important too.  Combine this with the demand to access your data from anywhere and perform analytics on huge data sets, and you quickly need to start redefining the requirements of the underlying data network.
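To get a feel for how quickly that kind of growth compounds, here is a minimal sketch; it only multiplies out the roughly 60% annual growth rate cited above and makes no claim about the absolute volumes involved.

```python
# How quickly ~60% annual growth compounds, independent of the
# starting volume (which varies depending on whose estimate you use).

annual_growth = 0.60          # ~60% per year, the figure cited above

for years in range(1, 6):
    multiplier = (1 + annual_growth) ** years
    print(f"after {years} year(s): ~{multiplier:.1f}x as much data")

# After five years of ~60% growth the total is roughly 10x larger,
# which is why we keep running out of SI prefixes.
```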

It's interesting that fiber optic data rates are now high enough to be a disruptive force across many segments of the industry.  The first impact is being seen in storage, where 10 to 40 Gbit/s Ethernet data rates (combined with lossless Ethernet standards) are encroaching on 8 to 16 Gbit/s Fibre Channel.  While Fibre Channel isn't going away anytime soon, even the proposed next-generation 32 Gbit/s SAN links will have trouble keeping up with 40 Gbit/s Ethernet and FCoE solutions.  But storage isn't the only network being impacted; 40 Gbit/s Ethernet is also challenging 4X InfiniBand link bandwidths, leading to interest in RDMA over Ethernet solutions.  The industry's commitment to Ethernet only increases as we start to implement 100G links, and industry standards bodies are even thinking about 400G Ethernet on the horizon (Terabit Ethernet, anyone?).  At speeds of 100 Gbit/s or higher, Ethernet begins to outpace even the PCIe bus commonly used on computer backplanes for I/O attachment.  Within the next 4-6 years, we may see Ethernet begin to eclipse the performance of even 16-lane PCIe, a disruption that could fundamentally change the way we design server I/O subsystems.
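To put the PCIe comparison in perspective, here is a minimal sketch of the per-direction bandwidth of a 16-lane PCIe slot across generations versus raw Ethernet line rates; it accounts for the 8b/10b and 128b/130b encoding overheads but ignores packet and protocol overhead, so treat the results as rough upper bounds.

```python
# Approximate per-direction bandwidth of a 16-lane PCIe slot, by generation,
# compared with Ethernet line rates.  Encoding overhead is included;
# packet/protocol overhead is ignored, so these are upper bounds.

PCIE_GENS = {
    # generation: (GT/s per lane, encoding efficiency)
    "PCIe 1.x": (2.5, 8 / 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

LANES = 16

for gen, (gt_per_lane, efficiency) in PCIE_GENS.items():
    gbps = gt_per_lane * efficiency * LANES
    print(f"{gen} x{LANES}: ~{gbps:.0f} Gbit/s per direction")

for eth in (40, 100, 400):
    print(f"{eth}G Ethernet: {eth} Gbit/s per direction")
```

A 100G Ethernet port already lands in the same neighborhood as a PCIe 3.0 x16 slot (roughly 126 Gbit/s per direction), and a future 400G port would leave it far behind, which is the disruption to server I/O design described above.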

Of course, getting to 100G isn't as easy as you might think.  Today, there are some early examples on the market, including some vendor-proprietary inter-switch links (www.networkworld.com/news/2012/091212-brocade-100g-262291.html).  100G is also well established in the metropolitan area network, where you can expect several vendors to be showing off their latest technology at OFC (for example, www.advaoptical.com/en/innovation/100g-transport/100g-metro.aspx and www.ciena.com/technology/wavelogic3/40G-100G/, to name only two).  Some of these solutions will use multi-source optical transceivers such as the CFP form factor (www.cfp-msa.org/), where the letter C was chosen for the Latin centum, or 100, as a reminder of the data rate this standard targets.  There are still significant problems to be addressed, which will surely be on the agenda at OFC/NFOEC.  For example, while standards such as IEEE 802.3ba specify 100G interfaces for access networks, and OIF standards cover long-haul systems, there is a gap in standards coverage around typical MAN distances of 100-300 km.  Addressing optical networks for these distances, and bridging the gap between existing standards, will be a significant challenge for next-generation 100G in the metro.

Even if we fix this problem, running 100G in the MAN isn't the same as making the technology work inside a data center.  When used between buildings over long distances, 100G links are optimized for reach and performance (which makes sense in an environment where fiber is scarce and relatively expensive).  Designers spend more effort on the link endpoints, in an effort to maximize the spectral efficiency of long-distance wavelength-multiplexed optical links (hence the recent interest in coherent transmission improvements, which we're sure to see demonstrated at OFC this year).  But inside the walls of a data center, it's a different story.  Fiber is relatively plentiful inside a data center, so the network design emphasis shifts to lower-cost transceivers.  Spectral efficiency tends to be traded for lower power, reduced cost, and path diversity, as discussed in a recent post from some Google network designers (www.laserfocusworld.com/articles/print/volume-48/issue-12/features/optical-technologies-scale-the-datacenter.html).  In order to succeed in the data center, a 100G link needs to deliver its tenfold bandwidth increase at a cost below that of ten separate 10G links.  This is an opportunity for innovative component designers to get in early on the 100G market, and for new technologies to emerge.  I'm sure you were excited to hear that Intel is teaming with Facebook to commercialize silicon photonics in their next-generation designs (newsroom.intel.com/community/intel_newsroom/blog/2013/01/16/intel-facebook-collaborate-on-future-data-center-rack-technologies), bringing optics much closer to the heart of the data center.  This may be only the first step as the industry continues to leverage the benefits of optics at higher speeds.
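As a rough illustration of that economic threshold, here is a minimal sketch comparing the cost per gigabit of one 100G link against ten 10G links; the dollar figures are purely hypothetical placeholders chosen for illustration, not market data.

```python
# Cost per Gbit/s: one 100G link vs. ten 10G links.
# All prices below are hypothetical placeholders, not market data.

def cost_per_gbit(total_price_usd, total_gbps):
    """Dollars per Gbit/s of installed capacity."""
    return total_price_usd / total_gbps

ten_x_10g = cost_per_gbit(total_price_usd=10 * 300, total_gbps=10 * 10)
one_100g  = cost_per_gbit(total_price_usd=2500,     total_gbps=100)

print(f"ten 10G links : ${ten_x_10g:.0f} per Gbit/s")
print(f"one 100G link : ${one_100g:.0f} per Gbit/s")

# The 100G link only wins inside the data center if its cost per Gbit/s
# comes in below the 10G figure, i.e. if the 100G transceiver costs less
# than ten 10G transceivers combined.
```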

If all this talk about speed has got you worried that you haven’t been fast enough to register for OFC/NFOEC, don’t worry.  You still have time to submit a postdeadline paper before March 17.  In fact, if you hurry, you can still get a discount on OFC short courses by registering before February 19.   There won’t be a better opportunity this year to find out what your peers (and your competitors) are planning for high speed optical interconnects.  I hope to see you there, but if I’m in too much of a hurry rushing to the next session, you can always catch up with me on Twitter (@Dr_Casimer) with your questions or comments.

Disclaimer: Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by IBM.

Casimer DeCusatis, Ph.D.
Distinguished Engineer, IBM System Networking
CTO, Strategic Alliances
Member, IBM Academy of Technology
IBM Corporation

Posted: 24 January 2013 by Casimer DeCusatis, Ph.D.


The views expressed in this blog are those of the authors and do not necessarily reflect the views or policies of The Optical Fiber Communication Conference and Exposition (OFC)  or its sponsors.