
Photonics Sheds Light on Modern Data Centers

By Casimer DeCusatis, Ph.D.


The annual OFC/NFOEC meeting is less than a month away, and it couldn’t come at a better time for those of us working on cloud computing and data center networking.  More than ever before, photonics is poised to play a critical role in this industry, leading a significant revolution in the way that computer systems are interconnected.  In this blog, I’ll explore the issue in more detail and give you a peek at some of the hot topics coming up in March at OFC.   
 
Photonic components such as fiber optic communication systems already play a critical role in large enterprise data centers.  Over three-quarters of the physical infrastructure used in a modern mainframe computer (http://www-03.ibm.com/systems/z/index.html) is devoted to I/O communications, and 90% of that is optical fiber.  But that's just the beginning.  New cloud computing data centers, many the size of a large warehouse, are hosting an increasing fraction of the world's compute and storage resources.  Consider a data center with over 100,000 servers, each interconnected with redundant 10 gigabit per second links; the resulting network fabric requires an aggregate bandwidth of well over a petabit per second.  While most people think of cloud computing as connecting remote users worldwide with large data centers, in reality most of the data traffic occurs within these data centers, not between the data center and the end user.  You can find out more about speakers on cloud computing by visiting the OFC cloud/datacom landing page.
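As a rough sanity check on that bandwidth figure, here is a minimal back-of-the-envelope sketch; the server count, link rate, and redundancy factor are illustrative assumptions rather than measurements from any particular data center:

```python
# Back-of-the-envelope estimate of aggregate fabric bandwidth in a large
# cloud data center. All inputs are illustrative assumptions.

servers = 100_000          # number of servers in the data center
link_rate_gbps = 10        # per-link rate, gigabits per second
links_per_server = 2       # redundant links per server (assumption)

aggregate_gbps = servers * link_rate_gbps * links_per_server
aggregate_pbps = aggregate_gbps / 1_000_000   # gigabits -> petabits

print(f"Aggregate fabric bandwidth: {aggregate_pbps:.1f} petabits per second")
# -> Aggregate fabric bandwidth: 2.0 petabits per second
```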
 
The bandwidth-distance products required from these interconnects can only be met using optical fiber technology.  Relatively inexpensive, low power vertical cavity surface emitting lasers (VCSELs) and multimode fiber already play an important role in these data networks.  For some applications, such as Google-scale data warehouses, single-mode fiber is the only interconnect option, though it commands a premium price due to the tight tolerances and smaller dimensions associated with single-mode interconnects.  More expensive distributed feedback (DFB) laser sources are also required for distances much beyond a few hundred meters.  Emerging quaternary materials such as indium gallium aluminum arsenide/indium phosphide (InGaAlAs/InP), which exhibit better high-temperature performance and higher speeds, can form the basis for future single-mode transmitters (http://www.laserfocusworld.com/articles/print/volume-48/issue-12/features/optical-technologies-scale-the-datacenter.htm).  New DFB laser structures such as short-cavity and lens-integrated surface-emitting DFB lasers also have the potential to provide a narrower spectrum and better distance performance than previous generations of technology.
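To see why the bandwidth-distance product matters, here is a rough sketch of the reach limit imposed by modal dispersion in multimode fiber.  The effective modal bandwidth value (OM3-class) and the rule of thumb that the required analog bandwidth is roughly 0.7 times the bit rate are assumptions for illustration, not a link-design specification:

```python
# Rough reach estimate for a VCSEL/multimode-fiber link limited by modal
# dispersion. Values are illustrative assumptions, not a link-design spec.

emb_mhz_km = 2000.0        # effective modal bandwidth, MHz*km (OM3-class fiber, assumed)
bit_rate_gbps = 10.0       # line rate in Gb/s
required_bw_mhz = 0.7 * bit_rate_gbps * 1000   # ~0.7 * bit rate for NRZ (rule of thumb)

max_reach_km = emb_mhz_km / required_bw_mhz
print(f"Approximate modal-dispersion-limited reach: {max_reach_km * 1000:.0f} m")
# -> on the order of 300 m, which is why longer spans push toward
#    single-mode fiber and DFB laser sources
```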
 
User demand for high performance applications and on-demand self-service (cloud environments) is transforming the way information technology leverages existing investments, procures new infrastructure and services, and architects future data centers.  Virtualization of the network is the next big thing driving this transformation.  Last year marked the fourth consecutive time that increased use of virtualization was among the top three IT priorities identified by major companies.  This shows that virtualization continues to guide businesses interested in leveraging their data centers as the foundation for a more dynamic private cloud computing environment.  As these cloud environments emerge, it is clear that photonic networks will play an increasingly important role.  Legacy network architectures designed for client/server environments are not the best choice to support new cloud computing models.  The industry is aware of this gap and is developing software-defined networking (SDN) technology to enable the network to keep pace with rapidly scaling, dynamic cloud environments.  This has been a very hot topic recently (https://www.etouches.com/ehome/53138), and we're sure to hear more about it during the cloud computing sessions at OFC.  Don't miss the keynote on cloud economics or the IEEE panel on advanced optical solutions in cloud computing.
 
Active optical cables have the potential to reach cost per gigabit levels sufficient to displace copper links for distances of less than tens of meters at data rates in the 10-40 gigabit per second range.  This is because allowing the manufacturer to control both ends of the link provides an opportunity to mix and match transmitters and receivers with different characteristics, resulting in higher yield than with conventional optical physical layer specifications, which must be met assuming the far end of the link is operating under worst-case conditions.  Further, active cables eliminate the optical connector interface between the transceiver and cable, resulting in relaxed manufacturing tolerances and lower testing costs.  A special case is parallel optical interconnects, which can serve as extensions of a switch backplane, as used in modern switches such as Juniper's QFabric (http://www.juniper.net/us/en/products-services/switching/qfx-series/).  Active cables also isolate the optical interface from the end user, meaning they have the flexibility to support multiple protocols in a single link.  The OFC panel on high speed pluggable optics will address this and other related issues; be sure not to miss it!
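As an illustration of the yield argument, here is a minimal sketch comparing a worst-case link budget with one where the cable vendor pairs a weak transmitter with a better-than-worst-case receiver; all power levels and losses are made-up example values, not figures from any standard:

```python
# Illustration of why controlling both ends of an active optical cable helps yield.
# All dBm/dB figures below are made-up example values, not from any standard.

channel_loss_db = 2.0                 # connectors + fiber loss over a short reach (assumed)

def link_margin(tx_power_dbm, rx_sensitivity_dbm):
    """Margin = received power minus receiver sensitivity."""
    return (tx_power_dbm - channel_loss_db) - rx_sensitivity_dbm

# Conventional spec: a transmitter at the minimum allowed power must still
# close the link into a worst-case (least sensitive) receiver.
worst_case = link_margin(tx_power_dbm=-5.0, rx_sensitivity_dbm=-9.0)

# Active cable: the vendor can pair that same weak transmitter with a
# better-than-worst-case receiver pulled from the same production lot.
matched_pair = link_margin(tx_power_dbm=-5.0, rx_sensitivity_dbm=-12.0)

print(f"worst-case margin: {worst_case:.1f} dB, matched-pair margin: {matched_pair:.1f} dB")
# Parts that would fail the worst-case spec can still ship inside a matched
# cable, which is the yield (and cost) advantage described above.
```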
 
Some of the most significant developments, however, are taking place in silicon photonics.  In the past decade, there has been noteworthy progress toward devices with increased energy efficiency, higher bandwidth, and lower cost than conventional interconnects.  Low loss silicon waveguides for wavelengths above 1000 nm have been demonstrated, with performance improving with each generation of technology.  Although silicon is not the material of choice for lasers due to its indirect bandgap, it offers good thermal conductivity, transparency at the traditional telecom wavelengths, low avalanche multiplication noise (thanks to a high electron/hole impact ionization ratio), and, most importantly, compatibility with the existing Si CMOS fabrication processes developed for the electronics industry.  Recently, IBM announced the availability of a 90 nm CMOS fab for this technology, which could lead to increased commercial use (http://www.research.ibm.com/photonics).
 
There is also an effort under way to build a supply chain for silicon photonics in Europe (http://www.photonics.com/Article.aspx?AID=53109).  Some of this is motivated by recent interest from datacom companies such as Cisco and Intel, and from clients such as Facebook, in adopting this approach.  As part of its plans to enable nondisruptive upgrades to server chips in its data centers, Facebook has recently partnered with Intel and released a reference architecture for photonic data communications (http://newsroom.intel.com/community/intel_newsroom/blog/2013/01/16/intel-facebook-collaborate-on-future-data-center-rack-technologies).  By using multi-fiber optical cables and waveguides, it is possible to locate active optical components very close to the microprocessor, enabling a modular, pluggable architecture for multi-rack servers.  While details remain vague, the documentation released so far by the Open Compute Project (http://www.opencompute.org/) suggests that different photonic technologies can be used to optimize overall system performance.  Will this new technology impact the drive for higher data rates being championed by the Ethernet Alliance?
 
Photonics is also finding new applications in big data analytics (usually considered to mean data sets exceeding 30 TB, with a mixture of structured and unstructured data).  The worldwide volume of stored information is growing by 50-60% annually according to some estimates, surpassing 2.7 zettabytes in 2012 and well on its way to over 8 zettabytes by 2015.  Over half of datacom organizations manage databases of 500 TB or more, and the top 20% handle over 10 petabytes each.  This growth is being driven by applications such as financial transactions, email, image processing, and web/Internet traffic, among others.  The data center network for big data applications must contend with unique requirements such as bulk data transfers between peer servers (east-west traffic) that can overwhelm traditional networks, and high bandwidth requirements to exchange aggregated data between a large number of geographically distributed servers.  In an oversubscribed network, aggregation and shuffling patterns can become performance bottlenecks and demand application tuning to compensate.  One potential solution involves a combination of network virtualization and centralized control planes abstracted from the switch hardware.  This approach, known as software-defined networking (SDN), has been discussed in my previous blog entries and will be the subject of a special session at OFC.
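To make the oversubscription point concrete, here is a minimal sketch of the ratio at a single aggregation switch; the port counts and rates are hypothetical examples, not a recommended design:

```python
# Oversubscription at an aggregation/leaf switch: how much east-west demand
# from the servers can exceed the uplink capacity. Port counts are hypothetical.

downlink_ports = 48       # server-facing ports (assumed)
downlink_gbps = 10        # rate per server-facing port
uplink_ports = 4          # ports toward the aggregation/spine layer (assumed)
uplink_gbps = 40          # rate per uplink port

downstream_capacity = downlink_ports * downlink_gbps   # 480 Gb/s offered by servers
upstream_capacity = uplink_ports * uplink_gbps         # 160 Gb/s toward the rest of the fabric

ratio = downstream_capacity / upstream_capacity
print(f"Oversubscription ratio: {ratio:.1f}:1")   # -> 3.0:1
# When bulk shuffle traffic from many servers converges on these uplinks,
# a 3:1 ratio means up to two-thirds of the offered load must queue or be
# throttled, which is the bottleneck described above.
```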
 
OFC/NFOEC certainly has come a long way from the days when optical networking implied a telecommunication network (although issues like packet-optical convergence are still open to discussion).  In traditional telecom applications, the approach was to spend more at the link end points to maximize the spectral efficiency of long-distance fiber links.  Within the data center, by contrast, fiber resources are abundant, cheap, and easy to deploy, so spectral efficiency tends to be traded for lower power, cheaper transceivers, and a network fabric with abundant path diversity.  Thus, the transceiver cost must be aggressively reduced so as not to dominate the fabric cost.  Data centers are aggressively transforming to better serve the needs of commercial users, which means having underlying photonics technology that can rapidly adapt to changing market dynamics and business requirements.  To achieve this goal, the network has become much more virtualized, and cloud-like applications have become a key aspect of data network design.  Optical component manufacturers are working to develop new performance and cost metrics that are more meaningful in this environment; some of this will be discussed at the OIDA meeting on the Sunday preceding OFC.  I'll also be addressing some of the issues discussed above in my tutorial on optics for the data center, scheduled for Tuesday afternoon.  I plan to blog from OFC/NFOEC on a regular basis during my trip, so check back here often for the latest updates.  And if you'd like to track me down and say hello during the conference, drop me a line on my Twitter feed (@Dr_Casimer) and maybe we can get together to talk more about how photonics is changing the data center landscape.  I look forward to seeing you there!

Disclaimer: Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by IBM.


Casimer DeCusatis, Ph.D., Distinguished Engineer, IBM System Networking; CTO, Strategic Alliances; Member, IBM Academy of Technology; IBM Corporation.
 

Posted: 8 March 2013 by Casimer DeCusatis, Ph.D.



The views expressed in this blog are those of the authors and do not necessarily reflect the views or policies of The Optical Fiber Communication Conference and Exposition (OFC)  or its sponsors.