By Casimer DeCusatis
With OFC 2016 just around the corner, you’re probably starting to select your favorite topics and plan your visit. I’ve been using the official OFC app
on my Android phone to help keep track of the relevant sessions I’d like to see every day. One of this year’s hot topics will be the design of components and architectures for next generation data centers, which will be featured at the OFC Data Center Summit
. Speakers from IBM, Microsoft, Oracle, and many others will come together to bring you this exciting event, showcasing breakthrough technologies that are reshaping modern data centers.
Path to 100G/400G/600G
A key topic for discussion is the path for optics to reach 100, 400, and 600 Gigabit/second data rates and beyond. The need for ever-increasing bandwidth is clear; it's been estimated that every minute we send about 98,000 tweets, 11 million instant messages, and 168 million emails. This adds up to a whopping 1.8 Terabytes of data created and stored every day, straining the capacity of existing 10–40 Gigabit/s data links. The path to 100 Gigabit/second has led to many options for the data center architect, each posing different tradeoffs in bandwidth, distance, and technology.
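To put those numbers in perspective, here's a quick back-of-envelope sketch of how long it would take to move 1.8 Terabytes over a single link at various line rates. This is purely illustrative arithmetic (decimal Terabytes, no protocol overhead or parallelism assumed):

```python
# Back-of-envelope: time to move 1.8 TB (decimal: 1.8e12 bytes)
# over a single link at various line rates. Illustrative only;
# ignores encoding overhead, protocol framing, and parallel links.

DAILY_DATA_BYTES = 1.8e12            # 1.8 Terabytes, as cited above
DAILY_DATA_BITS = DAILY_DATA_BYTES * 8

for rate_gbps in (10, 40, 100, 400):
    seconds = DAILY_DATA_BITS / (rate_gbps * 1e9)
    print(f"{rate_gbps:>4} Gb/s link: {seconds / 60:6.1f} minutes")
```

Even at 100 Gb/s, a single link needs a couple of minutes to move a day's worth of this traffic, which is why aggregate capacity, not just per-link speed, drives data center design.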
The IEEE 802.3bm standard introduced the 100GBase-SR4 solution, which specifies optical links up to 100 meters. This approach has since been adopted by the Ethernet Alliance and other groups, and employs a 4-lane-wide interface running at 25 Gigabit/s per lane over multimode fiber (it should be noted that options to increase the distance using a different physical media dependent (PMD) interface were also proposed). The standard family also includes 100GBase-LR4, which takes a similar four-lane approach over single-mode fiber to reach up to 10 km, although this was primarily developed for telecommunication applications and may be too costly for many data center deployments. Some issues with these designs include equalization to compensate for the bandwidth limitations of VCSELs and other components, the use of forward error correction (FEC) to maintain acceptable bit error rates while increasing distance, and the management of mode partition noise and retiming considerations. Another proposed parallel-lane design is the 100G CAUI (attachment unit interface), which attempts to balance cost, power, and link density tradeoffs. Parallel link solutions also require special electronics (commonly known as a gearbox) to convert between the data rates used on the interface and those employed by the serializer/deserializer (SERDES) chips.
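The lane arithmetic behind these interfaces, and the gearbox's job of re-striping the same aggregate across a different lane count, can be sketched as follows. The 66/64 line-code overhead factor is standard for these Ethernet links; the helper function name is my own illustration, not anything from the specification:

```python
# Sketch of parallel-lane arithmetic for 100G interfaces.
# A 100 Gb/s payload striped over 4 lanes gives 25 Gb/s per lane;
# with 64b/66b line encoding the on-wire signaling rate is higher.

def per_lane_signaling_gbd(aggregate_gbps, lanes, overhead=66 / 64):
    """On-wire rate per lane, including line-code overhead."""
    return aggregate_gbps / lanes * overhead

# 100GBase-SR4 / LR4 style: 4 lanes, ~25.78 GBd per lane on the wire
print(per_lane_signaling_gbd(100, 4))

# A "gearbox" re-stripes the same 100G aggregate across a different
# lane count, e.g. a 10-lane 10G electrical interface feeding a
# 4-lane 25G optical interface:
electrical_lanes, optical_lanes = 10, 4
print(100 / electrical_lanes, "Gb/s x", electrical_lanes, "electrical lanes")
print(100 / optical_lanes, "Gb/s x", optical_lanes, "optical lanes")
```

The mismatch between the electrical lane count and the optical lane count is exactly why the gearbox electronics mentioned above are needed.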
Alternatives to the parallel optics approach
Alternatives to the parallel optics approach include the use of different serial modulation schemes and coarse wavelength division multiplexed (CWDM) transceivers, each of which brings its own unique set of challenges. One option proposed to address the distance gap between 100GBase-SR4 and LR4 modules is the 100G CLR4 proposal, which uses a QSFP form factor transceiver and is optimized for distances up to 2 km over a duplex single-mode fiber link within large data centers, using a coarse wavelength multiplexed interface. Approaches such as this continue to challenge preconceived notions regarding distance, power, cost, and density tradeoffs in 100G optical networks.
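The CWDM idea, four wavelengths sharing one duplex single-mode pair instead of four parallel fiber pairs, trades fiber count for multiplexing optics. A minimal sketch of that tradeoff follows; the wavelength grid shown is the common CWDM4 grid (20 nm spacing around 1310 nm), used here as an illustrative assumption rather than the exact CLR4 specification:

```python
# Illustrative CWDM arithmetic: four 25 Gb/s channels multiplexed
# onto one duplex single-mode fiber pair, versus parallel optics.
# The wavelength grid is the common CWDM4 grid; treat the specific
# values as an assumption for illustration.

channels_nm = [1271, 1291, 1311, 1331]     # assumed 20 nm CWDM grid
per_channel_gbps = 25

aggregate = per_channel_gbps * len(channels_nm)
fibers_parallel = 2 * len(channels_nm)     # SR4-style: 4 tx + 4 rx fibers
fibers_cwdm = 2                            # CWDM: 1 tx + 1 rx fiber

print(f"Aggregate: {aggregate} Gb/s over {fibers_cwdm} fibers "
      f"(vs {fibers_parallel} fibers for parallel optics)")
```

Cutting the fiber count from eight to two matters most in structured cabling across large data centers, which is where the cost and density tradeoffs mentioned above get re-examined.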
OFC Data Center Summit
The OFC Data Center Summit will address some of the most pressing issues in this field, including key differences and market drivers between enterprise and cloud scale data centers, the role of industry standards vs. open consortiums and proprietary solutions, and the impact on both traditional and emerging startup companies in the optical transceiver marketplace. The summit will also address emerging data center network architectures, which are being redesigned to accommodate workload drivers such as mobile, social, cloud, and big data analytics. Consideration will be given to both warehouse-scale data centers and clusters of smaller data centers interconnected over a geographic region. Optical trends in the data center impact not only transceivers and cables, but many facets of the network infrastructure. System level effects such as power consumption and power per rack density will become important as optical networks scale to address the needs of these next generation designs. Emerging drivers such as the so-called Internet of Things (IoT) are changing the distribution of traffic within large data centers, such that traditional models of long, high bandwidth flows in the network core are giving way to many shorter duration flows near the network edge. Further, the proliferation of edge devices in the IoT may drive the need for additional layers in the network infrastructure, running counter to the trend of flattening networks in the enterprise data center using software defined networking (SDN). High volume, low cost optics manufacturing may be ill prepared to meet the challenge of interconnecting 50 billion devices within the next four years. A panel of industry and academic experts will discuss the future of these data centers, and the summit will include time for audience participation and questions.
If you’re wondering how your data center is going to take advantage of the latest networking technologies on a limited budget, then you definitely don’t want to miss this session!
The world may be tweeting 98,000 times per minute, but I’ll settle for a small fraction of that during OFC. Drop me a line @Dr_Casimer and let me know the issues you’re grappling with as you design new data centers; I just might use your ideas in a future blog.
Posted: 8 March 2016