By Casimer DeCusatis, Ph.D.
OFC 2014 is in full swing by now, and as usual I’m feeling a bit overwhelmed by the sheer number of events running in parallel. It’s just not possible for one person to cover the entire conference, but I think you’ll find the highlights that I’ve selected for today’s blog to be both interesting and controversial.
I’ll begin with a focus on high-speed connectivity. The industry continues to discuss a range of options for 100G connectivity in the metropolitan area network (roughly 100-300 km distances). For example, optical transceiver options include re-designs of existing long-haul (LH) modules, as well as coherent and non-coherent pluggable transceivers. Multi-rate, multi-format coherent modulation techniques are being developed, but does one size fit all applications? There’s ongoing debate over whether these links are best realized using open industry standards or vendor-proprietary implementations. Design factors include low manufacturing cost, high reliability, low energy consumption, and potential enablement of SDN management. All of the hardware aspects of high-volume 100G adoption were reviewed at today’s sessions, but the discussion doesn’t stop there. Another path to higher throughput involves using software to virtualize many existing links, which have untapped excess capacity.
Various symposia on switching architectures and network virtualization were of particular interest today, including a wide range of discussions about optical transport options above 100G data rates (for example, Flexible Rate OTU for Beyond 100G). Some companies argue that optical circuit switching can relieve congestion within server clusters inside a data center network. Others prefer optical packet switching as a common technology between the LAN, MAN, and WAN. Optical burst switching promises significant energy efficiencies compared with traditional WDM (at least in the network core). Power consumption in packet-switched routers continues to be dominated by IP header processing and forwarding functions, which provides an opportunity for optical burst switching to reduce energy consumption by minimizing the control channel overhead (this is also one of the arguments against use of SDN down to the packet processing layer). Burst traffic schedulers were discussed, as well as the requirements for so-called “cloud bursting” applications (temporarily sending a high burst of traffic between a private and public data center to relieve workload congestion). This in turn leads to a discussion on dynamic re-provisioning in the data center and telecom network, which can be implemented using either physical or virtual flow control techniques.
Within the past year, the industry’s first 100G Optical Transport Network (OTN) devices were introduced. The debate continues over placement of optical switching and cross-connects in the physical layer versus the transport layer core. Most experts agree that optical transmission media are preferred for packet-based MAN and WAN applications; however, the need for all-optical reconfigurable switching continues to spark heated debate. One alternative to growing the physical infrastructure is virtualization, an approach which has already demonstrated significant cost advantages and performance efficiencies on server hardware. An assortment of industry-standard and vendor-proprietary network virtualization solutions were presented, including several impressive demonstrations on the trade show floor at OFC.
Why do we need all this bandwidth? Traditionally, voice traffic was among the leading drivers, and optical networks focused on telecommunication solutions for metro, long haul, and submarine applications. While this remains important, video and data transport have more recently grown dramatically in importance. Fiber now extends to the curb and even into the homes of residential subscribers, and cloud computing has driven optical networking into the core of regional access systems. Combined with accelerating wireless growth and a global awareness of the importance of bringing more broadband to people, the fiber optics industry must meet unprecedented broadband demand. Optical transport appears to be the only medium that can keep pace with this accelerating global demand. OFC symposia and short courses address the requirements and expectations of these markets, particularly related to cost, power consumption, footprint, reliability, optical performance, and interoperability. Sessions were devoted to practical design issues of 100G line cards, along with critical reviews of the availability and performance delivered by the key building-block technologies. This included technologies needed to implement different modulation formats, the corresponding trade-off between complexity/cost of line-card implementations and achievable fiber transmission distance, and future technologies beyond 100Gb/s.
For a day devoted to high-speed transmission, the time seems to have passed very quickly. I wonder if you’ve had a chance to figure out the theme I’ve been using in the titles of my daily blog posts from OFC. Tweet me (@Dr_Casimer) and I’ll acknowledge the correct answers with an honorary posting on the global internet, ensuring your everlasting fame. If you need another clue, stop by tomorrow for my next daily blog post from OFC.
Posted: 11 March 2014 by Casimer DeCusatis, Ph.D.