By Casimer DeCausitus
One of the hot topics at this year’s OFC annual meeting was the impact of 100 Gbit/s data rates on metro and long haul networks. OFC ran a panel discussion sponsored by the IEEE Photonics Society, IEEE Communications Society, and OSA to consider how current market dynamics, combined with shifting traffic patterns in the MAN and WAN, would impact ongoing efforts to grow 100G installations. While some people have claimed 2015 may be the year that metro 100G takes off, others have questioned whether 100G really solves any meaningful problems in the MAN.
Network operators traditionally have two basic options for deploying high speed photonic network architectures: passive and active solutions. A passive solution based on fixed wavelength filters has the advantages of simplicity and low cost; however, such designs lack flexibility, dynamic provisioning, and automation capabilities. Most passive solutions need to be designed with future growth in mind, since a fixed filter architecture can limit the total capacity and connectivity options of the network. This type of over-design was common before the development of dynamic solutions based on reconfigurable optical add-drop multiplexers (ROADMs). An active design based on ROADMs can be more easily reconfigured into an any-to-any mesh, and allows wavelength capacity to be added on demand. When combined with software defined controllers, this option provides agile, scalable networks which are much simpler to operate and maintain than passive solutions. This flexibility comes at a price, however: the high initial installation cost of ROADMs can make the business case hard to justify, especially for access and metro deployments.
The use of ROADMs is relevant to the adoption of 100G because using current generation technology, 100G requires its own wavelength (as opposed to lower data rate channels, which can be multiplexed into a single, higher data rate wavelength channel). Given this limitation, metro capacity needs may be satisfied with designs that add/drop a single 100G ROADM channel. This relatively simple active design also avoids any concerns with stranded bandwidth in the metro, since an entire wavelength is being added or dropped as a single entity; this property is especially helpful for more complex mesh networks.
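As a rough illustration of the grooming granularity involved (the function and client mixes below are hypothetical, not drawn from any vendor's planning tool), sub-rate clients can share a wavelength while each 100G client claims one outright:

```python
import math

def wavelengths_needed(client_rates_gbps, line_rate_gbps=100):
    """Minimum wavelengths if sub-rate clients are freely groomed
    together, while any client at the full line rate occupies a
    wavelength by itself (per current generation technology)."""
    full = sum(1 for r in client_rates_gbps if r >= line_rate_gbps)
    sub_rate = sum(r for r in client_rates_gbps if r < line_rate_gbps)
    return full + math.ceil(sub_rate / line_rate_gbps)

# Ten 10G clients can be groomed onto a single wavelength...
print(wavelengths_needed([10] * 10))          # -> 1
# ...but a 100G client always adds a wavelength of its own.
print(wavelengths_needed([10] * 10 + [100]))  # -> 2
```

Because the 100G client maps one-to-one onto its wavelength, adding or dropping it at a ROADM node moves the whole entity at once, which is the stranded-bandwidth advantage noted above.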
In addition to the architectural issues, there are concerns with interoperability between 100G equipment from different vendors. Concerns about vendor lock-in, combined with the relatively high installation cost of ROADM solutions, have made some service providers cautious about making a large investment in 100G technology. There have always been interoperability issues when a new data rate is introduced (just look at early experiences with 10G adoption), but at lower data rates operators could stitch together 10G links using intermediate OTN switches or SDH mid-span meets when provisioning higher data rates in the network core. Metro 100G solutions might not have these options when operating over MAN distances. Technology issues include the use of non-standard forward error correction (FEC), which is required at 100G to avoid significant operational issues; the use of new, high performance coherent systems; and the role of network slicing in multi-tenant bandwidth allocation for 100G applications. The current lack of multi-vendor, contiguous 100G network installations will have to be addressed before 100G can experience any substantial growth in metro networks.
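To see why FEC choices complicate interoperability, consider the rate expansion alone. This is a sketch using the standard G.709 RS(255,239) "GFEC" (roughly 7% overhead); the stronger, proprietary soft-decision FECs often used at 100G carry more overhead than this, so two vendors' transceivers may not even agree on the line rate, let alone the decoding:

```python
# Rate expansion from the standard G.709 RS(255,239) FEC:
# 16 parity bytes are appended to every 239 data bytes.
DATA_BYTES, CODE_BYTES = 239, 255

def line_rate_gbps(payload_gbps):
    """Post-FEC line rate after RS(255,239) encoding."""
    return payload_gbps * CODE_BYTES / DATA_BYTES

# A 100G payload expands to roughly 106.7 Gbit/s on the line.
print(round(line_rate_gbps(100), 2))  # -> 106.69
```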
Data Center Communications Networks
One potential application for 100G is data center communication networks, either within warehouse-scale cloud computing facilities or between metropolitan area data centers. Communication between data centers has traditionally used over-provisioned, dumb pipes for point-to-point connectivity, on the assumption that excess bandwidth will be consumed by normal traffic growth before the next equipment installation cycle. While SDN in the metro has begun to change this approach to purchasing metro capacity, as noted during the 2015 Internet2 Global Summit, there are still plenty of opportunities for static, high bandwidth connections to service the steadily growing volumes of data center traffic. Although most servers don't offer native 100G ports today, it is straightforward to concatenate multiple 10G data streams (either electronically or optically), in much the same way a carrier combines traffic in the backhaul core. Optical cross-connects in the data center might also offer architectural advantages when switching large numbers of 10G links into a few 100G links at fairly low latency. Many data communication applications wouldn't require optical amplifiers, and data center leaf-spine architectures could accommodate the reduced reach of 100G links (relative to 10G link distances). Data communication networks also have near-term applications for value-added features in 100G networks, such as end-to-end encryption as part of a layered security policy.
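As a back-of-the-envelope sketch of that 10G-to-100G aggregation (the port counts and oversubscription ratio below are illustrative assumptions, not figures from the article), a leaf switch's 10G server ports map onto a small number of 100G uplinks:

```python
import math

def uplinks_needed(server_ports, server_rate_gbps=10,
                   uplink_rate_gbps=100, oversubscription=3.0):
    """100G uplinks needed on a leaf switch, given how much
    oversubscription the fabric design will tolerate."""
    downstream = server_ports * server_rate_gbps
    return math.ceil(downstream / (uplink_rate_gbps * oversubscription))

# 48 x 10G server ports at a 3:1 oversubscription ratio.
print(uplinks_needed(48))                        # -> 2
# The same leaf built non-blocking (1:1) needs more uplinks.
print(uplinks_needed(48, oversubscription=1.0))  # -> 5
```

The appeal is that a handful of 100G ports replaces dozens of discrete 10G uplinks, at the cost of the shorter reach noted above.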
Within telecommunication networks, carriers such as Verizon have launched major 100G metro initiatives in recent years, following the broad adoption of 100G in the network core. In particular, Cisco and Ciena are expected to share about $200M between them for the Verizon implementation alone. This deal has attracted a lot of attention as a potential bellwether for wholesale replacement of older metro systems with more efficient 100G packet-optical platforms. Verizon's move into this area is significant because traditional service providers have historically been resistant to making major changes in their networks to deploy higher capacity equipment. Verizon also seems well suited to addressing the specific challenges which make it harder to introduce 100G into the metro market compared with the long haul or core, including the stranded bandwidth issues mentioned earlier and the need for traffic grooming at the network endpoints, which support data rates of 10G or lower. Despite this, the overall growth projections for 100G coherent optical transceivers remain fairly modest, with only about a 25% increase in revenue over the next few years. It doesn't appear that 40G is on its way out anytime soon, and ongoing research and development improvements in 10G technology will be required for the foreseeable future. Don't expect this controversy to go away anytime soon; I'm sure we'll still be debating 100G at next year's OFC meeting in Los Angeles.
Do you think 100G is poised to be the next metro breakout technology, or will the adoption inhibitors discussed earlier keep us waiting for a few years more? Drop me a line and let me know what you think (@Dr_Casimer), and I’ll use the most interesting comments in a future blog.
Posted: 29 June 2015