S. Chandrasekhar, Nokia Bell Labs, USA; Robert Killey, Univ. College London, UK
Today, several systems with advanced modulation formats have been deployed worldwide, exploiting the full electric field of light for encoding by means of coherent technology aided by electronic digital signal processing. In addition, several field trials have been conducted, based on both currently available and potential next-generation technologies, covering metro, terrestrial, and submarine networks.
Nevertheless, the performance of systems in the field is significantly below that demonstrated in laboratory experiments. The metrics include reach, spectral efficiency, impairment mitigation, and the limited choice of formats. Does this imply that field performance is a “make or break” scenario for novel technologies? This workshop will address the shortcomings and challenges encountered in taking a value proposition from the laboratory to a deployable field solution. Participation is expected from both researchers and industry technocrats, addressing the range of topics below:
Reach: Do lab demonstrations exaggerate potential reach without any consideration of margins? Or did the industry demand too much safety buffer for “always-on, best-in-class service” slogans? Can software-defined transponders with clever DSP adapt to changing field environments? Is there a path to recovering lost reach while still honoring the slogans?
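As a rough illustration of what held-back margin costs, consider a back-of-envelope sketch. It assumes an ASE-noise-limited link in which received SNR scales inversely with distance, so every dB of margin reserved against field uncertainty translates directly into lost reach; the 3000-km lab reach and the function name are purely illustrative, not from the workshop text.

```python
# Back-of-envelope sketch (illustrative assumption: SNR ∝ 1/distance,
# as in an ASE-noise-limited link operated in the linear regime).

def reach_with_margin(max_reach_km: float, margin_db: float) -> float:
    """Reach remaining after reserving `margin_db` of SNR margin,
    assuming received SNR is inversely proportional to distance."""
    return max_reach_km * 10 ** (-margin_db / 10)

# Hypothetical 3000-km lab hero-experiment reach:
lab_reach = 3000.0
for margin in (0.0, 1.0, 3.0, 6.0):
    print(f"{margin:>4.1f} dB margin -> {reach_with_margin(lab_reach, margin):7.1f} km")
```

Under this assumption, a 3-dB safety buffer halves reach, which is one way to read the gap between lab hero experiments and deployed-system specifications.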
Spectral Efficiency / Formats: The best SE known to have been deployed is around 5.3 b/s/Hz (200G on a 37.5-GHz grid using 16QAM). Researchers have shown well over 11 b/s/Hz in the lab. More recently, field trials have demonstrated 7-9 b/s/Hz using innovative technologies such as probabilistic shaping. It appears either that research was far ahead of practicality, or that much was lost along the way to implementation. Are some concepts too complex to implement in a piece of silicon? What are the challenges in transferring algorithms from MATLAB to silicon? Is the return on investment poor? Is probabilistic shaping easy to implement?
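The deployed figure above is simple arithmetic (200 Gb/s on a 37.5-GHz grid), and the probabilistic-shaping trade-off can be sketched with the standard Maxwell-Boltzmann distribution over a 4-PAM alphabet (one quadrature of 16QAM). The shaping parameter `nu` and the function names below are illustrative choices, not from the text:

```python
import math

def spectral_efficiency(net_rate_gbps: float, grid_ghz: float) -> float:
    """Net spectral efficiency in b/s/Hz for a given channel grid."""
    return net_rate_gbps / grid_ghz

def mb_entropy(amplitudes, nu):
    """Entropy (bits/symbol) of a Maxwell-Boltzmann distribution
    P(a) ∝ exp(-nu * a^2) over the given amplitude alphabet."""
    w = [math.exp(-nu * a * a) for a in amplitudes]
    z = sum(w)
    p = [x / z for x in w]
    return -sum(pi * math.log2(pi) for pi in p)

print(spectral_efficiency(200, 37.5))   # ≈ 5.33 b/s/Hz, the deployed figure
pam4 = [-3, -1, 1, 3]
print(mb_entropy(pam4, 0.0))            # uniform 4-PAM: 2.0 bits/symbol
print(mb_entropy(pam4, 0.1))            # shaped: below 2 bits, traded for SNR gain
```

Shaping deliberately gives up entropy (bits per symbol) in exchange for a more Gaussian-like constellation and the resulting SNR gain, which is one reason its silicon implementation cost is a fair workshop question.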
Impairment mitigation: Are lab transmitters and receivers so perfect that we cannot replicate their performance in the line card (e.g., the ADC bandwidth of a real-time scope vs. that of CMOS)? Do our researchers splurge on unlimited DSP resources that ASIC designers do not have the luxury of, resulting in less-than-optimal performance (e.g., using RRC roll-off factors of 0.001 with more-than-100-tap equalizers)? Is lab hardware (fibers, amplifiers, DSOs, ...) so good that it does not reflect the realities of the field? Can resource-efficient, clever DSP absorb some of the shortcomings in line-card performance, or even transmission-related impairments? Will we see the dawn of “self-correcting” line systems or even “self-healing” transponders?
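The roll-off example above can be made concrete: the impulse response of a root-raised-cosine pulse with near-zero roll-off decays like a sinc, so a filter capturing a fixed fraction of its energy needs orders of magnitude more taps than at a practical roll-off. The sketch below uses the standard closed-form RRC impulse response; the function names, the 99.99% energy target, and the 2 samples/symbol are illustrative assumptions:

```python
import math

def rrc_tap(t: float, beta: float) -> float:
    """Root-raised-cosine impulse response at time t (in symbol periods),
    with guards for the two removable singularities of the closed form."""
    if abs(t) < 1e-12:
        return 1.0 - beta + 4.0 * beta / math.pi
    if beta > 0 and abs(abs(t) - 1.0 / (4.0 * beta)) < 1e-9:
        return (beta / math.sqrt(2.0)) * (
            (1 + 2 / math.pi) * math.sin(math.pi / (4 * beta))
            + (1 - 2 / math.pi) * math.cos(math.pi / (4 * beta)))
    num = (math.sin(math.pi * t * (1 - beta))
           + 4 * beta * t * math.cos(math.pi * t * (1 + beta)))
    den = math.pi * t * (1 - (4 * beta * t) ** 2)
    return num / den

def taps_to_capture(beta: float, energy_frac: float = 0.9999,
                    sps: int = 2, max_sym: int = 5000) -> int:
    """Symmetric tap count needed to capture `energy_frac` of the
    pulse energy at `sps` samples/symbol (window truncated at max_sym)."""
    h = [rrc_tap(k / sps, beta) for k in range(-max_sym * sps, max_sym * sps + 1)]
    total = sum(x * x for x in h)
    mid = max_sym * sps
    acc = h[mid] ** 2
    n = 0
    # grow a symmetric window around t = 0 until it holds the target energy
    while acc < energy_frac * total:
        n += 1
        acc += h[mid - n] ** 2 + h[mid + n] ** 2
    return 2 * n + 1

for beta in (0.15, 0.001):
    print(f"roll-off {beta}: {taps_to_capture(beta)} taps for 99.99% energy")
```

A lab experiment can afford the near-rectangular spectrum of roll-off 0.001 because its filtering runs offline; an ASIC paying silicon area per tap cannot, which is exactly the resource gap the paragraph above asks about.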