Hussam Batshon, NEC Laboratories America Inc., USA
Zuqing Zhu, Univ. of Science & Technology of China, China
Machine learning (ML) is playing an increasingly important role in many areas of optical communications research. It is crucial in cases where clear analytical solutions are unavailable or computationally prohibitive (e.g., system modeling and quality-of-transmission (QoT)/performance estimation). Moreover, in the past couple of years, ML has also shown promising performance in other parts of the field, such as network monitoring and failure detection and correction. However, to bring ML from research into real-world applications, many technical questions must be answered and commercial challenges considered. For instance, one question is whether traditional QoT/performance estimation schemes based on deterministic models and algorithms are still needed, or whether they should be replaced with more adaptive and intelligent ML-based algorithms. Meanwhile, commercial challenges include securing sufficient access to training data and navigating differing laws and regulations, to name a few. Other technical obstacles include functionality and reliability, as well as model size and hardware requirements.
In this workshop, we invite experts on ML and optical networking to discuss the future role of ML in optical networks. The topics covered in this workshop include, but are not limited to:
- What are the steps needed to commercialize ML in optical transmission applications?
- How long will it take to make this a reality?
- What are the obstacles that may prevent translation from research to commercial products?
- How to develop a practical ML-based QoT/performance estimation technique that can be put into production networks?
- What are the prerequisites to deploy an ML-based QoT/performance estimation technique?
- How to generate standard and reliable data sets to train and test ML-based QoT/performance estimation models?
- Will scalability and universality be issues for ML-based QoT/performance estimation techniques?
- Can we trust ML-based QoT/performance estimation and operate it fully autonomously? What are the failure cases, and will there be any vulnerability to careless errors or intentional attacks?
- If we only want to partially replace the traditional QoT/performance estimation scheme with an ML-based one, what is the proper approach to take?
- Will advances in telemetry-based network monitoring promote the application of ML-based QoT/performance estimation?
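To make the questions above concrete, the sketch below frames QoT estimation as supervised regression over lightpath features. The feature set (span count, span length, launch power), the synthetic ground-truth OSNR model, and all coefficients are illustrative assumptions, not part of the workshop scope or any vendor's technique; a production scheme would be trained on measured telemetry rather than synthetic data.

```python
# Hypothetical sketch: ML-based QoT estimation as supervised regression.
# All features and the toy OSNR model below are assumptions for illustration.
import math
import random

random.seed(42)

def features(spans, span_len_km, power_dbm):
    # Assumed feature map: intercept, log-scaled span count, length, power.
    return [1.0, math.log10(spans), span_len_km, power_dbm]

def synth_sample():
    """Generate one synthetic (features, measured-OSNR) training pair."""
    spans = random.randint(1, 20)
    length = random.uniform(60.0, 100.0)
    power = random.uniform(-2.0, 3.0)
    # Toy ground-truth OSNR model (an assumption): degrades with span count
    # and span length, improves slightly with launch power.
    osnr = 35.0 - 10.0 * math.log10(spans) - 0.02 * length + 0.5 * power
    osnr += random.gauss(0.0, 0.1)  # measurement noise
    return features(spans, length, power), osnr

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                       # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# Train on synthetic lightpaths, then predict OSNR for a new lightpath.
train = [synth_sample() for _ in range(400)]
w = fit_least_squares([x for x, _ in train], [y for _, y in train])

def predict_osnr(spans, span_len_km, power_dbm):
    return sum(wi * xi for wi, xi in zip(w, features(spans, span_len_km, power_dbm)))
```

Even this toy version surfaces several of the workshop's questions: the model is only as good as its training data, it extrapolates poorly outside the feature ranges it was trained on, and nothing in it flags when a prediction should not be trusted.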
To be determined.