The Optical Networking and Communication
Conference & Exhibition

San Diego Convention Center,
San Diego, California, USA

Panel III: Optical Interconnect and Computing for Scaling Machine Learning Systems

Tuesday, 10 March
14:30 - 16:00
Expo Theater I

Moderator: Ryohei Urata, Technical Lead/Manager, Google, USA

Panel Description:

Until recently, the architectures and systems for executing Machine Learning (ML) workloads were built on the traditional optical interconnects used for datacenter networking or high-performance computing. With the rapid rise in ML workloads, and because ML network architectures/protocols and computation requirements differ from those of traditional datacenter architectures and compute, leading cloud operators, component/system vendors, and a number of startups are exploring optical technologies for more efficient and scalable ML systems. These efforts fall into two categories: (a) building higher-performance (bandwidth, latency) optical interconnects while improving power, cost, and density, and (b) exploring optics for computation itself, in both cases leveraging the unique characteristics of ML systems.

This panel will introduce the views of several industry leaders in this area, followed by discussion among panelists and the floor. 

Speakers:

Paolo Costa, Principal Researcher, Microsoft, UK
Optical Networking for Machine Learning: The Light at the End of the Tunnel?

Emerging workloads such as large-scale machine-learning training and inference pose challenging requirements in terms of bandwidth and latency, which are hard to satisfy with the current network infrastructure. Optical interconnects have the potential to address this issue by drastically reducing overall power consumption and providing ultra-low latency. Fully unleashing these benefits in practice, however, requires a deep rethinking of the current stack and a cross-layer design of the network, hardware, and software infrastructure. In this talk, I will review some of the opportunities and challenges as we embark on this journey.

Nicholas Harris, CEO, Lightmatter, Inc., USA
Artificial Intelligence Acceleration with Silicon Photonics

The rapidly growing demand for computational power to accelerate artificial intelligence applications has prompted investigations into both new devices and new computing architectures. Silicon photonics is typically viewed as a communications platform. Here, we will discuss a brief history of optical computing, what has changed, and how silicon photonics can be applied to the problem of accelerating artificial intelligence algorithms.

Benny Koren, VP Architecture, Mellanox Technologies, Israel
In-Network Computing

Connecting compute cores via a traditional network is considered inefficient compared to multi-core CPUs or GPUs. However, why not make the network work for you with offloads? If the network can do part of the processing, going through a network is not a waste but added value.

In this talk, we will discuss methods to accelerate ML using in-network computing.
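
To make the idea concrete, here is a minimal, hypothetical sketch (illustrative only; it is not Mellanox's SHARP protocol or any real API) of the gradient-aggregation offload that motivates in-network computing for ML training: if a switch sums the workers' gradient tensors in flight, each worker sends and receives a single tensor per allreduce instead of exchanging data with every peer.

    # Hypothetical illustration of in-network aggregation for distributed training.
    # A real deployment would do this in switch hardware; here the switch is
    # modeled as a plain Python function.
    from typing import List

    def switch_allreduce(worker_gradients: List[List[float]]) -> List[float]:
        # The "switch" accumulates each worker's gradient as it arrives...
        reduced = [0.0] * len(worker_gradients[0])
        for grad in worker_gradients:
            for i, g in enumerate(grad):
                reduced[i] += g
        # ...and returns the single reduced tensor, which the hardware would
        # broadcast back to every worker.
        return reduced

    # Four workers, each contributing a local gradient for the same parameters.
    grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
    print(switch_allreduce(grads))  # [16.0, 20.0]

With the reduction performed in the network, each worker's traffic is a single send and a single receive per allreduce regardless of the number of workers, which is the sense in which "going through a network" adds value rather than overhead.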

Mitchell Nahmias, CTO and Co-Founder, Luminous Computing
Addressing the Bottlenecks of Artificial Intelligence with Photonic Computing

In this talk, I describe how photonic computing can lead to staggering improvements in the efficiency of artificial intelligence algorithms by addressing their two main bottlenecks: data movement and matrix multiplication. I outline the ingredients that make this possible, covering advances in large-scale photonic manufacturing, and compare photonics with both digital and analog electronics.
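
As a rough framing of why these two bottlenecks dominate (general background, not a claim taken from the talk): a single dense layer

    \[
      y = \sigma(Wx + b), \qquad W \in \mathbb{R}^{m \times n},
    \]

requires mn multiply-accumulate operations per input vector, and in digital electronics fetching each weight from off-chip memory is commonly cited as costing on the order of a hundred times more energy than the multiply-accumulate itself, so both the arithmetic and the movement of W scale with the size of the weight matrix.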

Robert (Ted) Weverka, Senior Optical Physicist and IP lead, Fathom Computing, USA
Scalable Interconnects for Neural Networks

Rent’s rule expresses a power law for the number of interconnects crossing a boundary enclosing a number of logic gates in a portion of a system.  Systems that scale to large size have natural limits to the power law that can be realized, given by the geometry of the interconnects.  Rent exponents greater than 0.5 require multilayer interconnects and serializer-deserializers, with the number of layers and bandwidth multipliers growing with system size.

High Rent exponents are suggested by the geometry of human and animal brains, where the gray matter at the surface is largely connected by white matter throughout the volume. Achieving this kind of connectivity, which scales to ever-larger system sizes at finite connection bandwidth, requires interconnects that use surface-normal communication rather than die-edge connections. We explore systems that use optoelectronics on silicon to achieve this scaling.
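
For reference, the power law referred to above is the standard statement of Rent's rule (general background, not specific to this talk):

    \[
      T = k\,G^{p},
    \]

where T is the number of terminals crossing the boundary, G is the number of gates enclosed, k is the average number of terminals per gate, and p is the Rent exponent. In a planar layout with connections only at the die edge, the usable perimeter grows as G^{1/2} while the enclosed logic grows as G, so edge I/O at fixed per-connection bandwidth can sustain at most p = 0.5; surface-normal (area) I/O grows in proportion to G and can in principle sustain exponents approaching 1, which is the scaling argument behind the optoelectronic approach described above.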


Biographies:

Paolo Costa, Principal Researcher, Microsoft, UK

Paolo is a Principal Researcher in the Cloud Infrastructure Group in Microsoft Research and an Honorary Lecturer with the Department of Computing of Imperial College London. His current research brings together hardware, optics, networking, and application-level expertise to take a cross-stack view towards developing optical technologies for next-generation data-center networks.


Nicholas Harris, CEO, Lightmatter, Inc., USA

Nick is the President and CEO of Lightmatter. Before founding Lightmatter, he was a postdoctoral fellow at the Massachusetts Institute of Technology, where he received his PhD in Electrical Engineering and Computer Science. His doctoral thesis is titled “Programmable nanophotonics for quantum information processing and artificial intelligence”. Nick has authored 59 academic articles and 7 patents. He was awarded an Intelligence Community Postdoctoral Fellowship for his work on post-Moore’s Law computing technologies, and his graduate studies were supported by the National Science Foundation Graduate Research Fellowship. He was previously an R&D engineer working on DRAM and NAND circuits and device physics at Micron Technology.

Benny Koren, VP Architecture, Mellanox Technologies, Israel

Benny Koren, Mellanox VP of Architecture, rejoined Mellanox in 2010 and is responsible for Mellanox's switch and physical-layer products. Mr. Koren graduated cum laude with a B.Sc. in Electrical Engineering from the Technion, Israel Institute of Technology.

Mitchell Nahmias, CTO and Co-Founder, Luminous Computing

Dr. Mitchell Nahmias is the Chief Technology Officer and Co-Founder of Luminous Computing, a moonshot photonic computing company backed by Bill Gates that is developing a 1000x improvement over state-of-the-art AI chips. During his Ph.D. at Princeton, he helped create the field of Neuromorphic Photonics. Mitch has 60+ publications and 1000+ citations to his name and was a National Science Foundation Fellow.

Ryohei Urata, Technical Lead/Manager, Google, USA

Dr. Ryohei Urata is currently a technical lead/manager in the Platforms Optics Group, responsible for Google's datacenter optical technologies and the corresponding roadmap. Prior to joining Google, he was a research specialist at NTT Laboratories. He has over 135 patents/publications in the areas of optical interconnect, switching, and networking. He received his Ph.D. in electrical engineering from Stanford University.

Robert (Ted) Weverka, Senior Optical Physicist and IP lead, Fathom Computing, USA

Ted Weverka started working on systems and devices for optical computing in the early 1980s, developing adaptive neural networks and radar signal-processing systems. These analog systems grew to use volume-holographic adaptive weights for large-scale, high-speed systems. Ted founded Network Photonics, developing WDM digital communication systems for metro-area optical networks. He is currently developing a pioneering optoelectronic computer for artificial neural networks at Fathom Computing. Ted is a member of the graduate faculty at the University of Colorado Boulder and serves on the editorial board of Fiber and Integrated Optics.

Sponsored by: