The explosive growth of GPU-centric AI, especially training large language models (LLMs), is driving massive bandwidth demands within data centers (DCs), across DC interconnects (DCIs), and over wide area networks. As LLM training expands beyond single sites, distributed training will further increase networking requirements. Machine-to-machine traffic now exceeds user traffic, ushering in an era of networks built for AI.
This panel will examine optical networking challenges and solutions for scaling networks for AI across different network regions: low-latency scale-out DC networks, optical switching, spatial division multiplexing to scale DCI beyond the fiber Shannon limit, and distributed AI inference at the edge.
The key questions to address in this panel are:
- Will the AI-driven bandwidth explosion continue?
- Which AI workloads dominate traffic today, and which will dominate in the future?
- How will network architectures adapt to machine-driven traffic?
- What scaling solutions fit each network region?
- How will optical interconnects, spatial multiplexing, and optical switching evolve?
The panel will be divided into two topics.
Session I: Scaling Inside the DC (Scale-Out)
Session II: Scaling Between DCs (DCI)
Organizers
- Ashwin Gumaste, Microsoft, USA
- Behnam Shariati, Fraunhofer Heinrich Hertz Institute, Germany
- Jesse Simsarian, Nokia, USA
- Anbin Wang, Alibaba Group, China
- Kang Ping Zhong, Hong Kong Polytechnic University, Hong Kong