As AI workloads grow in complexity and scale, data center networks face mounting pressure to deliver ultra-low latency, massive bandwidth, and efficient interconnect topologies. This panel will examine the key challenges in scaling up and scaling out AI clusters, including bottlenecks in current data center fabrics, the evolving role of optical and co-packaged interconnects, and the trade-offs between centralized and distributed architectures. Experts from academia and industry will share perspectives on emerging technologies and design innovations driving the next generation of scalable, energy-efficient AI infrastructure.
Moderator
- Di Liang, Professor, University of Michigan