
Networking from a High-Performance Computing Perspective

By OFC


We have known since the dawn of the Computer Age that one thing you can almost always count on, like glorious weather in Los Angeles, is that computer processors tend to get faster, cheaper and smaller—a trend enshrined in Moore's Law.
 
High-performance computing (HPC) systems are no exception to this trend. They have been around since the 1960s, they continue to play a significant role across virtually every field of scientific research, and they get faster every year.
 
The current generation of cutting-edge supercomputers operates at speeds in the tens of petaFLOPS, that is, tens of quadrillions of floating-point operations per second. This brings us within striking distance of even more powerful machines exceeding 1 exaFLOPS, capable of performing more than a billion billion calculations per second. We may well achieve such exascale computing before the end of the decade. But what will it take to get there?
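To put those prefixes in perspective, here is a quick back-of-the-envelope comparison in Python; the 34-petaFLOPS figure is simply an illustrative stand-in for "tens of petaFLOPS," not a number from the post:

PETAFLOPS = 10**15   # one quadrillion floating-point operations per second
EXAFLOPS = 10**18    # one quintillion, i.e. a billion billion, operations per second

peak_today = 34 * PETAFLOPS   # illustrative "tens of petaFLOPS" machine
print(f"An exascale system would be roughly {EXAFLOPS / peak_today:.0f}x faster")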
 
From the network perspective, the road to exascale computers is paved with cost, according to Cyriel Minkenberg of IBM Research–Zurich, who gave a Wednesday afternoon talk on the subject, "HPC Networks: Challenges and the Role of Optics." IBM is one of the companies currently pursuing an exascale machine.
 
About 20 years ago, the approach to supercomputing shifted from using ever-faster individual processors to relying on more and more processors in parallel, now numbering in the tens of thousands. Connecting those separate processors certainly affects performance, and it also affects cost. In his talk, Minkenberg explored these networking costs.
 
Minkenberg showed a simple yet powerful mathematical model that estimates the cost as a function of three variables: the peak computing rate, the communication-to-computation ratio and the aggregate price performance. The point was not so much to plug in numbers and spit out a single bottom line as to reveal what role each of the cost variables plays in determining that bottom line.
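The post does not reproduce the formula itself, so the sketch below only illustrates how a model built from those three variables could fit together. The multiplicative form, the variable names and the sample numbers are assumptions for illustration, not Minkenberg's actual model:

def network_cost(peak_flops, comm_to_comp_ratio, price_performance):
    """Rough network cost estimate from the three inputs (illustrative only).

    peak_flops          -- peak computing rate, in FLOP/s
    comm_to_comp_ratio  -- network traffic per unit of computation, in bytes/FLOP
    price_performance   -- aggregate bandwidth bought per dollar, in (bytes/s)/$
    """
    required_bandwidth = peak_flops * comm_to_comp_ratio   # bytes/s the network must deliver
    return required_bandwidth / price_performance          # dollars

# Hypothetical inputs, chosen only to show how the terms interact:
# a 1 exaFLOPS machine, 0.1 bytes/FLOP, and 1 GB/s of network per dollar.
cost = network_cost(peak_flops=1e18,
                    comm_to_comp_ratio=0.1,
                    price_performance=1e9)
print(f"Estimated network cost: ${cost:,.0f}")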
 
He detailed the cost drivers at length, along with several options for addressing each of them. In the end, this allowed him to estimate quantities such as the total network cost and the total power consumption, and to demonstrate that power is not the main issue.
 
"Cost is really the primary constraint and not power— it's not even close," he said. He detailed some of the things that may drive down costs. Higher data rates would mean fewer cables, and having a smaller footprint would mean higher bandwidth for instance.
 
In the end, he concluded, "Improving the price performance is what would really change the game in these systems."
 

Posted: 26 March 2015 by OFC



The views expressed in this blog are those of the authors and do not necessarily reflect the views or policies of The Optical Fiber Communication Conference and Exposition (OFC) or its sponsors.