
A Side of Google Most People Will Never See

By OFC


Google is synonymous with internet search. The company also has a suite of well-known applications like Gmail, Google Maps, the Android operating system and the Chrome browser. Google Drive and its associated calendar, word-processing and spreadsheet apps are ubiquitous. And above all, Google continues to dominate online search with its eponymous engine, making it perhaps the only company whose name has entered the English language as a verb.
 
(Google it and you will even find a website that shows you how to Google it.)
 
That's the Google that most people know: the company that makes successful, consumer-facing products. But one side of the company few will ever see is the back-end infrastructure that keeps all those Google searches returning hits, namely the power, cabling, port and speed demands of the company's massive data centers. This was the subject of Sunday's short course #SC359, "Datacenter Networking 101."
 
Led by Cedric Lam and Hong Liu of Google, the course introduced and explored the concept of warehouse-scale computers: their massive scale and the unique challenges of speed, power and cost that come with computing at that scale. The course was well attended and filled with questions throughout, as attendees asked what Google does (and perhaps wondered what it plans to do) with regard to these massive data centers.
 
Scaling within its data centers happens every few years, Lam said, "to meet the ever increasing demand." What he meant is that as Google transitions from one generation to the next, making use of new technologies in switches, for instance, whose bandwidth grows by a "decade" (a factor of ten) every five years, the company also increases the number of ports at the same time.
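For a sense of that pace, here is a minimal sketch in Python of how per-port bandwidth compounds at the rate Lam described. Only the "decade every five years" growth rate comes from the talk; the starting port speed is a hypothetical example.
 
    # Illustrative only: switch bandwidth growing by a "decade"
    # (a factor of ten) every five years, i.e. ~58% per year.
    GROWTH_PER_5_YEARS = 10.0
    annual_growth = GROWTH_PER_5_YEARS ** (1 / 5)
 
    port_speed_gbps = 10.0  # hypothetical starting point
    for year in range(0, 16, 5):
        speed = port_speed_gbps * annual_growth ** year
        print(f"year {year:2d}: ~{speed:,.0f} Gb/s per port")
    # year 0: ~10, year 5: ~100, year 10: ~1,000, year 15: ~10,000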
 
Lam started the course by showing what he called the "Great Wall of Cables," a component rack with a mind-numbing number of wires spilling out, as a way of discussing the problem of cable management. Power consumption is obviously a major issue for computing at this scale, because there is a limit to how much heat can be removed, but with 131,072 separate cable links to handle, managing the cables themselves can become a limiting problem.
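To get a feel for how a cable count that large arises, here is a minimal sketch of the link arithmetic for a generic two-tier leaf-spine fabric. The switch counts below are invented, chosen only so the product lands on the figure above; they do not describe the topology Lam presented.
 
    # Illustrative only: cable count in a hypothetical two-tier fabric
    # where every leaf switch connects to every spine switch.
    # The sizes below are invented for illustration, not Google's design.
    leaf_switches = 512
    spine_switches = 256
 
    links = leaf_switches * spine_switches  # one cable per leaf-spine pair
    print(f"leaf-spine cables: {links:,}")  # 131,072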
 
He touched on the issue of power consumption throughout the presentation. In one interesting example, he discussed how, on a chip, perhaps half of the pins will be devoted to power or ground, making them unavailable for I/O.
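Here is a minimal sketch of what that pin budget implies for off-chip bandwidth. Only the "about half the pins go to power or ground" observation comes from the course; the package size, pins-per-lane figure and per-lane rate are hypothetical examples.
 
    # Illustrative only: a chip's pin budget bounds its I/O bandwidth.
    total_pins = 4000              # hypothetical package
    signal_pins = total_pins // 2  # roughly half lost to power/ground
    pins_per_lane = 4              # hypothetical: two differential pairs per duplex lane
    gbps_per_lane = 50             # hypothetical per-lane rate
 
    lanes = signal_pins // pins_per_lane
    print(f"{lanes} lanes -> ~{lanes * gbps_per_lane / 1000:.0f} Tb/s off-chip")
    # 500 lanes -> ~25 Tb/s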
 
For the most part, he took the audience on a quick tour of the various components of warehouse-scale computers, discussing the challenges and tradeoffs of the many competing technologies at every step along the way.
 
For instance, he discussed the tradeoff between port count and per-port bandwidth: a switch chip has a fixed aggregate capacity, so more ports generally means less bandwidth per port. He brought up optical switching and showed comparisons of the switching capacity, size, power consumption range and other functionality of competing products. "When you are thinking about building optical switches, these are the metrics that you should be thinking about," he said.
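A minimal sketch of that tradeoff, assuming a switch chip with a fixed aggregate capacity; the 12.8 Tb/s figure below is a hypothetical example, not one quoted in the course.
 
    # Illustrative only: at fixed aggregate capacity, port count and
    # per-port bandwidth trade off directly against each other.
    capacity_gbps = 12_800  # hypothetical switch chip
 
    for port_speed in (100, 200, 400):
        ports = capacity_gbps // port_speed
        print(f"{ports:4d} ports x {port_speed} Gb/s = {capacity_gbps / 1000:.1f} Tb/s")
    # 128 x 100G, 64 x 200G, 32 x 400G: same chip, different radix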
 
He also weighed the merits of monolithic switch boxes, where all the switches rely purely on optical links, against integrated architectures, where pods of linked cores are connected with optical links.
 
One thing was clear, perhaps the elephant in the room: as massive as warehouse-scale computers are today, the one certainty is that they will not remain static. They will continue to grow as consumer demand continues to grow, though data centers are a side of Google the consumer may never see.
 
 

Posted: 23 March 2015 by OFC



The views expressed in this blog are those of the authors and do not necessarily reflect the views or policies of The Optical Fiber Communication Conference and Exposition (OFC) or its sponsors.