13 February 2012 12:57:46 PM
With optical the only transmission medium able to stay ahead of accelerating global demand for broadband, new challenges are emerging. This is an exciting theme being explored at OFCNFOEC 2012.
Many of us are familiar with the traditional drivers of growth. In the past, increasing voice traffic was the leader and it existed largely in the telecom realm. The fiber optic network focus was on expanding metro, long haul and submarine capabilities.
More recently, however, data and video have come to trump voice traffic, and fiber has gone where it has never gone before: to the home, the townhome, the condo and the apartment.
Together with accelerating wireless growth and a global awareness of the importance of bringing broadband to more people, the fiber optics industry must now meet unprecedented demand.
“The whole interconnect fabric is blowing up,” says Arlon Martin, vice president of sales and marketing at Kotura, a silicon photonics company based in California. Data center growth at the Googles, Facebooks and Yahoos accounts for much of this, but Martin sees a similar trend in the traditional central offices operated by service providers. In fact, he sees any clear differentiation between data centers and COs starting to break down.
As though these accelerating network drivers aren't enough, more are coming, including the potential impact of cloud computing. OFCNFOEC has always been good at spotting new drivers in their nascent stages—I recall attending an OFC short course on FTTH by Paul Shumate a decade before FTTH reached any appreciable commercial level—and cloud computing is no exception. George Clapp and Doug Freimuth will address this topic in a special short course on Tuesday, March 6 from 9 am to noon titled “Cloud Computing and Dynamic Networking.”
The impact of cloud computing on the network is actually one that the research and engineering community has been studying for some time and that has some critical mass, says Clapp. For example, the Global Lambda Integrated Facility (GLIF) identifies itself as a “virtual organization promoting the paradigm of lambda networking.” GLIF suggests that such virtualization is best carried via wavelengths. Another organization, terena.org, focuses on the integration of high-bandwidth use such as 4k, 3D HD and DVTSplus with 40 Gbps and 100 Gbps transmission systems.
In short, Clapp believes cloud computing will have a substantial impact on the network. “A key aspect of virtual machines is that the bandwidth requirements can be quite large and many can be provisioned in a very short time,” he observes. “A library of virtual machines could require rapid deployment.”
Coupled to that is the human tolerance for delay. Optimally, virtual machine images would be provisioned in milliseconds rather than full seconds. It is another topic that Clapp and colleagues are scrutinizing.
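A rough back-of-envelope calculation shows why this pushes on the optical layer. The image size and target times below are illustrative assumptions, not figures from Clapp:

```python
# Back-of-envelope: link rate needed to move a VM image within a target time.
# Image size and deadlines are assumed for illustration only.

def required_gbps(image_gigabytes: float, seconds: float) -> float:
    """Gigabits per second needed to transfer an image in the given time."""
    return image_gigabytes * 8 / seconds  # 8 bits per byte

# A hypothetical 10 GB VM image:
print(required_gbps(10, 1.0))  # in one second: 80.0 Gbps
print(required_gbps(10, 0.1))  # in 100 ms:     800.0 Gbps
```

Even under these modest assumptions, millisecond-scale provisioning quickly outruns a single 10 Gbps link, which is consistent with Clapp's point that many large flows may need to be set up in a very short time.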
The near-term industry solution, for both the accelerating volume of data in general and its virtualized portion, is rows of blade servers operating at high data rates.
This has already begun to occur. “A data center may have 100,000 server blades or even a million server blades with 40 percent, maybe 60 percent, of those having 10 gigabit transceivers,” says Martin. This again suggests the potential need for 100 Gbps gear, although prices are currently seen as too high. Not surprisingly, the industry workhorse has become 10 Gbps. Martin suggests more than 10 million 10 Gbps units are currently being produced annually.
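Martin's figures imply striking aggregate interconnect bandwidth. A minimal sketch of the arithmetic, using the low and high ends of his ranges:

```python
# Aggregate interconnect bandwidth implied by Martin's figures.
# num_blades and fraction_10g come from the quoted ranges in the article.

def aggregate_tbps(num_blades: int, fraction_10g: float) -> float:
    """Total terabits per second if a fraction of blades carry 10 Gbps ports."""
    return num_blades * fraction_10g * 10 / 1000  # 10 Gbps each, 1000 Gb per Tb

print(aggregate_tbps(100_000, 0.4))    # low end:  400.0 Tbps
print(aggregate_tbps(1_000_000, 0.6))  # high end: 6000.0 Tbps
```

At the high end, a single facility's internal fabric would carry on the order of petabits per second, which helps explain why per-link economics, not raw capability, keep 100 Gbps out of this role for now.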
But this is a temporary solution, and many believe that if we are going to reach the exascale computing stage that Greg Papadopoulos will discuss in his plenary talk at OFCNFOEC 2012, there will have to be fundamental changes in the current network. A goal of DARPA’s CORONET program, for example, is to “revolutionize the operation, performance, security and survivability of the United States' critical internetworking system by leveraging technology developed in DARPA's photonics component and secure networking programs.” CORONET stands for Dynamic Multi-terabit Core Optical Networks.
Martin and a growing number of optical engineers, and telecom engineers in general, believe this will only fully happen as the photon continues to take over tasks the electron now performs. That includes developing an onboard optical engine that can speed data flow within and between computers without being bogged down by the bulkier electron. Martin contrasts a 100 Gbps system on a chip, built simply and cheaply by one or two companies, with the current approach of assembling parts from multiple companies for each 100 Gbps transceiver that goes out the door.
It is through these advances that optical networking will again be able to stay ahead of the ocean of data that continues to grow in unrelenting fashion. The dialogue continues in force at OFCNFOEC 2012.