By Casimer DeCusatis
When it comes to optically interconnected network routers, do you prefer your boxes white or black? No, you haven’t accidentally stumbled onto the Martha Stewart blog; we’re not talking about the actual color of the equipment, or whether it clashes with the microwave. We’re discussing whether you believe in “bare metal” hardware that requires you to install your own operating system (white box), or heavily pre-integrated hardware and software solutions (black box). This intriguing question will be examined in depth during the always popular OFC rump session on Tuesday night.
Admittedly, the terminology can be confusing. Generally speaking, white box switches refer to the use of generic, off-the-shelf hardware, often in the forwarding plane of a software-defined network (SDN). The term refers to using “blank” hardware, which can be purchased from any vendor as a commodity part (hence without a vendor logo, another reason for the name white box) and customized with software from a different source. While the term commonly refers to hardware underlays in an SDN, some companies also use it in reference to switches they build themselves from commodity chips and configure using more traditional distributed networking architectures. Software testers may recognize the term white box testing, which refers to exercising the internal structure of a program to ensure that every code path is executed at least once. Similarly, white box routers and switches require testing under a software control plane of some sort, and are often used in disaggregated equipment designs.
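For readers less familiar with the testing analogy, here is a toy sketch (hypothetical Python, purely illustrative and not from the session) of what white box testing means in practice: the test cases are chosen by looking inside the function, so that every internal branch runs at least once.

```python
def classify_port(speed_gbps):
    """Toy routing helper with two internal code paths."""
    if speed_gbps >= 100:
        return "core uplink"
    return "access port"

# White box tests: one case per internal branch,
# so every code path is executed at least once.
assert classify_port(100) == "core uplink"  # exercises the 'if' branch
assert classify_port(10) == "access port"   # exercises the fall-through branch
```

A black box test, by contrast, would pick inputs from the specification alone, without knowing how many branches the implementation contains.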
The term black box is a bit more familiar, referring to any device whose internal workings are not visible to the end users. Black box routers contain all the hardware and software necessary to deliver their functionality, tightly integrated or bundled by a single supplier.
Disaggregated compute architectures have proven successful in addressing a variety of data center design issues, and are now being considered for optical data center networks. They promise a reduction in capital expense through the use of open source software running on inexpensive commodity hardware, and a departure from single-vendor, proprietary equipment. In principle, this approach fosters broad interoperability, a highly virtualized network, and shorter development cycles. But can the industry deliver on these promises, or will the reality of white box implementation fall short of expectations? Some feel that white box networking is too strongly inhibited by traditional black box incumbent vendors. Others point out that even black box solutions have drawbacks, including recent issues promoting network functions virtualization (NFV). Can optical interfaces even tell the difference between these two environments, or make any impact on evolving business models in this area?
These and many other issues will be discussed in what is sure to be a lively rump session at OFC. Following a brief introductory presentation by the session organizers, audience participation is strongly encouraged. Anyone interested in the topic is invited to come prepared to present their point of view; the rump session will consist of slide presentations alternating between black box and white box advocates.
I don’t suppose anyone wants to hear my gray box theory…but if you do, then drop me a line @Dr_Casimer and let’s get together at OFC 2016!
Posted: 17 March 2016