The Optical Networking and Communication
Conference & Exhibition

Los Angeles Convention Center,
Los Angeles, California, USA

Software-Defined Networking: The Next Frontier


By Casimer DeCusatis, Ph.D., Distinguished Engineer, IBM System Networking; CTO, Strategic Alliances; Member, IBM Academy of Technology, IBM Corporation | Posted: 26 November 2012

“May you live in interesting times” – reputed translation of an ancient Chinese curse and proverb, though its true origin remains in doubt

I can honestly say that this is the most interesting time to be involved in network engineering for at least the past 20 years. In fact, we’re going through a major discontinuity in the networking industry, perhaps the largest since Ethernet was first invented. One of the major drivers of this change is software-defined networking (SDN), a different approach to Ethernet design that overturns much of what we’ve taken for granted about how data flows should be manipulated. There will certainly be a lot of discussion about this during OFC/NFOEC 2013, so in this blog I’ll give you a small sample of what to expect.

At the heart of SDN is the distinction between distributed and centralized network control. Since its inception, Ethernet has taken a distributed approach to network management; each switch builds its own traffic forwarding tables, has its own management interface, and generally makes its own decisions about how packets will flow to the next hop in the network, until finally reaching their destination. We often envision switches as having a high speed data plane (for carrying traffic) and a lower speed control plane (for management and provisioning). The distributed approach has worked well for many years. It’s good for automatically building routing tables as packets flow across the network, provides quick and easy setup for new switches, and has proven scalability and reliability.
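To make the distributed model concrete, here is a minimal sketch (my own illustration, not any vendor's implementation) of how a classic Ethernet switch builds its forwarding table on its own: it learns which port each source MAC address arrived on, forwards to a known port when it can, and floods when it can't.

```python
class LearningSwitch:
    """Sketch of distributed control: each switch learns its own
    MAC-address-to-port table from the traffic it sees, with no
    central controller involved."""

    def __init__(self, name, ports):
        self.name = name
        self.ports = ports        # physical port numbers on this switch
        self.mac_table = {}       # learned mapping: source MAC -> ingress port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn (or refresh) which port the source address lives behind.
        self.mac_table[src_mac] = in_port
        # Forward to the known port, or flood everywhere except the
        # ingress port when the destination is still unknown.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]
```

Each switch makes this decision independently, hop by hop, which is exactly what gives the distributed approach its easy setup and resilience.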

So why should anybody want to change things? One reason is that distributed networks make it difficult to try out new protocols or network architectures. If you invent a new way to handle traffic flow, for example, it’s only meaningful if you can test it on a large scale distributed network, representative of what people use in the real world. Very few of us have access to such a network for development purposes, so network innovation was held back for many years. Eventually, a group of researchers from Stanford and Berkeley proposed that if we instead used a centralized controller for the network, it would become possible to “carve off” a section of the data network for development purposes, without affecting production traffic on the rest of the network.

A centralized approach to network management offers other advantages. Because a network controller would have visibility into the entire topology and traffic matrix, it would be possible to optimize end-to-end traffic flows. In fact, the controller might have access to data that the switches would never see, for example information about the applications and services running on the network. This could be used to create “application aware” or “service aware” networks which deliver more value to the end users. Centralized control also improves network convergence, makes tunnel placement more predictable, facilitates traffic engineering and bandwidth or quality of service management, and by some estimates enables up to 30% more traffic to be supported over the same installed network capacity.
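A quick way to see why the global view matters: with the whole topology in hand, a controller can compute an end-to-end least-cost path for a flow in one shot, rather than relying on each switch's local view. Here is a small sketch (the topology and link costs are invented for illustration) using a standard Dijkstra shortest-path computation.

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over the controller's global topology view.
    topology: {node: {neighbor: link_cost, ...}, ...}
    Returns the full hop-by-hop path the controller would install."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the destination.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

In an SDN design, the controller would run a computation like this per flow (possibly weighting links by current load, not just static cost) and push the resulting forwarding rules down to every switch on the path.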

A conventional network switch control plane implements a lot of complex networking protocols, each of which requires millions of lines of code. Each protocol may be thought of as a programming language, with its own usage rules. As with any language, the proper context and meaning can only be understood by someone familiar with both the vocabulary (syntax) and the grammar (semantics). Operating a typical networking device is thus analogous to learning to speak multiple languages. Furthermore, since each networking vendor builds their own proprietary operating system, the availability of new features and functions on these devices is limited by the development priorities of whoever built your equipment. For these reasons, there are advantages to introducing an open networking language and an open switch programming model, known as software-defined networking, similar to the use of Linux as an alternative to vendor proprietary server operating systems.

As you might expect, distributed architectures tend to be good at the same things which are weaknesses for centralized architectures, and vice versa. But the centralized approach is gaining momentum because its strengths align so well with highly virtualized cloud data centers and next generation Ethernet exchanges (which are poised to replace conventional SONET/ATM telco exchanges). Starting on the data center side, the widespread adoption of server and storage virtualization has led to some new requirements for the data network, including the following:

  • Huge number of endpoints. Today, physical hosts can effectively run tens of virtual machines, each with its own networking requirements. In a few years, a single physical machine will be able to host 100 or more virtual machines.
  • Large number of tenants fully isolated from each other. Scalable multi-tenancy support requires a large number of networks that have address space isolation, management isolation, and configuration independence.  Combined with a large number of endpoints, these factors will make multi-tenancy at the physical server level an important requirement for data centers in the near future.
  • Dynamic network and network endpoints. Server virtualization technology allows for dynamic and automatic creation, deletion and migration of virtual machines. Networks must support this function in a transparent fashion, without imposing restrictions (for example, due to delays in network provisioning or IP subnet requirements).
  • A decoupling of the current tight binding between the networking requirements of virtual machines and the underlying physical network

Rather than treat virtual networks simply as an extension of physical networks, these requirements can best be addressed by adopting a new approach based on a centralized SDN controller.
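As a toy illustration of the multi-tenancy requirement above, consider how a controller-side mapping can give every tenant its own virtual network identifier, so that two tenants can reuse the same IP address space without conflict. This sketch is my own simplification; real overlay encapsulations (VXLAN and the like) carry such an identifier in the packet header.

```python
class TenantNetworkMap:
    """Sketch: a centralized controller assigns each tenant a distinct
    virtual network ID (VNID) and tags that tenant's traffic with it,
    keeping overlapping address spaces fully isolated."""

    def __init__(self):
        self._next_vnid = 1
        self._vnids = {}   # tenant name -> virtual network ID

    def vnid_for(self, tenant):
        # Allocate a fresh VNID the first time a tenant appears.
        if tenant not in self._vnids:
            self._vnids[tenant] = self._next_vnid
            self._next_vnid += 1
        return self._vnids[tenant]

    def encapsulate(self, tenant, packet):
        # The outer VNID is what the physical network forwards on;
        # the inner addresses are private to the tenant.
        return {"vnid": self.vnid_for(tenant), "inner": packet}
```

Because the VNID is assigned centrally, adding a tenant or migrating a virtual machine requires no reconfiguration of the underlying physical switches, which addresses the decoupling requirement in the last bullet above.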

In the telco/service provider market, SDN allows for rapid, low cost deployment of tiered quality of service, which creates meaningful value to end users and thus increases revenue. This has been attempted in the past using either expensive private networks or packet labeling protocols like MPLS, but a centralized controller offers new and unique benefits. For example, if you’re a service provider who does a lot of video serving, you probably want your customers to get a better experience watching video on your network than on your competition’s network. SDN allows you to take into account factors which were previously not considered when you serve a video file, such as the type of file encoding used at the host server, the state of the network at the time this file is being downloaded, the instantaneous bandwidth of the client connection, and more. In this way, you can optimize TCP performance end-to-end and provide clearer videos with fewer glitches. Similarly, there’s room to optimize lots of network services, including virtual machine migration, multi-tenant clouds, real time video games, and more.

For many applications, SDN implies using a combination of a flow control protocol (such as the OpenFlow industry standard, which has been deployed by Google and other networking giants) with an overlay network that abstracts Layer 2 or 3 functions (your choices include VXLAN, NVGRE, DOVE, and more). All of this interfaces with cloud middleware, such as OpenStack. This past year has seen SDN startups acquired for over a billion dollars, endorsements of SDN as part of a larger networking roadmap by many different companies, and leading analysts predicting this field will shortly grow to over a $2 billion market. Network virtualization is the next big frontier, and OFC/NFOEC is the place to hear the latest about this exciting new technology. I hope to see you there, or we can chat sooner if you drop me a line on Twitter @Dr_Casimer.
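The core abstraction behind flow control protocols like OpenFlow is the match-action flow table: the controller installs prioritized rules, and the switch matches each packet against them, falling back to the controller on a table miss. Here is a hedged sketch of that model (the rule format and action strings are my own simplification, not the actual OpenFlow wire protocol).

```python
class FlowTable:
    """Sketch of the match-action model used by flow control protocols
    such as OpenFlow: rules are installed by a controller and matched
    in priority order; an unmatched packet goes back to the controller."""

    def __init__(self):
        self.rules = []   # list of (priority, match_fields, action)

    def install(self, priority, match, action):
        # Controller pushes a rule; keep highest priority first.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])

    def lookup(self, packet):
        # First rule whose every match field equals the packet's wins.
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"   # table miss
```

A specific rule (say, HTTP traffic to one server) can override a general one simply by carrying a higher priority, which is how a controller steers individual flows without touching the default behavior.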

Disclaimer: Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by IBM.



The views expressed in this blog are those of the authors and do not necessarily reflect the views or policies of The Optical Fiber Communication Conference and Exposition (OFC)  or its sponsors.
