By Lionel Snell, Freelance Editor
Speaking at the NetEvents APAC Press Summit in February 2013, Rick Bauer, Technical Programs Manager of the Open Networking Foundation (ONF), introduced the concept of Software Defined Networking (SDN) by saying:
“SDN is a network architecture. It’s not a protocol; it’s not a software product…. In the most simplistic way that we can explain it, it decouples the data plane from the control plane and allows the control plane to be abstracted either in the hardware of a particular interconnect or switch or across a server that can manage many of those networking devices across the enterprise or the telecom environment.”
What does this mean? Why did he go on to describe SDN as “a phenomenal wave of change”? And why is Gartner predicting a $2 billion SDN market by 2016?
SDN – decoupling the control plane
Data networks face increasing challenges as a result of virtualization, datacenter consolidation and the surge in mobile devices on the network. New techniques must be developed to meet these challenges, but who would want to run experiments – trying out new protocols or load-balancing techniques, say – across a large working network? The disruption during set-up, the risk that an experiment might unbalance such a complex system with unforeseen consequences – let alone the manual labor of reconfiguring all those switches and routers – would be unacceptable.
One solution is to provide a control plane distinct from the data plane carrying the data traffic – one that can broadcast configuration instructions from a central controller. If the network elements could receive and act on those instructions, then what had been a rigid network structure becomes a “software defined network” – dynamic and programmable to meet today’s evolving business needs.
Given such flexibility, the stage is set for a whole new “network programming” industry, creating specialist applications that could be loaded into the central controller. Effectively, the once fixed network structure has been virtualized. It could even be programmed to reconfigure in real time, optimizing traffic flow under all operating conditions.
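The decoupling described above can be sketched in a few lines of Python. This is a purely illustrative toy model – the class names, rule format and "send-to-controller" behavior are invented for this sketch and are not any real OpenFlow API – but it shows the essential idea: switches on the data plane forward packets by consulting match-action rules that a single central controller pushes down to all of them.

```python
# Toy model of SDN control/data plane separation (illustrative only;
# all names here are invented for the sketch, not a real OpenFlow API).

class Switch:
    """Data plane: forwards packets using rules pushed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # (match_fn, action) pairs, in install order

    def install_rule(self, match_fn, action):
        self.flow_table.append((match_fn, action))

    def forward(self, packet):
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return action
        return "send-to-controller"  # table miss: punt to the control plane

class Controller:
    """Control plane: one central program configuring many switches."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_rule_everywhere(self, match_fn, action):
        # One instruction here reconfigures every switch in the network.
        for sw in self.switches:
            sw.install_rule(match_fn, action)

# Reconfigure the whole (toy) network from one place:
ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.push_rule_everywhere(lambda pkt: pkt.get("dst_port") == 80, "out-port-2")

print(s1.forward({"dst_port": 80}))   # out-port-2
print(s2.forward({"dst_port": 443}))  # send-to-controller
```

The key point the sketch makes is that forwarding behavior lives in data pushed from one place, not in per-device manual configuration – which is what makes the network "programmable".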
ONF and the OpenFlow protocol
The NetEvents keynote speaker was Rick Bauer, Technical Programs Manager of ONF. Launched in 2011 by Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo!, ONF is a nonprofit, user-driven organization dedicated to the promotion and adoption of SDN through open standards development. ONF emphasizes an open, collaborative development process driven from the end-user perspective. In particular, ONF aims to accelerate the delivery and use of SDN technologies and standards, while fostering a vibrant market of products, services, applications, customers, and users.
The OpenFlow protocol is vendor-agnostic and can be added as a feature to commercial Ethernet switches, routers and wireless access points, making them “OpenFlow enabled”. A growing number of vendors have accepted the OpenFlow protocol as a standard, and incorporating these OpenFlow-enabled switches allows the easy deployment of innovative routing and switching protocols across the network – not only to optimize performance but also to address specific issues such as network flexibility to support virtual machine mobility, high-security networking and next-generation IP-based mobile networks.
ONF has worked to develop and promote the OpenFlow protocol, although, as Rick Bauer pointed out: “There are those who confuse the Open Networking Foundation as only a promoter of the OpenFlow protocol. While we are passionate about the OpenFlow protocol, we think it is only one of what will become a large family of interconnecting protocols to take advantage of the promise of SDN.” He went on to say that ONF member companies are working on a whole range of SDN architectures to suit telco applications, cloud, mobility, datacenter architecture and others.
SDN in practice
The OpenFlow protocol, and current SDN thinking, had its origins at Stanford University before being taken up by the newly formed ONF. This has led to the misunderstanding that today’s SDN is some sort of academic exercise, still a long way from practical application.
This is far from the truth – enough vendors have already implemented OpenFlow in their switches to conduct interoperability tests, and a growing number of open-source SDN controllers are already available. Leading carriers are already implementing SDN and the OpenFlow protocol – in October 2012, AT&T and IBM announced secure SDN cloud services enabling fast and highly secure shared cloud storage and cloud services.
Google has been using the OpenFlow protocol since 2010 to control traffic in a WAN connecting datacenters across North America, Europe and the APAC region and, according to Jim Wanderer, Google’s Director of Engineering, Platforms Networking: “Google couldn’t have achieved the results it has without SDN. We could have used other approaches, but there’s no way that the results would have been as effective. As a result of this success, Google is now committed to further SDN deployments.”
According to Dan Pitt, executive director of ONF: “Over the next few years, enterprises can look forward to a growing choice of SDN-enabled capabilities – from new hybrid cloud services to orchestration tools that enable full-blown network virtualization. But there’s no need to wait for tomorrow’s shrink-wrapped solutions: go-ahead enterprises can begin exploiting the benefits of SDN right now. IT shops that already tweak their networks by writing scripts to vendors’ APIs will find it easy to program OpenFlow-enabled switches. Using open-source controllers and as little as 500 lines of code, implementers have already automated configuration tasks across products from multiple vendors and gained more control and visibility over their network traffic.”
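The kind of short automation script Dan Pitt describes can be illustrated in outline. The policy entries, switch names and rule format below are invented for the example and do not correspond to any specific controller’s API; the point is simply that a network-wide intent, stated once, can be mechanically expanded into per-switch rules rather than configured by hand on each device.

```python
# Hypothetical sketch: expand one declarative, network-wide policy into
# per-switch flow rules. All names and the rule format are invented for
# this example, not taken from any real controller's API.

POLICY = [
    # (description, match fields, action) - listed highest priority first
    ("web traffic to servers", {"tcp_dst": 80}, "output:uplink"),
    ("block telnet",           {"tcp_dst": 23}, "drop"),
]

SWITCHES = ["edge-1", "edge-2", "core-1"]

def compile_rules(policy, switches):
    """Expand one declarative policy into a rule per (switch, entry)."""
    rules = []
    for i, (desc, match, action) in enumerate(policy):
        priority = len(policy) - i  # earlier entries win
        for sw in switches:
            rules.append({
                "switch": sw,
                "priority": priority,
                "match": match,
                "action": action,
                "note": desc,
            })
    return rules

rules = compile_rules(POLICY, SWITCHES)
print(len(rules))  # 2 policy entries x 3 switches = 6 rules
print(rules[0]["switch"], rules[0]["action"])  # edge-1 output:uplink
```

In a real deployment, the final step would push each generated rule to its switch over the controller’s southbound protocol – the expansion step above is where the “as little as 500 lines of code” lives.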
Dustin Kehoe, Associate Research Director, Telecommunications ANZ, for analysts IDC, chaired an SDN debate at the same NetEvents and he commented on the potential market disruption caused by SDN saying: “We forecast the SDN Market could represent 35 percent of datacentre switching by 2016. In real terms, excluding Japan, that’s $750m that the incumbent switching vendors stand to lose. We see the market wide open to software vendors like Oracle, virtualisation providers like VMware, and over the top providers and carriers. It’s a wide open playing field from our perspective.”
Speaking at that debate, Ed Chapman, VP Business Development and Alliances, Arista Networks, emphasized the adoption of SDN by tier one service providers and over-the-top operators: “They’re doing it to reduce cost, to reduce complexity, being able to provision services very fast. I don’t see a lot of enterprise deployments, yet. I think they just don’t have the same requirements that the service providers have, or the over-the-top providers, for this sort of architecture.”
Kash Shaikh, Senior Director, Product & Technical Marketing, HP, had a different opinion: “We are seeing more interest than we expected from the enterprises regarding our cloud network application. Even within enterprises, a lot of discussion has been around data centres, but we are seeing a lot of interest from the enterprises in campus deployments. It’s really about the applications – if you tell the enterprises what they can do; they’re looking for a solution to solve their business challenges. There is a lot of interest, and it’s across the board.”
Summing up, Rick Bauer was asked if he saw a bright future for SDN and he replied: “Very much so! I think the innovation that’s going on, the extent of the collaboration across networking companies, software companies, telcos and cloud providers is unprecedented in these time scales. It’s hard to believe that in only 18 months we have gone from promise and research to the exciting announcements being made today.”