Network virtualisation and VMware's quest for the holy grail: Page 2 of 3
By Edwin Yapp August 12, 2014
Turning traditional networking on its head
So exactly what is the relationship between OpenFlow, network virtualisation and SDN?
Traditionally, data flow on a network is directed by a switch running specialised software on top of very expensive, dedicated hardware. The control plane, which decides how traffic should flow, and the data plane, which actually forwards it, are typically designed to work together in a highly proprietary fashion within each switch.
This conventional architecture has been the basis of how network switches have been designed for the past three decades, and is the very reason industry leaders such as Cisco Systems and Juniper Networks are so profitable.
With network virtualisation, however, the data and control planes are effectively decoupled from each other, making it possible for the control logic of individual switches to be moved to a central controller. The concept is akin to how physical servers are virtualised into virtual machines.
Managing the communications between the data and control planes is achieved via the OpenFlow communication protocol.
The decoupling of the data and control planes means that the physical switch is responsible only for forwarding packets of data within the network, while the intelligence and control of the network reside in software. This makes it easier for users to deploy and manage the underlying network resources.
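The split described above can be sketched in a few lines of code. This is a deliberately simplified toy model, not any vendor's API or the OpenFlow protocol itself: a hypothetical central controller computes forwarding rules and pushes them into each switch's flow table, while the switch merely matches packets against those rules.

```python
# Toy model of a decoupled control/data plane (illustrative only).
# The controller holds the network-wide logic; switches only match
# packets against rules the controller has installed.

class Switch:
    """Data plane: forwards packets by looking up installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination address) -> action (output port)

    def install_rule(self, dst, out_port):
        # Rule pushed down from the controller (the role OpenFlow plays).
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # No local intelligence: unknown destinations are simply dropped.
        return self.flow_table.get(packet["dst"], "drop")

class Controller:
    """Control plane: centralised view that programs every switch."""
    def __init__(self):
        self.switches = []

    def add_switch(self, switch):
        self.switches.append(switch)

    def set_route(self, dst, port_map):
        # One central decision reconfigures every switch on the path.
        for sw in self.switches:
            sw.install_rule(dst, port_map[sw.name])

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.add_switch(s1)
ctrl.add_switch(s2)
ctrl.set_route("10.0.0.5", {"s1": 2, "s2": 1})

print(s1.forward({"dst": "10.0.0.5"}))  # → 2 (port chosen centrally)
print(s2.forward({"dst": "10.0.0.9"}))  # → drop (no rule installed)
```

The point of the sketch is the division of labour: the `Switch` class contains no routing logic at all, which is why such forwarding can run on commodity hardware while the decision-making lives in software.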
Amongst the key advantages touted for network virtualisation are more efficient use of network resources compared with traditional networks, and the ability to configure switches in hours instead of weeks.
It also reduces network administration costs and capital expenditure for enterprises, as hardware switches and routers can be swapped out in favour of commodity x86-based servers.
This is why network virtualisation, and by extension SDN, could turn traditional networking technology on its head. It is also why VMware made a one-billion-dollar bet on Nicira, knowing that its core product, server virtualisation software, is beginning to slow down as more data centres get virtualised.
The technology is also part of the basis for VMware's long-term strategy, embodied in its next wave of growth into the software-defined data centre.
Meanwhile, industry stalwarts such as Hewlett-Packard, Cisco and Juniper – which are themselves making huge investments into these technologies – are also concerned about upstarts such as Nicira and other players such as Big Switch Networks, Extreme Networks, and Plexxi, to name a few.
Such is the potential of this technology that InformationWeek editor-at-large Charles Babcock wrote that Nicira is tackling one of the largest remaining problems in virtualising the data centre as it tries to give the enterprise network the same flexibility and ‘mouldability’ as the virtualised servers it's attached to.
Similarly, Sigma Solutions chief technology officer Elias Khnaser said in a blog that “the network is the last bastion of resistance, yet it's also a critical component as the industry strives for global workload mobility.”
Speed and security
According to VMware's Casado, the two most significant advantages network virtualisation brings to the industry today are speed of implementation and advanced security features.
With network virtualisation, he says, provisioning time can be reduced to virtually nothing, and this is one of the most desirable features enterprises are looking for today.
“With traditional networking, the time to provision a server is a month or two, and the time to set up an app that requires its own network services topology could take up to six months.
“With server and network virtualisation in place, we can reduce that to the time it takes to provision virtual machines,” Casado claims.
The chief technology officer of VMware’s networking arm also says the technology can enhance security like nothing before.
Arguing that 80% of today’s security spending lies in the perimeter of the data centre, Casado says this spending is disproportionate as only about 20% of the traffic goes through the perimeter of the network.
“Because you have more workloads in the data centre, the majority of the traffic, say 80%, never leaves,” he says. “In network virtualisation, we can apply security controls within the data centre through the use of ‘micro segmentation.’
“This means that every application gets its own micro segment so in the event of a security breach or compromise, that attack is confined within that segment.”
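Micro-segmentation as Casado describes it can be sketched as a default-deny policy between per-application segments. The class and function names below are illustrative assumptions, not the API of VMware NSX or any real product; the sketch only shows why a compromise in one segment cannot spread laterally.

```python
# Illustrative sketch of micro-segmentation (hypothetical names).
# Each application gets its own segment; cross-segment traffic is
# denied unless explicitly whitelisted.

class MicroSegment:
    def __init__(self, app):
        self.app = app
        self.allowed = set()  # peer segments this app may reach

    def allow(self, other_app):
        self.allowed.add(other_app)

def can_reach(segments, src_app, dst_app):
    # Traffic within a segment is permitted; everything else is
    # default-deny unless explicitly allowed.
    if src_app == dst_app:
        return True
    return dst_app in segments[src_app].allowed

segments = {app: MicroSegment(app) for app in ("web", "app", "db", "hr")}
segments["web"].allow("app")  # web tier may talk to the app tier
segments["app"].allow("db")   # app tier may talk to the database

print(can_reach(segments, "web", "app"))  # → True: whitelisted path
print(can_reach(segments, "web", "db"))   # → False: an attacker who
                                          #   compromises 'web' cannot
                                          #   reach the database directly
```

This is the posture Casado is pointing at: the 80% of traffic that never leaves the data centre still crosses a policy boundary at every segment edge, rather than only at the perimeter.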
Casado argues that this is the very reason the financial services industry (FSI), the public sector and large enterprises are looking to adopt network virtualisation: it gives them a better security posture, and the technology “can reduce attacks as it can be used to insert security services deep within the data centre.”
Asked what the next inflection point for network virtualisation is, Casado says it would be to provide the network with much more contextual information pulled from the edge of the network.
“Because the software layer lives on the server and is very close to the virtual machines and the apps, you can actually obtain information from the apps, something that traditional networking gear has never been able to do before.
“For instance, we can know who a host is, what apps are running, how much data needs to be sent, and how many resources are being used – networking has never had such information before as, traditionally, all it saw was packets of data.
“This is a paradigm shift as we can do networking services at a higher layer and we have semantically meaningful policies and security services – all of which are the holy grail of networking,” Casado says.
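The 'semantically meaningful policies' Casado describes can be illustrated with a short sketch. All names below are hypothetical: the point is that because the virtual switch sits on the server beside the workloads, a rule can act on what an application *is*, rather than on raw packet headers.

```python
# Illustrative sketch: context-aware policy at the network edge.
# Flow records are assumed to carry application-level context that a
# traditional packet-only switch would never see.

flows = [
    {"host": "vm-web-01", "app": "nginx",       "bytes": 120_000},
    {"host": "vm-db-01",  "app": "postgres",    "bytes": 4_500_000},
    {"host": "vm-web-02", "app": "cryptominer", "bytes": 900_000},
]

ALLOWED_APPS = {"nginx", "postgres"}  # hypothetical whitelist

def semantic_policy(flow):
    # A 'semantically meaningful' rule: decide by application identity,
    # not by IP address or port number.
    return "allow" if flow["app"] in ALLOWED_APPS else "quarantine"

for f in flows:
    print(f["host"], semantic_policy(f))
# → vm-web-01 allow
# → vm-db-01 allow
# → vm-web-02 quarantine
```

A traditional switch, seeing only packets, could at best guess at the third flow from its port numbers; the edge-resident software layer can name the offending application outright.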
Roy Illsley, principal analyst with Ovum, concurs with the notion that the No 1 benefit of SDN is its ability to ‘action changes’ in the network to support variable demand from applications.
Speaking to DNA via email, he says that this reduces the number of administrators needed and reduces the time to implement changes, from weeks to seconds (in the most automated cases).
“The second benefit of using commodity hardware is now less of an objective mainly because of refresh cycles. This will become more significant as the technology and standards evolve, but in the short term organisations are seeing this as a secondary benefit not the primary one,” says Illsley.