SDN is an umbrella concept that disrupts many aspects of traditional networking architecture. SDN's fundamental design philosophy is to decouple the control plane from the forwarding plane within the overall routing architecture. However, there was also an additional goal: making the hardware less purpose-built. This deviation from purpose-built hardware meant that general-purpose hardware could perform any function of a network element.
Therefore, in an SDN environment, this decoupled control plane becomes the SDN controller. The routers and other network elements are deployed virtually on general-purpose hardware and configured by the SDN controller. While this may seem like a simple client-server interaction, there is more to what makes SDN tick.
This post is all about exploring the underlying technologies that realize the vision of SDN. We explore various pieces of technological innovation that play a pivotal role in orchestrating an end-to-end SDN deployment. We have categorized them under three heads:
1. Hardware & OS: The hardware, firmware, and OS-level innovations contributing to the adoption of SDN.
2. Protocols: The protocols that drive an SDN-based architecture.
3. Application software: The application software components that make SDN work in a real-world environment.
Hardware and OS Innovations
The choice between purpose-built and general-purpose hardware is a tricky one. A layer of software over general-purpose hardware can act as any network element. However, such an arrangement cannot match the throughput requirements of a busy network; purpose-built hardware delivers well on that front. Therefore, the success of SDN depends on innovations that optimize general-purpose hardware's cost, efficiency, and performance.
1. Commercial Off The Shelf (COTS) Hardware
When applied to hardware, COTS refers to a commonly available hardware configuration: a general-purpose computer architecture suitable for most applications. The principle behind COTS hardware evolved from the original IBM PC specification. The IBM PC was one of the early personal computers designed by IBM, with a standardized interface. It fostered an open architecture that allowed third-party vendors to build peripheral devices and components based on the standard hardware specification. This sparked the personal computing revolution and the mass-market adoption of microcomputers, creating a standard package for computer hardware available off the shelf. Most importantly, it drove costs down and made personal computers affordable.
When used as the physical appliance representing a network element, COTS hardware becomes the underlying platform for hosting a software-based network function. This is the first step in achieving interoperability across vendors in an SDN.
2. Kernel Virtual Machine (KVM)
The kernel virtual machine (KVM) adds hypervisor capabilities directly to the operating system's kernel. Initially implemented for Linux, KVM allows a Linux-based host system to act as a type 1 hypervisor. This feature enables system administrators to partition a Linux host into multiple, isolated virtual machines, each running a guest operating system.
KVM is important for SDN because it is the enabler for deploying virtualized network functions (VNFs). A VNF is a software-based component of a network that performs a well-defined function. Instead of using a proprietary physical appliance for this purpose, a bundled network element consisting of VNF software on top of COTS hardware is far cheaper. Further, deploying such a virtualized setup is more efficient in terms of both resources and time. It utilizes the underlying compute and memory resources more judiciously, and it can be installed in seconds, compared to an appliance, which takes weeks to complete the whole cycle of ordering, shipping, and deployment.
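As a minimal sketch of how a VNF guest is brought up on a KVM host (the disk image name is hypothetical; the system paths and QEMU flags are standard Linux):

```shell
# Verify hardware virtualization support and that the KVM modules are loaded
grep -cE 'vmx|svm' /proc/cpuinfo   # a non-zero count means VT-x/AMD-V is present
lsmod | grep kvm                   # expect kvm plus kvm_intel or kvm_amd

# Boot a guest with KVM acceleration (vnf-router.qcow2 is a hypothetical VNF image)
qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
    -drive file=vnf-router.qcow2,format=qcow2 \
    -nic bridge,br=br0,model=virtio-net-pci
```

In practice, a management layer such as libvirt wraps these commands, but the underlying mechanism is the same kernel-accelerated virtual machine.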
3. Data Plane Acceleration Protocols
By using COTS hardware to deploy VNFs, network operators gain an immediate advantage in terms of CAPEX. Combined with SDN, this benefit subsequently extends to reduced OPEX. But there are trade-offs.
A COTS-based computer usually relies on PCIe interfaces for connecting hardware peripherals such as Ethernet network interface cards and graphics adapters. These interfaces are under the control of the OS kernel, so all the data packets passing to and from them are throttled by the OS's scheduling. When such a computer is used as a network element serving a core network function that handles petabytes of traffic, these interfaces fall far short of the required packet-processing capability.
Purpose-built network hardware appliances rely on Application-Specific Integrated Circuit (ASIC) processors to enable packet processing at a phenomenal rate. Depending upon the network segment they are deployed in, these appliances can handle packet throughput on par with the most demanding loads experienced by a core Internet router. Replacing such custom-built hardware with a COTS-based system reduces the network segment's packet-processing ability by over 70%, creating a bottleneck for downstream network elements.
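To see the scale of the problem, consider the back-of-the-envelope arithmetic for a fully loaded 10 Gbps link carrying minimum-size Ethernet frames (the 20-byte figure is the standard per-frame preamble plus inter-frame gap):

```python
LINE_RATE_BPS = 10e9      # 10 Gbps link
FRAME_BYTES = 64          # minimum Ethernet frame
OVERHEAD_BYTES = 20       # preamble (8) + inter-frame gap (12)

# Each minimum-size frame occupies 672 bits of wire time
bits_on_wire = (FRAME_BYTES + OVERHEAD_BYTES) * 8

pps = LINE_RATE_BPS / bits_on_wire   # packets per second at line rate
budget_ns = 1e9 / pps                # time budget to process one packet

print(f"{pps / 1e6:.2f} Mpps, {budget_ns:.1f} ns per packet")
# → 14.88 Mpps, 67.2 ns per packet
```

A ~67 ns per-packet budget leaves no room for a kernel interrupt, a context switch, or a trip through the OS network stack, which is exactly why ASICs, and the kernel-bypass techniques below, exist.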
To circumvent this problem, newer hardware and software standards have been developed and have been evolving over the last few years. One of the significant developments in this direction is Single Root Input/Output Virtualization (SR-IOV). SR-IOV allows a physical PCIe device to present itself as multiple virtual functions that can be mapped to guest virtual machines with the help of the hypervisor. Along similar lines, the Data Plane Development Kit (DPDK) has been developed as a software solution to the same problem. It creates a fast-path route for handling packets through the network interface cards, bypassing the OS scheduling and the kernel's network stack to accelerate packet processing. DPDK is a Linux Foundation project and supports all the commonly available processor architectures, such as x86 and ARM.
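As an illustration, SR-IOV virtual functions are typically carved out of a physical NIC through the kernel's standard sysfs interface (the interface name `enp3s0f0` is hypothetical; this requires root and an SR-IOV-capable NIC):

```shell
# How many virtual functions this device supports
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create four virtual functions; each appears as its own PCIe device
# and can be passed through directly to a guest VM
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
```

Because each virtual function bypasses the host's software switch, a guest VNF gets near-native NIC performance without the hypervisor touching every packet.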
4. Containers

The container is the next revolution in the granularization of a computing entity. Unlike a virtual machine, which emulates the hardware, a container is like a miniature partition of the OS. It is a self-contained entity with its own file system and configuration. Containers are much quicker to spawn than virtual machines, and are therefore much more agile. They are also lightweight compared to VMs, as they do not need an additional hypervisor layer to manage guest OSs.
ETSI also recommends a container-based, cloud-native deployment as the preferred option for VNFs.
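A minimal sketch of what such a cloud-native VNF deployment can look like, expressed here as a hypothetical Kubernetes pod manifest (the names and image are illustrative, not a vendor product):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vnf-firewall
spec:
  containers:
    - name: firewall
      image: registry.example.com/vnf-firewall:1.0   # hypothetical VNF image
      resources:
        limits:
          cpu: "4"        # pin the packet-processing workload to known capacity
          memory: 8Gi
```

Declaring the VNF this way lets the orchestrator schedule, scale, and restart it like any other containerized workload.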
Protocols

SDN introduces a level of distributed computing within the network architecture. With an SDN controller configuring and querying the routers' forwarding tables, all these interactions need coordination. A few protocols play a crucial role in smooth orchestration between the controller, the routers, and all the VNFs under the controller's purview.
1. Transport Protocols
OpenFlow is the protocol spoken between the SDN controller and the forwarding-plane entities. It was the first protocol to enable SDN, with capabilities for remote administration of routers, switches, and other VNFs. However, OpenFlow hasn't seen active development in the last few years, as SDN vendors have shifted to their own proprietary protocols.
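To make this concrete, here is how a controller-style flow rule looks when installed by hand into Open vSwitch, a widely used OpenFlow-capable software switch (the bridge name and port numbers are illustrative):

```shell
# Forward traffic arriving on port 1 out of port 2, at priority 100
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"

# Inspect the flow table that a controller would otherwise program remotely
ovs-ofctl dump-flows br0
```

An SDN controller does exactly this over the OpenFlow channel, pushing match/action rules into the switch's flow tables instead of relying on distributed routing protocols.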
VXLAN has emerged as another technology that plays a vital role in SDN. It is a network overlay technology that enables network administrators to build virtual Layer 2 pathways over an existing Layer 3 network. It allows the SDN controllers and the routers to communicate over the virtual overlay network, thereby creating a virtual L2 adjacency.
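On Linux, a VXLAN overlay endpoint can be sketched with the standard iproute2 tooling (the interface names, VNI, and addresses are illustrative; this requires root):

```shell
# Create a VXLAN tunnel endpoint with VNI 42, riding over eth0
# on the standard VXLAN UDP port 4789
ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789 group 239.1.1.1
ip link set vxlan42 up

# Give the overlay an address so it behaves like a local L2 segment
ip addr add 10.200.0.1/24 dev vxlan42
```

Hosts joined to the same VNI see each other as L2 neighbors, even when separated by multiple routed hops.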
2. Security Protocols
SDN opens up a few vulnerabilities within the network. The protocol interaction between the controller and the edge devices introduces newer security holes and creates spoofing opportunities.
Part of this problem can be solved using the existing IPsec standard that has been developed for TCP/IP over the years. It facilitates authentication and encryption of data packets using the Encapsulating Security Payload (ESP).
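For a sense of the moving parts, one direction of an ESP tunnel can be keyed manually through the Linux kernel's IPsec interface (illustration only: the addresses and keys are dummies, and real deployments negotiate keys with an IKE daemon rather than hard-coding them):

```shell
# Security association: ESP in tunnel mode between two gateway addresses
ip xfrm state add src 192.0.2.1 dst 198.51.100.1 \
    proto esp spi 0x1000 mode tunnel \
    auth 'hmac(sha256)' 0x00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff \
    enc 'cbc(aes)' 0x00112233445566778899aabbccddeeff

# Policy: traffic between these subnets must use the ESP tunnel above
ip xfrm policy add src 10.0.1.0/24 dst 10.0.2.0/24 dir out \
    tmpl src 192.0.2.1 dst 198.51.100.1 proto esp mode tunnel
```

The same mechanism can protect the controller-to-switch channel, with ESP providing both integrity (authentication) and confidentiality (encryption).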
However, to make the protocol interactions between the controller and switches truly foolproof, additional layers of authentication and authorization are required. These mechanisms are part of the software enablers that we address next.
Application Software

Above the hardware, OS, and protocols lie the software components. The SDN entities, consisting of controllers, routers, switches, and VNFs, and their interactions are bound together by the following application software components.
1. Authentication and Key Management
All the protocol interactions between the SDN controller and the edge sites have to support a robust authentication mechanism. This is an important software pillar, based on existing technologies, that protects the entire infrastructure from potential fraud, attacks, and malware planting.
Zero-trust security is one of the popular security paradigms being adopted by network operators today. This concept builds upon the existing mechanisms around authentication, key management, and user access management. However, the fundamental philosophy behind zero-trust security is to assume that nothing is trustworthy in the first place. Instead of using default security policies that establish mutual trust between paired entities or systems within a perimeter, a zero-trust network architecture mandates a total cut-off of all intersystem interaction until trust is established.
This is also applicable for SDN, wherein all communication between the SDN controller and the edge devices follows a zero-trust policy.
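As a small sketch of what "no implicit trust" means in code: a controller endpoint can be configured so that every switch must present a valid certificate before any interaction is allowed, using Python's standard `ssl` module (the certificate file names are hypothetical):

```python
import ssl

# Zero-trust stance: the controller refuses any peer that cannot prove identity
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # no legacy protocol fallback
ctx.verify_mode = ssl.CERT_REQUIRED            # mutual TLS: client cert is mandatory

# Hypothetical file names: the controller's own identity, and the CA that
# signs the switches' certificates
# ctx.load_cert_chain("controller.crt", "controller.key")
# ctx.load_verify_locations(cafile="switch-ca.crt")

print(ctx.verify_mode is ssl.CERT_REQUIRED)  # → True
```

With `CERT_REQUIRED`, the TLS handshake itself fails for any switch that cannot present a certificate signed by the trusted CA, so unauthenticated devices never reach the application layer.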
2. License Management
License management is one of the most overlooked aspects of SDN deployment, yet it is a crucial element that governs the deployment of VNF resources across the network. Given the flexibility with which a service provider can deploy VNFs on the fly, the debate is about choosing a right-sized license configuration that balances CAPEX and OPEX against demand and throughput.
The choice lies between subscription and perpetual licensing. A subscription license allows the network operator to start small with a limited configuration and "pay as you grow" while gradually serving more demand. Perpetual licenses come at a huge upfront cost but allow network operators complete freedom.
In the case of SDN, a vendor can selectively control many aspects of the network resources using a licensing model, and the licenses can be configured along a range of network-resource parameters.
3. Operations Support System
Operations support systems (OSS) are the software applications that bind all the network resources, assets, customer workflows, and application configurations to enable a cohesive administration and management interface for the network operator.
OSS has undergone huge improvements over the decades, and many aspects of its functioning have been standardized by the TM Forum.
In the SDN world, the fixed assets and static network configurations are replaced by a general-purpose cloud computing infrastructure, atop which any kind of virtual infrastructure can be set up. Traditional OSS systems were never designed to address this radical shift in network architecture. Therefore, the modern approach to OSS involves building an orchestration layer that binds all the network's physical resources and virtual assets together. This is realized by using cloud management software.
New-generation OSS (NGOSS) systems have built-in support for cloud orchestration to manage and tweak the compute, memory, and network configurations of the network architecture, all on the fly. OpenStack is one popular option for deploying a cloud orchestration system for SDN.
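For instance, once an orchestration layer such as OpenStack is in place, standing up a new VNF instance is a single API call or CLI command (the image, flavor, and network names here are illustrative):

```shell
# Spin up a VNF instance through the orchestration layer
openstack server create \
    --image vnf-router-image \
    --flavor m1.large \
    --network mgmt-net \
    vnf-router-01
```

The NGOSS drives calls like this programmatically, so capacity can be added, resized, or retired in minutes rather than through a hardware procurement cycle.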