
2.2. The ONF architecture


For this new world of SDN to have a chance of success, it had to be standardized. This standardization was carried out by the ONF (Open Networking Foundation), which was set up under the auspices of large companies in the USA, following the proposal of this architecture by Stanford University and Nicira.

The architecture proposed by the ONF is shown in Figure 2.5. It comprises three layers. The bottom layer is an abstraction layer, which decouples the hardware from the software and is responsible for data transport. This level describes the protocols and algorithms that enable IP packets to advance through the network to their destination; it is called the infrastructure plane. The second layer is the control plane. This plane contains the controllers that provide control data to the data plane so that the data are channeled as effectively as possible. The ONF's vision is to centralize control in order to facilitate the collection of a great deal of information about all the clients: the centralized controller gains a form of global intelligence about the network. The infrastructure to be managed is distributed between the controllers. Of course, we need to take account of the problems caused by a centralized environment, and therefore replicate the decision-making elements.
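
As a toy illustration of this three-plane split, the Python sketch below models a centralized controller that builds a global view from links reported by the infrastructure plane and answers path requests coming from the application plane. All class and method names are invented for the illustration; they correspond to no ONF-defined API.

```python
from collections import deque

class Controller:
    """Toy centralized control plane: global topology view + path computation."""

    def __init__(self):
        self.topology = {}  # switch -> set of neighboring switches

    def register_link(self, a, b):
        # Infrastructure plane reports its links upward (southbound direction)
        self.topology.setdefault(a, set()).add(b)
        self.topology.setdefault(b, set()).add(a)

    def request_path(self, src, dst):
        # Application plane asks for connectivity (northbound direction);
        # the controller answers from its centralized view (BFS shortest path)
        parents, frontier = {src: None}, deque([src])
        while frontier:
            node = frontier.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for neigh in self.topology.get(node, ()):
                if neigh not in parents:
                    parents[neigh] = node
                    frontier.append(neigh)
        return None  # no path in the current global view

ctrl = Controller()
for link in [("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")]:
    ctrl.register_link(*link)
print(ctrl.request_path("s1", "s3"))  # e.g. ['s1', 's2', 's3']
```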

Controllers carry out various functions, such as provisioning the infrastructure or distributing loads across the different network devices to optimize performance or reduce energy consumption. The controller is also in charge of configuring network equipment such as firewalls, authentication servers and, more generally, all the servers necessary for the proper operation of the network. These machines must be placed in the most appropriate locations in order to enhance overall network operation.

Finally, the uppermost layer, the application plane, is responsible for the applications needed by the clients: storage, computation, network, security and management applications. This layer introduces the programmability of the applications, and sends the controller all the information necessary to open the software networks that meet the applications' needs. It also includes the control, orchestration and management applications that are vital to the proper functioning of the company's computing system. The application plane must be able to channel towards the controller the information required to open the network corresponding to the application. Any new service can thus be introduced quickly, giving rise to a specific network if it cannot be embedded in a pre-existing one.

Figure 2.5 shows the ONF architecture with its three layers: the application layer with its programmability, the control layer with centralized intelligence, and the infrastructure layer with its abstraction. We will come back to the interfaces between these layers, which are important for the compatibility of products from different vendors. The ONF has standardized the intermediary layer and the interfaces. Certain parts of the architecture have been taken up by other standardization organizations so as to turn them into de jure standards.


Figure 2.5. The ONF architecture

The ONF's general architecture can be presented in more detail, as shown in Figure 2.6. Once again, we see the infrastructure layer, but expanded into two planes: the physical plane and the logical plane. The physical plane is in charge of all the hardware and, more generally, the physical infrastructure. The logical plane corresponds to the establishment of the software networks built from virtual machines, which share the physical infrastructure in accordance with the rules handed down from the higher layers. This view of the architecture lets us clearly distinguish the hardware and networks that already exist in companies from the software added on top to provide the necessary flexibility. The architecture requires datacenters ranging in size from very small to very large, depending on the size of the company and on how resources are distributed towards the periphery. Telecom operators have not missed this opportunity, and have entered the market as Cloud providers. Companies such as Amazon and Google have gone straight for the goal, putting in place the infrastructure necessary to become major players in the world of telecommunications.


Figure 2.6. The SDN architecture

In the architecture shown in Figure 2.6, we see the control layer and the application layer with the northbound and southbound APIs (Application Programming Interfaces) between those layers, and the eastbound and westbound APIs with other controllers. The northbound interface handles communication between the application level and the controller. Its purpose is to describe the needs of the application and to pass along the commands required to orchestrate the network. Later on, we will describe the current standards governing this interface. The southbound interface carries the signaling needed between the control plane and the virtualization layer. Through it, the controller must be able to determine the elements that will make up the software network to be set up. In the other direction, the current network resource consumption must be fed back so that the controller has as full a view as possible of resource usage. The bandwidth needed to feed back these statistics may represent a few percent of the network's capacity, but it is crucial for optimization, which will improve performance by much more than a few percent.
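
To make the northbound exchange concrete, here is a hedged sketch: an application posts a declarative description of its needs to the controller over HTTP. The controller address, URL path and JSON fields are invented for illustration; real northbound APIs (OpenDaylight RESTCONF, ONOS intents, etc.) differ in the details.

```python
import json
import urllib.request

# Hypothetical northbound endpoint and payload: the application states *what*
# it needs; the controller decides *how* to realize it in the infrastructure.
CONTROLLER = "http://controller.example.com:8181"  # invented address
intent = {
    "source": "host-a",
    "destination": "host-b",
    "bandwidth_mbps": 100,   # service-level need, not a device command
    "latency_ms_max": 20,
}

req = urllib.request.Request(
    CONTROLLER + "/northbound/v1/intents",  # invented path
    data=json.dumps(intent).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```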

In addition to the two interfaces described above, there are the eastbound and westbound interfaces. The eastbound interface enables two controllers of the same type to communicate with one another and make decisions together. The westbound interface must also allow communication between two controllers, but ones that belong to different sub-networks. The two controllers may be compatible; if they are not, a signaling gateway is needed.
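
As a sketch only, an eastbound exchange can be pictured as peer controllers periodically advertising a summary of what they can reach. The message format, port and host name below are entirely invented for illustration; no standard eastbound protocol is implied.

```python
import json
import socket

# Invented eastbound message: a controller advertises the prefixes it can
# reach so that a peer controller of the same type can compute paths that
# span both domains. A westbound signaling gateway would translate such
# messages between incompatible controller types.
summary = {
    "controller_id": "ctrl-1",
    "reachable_prefixes": ["10.1.0.0/16", "10.2.0.0/16"],
    "epoch": 42,  # monotonic counter so a peer can discard stale state
}
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(summary).encode(), ("peer-controller.example.net", 7777))
```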

Figure 2.7 shows a number of important open source programs that have been developed to handle a layer or an interface. Starting from the bottom, in the virtualization layer, network virtual machines were standardized by ETSI in a working group called NFV (Network Functions Virtualization), which we will revisit in detail later on. Here, let us simply note that the aim of NFV is to standardize all network functions with a view to virtualizing them and facilitating their execution in places other than the original physical machine. To complete this standardization, open source software has been developed to ensure full compatibility between virtual machines.

The control plane includes the controllers. One of the best known is OpenDaylight – an open source controller developed collaboratively by numerous companies. As we will see later on, this controller contains a large number of modules, often developed in the interest of the particular company that carried out the work. Today, OpenDaylight is one of the major pieces in the Cisco architecture, as well as in those of other manufacturers. Later on, we will detail most of OpenDaylight's functions. Of course, there are many other controllers – open source ones as well – such as OpenContrail, ONOS, Floodlight, etc.
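
As an example of talking to such a controller, the sketch below reads the operational topology from OpenDaylight's RESTCONF interface. The URL path and the default admin/admin credentials correspond to classic OpenDaylight releases; newer releases expose RFC 8040 style paths (/rests/data/...) instead, so adjust to your version.

```python
import base64
import urllib.request

# Classic OpenDaylight RESTCONF path for the operational topology
URL = ("http://localhost:8181/restconf/operational/"
       "network-topology:network-topology")
token = base64.b64encode(b"admin:admin").decode()  # default ODL credentials

req = urllib.request.Request(URL, headers={
    "Authorization": "Basic " + token,
    "Accept": "application/json",
})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # JSON description of switches and links
```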

The uppermost layer represents the Cloud management systems; it is roughly equivalent to the operating system of a computer. It includes OpenStack, the system most favored by developers, but many other products exist, both open source and proprietary.
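
To give an idea of what such a system exposes, the sketch below creates a tenant network and subnet with the openstacksdk Python library; it assumes a cloud entry named "mycloud" exists in your clouds.yaml.

```python
import openstack

# Connect using the credentials stored under 'mycloud' in clouds.yaml
conn = openstack.connect(cloud="mycloud")

# Create an isolated tenant network and an IPv4 subnet inside it
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="192.168.50.0/24",
)
print(network.id, subnet.cidr)
```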

The southbound interface is best known under the name of its ONF standard: OpenFlow. OpenFlow is a signaling system between the infrastructure and the controller. The protocol was designed by Nicira and became a de facto standard under the ONF. OpenFlow transports the information that precisely defines the stream in question, in order to open, modify or close the associated path. It also determines the actions to be executed in either direction over the interface. Finally, OpenFlow allows the feeding back of measurements taken on the different communication ports, so that the controller has a very precise view of the network.
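
To make the southbound dialogue concrete, here is a minimal OpenFlow 1.3 application written with the open source Ryu framework (one controller option among several, not the ONF's reference code): when a switch connects, the application installs a table-miss rule so that unmatched packets are sent up to the controller. It can be run with ryu-manager.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    """Minimal OpenFlow 1.3 app: push a table-miss rule to each new switch."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match everything; send unmatched packets up to the controller
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```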

The northbound and southbound interfaces have been standardized by the ONF to ensure compatibility between the Cloud providers, the control software and the physical and virtual infrastructure. Most manufacturers conform to these standards to a greater or lesser degree, depending on their interest in the range of hardware already in operation. Indeed, one of the objectives is to allow companies with an extensive installed base to upgrade to the next generation of SDN without having to purchase an entirely new infrastructure. A transitional period is needed, during which we may see one of two scenarios:

 – the company adapts the SDN environment to its existing infrastructure. This is possible because the software layer is normally independent of the physical layer. The company's machines must be compatible with the manufacturer's hypervisor or container products. However, it is important to add to or update the infrastructure so that its capacity increases by at least 10%, and preferably by 20 or 30%, to be able to handle numerous logical networks;

 – the company implements the SDN architecture on a new part of its network and extends it little by little. This solution means that both the old generation of the network and the new one must be capable of handling the demand.


Figure 2.7. Example of open source developments

Now let us go into more detail about the different layers, starting again with the bottom layer. The physical layer is essential; it is designed around the most generic hardware possible in order to obtain the best cost/benefit ratio, while remaining suited to networking, i.e. equipped with the necessary communication cards and enough capacity to host the intended software networks. Performance is clearly of crucial importance, and the physical infrastructure has to be able to cope with what is at stake. One of the priorities, though, is to minimize the physical layer so as not to consume too much energy. With this goal in mind, the best approach is an algorithm that places the virtual machines appropriately, with a view to putting the highest possible number of physical machines on standby outside of peak hours.

Urbanization is becoming a buzzword in these next-generation networks. Unfortunately, urbanization algorithms are still at a very early stage of development and cannot yet handle multi-criteria objectives: only one criterion is taken into account, e.g. performance. An algorithm may be run on the criterion of load balancing; alternatively, it may take energy consumption into account by doing the opposite of load balancing, i.e. channeling the data streams along common paths so that a maximum number of physical machines can be placed on standby. The difficulty in the latter case is turning the resources back on as the workload of the software networks rises again. Certain machines, such as virtual Wi-Fi access points, are hard to wake from standby when external devices wish to connect: first, we need electromagnetic sensors capable of detecting these mobile terminals, and second, we need to send an Ethernet frame over a physical wire to the Wi-Fi access point using the Wake-on-LAN function.
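
To make the "opposite of load balancing" idea concrete, here is a minimal Python sketch with invented capacities and loads: a first-fit-decreasing heuristic packs virtual machines onto as few physical hosts as possible, so that the unused hosts can be put on standby, together with a helper that wakes a host again by sending the standard Wake-on-LAN magic packet (6 bytes of 0xFF followed by the target MAC address repeated 16 times). This illustrates the principle, not a production urbanization algorithm.

```python
import socket

def consolidate(vm_loads, hosts, capacity):
    """First-fit-decreasing packing: concentrate VMs on as few hosts as
    possible so the remaining hosts can be placed on standby (sketch)."""
    placement = {h: [] for h in hosts}
    used = {h: 0.0 for h in hosts}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for h in hosts:  # first host with room wins, so load piles up
            if used[h] + load <= capacity:
                placement[h].append(vm)
                used[h] += load
                break
        else:
            raise RuntimeError(f"no host has room for {vm}")
    standby = [h for h in hosts if not placement[h]]
    return placement, standby

def wake_on_lan(mac, broadcast="255.255.255.255"):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    raw = bytes.fromhex(mac.replace(":", ""))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(b"\xff" * 6 + raw * 16, (broadcast, 9))

vms = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.4, "vm4": 0.2, "vm5": 0.1}
placement, standby = consolidate(vms, ["h1", "h2", "h3"], capacity=1.0)
print(placement)            # {'h1': ['vm1', 'vm3', 'vm5'], 'h2': ['vm2', 'vm4'], 'h3': []}
print("standby:", standby)  # 'h3' can be powered down off-peak
# wake_on_lan("00:11:22:33:44:55")  # later, to bring a standby host back
```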
