Software Networks – Guy Pujolle
I.3. “Cloudification” of networks
Figure I.8 shows the rise of infrastructure costs over time. We can see that a speed increase implies a rise in infrastructure costs, whereas the income of telecommunication operators stagnates, partly because of fierce competition for new markets. It is therefore absolutely necessary to find ways to close the gap between costs and income. Among other reasons, two aspects are essential to launch a new generation of networks: network automation by means of an autopilot, and the choice of open source software, which reduces the number of network engineers required and avoids license costs for commercial software. Let us examine these two aspects before studying the reasons to turn to this new software network solution.
The automation of the network pilot is the very first reason for the new generation. The concept of the autopilot introduced here is similar to that of an aircraft's autopilot. However, unlike an aircraft, a network is very much a distributed system. To build an autopilot, we must gather all knowledge about the network – that is, contextualized information – either in every node, if we want to distribute the autopilot, or in a single node, if we want to centralize it. Centralization was chosen for two reasons: simplicity, and the avoidance of the congestion that would result from flooding knowledge packets throughout the network. This is the most important paradigm of this new generation of networks: centralization. The network is no longer a decentralized system; it becomes centralized. Attention will have to be paid to the security of this center by duplicating or triplicating the controller, which is the name given to this central system.
The controller is the control device that must contain all knowledge about users, applications, nodes and network connections. From there, smart systems will be able to pilot packets in the infrastructure so as to deliver the best possible quality of service to all the clients using the network. As we will see later on, the promising autopilot for the 2020s is being finalized: the open source ONAP (Open Network Automation Platform).
The second important aspect of the new generation of networks is open source software. The rise of open source software is always driven by the need to reduce costs, and also to implement standards that companies can easily follow. The Linux Foundation is one of the major organizations in this area, and most of the software shaping future networks comes from this Foundation, including the OPNFV (Open Platform for NFV) platform. This is the most important one, since it gathers the open source software that will act as a basic framework.
This trend towards open source software raises questions such as: what will become of network and telecom suppliers if everything comes from open source software? Is security ensured with these millions of lines of code, in which bugs will inevitably occur? And so on. We will answer these questions in Chapter 4, on open source software.
The rise of this new generation of networks, based on datacenters, has an impact on the energy consumption of the ICT world. In 2019, this consumption was estimated to account for 7% of the total carbon footprint, and the proportion is increasing very quickly with the rapid rollout of datacenters and mobile network antennas. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with 10 such datacenters would consume 1 GW, the equivalent of one unit of a nuclear power plant. This number of servers has already been reached or surpassed by some 10 well-known major companies. Similarly, there are already more than 10 million 2G/3G/4G antennas in the world. Given an average consumption of 1500 W per antenna (2000 W for 3G/4G antennas, but significantly less for 2G antennas), this represents around 15 GW worldwide.
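These orders of magnitude can be verified with a short back-of-the-envelope calculation. The per-server and per-antenna figures below are the chapter's estimates, not measured values:

```python
# Back-of-the-envelope check of the energy figures quoted above.

SERVER_POWER_W = 100                 # ~100 W per server
servers_per_datacenter = 1_000_000
datacenter_mw = servers_per_datacenter * SERVER_POWER_W / 1e6
print(f"One datacenter: {datacenter_mw:.0f} MW")              # 100 MW

cloud_gw = 10 * datacenter_mw / 1000                          # 10 datacenters
print(f"Cloud provider (10 datacenters): {cloud_gw:.0f} GW")  # 1 GW

antennas = 10_000_000
AVG_ANTENNA_POWER_W = 1500           # average over 2G/3G/4G
antennas_gw = antennas * AVG_ANTENNA_POWER_W / 1e9
print(f"Mobile antennas worldwide: {antennas_gw:.0f} GW")     # 15 GW
```

The implied 100 W per server is the assumption that makes the chapter's "million servers, 100 MW" figure work out.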
Continuing in the same vein, the carbon footprint produced by energy consumption in the ICT world is projected to reach 20% by 2025 if nothing is done to curb the current growth. It is therefore absolutely crucial to find solutions to offset this rise. We will come back to this in the last chapter of this book, but solutions already exist and are beginning to be used. Virtualization is a good one: multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby (low-power) mode when not in use. Processors also need to be able to drop to very low operating speeds whenever possible, because power consumption rises steeply with processor speed. When the processor has nothing to do, it should almost stop, and speed up again when the workload increases.
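The reason slowing a processor saves so much energy is that, in a common first-order CMOS model, dynamic power is roughly P = C·V²·f, and voltage can be lowered together with frequency (dynamic voltage and frequency scaling). A minimal sketch with purely illustrative constants, not measured values:

```python
def dynamic_power(freq_ghz, volt, capacitance=1.0):
    """First-order CMOS model: P = C * V^2 * f (illustrative units)."""
    return capacitance * volt ** 2 * freq_ghz

# Voltage typically scales down with frequency, so halving the clock
# cuts dynamic power by far more than half.
full = dynamic_power(freq_ghz=3.0, volt=1.2)   # full speed
half = dynamic_power(freq_ghz=1.5, volt=0.9)   # lower voltage at lower clock
idle = dynamic_power(freq_ghz=0.4, volt=0.7)   # near-idle state

print(f"half-speed power is {half / full:.0%} of full-speed power")
```

With these illustrative numbers, running at half speed costs only about 28% of full-speed power, which is why "almost stopping" an idle processor pays off so handsomely.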
Mobility is another argument in favor of adopting a new form of network architecture. Figure I.8 shows that, in 2020, wireless solutions will reach average speeds of several tens of Mbit/s. We therefore need to manage the mobility problem. The first order of business is the management of multi-homing – i.e. the ability to connect a terminal to several networks simultaneously. The term "multi-homing" stems from the fact that the terminal receives several IP addresses, assigned by the different networks to which it is connected. These multiple addresses are complex to manage, and the task requires specific functionalities. Mobility must also make it possible to handle simultaneous connections to several networks: on the basis of certain criteria (to be determined), the packets of a single message can be split and sent via different networks. They then need to be re-ordered when they arrive at their destination, which can cause numerous problems.
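The reordering step can be sketched with sequence numbers: the sender tags each packet, the different networks deliver them out of order, and the receiver releases them only once the sequence is contiguous. A toy illustration (the class and field names are invented for this sketch):

```python
import heapq

class ReorderBuffer:
    """Release packets in sequence-number order, buffering any gaps."""
    def __init__(self):
        self.next_seq = 0
        self.heap = []          # min-heap of (seq, payload)

    def receive(self, seq, payload):
        """Accept one packet; return the packets that are now deliverable."""
        heapq.heappush(self.heap, (seq, payload))
        ready = []
        while self.heap and self.heap[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return ready

# Packets of one message arrive interleaved from two networks.
buf = ReorderBuffer()
delivered = []
for seq, data in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    delivered += buf.receive(seq, data)
print(delivered)   # ['a', 'b', 'c', 'd']
```

The buffering is exactly where the "numerous problems" arise: a packet lost on one path stalls delivery of everything behind it on all the other paths.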
Figure I.8. Speed of terminals based on the network used
Mobility also raises the issues of addressing and identification. An IP address can be interpreted in two different ways: for identification, to determine who the user is, and for localization, to determine where the user is. The difficulty lies in handling these two functions simultaneously. When a customer moves far enough to leave the sub-network with which he/she is registered, a new IP address must be assigned to the device, which is fairly complex from the point of view of identification. One possible solution, as we can see, is to give two IP addresses to the same user: one reflecting his/her identity and the other his/her location.
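The two-address idea amounts to maintaining a mapping from a stable identifier to the user's current locator, in the spirit of identifier/locator-separation proposals such as LISP or HIP (named here for context; the text itself does not cite them). A minimal sketch with invented names and example addresses:

```python
class MappingSystem:
    """Map a stable identifier to the user's current locator address."""
    def __init__(self):
        self.locator_of = {}

    def register(self, identifier, locator):
        self.locator_of[identifier] = locator

    def move(self, identifier, new_locator):
        # The identifier never changes; only the locator is updated.
        self.locator_of[identifier] = new_locator

    def resolve(self, identifier):
        return self.locator_of[identifier]

maps = MappingSystem()
maps.register("user-42", "192.0.2.10")    # home sub-network
maps.move("user-42", "198.51.100.7")      # after a handover
print(maps.resolve("user-42"))            # 198.51.100.7
```

Correspondents always address "user-42"; only the mapping system has to learn about the move.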
Another revolution that is currently under way pertains to the “Internet of Things” (IoT): billions of things will be connected within the next few years. The prediction is that 50 billion will be connected to the IoT by 2020. In other words, the number of connections will likely increase tenfold in the space of only a few years. The “things” belong to a variety of domains: 1) domestic, with household electrical goods, home health care, domotics, etc.; 2) medicine, with all sorts of sensors both on and in the body to measure, analyze and perform actions; 3) business, with light level sensors, temperature sensors, security sensors, etc. Numerous problems arise in this new universe, such as identity management and the security of communications with the sensors. The price of identification is often set at $40 per object, which is absolutely incompatible with the cost of a sensor which is often less than $1. Security is also a complex factor, because the sensor has very little power, and is incapable of performing sufficiently sophisticated encryption to ensure the confidentiality of the transmissions.
Finally, there is one last reason to favor migration to a new network: security. Security requires a precise view and understanding of the problems at hand, which range from physical security to computer security, with the need to lay contingency plans for attacks that are sometimes entirely unforeseeable. The world of the Internet today is like a bicycle tire which is made up entirely of patches (having been punctured and repaired numerous times). Every time an attack succeeds, a new patch is added. Such a tire is still roadworthy at the moment, but there is a danger that it will burst if no new solution is envisaged in the next few years. Near the end of this book, in Chapter 15, we will look at the secure Cloud, whereby, in a datacenter, a whole set of solutions is built around specialized virtual machines to provide new elements, the aim of which is to enhance the security of the applications and networks.
An effective security mechanism must include a physical element: a strongbox to protect the critical components needed to ensure confidentiality, authentication, etc. Software security is a reality and, to a certain extent, may be sufficient for numerous applications. However, defenses can always be circumvented when they are entirely software-based. This means that, for the new generations, there must be a physical element, either local or remote. This hardware element is a secure microprocessor known as a "secure element". A classic example of this type of device is the smartcard, used particularly widely by telecom operators and banks.
Depending on whether it belongs to the business world or to consumer electronics, the secure element may be found in the terminal, near it, or far away from it. We will examine the different solutions in the subsequent chapters of this book.
Virtualization also has an impact on security: the Cloud, with specialized virtual machines, means that attackers have remarkable striking force at their disposal. In the last few years, hackers' ability to break encryption algorithms has increased by a factor of 10^6.
Another important feature that absolutely must be integrated into networks is "intelligence". So-called "intelligent networks" have had their day, but the intelligence in that case was not really what we mean by "intelligence" in this field. Rather, it was a set of automatic mechanisms used to deal with problems perfectly defined in advance, such as a signaling protocol providing additional features in the telephone system. In the new generation of networks, intelligence refers to learning mechanisms and intelligent decisions based on the network status and user requests. The network needs to become an intelligent system, capable of making decisions on its own. One solution to help move in this direction was introduced by IBM in the early 2000s: "autonomic" computing. "Autonomic" means autonomous and spontaneous: autonomous in the sense that every device in the network must be able to make decisions independently, with knowledge of its situated view, i.e. the state of the nodes surrounding it within a certain number of hops. The solutions put forward to make networks smarter are influenced by Cloud technology. We will discuss them in detail in the chapter on MEC (Mobile Edge Computing) and, more generally, on the "smart edge" (Chapter 5).
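The "situated view" – the state of the nodes within a certain number of hops – can be computed with a hop-bounded breadth-first search. A sketch, assuming the topology is available as a simple adjacency dictionary:

```python
from collections import deque

def situated_view(graph, node, max_hops):
    """Return every node reachable from `node` within `max_hops` hops."""
    seen = {node}
    queue = deque([(node, 0)])
    while queue:
        current, dist = queue.popleft()
        if dist == max_hops:
            continue                     # hop budget exhausted on this branch
        for neighbor in graph[current]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return seen

# A small line topology: A - B - C - D
topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
view = situated_view(topology, "A", max_hops=2)
print(sorted(view))   # ['A', 'B', 'C']
```

An autonomic node would run its decision logic against such a view rather than against global knowledge, which is exactly what distinguishes the distributed approach from the centralized controller described earlier.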
Finally, one last point, which could be viewed as the fourth revolution, is concretization – i.e. the opposite of virtualization. The problem with virtualization is a significant reduction in performance, stemming from the replacement of hardware with software. A variety of solutions have been put forward to regain this performance: software accelerators and, in particular, the replacement of software with hardware, in a step known as concretization. The software is replaced by reconfigurable hardware, which can transform itself depending on the software needing to be executed. This approach is likely to give rise to morphware networks, which will be described in Chapter 16.