3.11 Network Transport and Transit

Ideally, an infrastructure edge computing network, which we will define here as the combination of infrastructure edge computing data centres and their supporting network infrastructure within a specific area such as a city, will serve as much of the traffic entering it from interconnected access networks as possible. Serving the traffic in this context refers to the ability of these resources to respond satisfactorily to the needs of the traffic without having to send that traffic to a destination off the infrastructure edge computing network, such as a regional data centre reached via backhaul.

Network transport is the ability of a network to move data from one endpoint to another, such as from its source to its destination. This is the core functionality of all communications networks. Regardless of the scale at which the network operates, in terms of geographical coverage or number of endpoints, it is ultimately required to provide transport from one endpoint to another, using however many intermediate endpoints or links are needed to achieve this single goal.
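As a brief illustration of transport across multiple hops, the Python sketch below finds one hop-by-hop path from a source endpoint to a destination over a set of links. The topology, node names, and use of a breadth-first search are assumptions made purely for this example, not anything prescribed by infrastructure edge computing.

from collections import deque

# Network transport: move data from a source endpoint to a destination over
# however many intermediate links are required. Topology and node names are
# illustrative assumptions only.
LINKS = {
    "device": ["access-node"],
    "access-node": ["edge-dc-1"],
    "edge-dc-1": ["edge-dc-2", "backhaul"],
    "edge-dc-2": [],
    "backhaul": ["regional-dc"],
    "regional-dc": [],
}

def transport_path(source, destination):
    # Breadth-first search for one path of links between the two endpoints.
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(transport_path("device", "regional-dc"))
# -> ['device', 'access-node', 'edge-dc-1', 'backhaul', 'regional-dc']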

Network transit is the ability of a network to function as a bridge between two other networks. In the context of infrastructure edge computing, consider the diagram in Figure 3.6, which shows how the infrastructure edge computing network can provide transit services between an access network on the left and a backhaul network on the right. In this example, the infrastructure edge computing network is not serving any of the traffic which comes in from the access network and is instead simply passing it through to another destination, which is accessible via its own backhaul network.


Figure 3.6 Infrastructure edge computing network providing transit services.

A typical infrastructure edge computing network will aim to minimise the amount of network transit it provides. Although it is essential that the infrastructure edge computing network is able to provide transit for traffic that the network itself cannot serve, this capability should not be seen as its main use. It is beneficial to the operator if as great a proportion of the access layer traffic as possible flows through the infrastructure edge computing network, because this provides the greatest opportunity to serve traffic using resources at the infrastructure edge; but if the bulk of this traffic cannot be served there, whether due to the tenants present or other factors, the network is merely joining the access network back to the backhaul. This does not utilise the full capability of the infrastructure edge computing network for applications.
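As a minimal sketch of this serve-versus-transit distinction, the example below assumes a single edge data centre hosting a small, fixed set of tenant workloads. The resource names and the decision structure are illustrative assumptions, not a description of any particular operator's implementation.

# Serve-versus-transit decision at one infrastructure edge data centre.
# Resource names and decision logic are illustrative assumptions only.
LOCAL_RESOURCES = {"video-cache", "iot-control"}  # workloads tenants have deployed at this site

def handle_request(resource):
    if resource in LOCAL_RESOURCES:
        # Served at the infrastructure edge: no backhaul traversal required.
        return f"serve {resource} locally at the edge data centre"
    # Traffic the edge network cannot serve is simply passed through: transit.
    return f"transit {resource} towards the backhaul network"

print(handle_request("video-cache"))   # served at the edge
print(handle_request("erp-database"))  # passed through as transit only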

Of course, the physical data centre and network infrastructure of the infrastructure edge computing network is not, on its own, enough to serve traffic. First, the right networks must be interconnected at the infrastructure edge so that traffic can be exchanged efficiently without being transported all the way to the IX and back; and second, the resources that an endpoint is trying to access must also be located at the infrastructure edge. These resources may include streaming video services, cloud instances, or any other network accessible resources, including new use cases such as IoT command and control.

Data Centre Interconnect (DCI) typically refers to the physical network infrastructure used to connect one data centre to another, regardless of the scale of the two facilities, combined with a protocol set that facilitates inter‐data centre communication. This connectivity between facilities may be used to provide both transport and transit services. For example, where several infrastructure edge data centres are deployed within a single area such as a city and a resource is not available in one data centre, it may be available in another data centre to which that facility is connected, directly or indirectly. In this case, the traffic can be sent to that serving data centre and served while still remaining on the same infrastructure edge computing network, which provides some latency advantages.

Although in the ideal case traffic is served by the first infrastructure edge data centre it enters, as long as the connectivity between infrastructure edge data centres provides a lower latency and cost of data transportation than sending the traffic to another destination over a backhaul network, this process can still provide a better user experience than is otherwise possible.
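A minimal sketch of that comparison, assuming a handful of interconnected edge sites and latency figures chosen purely for illustration, might look like the following. The site names, resource placement, and latency values are all assumptions, not figures from this book.

# DCI between edge sites: if the entry data centre lacks a resource, prefer a
# neighbouring edge site over the backhaul when its added latency is lower.
# Site names, resource placement, and latency figures are assumed values.
EDGE_SITES = {
    "edge-dc-2": {"resources": {"video-cache"}, "dci_latency_ms": 1.5},
    "edge-dc-3": {"resources": {"iot-control"}, "dci_latency_ms": 2.0},
}
BACKHAUL_LATENCY_MS = 12.0  # assumed extra latency of serving from a regional data centre

def choose_serving_site(resource):
    candidates = [
        (site["dci_latency_ms"], name)
        for name, site in EDGE_SITES.items()
        if resource in site["resources"] and site["dci_latency_ms"] < BACKHAUL_LATENCY_MS
    ]
    if candidates:
        latency_ms, name = min(candidates)
        return f"serve from {name} over DCI, roughly {latency_ms} ms extra"
    return "transit to the regional data centre over backhaul"

print(choose_serving_site("video-cache"))  # stays on the edge network via DCI
print(choose_serving_site("payroll-app"))  # not present at any edge site, uses backhaul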

Physically, the network connectivity between data centres, regardless of the scale of these facilities, is typically implemented using high‐capacity fibre optic networks. These networks provide far greater capacity than any other transmission medium in current use and offer by far the lowest cost per bit of transmitted data compared to alternative technologies such as copper or wireless networks. As data centre facilities do not physically move, they do not require the mobility advantages of wireless technologies, and fibre exceeds the capacity possible over copper.

Additionally, many entities ranging from telecommunications network operators to municipalities have gone to considerable expense to lay fibre optic cabling throughout many urban and even some rural areas. Locations where this fibre happens to aggregate, such as tower sites used for cellular networks, are ideal sites for infrastructure edge data centre deployments because they provide access to existing fibre networks and minimise the expense of deploying the infrastructure edge itself.
