Understanding Infrastructure Edge Computing - Alex Marcham - Page 55

3.6.3.1 Switching


Compared to routing, switching uses only layer 2 information in order to direct network traffic to its destination. In the example of Ethernet, as described previously, the Ethernet frame header itself features a pair of MAC addresses, which are the source and destination addresses of the traffic. This section will focus primarily on Ethernet as a data link layer protocol used to perform switching, just as the previous section focused on IPv4 and IPv6 as network layer protocols used to perform routing.
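As a concrete illustration (not from the book), the source and destination MAC addresses live in the first 14 bytes of an Ethernet II frame, alongside the EtherType. A minimal Python sketch of extracting these fields from raw frame bytes, using a hypothetical example frame:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Extract destination MAC, source MAC, and EtherType from the
    first 14 bytes of an Ethernet II frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return ":".join(f"{b:02x}" for b in mac)

    return fmt(dst), fmt(src), hex(ethertype)

# Hypothetical frame: broadcast destination, an example source MAC,
# and EtherType 0x0800 (IPv4), followed by an arbitrary payload.
frame = bytes.fromhex("ffffffffffff" "005056abcdef" "0800") + b"payload"
dst, src, etype = parse_ethernet_header(frame)
```

A switch needs only these two addresses, particularly the destination, to make its forwarding decision; it never inspects the layer 3 payload.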

In this section, much as devices that perform routing on network traffic are referred to as routers, a device which performs switching on network traffic will be referred to as a switch. Although these terms are often used interchangeably in the networking industry, in this book routing refers only to the layer 3 process of directing traffic from its source to its destination and switching refers in turn only to the layer 2 process that is used to achieve this same end goal. As will be seen in an upcoming section, these two processes are typically used together to operate a modern network.

Switching is generally used on networks which are local in scope. This is due both to the nature and limitations of link layer endpoint addressing and to the operational characteristics of switching as a process. Unlike IP addresses, Ethernet MAC addresses are not assigned to endpoints or to their interfaces by the network administrator, whether manually or via automation. Instead, Ethernet MAC addresses are assigned to an endpoint or endpoint interface at the factory where it is produced and are not intended to be changed during normal operation. This prevents the network from being arranged in a hierarchical or summarised fashion, as is possible using a layer 3 protocol such as IP, and so makes organising and scaling the network more difficult due to a few key operational factors.

Much like the routing table described in the previous section, all switches maintain a switching table that operates in a similar fashion; once populated, it is a record of the local interfaces of that switch via which a particular destination MAC address can be reached. This table is then used by the switch to forward traffic to those destinations as it is received. Each switch makes a forwarding decision by using the contents of its switching table, much like the hop‐by‐hop routing process described earlier.
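The forwarding decision itself can be thought of as a simple lookup. The sketch below is a toy illustration in Python (not drawn from the book, and with invented interface names and MAC addresses): the switching table maps a destination MAC address to a local interface, and a miss means the switch does not yet know where the destination lives.

```python
# Toy switching table: destination MAC address -> local interface.
# The addresses and interface names here are purely illustrative.
switching_table = {
    "00:50:56:aa:bb:01": "eth1",
    "00:50:56:aa:bb:02": "eth2",
}

def forward(dst_mac: str) -> str:
    """Return the interface to forward out of, or 'flood' when the
    destination is not yet in the table."""
    return switching_table.get(dst_mac, "flood")
```

Each switch repeats this lookup independently for each frame it receives, much like the hop‐by‐hop routing decisions described in the previous section.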

A key difference between routing and switching is in how the routing table and switching table are built. As previously described, a routing protocol exchanges information between routers across the network, and in some cases even with external networks, in order to create its routing table. In the case of switching there is no such protocol; the switching table is created over time by flooding, learning, and forwarding. This is a simpler, albeit far less scalable, method of determining where in the network a particular destination is located. Whenever a switch receives a frame, it records the frame's source MAC address against the interface it arrived on, learning the location of that endpoint. When the switch receives traffic whose destination has no corresponding entry in its switching table, it floods that traffic out of all of its interfaces besides the one it was received on. When the destination endpoint eventually replies, the switch learns its location from the source address of the reply, and any subsequent traffic to that destination is forwarded out of the correct interface alone.

Unlike routing, due to the comparatively simple methods by which the locations of endpoints are learnt when using switching, a switch does not have a sophisticated view of the cost of the paths which are available or of the overall network topology. Although protocols such as the Spanning Tree Protocol (STP) and other similar protocols provide some level of intelligence to a switching network, primarily to enhance network resiliency by eliminating broadcast loops or enabling the network to utilise multiple layer 2 paths without creating these loops, switching decisions are inherently simpler than those which are possible with routing and so require simpler networks.

Because layer 2 addresses are statically assigned in the factory, they are effectively randomly distributed across a network or networks and cannot be summarised, and so switching would not scale to a network the size of the internet. The size of the switching tables required and the impact of the flooding process on the network would quickly become untenable. However, layer 2 networks are valuable in their simplicity and speed for many use cases, and for them to be used in a variety of different ways, new capabilities have been added to them over time.

One of the most important concepts in this regard for layer 2 networks is the virtual local area network (VLAN). VLANs represent a simple example of network overlays, where the physical local area network (LAN) itself is used to support a number of virtual networks which operate on top of it and are all logically distinct from one another despite utilising the same physical resources such as switches, endpoints, and links.

Network virtualisation is a key topic which will be referenced in later chapters, and VLANs provide us with a framework upon which we can build our understanding of this topic. Consider the following example of a single LAN which is connecting multiple endpoints by using a common set of physical resources. However, each of these endpoints is owned and operated by a different department within the same company, each of whom has very strict requirements for who has access to their network and how their network operates. The network architecture team now has two choices:

1 Construct a separate physical network for each department at excessive cost and complexity.

2 Use a single physical network and logically split it into virtual networks for each department.

As you might imagine, the second option is far more attractive from a cost standpoint as long as the underlying physical network is capable of supporting the combined requirements of all of the logical or virtual networks used by each department. This is an example of network virtualisation making a single, common physical network capable of multi‐tenant operation, rather than just single tenant. This is itself another key trend which will be returned to throughout later chapters, as it has enabled many telecoms networks and data centres to become increasingly economically viable worldwide. Without supporting multi‐tenant operation, investing in these physical pieces of infrastructure is a greater challenge, and so the ability of the industry to provide ubiquitous services is greatly reduced.

For an example of how this works from a technical perspective, consider the following traffic flow:

1 An endpoint on VLAN 1 sends traffic to another endpoint on VLAN 1. That destination endpoint happens to be across the network, with traffic passing through a single switch.

2 As the switch receives the traffic, it recognises that it is from an endpoint assigned to VLAN 1. This may be because the traffic was already tagged, using the VLAN tag field which the Institute of Electrical and Electronics Engineers (IEEE) 802.1Q standard added to the standard Ethernet header, or because the switch has been configured to treat all endpoints on an interface as members of a specific VLAN, so that when it receives standard untagged traffic, it applies the tag itself.

3 The switch consults its switching table, which, as well as being a record of where destinations for traffic are located across the network, is also a record of which interfaces and endpoints reside in which VLANs. This means that even if traffic arrives from an endpoint on VLAN 1 and the switch does not know the location of its destination, it will flood that traffic out of interfaces it knows are assigned to VLAN 1 only. This keeps each VLAN operating separately from one another. The switching table in this example does know the location of the destination, and so the traffic is sent out of the corresponding interface.

4 The destination endpoint receives the traffic, and no endpoints on any other VLAN were aware of what was sent or if anything was sent because each VLAN operates separately.
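The four steps above can be sketched by extending the learning-switch idea so that both the switching table and the flooding process are scoped per VLAN. This is again a toy illustration in Python, not from the book, with invented port names and a simple access-port model (each port belongs to exactly one VLAN):

```python
class VlanSwitch:
    """Toy VLAN-aware switch. Each port is an access port in one VLAN;
    learning is keyed by (VLAN, MAC), and unknown destinations are
    flooded only within the ingress frame's VLAN, isolating tenants."""

    def __init__(self, port_vlans):
        self.port_vlans = port_vlans           # port -> VLAN id
        self.table = {}                        # (VLAN, MAC) -> port

    def receive(self, src_mac, dst_mac, in_port):
        vlan = self.port_vlans[in_port]        # step 2: classify by port
        self.table[(vlan, src_mac)] = in_port  # learn within this VLAN
        if (vlan, dst_mac) in self.table:      # step 3: known destination
            return [self.table[(vlan, dst_mac)]]
        # Flood only to other ports in the same VLAN (steps 3 and 4).
        return sorted(p for p, v in self.port_vlans.items()
                      if v == vlan and p != in_port)

sw = VlanSwitch({"p1": 1, "p2": 1, "p3": 2})
# Unknown destination on VLAN 1: flooded to p2 only, never to p3.
flood = sw.receive("mac-A", "mac-B", "p1")
```

The port in VLAN 2 never sees the flooded frame, which is the isolation property the example above describes: endpoints on other VLANs are unaware that anything was sent at all.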

The trend of network virtualisation follows similar trends that have been seen at the server level in the past decade, where virtualisation tools such as virtual machines (VMs) and containers have been instrumental in allowing multiple applications or instances of operating systems (OSs) to operate in harmony alongside one another atop the same piece of physical server hardware. Much like in our VLAN example, these separate logical entities are unaware of each other despite operating on the same physical resources. This is vital, as it allows entire companies, which may even be competitors, to operate on the same physical infrastructure as long as they are logically separated.

VLANs are a very common example of this trend as applied to networks, but they are far from the only example. Network virtualisation and isolation between users of the same physical underlying infrastructure can be achieved at layer 3 as well, using technologies such as virtual routing and forwarding (VRF). With VRF, a router operates with multiple instances of a routing table at the same time; these routing tables do not share routes, and so they operate in much the same way as VLANs do, with traffic being handled by each independent routing instance depending on the interface the traffic was received on, or other tagging criteria applied to that traffic to direct it to a specific table.
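The per-instance routing tables that VRF describes can be sketched as follows. This is a hypothetical Python model, not from the book; the interface names, tenant labels, and routes are invented. Note how the two tenants can even use overlapping address space, because a packet is only ever matched against the table belonging to its own VRF:

```python
import ipaddress

# Toy VRF model: the ingress interface selects the routing instance,
# and routes are never shared between instances.
vrf_of_interface = {"eth0": "tenant-a", "eth1": "tenant-b"}
routing_tables = {
    "tenant-a": {ipaddress.ip_network("10.0.0.0/8"): "eth0"},
    # Overlapping address space is fine: this table is never consulted
    # for tenant-a's traffic.
    "tenant-b": {ipaddress.ip_network("10.0.0.0/8"): "eth1"},
}

def route(in_interface: str, dst_ip: str) -> str:
    """Pick the VRF from the ingress interface, then do a
    longest-prefix match within that VRF's table only."""
    table = routing_tables[vrf_of_interface[in_interface]]
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]
```

The same destination address is routed differently depending on where the packet entered, which is the VRF analogue of per-VLAN forwarding at layer 2.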

The ability of these and other technologies to enable a piece of physical infrastructure to support multiple users while concurrently isolating their activities from one another is, as briefly mentioned above, a key consideration. The ideal infrastructure edge computing system is itself multi‐tenant, and so it requires this type of isolated multi‐user operation at many levels throughout the entire system, spanning from the network infrastructure required to support it through to the distributed data centres themselves, to be as attractive as possible economically to both its customers and its operator.

