
Compute Pools


In the cloud provider's data center, there are numerous physical servers that each run a hypervisor. When you provision a VM in the cloud, the provider's cloud orchestration platform selects an available physical server to host your VM. The important point here is that it doesn't matter which particular host you get. Your VM will run the same regardless of the host. In fact, if you were to stop a running VM and then restart it, it would likely run on a completely different host, and you wouldn't be able to tell the difference. The physical servers that the provider offers up for your VMs to run on are part of a single compute pool. By pooling physical servers in this way, cloud providers can flexibly dispense computing power to multiple customers. As more customers enter the cloud, the provider just has to add more servers to keep up with demand.
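
To make the pooling idea concrete, here is a minimal Python sketch of how an orchestration platform might place a VM on whichever host in the pool has spare capacity. The Host class, the place_vm function, and the least-utilized placement strategy are illustrative assumptions for this example, not the actual scheduling algorithm of any provider.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    total_vcpus: int
    total_ram_gb: int
    used_vcpus: int = 0
    used_ram_gb: int = 0

    def can_fit(self, vcpus: int, ram_gb: int) -> bool:
        # A host is a candidate only if it has enough spare capacity.
        return (self.total_vcpus - self.used_vcpus >= vcpus and
                self.total_ram_gb - self.used_ram_gb >= ram_gb)

def place_vm(pool: list[Host], vcpus: int, ram_gb: int) -> Host:
    """Pick any host in the pool with spare capacity."""
    candidates = [h for h in pool if h.can_fit(vcpus, ram_gb)]
    if not candidates:
        raise RuntimeError("Pool exhausted; the provider adds more servers to keep up with demand")
    # Spread load by choosing the least-utilized candidate (one of many possible strategies).
    host = min(candidates, key=lambda h: h.used_vcpus / h.total_vcpus)
    host.used_vcpus += vcpus
    host.used_ram_gb += ram_gb
    return host

pool = [Host("host-a", 64, 512), Host("host-b", 64, 512)]
print(place_vm(pool, vcpus=4, ram_gb=16).name)  # e.g. "host-a"
```

From the customer's point of view, only the result matters: some host in the pool ran the VM, and which one is irrelevant.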

Drilling down a bit into the technical details, the hypervisor on each host will virtualize the physical server resources and make them available to the VM for consumption. Multiple VMs can run on a single physical host, so one of the hypervisor's jobs is to allow the VMs to share the host's resources while remaining isolated from one another so that one VM can't read the memory used by another VM, for instance. Figure 1.14 shows this relationship between the virtual machines and the hardware resources.


FIGURE 1.14 Shared resource pooling

The hypervisor's job is to virtualize the host's CPUs, memory, network interfaces, and—if applicable—storage (more on storage in a moment). Let's briefly go over how the hypervisor virtualizes CPU, memory, and network interfaces.

On any given host, there's a good chance that the number of VMs will greatly outnumber the host's physical CPU cores. Therefore, the hypervisor must coordinate or schedule the various threads run by each VM on the host. If the VMs all need to use the processor simultaneously, the hypervisor figures out how to dole out the scarce CPU resources to the VMs contending for them. This process is automatic, and you'll probably never have to think about it. But you should know about another "affinity" term that's easy to confuse with hypervisor affinity: CPU affinity. CPU affinity is the ability to assign a processing thread to a specific core instead of letting the hypervisor allocate it dynamically. When CPU affinity is enabled on a VM, the hypervisor assigns each of the VM's processing threads to the CPU it originally ran on. You're not likely to see it come up on the exam, but be aware that CPU affinity can and often does result in suboptimal performance, so it's generally best to disable it. A particular CPU assigned to a VM can have a very high utilization rate while another CPU sits idle; affinity overrides the hypervisor's CPU selection algorithms, forcing the VM's threads onto the saturated CPU instead of the underutilized one.
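
CPU affinity is normally a per-VM setting exposed by the hypervisor, but you can observe the same concept at the operating system level. The following sketch, which assumes a Linux system, pins the current process to core 0 with Python's os.sched_setaffinity; treat it as an analogy for binding a VM's threads to specific physical cores rather than as a hypervisor configuration.

```python
import os

# Show which CPU cores the current process may run on (Linux only).
allowed = os.sched_getaffinity(0)          # 0 = the calling process
print(f"Schedulable on cores: {sorted(allowed)}")

# Pin the process to core 0. The scheduler can no longer move it,
# even if core 0 is saturated and other cores sit idle, which is the
# same trade-off CPU affinity creates for a VM's threads.
os.sched_setaffinity(0, {0})
print(f"Now pinned to: {sorted(os.sched_getaffinity(0))}")

# Undo the pinning by restoring the original set of allowed cores.
os.sched_setaffinity(0, allowed)
```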

Now let's talk about random access memory (RAM). Just as there are a limited number of CPU cores, so there is a finite amount of RAM installed in the physical server. This RAM is virtualized by the hypervisor software into memory pools and allocated to virtual machines. When you provision a VM, you choose the amount of RAM to allocate to it. Unlike the CPU, which is shared, RAM is not. Whatever RAM you allocate to a VM is dedicated to that VM, and no other VM can access it. When a VM consumes all of its allocated RAM, it will begin to swap the contents of some of its RAM to storage. This swap file, as it is called, will be used as virtual RAM. When configuring a VM, be sure to allocate enough storage space for the swap file, and keep in mind that the storage latency of the swap file will have a negative impact on the performance of the VM.
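
The arithmetic behind dedicated RAM and swap sizing is easy to sketch. In the toy model below, a host's memory pool is carved into dedicated slices, an allocation fails once the pool is exhausted, and storage equal to the VM's RAM is reserved for its swap file. The MemoryPool class, the one-to-one swap sizing rule, and the capacity figures are assumptions made for illustration, not a provider's actual policy.

```python
class MemoryPool:
    """Toy model of a host's RAM being carved into dedicated VM allocations."""

    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocations: dict[str, int] = {}

    def allocate(self, vm_name: str, ram_gb: int) -> None:
        free = self.total_gb - sum(self.allocations.values())
        if ram_gb > free:
            # Unlike CPU time, dedicated RAM is not oversubscribed in this model.
            raise MemoryError(f"Only {free} GB free; cannot dedicate {ram_gb} GB to {vm_name}")
        self.allocations[vm_name] = ram_gb

def swap_file_gb(ram_gb: int) -> int:
    # Illustrative sizing rule: reserve storage equal to the VM's RAM for its swap file.
    return ram_gb

host_ram = MemoryPool(total_gb=256)
host_ram.allocate("web-vm", 32)
host_ram.allocate("db-vm", 64)
print(f"Reserve {swap_file_gb(32)} GB of storage for web-vm's swap file")
```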

Thus far, we've discussed compute pools from the perspective of an IaaS model. But how do compute pools work with PaaS or SaaS models? Behind the scenes, almost everything is the same. What's different is that in the PaaS and SaaS models, the cloud provider runs a user-friendly interface atop the underlying compute infrastructure. For example, if the cloud provider offers hosted email as a service, that email system gets its computing power from the same compute pools that power the IaaS infrastructure. In fact, every PaaS or SaaS offering the provider sells probably runs directly on the same IaaS infrastructure that we've been discussing. In other words, cloud providers don't reinvent the wheel for every service they provide. They build the compute infrastructure once to create the compute pools, and every other service draws on those same pools.
