Mastering VMware vSphere 6, by Nick Marshall
Chapter 2
Planning and Installing VMware ESXi
Deploying VMware ESXi
Once you’ve established the basics of your vSphere design, you must decide exactly how you will deploy ESXi. You have three options:
• Interactive installation of ESXi
• Unattended (scripted) installation of ESXi
• Automated provisioning of ESXi
Of these, the simplest is an interactive installation of ESXi. The most complex – but perhaps the most powerful, depending on your needs and your environment – is automated provisioning of ESXi. In the following sections, we’ll describe all three of these methods for deploying ESXi in your environment.
Let’s start with the simplest method first: interactively installing ESXi.
Installing VMware ESXi Interactively
VMware has done a great job of making the interactive installation of ESXi as simple and straightforward as possible. It takes just minutes to install, so let’s walk through the process.
Perform the following steps to interactively install ESXi:
1. Ensure that your server hardware is configured to boot from the CD-ROM drive.
This will vary from manufacturer to manufacturer and will also depend on whether you are installing locally or remotely via an IP-based keyboard, video, mouse (KVM) or other remote management facility.
2. Ensure that VMware ESXi installation media are available to the server.
Again, this will vary based on a local installation (which involves simply inserting the VMware ESXi installation CD into the optical drive) or a remote installation (which typically involves mapping an image of the installation media, known as an ISO image, to a virtual optical drive).
Obtaining VMware ESXi Installation Media
You can download the installation files from VMware’s website at www.vmware.com/download/.
Physical boxed copies of VMware products are no longer sold, but if you hold a valid license all products can be downloaded directly from VMware. These files are typically ISO files that you can mount to a server or burn to a physical CD or DVD.
3. Power on the server.
Once it boots from the installation media, the initial boot menu screen appears, as shown in Figure 2.2.
4. Press Enter to boot the ESXi installer.
The installer will boot the vSphere hypervisor and eventually stop at a welcome message. Press Enter to continue.
5. At the End User License Agreement (EULA) screen, press F11 to accept the EULA and continue with the installation.
6. Next, the installer will display a list of available disks on which you can install or upgrade ESXi.
Potential devices are identified as either local devices or remote devices. Figure 2.3 and Figure 2.4 show two different views of this screen: one with a local device and one with remote devices.
Figure 2.2 The initial ESXi installation routine has options for booting the installer or booting from the local disk.
Figure 2.3 The installer offers options for both local and remote devices; in this case, only a local device was detected.
Figure 2.4 Although local SAS devices are supported, they are listed as remote devices.
Running ESXi as a VM
You might be able to deduce from Figure 2.3 that I’m actually running ESXi 6 as a VM. Yes, that’s right – you can virtualize ESXi! In this particular case, I’m using VMware’s desktop virtualization solution for Mac OS X, VMware Fusion, to run an instance of ESXi as a VM. As of this writing, the latest version of VMware Fusion is 6, and it includes ESXi as an officially supported guest OS. This is a great way to test out the latest version of ESXi without the need for server-class hardware. You can also run ESXi as a VM on ESXi itself, but remember that running production workloads inside these “nested” or virtual hypervisors is not supported.
Storage area network logical unit numbers, or SAN LUNs, are listed as remote, as you can see in Figure 2.4. Local serial attached SCSI (SAS) devices are also listed as remote. Figure 2.4 shows a SAS drive connected to an LSI Logic controller; although this device is physically local to the server on which we are installing ESXi, the installation routine marks it as remote.
If you want to create a boot-from-SAN environment, where each ESXi host boots from a SAN LUN, then you’d select the appropriate SAN LUN here. You can also install directly to your own USB or Secure Digital (SD) device – simply select the appropriate device from the list.
Which Destination is Best?
Local device, SAN LUN, or USB? Which destination is best when you’re installing ESXi? The answer truly depends on the overall vSphere design you are implementing, and there is no simple answer. Many variables affect this decision. Are you using an iSCSI SAN without iSCSI hardware initiators in your servers? That would prevent you from using a boot-from-SAN setup. Are you installing into an environment like Cisco UCS, where booting from SAN is highly recommended? How large is your destination device? Although you can install ESXi on a device as small as 2 GB, no log files will be stored locally, so you’ll receive a warning in the UI advising you to set an external logging host. Be sure to consider all the factors when deciding where to install ESXi.
7. To get more information about a device, highlight the device and press F1.
The information about the device includes whether it detected an installation of ESXi and what Virtual Machine File System (VMFS) datastores, if any, are present on it, as shown in Figure 2.5. Press Enter to return to the device-selection screen when you have finished reviewing the information for the selected device.
8. Use the arrow keys to select the device on which you are going to install ESXi, and press Enter.
9. If the selected device includes a VMFS datastore or an installation of ESXi, you’ll be prompted to choose what action you want to take, as illustrated in Figure 2.6. Select the desired action and press Enter.
These are the available actions:
• Upgrade ESXi, Preserve VMFS Datastore: This option upgrades to ESXi 6 and preserves the existing VMFS datastore.
• Install ESXi, Preserve VMFS Datastore: This option installs a fresh copy of ESXi 6 and preserves the existing VMFS datastore.
• Install ESXi, Overwrite VMFS Datastore: This option overwrites the existing VMFS datastore with a new one and performs a fresh installation of ESXi 6.
10. Select the desired keyboard layout and press Enter.
11. Enter (and confirm) a password for the root account. Press Enter when you are ready to continue with the installation. Be sure to make note of this password – you’ll need it later.
12. At the final confirmation screen, press F11 to proceed with the installation of ESXi.
After the installation process begins, it takes only a few minutes to install ESXi onto the selected storage device.
13. Press Enter to reboot the host at the Installation Complete screen.
Figure 2.5 Checking to see if there are any VMFS datastores on a device can help you avoid accidentally overwriting data.
Figure 2.6 You can upgrade or install ESXi as well as choose to preserve or overwrite an existing VMFS datastore.
After the host reboots, ESXi is installed. ESXi is configured by default to obtain an IP address via Dynamic Host Configuration Protocol (DHCP). Depending on the network configuration, you might find that ESXi will not be able to obtain an IP address via DHCP. Later in this chapter, in the section “Reconfiguring the Management Network,” we’ll discuss how to correct networking problems after installing ESXi by using the Direct Console User Interface (DCUI).
VMware also provides support for scripted installations of ESXi. As you’ve already seen, there isn’t a lot of interaction required to install ESXi, but support for scripting the installation of ESXi reduces the time to deploy even further.
Interactively Installing ESXi from USB or Across the Network
As an alternative to launching the ESXi installer from the installation CD/DVD, you can install ESXi from a USB flash drive or across the network via Preboot Execution Environment (PXE). More details on how to use a USB flash drive or how to PXE boot the ESXi installer are found in the vSphere Installation and Setup Guide, available from www.vmware.com/support/pubs/. Note that PXE booting the installer is not the same as PXE booting ESXi itself, something that we’ll discuss later in the section “Deploying VMware ESXi with vSphere Auto Deploy.”
Performing an Unattended Installation of VMware ESXi
ESXi supports the use of an installation script (often referred to as a kickstart, or KS, script) that automates the installation routine. By using an installation script, users can create unattended installation routines that make it easy to quickly deploy multiple instances of ESXi.
ESXi comes with a default installation script on the installation media. Listing 2.1 shows the default installation script.
Listing 2.1: The default installation script provided by ESXi
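The default script accepts the EULA, sets a root password, installs to the first available local disk, configures networking via DHCP, and runs a small Python %post script. It looks much like the following sketch (paraphrased from the ks.cfg shipped on the installation media; the exact comments and password may differ):

```
# Sample scripted installation file

# Accept the VMware End User License Agreement
vmaccepteula

# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword

# Install on the first local disk available on the machine
install --firstdisk --overwritevmfs

# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0

# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )
```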
If you want to use this default install script to install ESXi, you can specify it when booting the VMware ESXi installer by adding the ks=file://etc/vmware/weasel/ks.cfg boot option. We’ll show you how to specify that boot option shortly.
Of course, the default installation script is useful only if the settings work for your environment. Otherwise, you’ll need to create a custom installation script. The installation script commands are much the same as those supported in previous versions of vSphere. Here’s a breakdown of some of the commands supported in the ESXi installation script:
accepteula or vmaccepteula These commands accept the ESXi license agreement.
install The install command specifies that this is a fresh installation of ESXi, not an upgrade. You must also specify the following parameters:
--firstdisk Specifies the disk on which ESXi should be installed. By default, the ESXi installer chooses local disks first, then remote disks, and then USB disks. You can change the order by appending a comma-separated list to the --firstdisk option, like this:
--firstdisk=remote,local
This would install to the first available remote disk and then to the first available local disk. Be careful here – you don’t want to inadvertently overwrite something (see the next set of parameters).
--overwritevmfs or --preservevmfs These parameters specify how the installer will handle existing VMFS datastores. They are pretty self-explanatory.
keyboard This command specifies the keyboard type. It’s an optional component in the installation script.
network This command provides the network configuration for the ESXi host being installed. It is optional but generally recommended. Depending on your configuration, some additional parameters are required:
--bootproto This parameter is set to dhcp for assigning a network address via DHCP or to static for manual assignment of an IP address.
--ip This sets the IP address and is required with --bootproto=static. The IP address should be specified in standard dotted-decimal format.
--gateway This parameter specifies the IP address of the default gateway in standard dotted-decimal format. It’s required if you specified --bootproto=static.
--netmask The network mask, in standard dotted-decimal format, is specified with this parameter. If you specify --bootproto=static, you must include this value.
--hostname Specifies the hostname for the installed system.
--vlanid If you need the system to use a VLAN ID, specify it with this parameter. Without a VLAN ID specified, the system will respond only to untagged traffic.
--addvmportgroup This parameter is set to either 0 or 1 and controls whether a default VM Network port group is created: 0 does not create the port group; 1 does.
reboot This command is optional and, if specified, will automatically reboot the system at the end of installation. If you add the --noeject parameter, the CD is not ejected.
rootpw This is a required command and sets the root password for the system. If you don’t want the root password displayed in the clear, generate an encrypted password and use the --iscrypted parameter.
upgrade This specifies an upgrade to ESXi 6. The upgrade command uses many of the same parameters as install and also supports a parameter for deleting the ESX Service Console VMDK when upgrading from ESX to ESXi: the --deletecosvmdk parameter.
This is by no means a comprehensive list of all the commands available in the ESXi installation script, but it does cover the majority of the commands you’ll see in use.
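Pulling these commands together, a minimal custom installation script for a host with static IP addressing might look like the following sketch (the root password, IP addresses, and hostname are placeholder values for illustration):

```
# Accept the license agreement
vmaccepteula

# Fresh install to the first local disk, overwriting any existing VMFS datastore
install --firstdisk --overwritevmfs

# Root password (placeholder - use --iscrypted with an encrypted value in production)
rootpw MySecretPass123

# Static network configuration (placeholder addresses and hostname)
network --bootproto=static --ip=192.168.1.200 --netmask=255.255.255.0 --gateway=192.168.1.254 --hostname=esxi01.example.com --addvmportgroup=1

# Reboot automatically when the installation completes
reboot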
Looking back at Listing 2.1, you’ll see that the default installation script incorporates a %post section, where additional scripting can be added using either the Python interpreter or the BusyBox interpreter. What you don’t see in Listing 2.1 is the %firstboot section, which also allows you to add Python or BusyBox commands for customizing the ESXi installation. This section comes after the installation script commands but before the %post section. Any command supported in the ESXi shell can be executed in the %firstboot section, so commands such as vim-cmd, esxcfg-vswitch, esxcfg-vmknic, and others can be combined in the %firstboot section of the installation script.
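For example, a %firstboot section using the BusyBox interpreter could enable the ESXi shell and add a second vSwitch. This is only a sketch; the vSwitch1 and vmnic1 names are assumptions for illustration:

```
%firstboot --interpreter=busybox
# Enable and start the ESXi shell
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
# Create an additional vSwitch and attach an uplink (assumed names)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
```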
A number of commands that were supported in previous versions of vSphere (by ESX or ESXi) are no longer supported in installation scripts for ESXi 6, such as these:
• autopart (replaced by install, upgrade, or installorupgrade)
• auth or authconfig
• bootloader
• esxlocation
• firewall
• firewallport
• serialnum or vmserialnum
• timezone
• virtualdisk
• zerombr
• The --level option of %firstboot
Once you have created the installation script you will use, you need to specify that script as part of the installation routine.
Specifying the location of the installation script as a boot option is not only how you would tell the installer to use the default script but also how you tell the installer to use a custom installation script that you’ve created. This installation script can be located on a USB flash drive or in a network location accessible via NFS, HTTP, HTTPS, or FTP. Table 2.1 summarizes some of the supported boot options for use with an unattended installation of ESXi.
Table 2.1: Boot options for an unattended ESXi installation

ks=file://path Uses the installation script at the specified path on the booted installer’s file system (the default script is at file://etc/vmware/weasel/ks.cfg)
ks=cdrom:/path Uses the installation script at the specified path on the CD/DVD
ks=usb Uses a script named ks.cfg located in the root of an attached USB drive
ks=protocol://serverpath Uses the installation script at the specified network location; HTTP, HTTPS, FTP, and NFS are supported
ip=XX.XX.XX.XX Sets a static IP address for downloading the installation script and the installer
netmask=XX.XX.XX.XX Specifies the network mask for the network interface
gateway=XX.XX.XX.XX Specifies the default gateway
vlanid=XX Configures the network interface to be on the specified VLAN
Not a Comprehensive List of Boot Options
The list found in Table 2.1 includes only some of the more commonly used boot options for performing a scripted installation of ESXi. For the complete list of supported boot options, refer to the vSphere Installation and Setup Guide, available from www.vmware.com/support/pubs/.
To use one or more of these boot options during the installation, you’ll need to specify them at the boot screen for the ESXi installer. The bottom of the installer boot screen states that you can press Shift+O to edit the boot options.
The following is an example that could be used to retrieve the installation script from an HTTP URL; these options would be entered at the prompt at the bottom of the installer boot screen:
> runweasel ks=http://192.168.1.1/scripts/ks.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.254
<ENTER: Apply options and boot>  <ESC: Cancel>
Using an installation script to install ESXi not only speeds up the installation process but also helps to ensure the consistent configuration of all your ESXi hosts.
The final method for deploying ESXi – using vSphere Auto Deploy – is the most complex, but it also offers administrators a great deal of flexibility.
Deploying VMware ESXi with vSphere Auto Deploy
vSphere Auto Deploy is a network deployment service that enables ESXi hosts to be built from an image template over a network connection. No installation media needs to be mounted to get an ESXi host up and running if it is installed using Auto Deploy. You need to address a number of prerequisites before using Auto Deploy. They are listed here, but before I get too far into this section I want to mention the requirement for a vCenter Server: Auto Deploy requires an installed vCenter Server to operate, but we won’t start discussing vCenter until Chapter 3, “Installing and Configuring vCenter Server.” Feel free to skip this section and come back once your vCenter Server is up and running; otherwise, follow along to see how this service is configured.
vSphere Auto Deploy can be configured with one of three different modes:
• Stateless
• Stateless Caching
• Stateful Install
In the Stateless mode, you deploy ESXi using Auto Deploy, but you aren’t actually “installing” ESXi. Instead of actually installing ESXi onto a local disk or a SAN boot LUN, you are building an environment where ESXi is directly loaded into memory on a host as it boots.
In the next mode, Stateless Caching, you deploy ESXi using Auto Deploy just as with Stateless, but the image is cached on the server’s local disk or SAN boot LUN. In the event that the Auto Deploy infrastructure is not available, the host boots from a local cache of the image. In this mode, ESXi is still running in memory but it’s loaded from the local disk instead of from the Auto Deploy server on the network.
The third mode, Stateful Install, is similar to Stateless Caching except the server’s boot order is reversed: local disk first and network second. Unless the server is specifically told to network boot again, the Auto Deploy service is no longer needed. This mode is effectively just a mechanism for network installation.
Auto Deploy uses a set of rules (called deployment rules) to control which hosts are assigned a particular ESXi image (called an image profile). Deploying a new ESXi image is as simple as modifying the deployment rule to point that physical host to a new image profile and then rebooting with the PXE/network boot option. When the host boots up, it will receive a new image profile.
Sounds easy, right? Maybe not. In theory, it is – but there are several steps you have to accomplish before you’re ready to deploy ESXi in this fashion:
1. You must set up a vCenter Server that contains the vSphere Auto Deploy service. This is the service that stores the image profiles.
2. You must set up and configure a Trivial File Transfer Protocol (TFTP) server on your network.
3. A DHCP server is required on your network to pass the correct TFTP information to hosts booting up.
4. You must create an image profile using PowerCLI.
5. Using PowerCLI, you must also create a deployment rule that assigns the image profile to a particular subset of hosts.
Auto Deploy Dependencies
This chapter deals with ESXi host installation methods; however, vSphere Auto Deploy is dependent on host profiles, a feature of VMware vCenter. More information about installing vCenter and configuring host profiles can be found in Chapter 3.
Once you’ve completed these five steps, you’re ready to start provisioning hosts with ESXi. When everything is configured and in place, the process looks something like this:
1. When the physical server boots, the server starts a PXE boot sequence. The DHCP server assigns an IP address to the host and provides the IP address of the TFTP server as well as a boot filename to download.
2. The host contacts the TFTP server and downloads the specified filename, which contains the gPXE boot file and a gPXE configuration file.
3. gPXE executes; this causes the host to make an HTTP boot request to the Auto Deploy server. This request includes information about the host, the host hardware, and host network information. This information is written to the server console when gPXE is executing, as you can see in Figure 2.7.
4. Based on the information passed to it from gPXE (the host information shown in Figure 2.7), the Auto Deploy server matches the server against a deployment rule and assigns the correct image profile. The Auto Deploy server then streams the assigned ESXi image across the network to the physical host.
Figure 2.7 Host information is echoed to the server console when it performs a network boot.
When the host has finished booting, you have a system running ESXi. The Auto Deploy server can also automatically join the ESXi host to vCenter Server and assign a host profile (which we’ll discuss in a bit more detail in Chapter 3) for further configuration. As you can see, this system potentially offers administrators tremendous flexibility and power.
Ready to get started with provisioning ESXi hosts using Auto Deploy? Let’s start with setting up the vSphere Auto Deploy server.
Finding the vSphere Auto Deploy Server
The vSphere Auto Deploy server is where the various ESXi image profiles are stored. The image profile is transferred from this server via HTTP to a physical host when it boots. The image profile is the actual ESXi image, and it consists of multiple vSphere Installation Bundle (VIB) files. VIBs are ESXi software packages; these could be drivers, Common Information Model (CIM) providers, or other applications that extend or enhance the ESXi platform. Both VMware and VMware’s partners can distribute software as VIBs.
The vSphere Auto Deploy service is installed, but not enabled, by default with vCenter Server. (Previous versions of vSphere required a separate installation of Auto Deploy.) Perform the following steps to view the Auto Deploy service configuration:
1. Open up the vSphere Web Client (if you haven’t installed it yet, skip ahead to Chapter 3 and then come back) and connect to vCenter Server.
2. Navigate to vCenter Inventory Lists → vCenter → Manage → Settings → Auto Deploy.
You’ll see information about the registered Auto Deploy service. Figure 2.8 shows the Auto Deploy screen after we installed vCenter and enabled the Auto Deploy service.
Figure 2.8 This screen provides information about the Auto Deploy server that is registered with vCenter Server.
That’s it for the Auto Deploy server itself; once it’s been installed and is up and running, there’s very little additional work or configuration required, except configuring TFTP and DHCP on your network to support vSphere Auto Deploy. The next section provides an overview of the required configurations for TFTP and DHCP.
Configuring TFTP and DHCP for Auto Deploy
The procedures for configuring TFTP and DHCP will vary based on the specific TFTP and DHCP servers you are using on your network. For example, configuring the ISC DHCP server to support vSphere Auto Deploy is dramatically different from configuring the DHCP Server service provided with Windows Server. Therefore, we can provide only high-level information in the following section. Refer to your specific vendor’s documentation for details on how the configuration is carried out.
Configuring TFTP
For TFTP, you need only upload the appropriate TFTP boot files to the TFTP directory. The Download TFTP Boot Zip link shown in Figure 2.8 provides the necessary files. Simply download the zip file using that link, unzip the file, and place the contents of the unzipped file in the TFTP directory on the TFTP server.
Configuring DHCP
For DHCP, you need to specify two additional DHCP options:
• Option 66, referred to as next-server or as Boot Server Host Name, must specify the IP address of the TFTP server.
• Option 67, called boot-filename or Bootfile Name, should contain the value undionly.kpxe.vmw-hardwired.
If you want to identify hosts by IP address in the deployment rules, then you’ll need a way to ensure that the host gets the IP address you expect. You can certainly use DHCP reservations to accomplish this, if you like; just be sure that options 66 and 67 apply to the reservation as well.
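With the ISC DHCP server, for example, option 66 corresponds to the next-server directive and option 67 to the filename directive. The following is a minimal sketch; the subnet, address range, and TFTP server address are assumptions for illustration:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  # Option 66: IP address of the TFTP server (assumed address)
  next-server 192.168.1.10;
  # Option 67: the gPXE boot file the host should download
  filename "undionly.kpxe.vmw-hardwired";
}
```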
Once you’ve configured TFTP and DHCP, you’re ready to PXE boot your server, but you still need to create the image profile to deploy ESXi.
Creating an Image Profile
The process for creating an image profile may seem counterintuitive at first; it did for me. Creating an image profile involves first adding at least one software depot. A software depot could be a directory structure of files and folders on an HTTP server, or (more commonly) it could be an offline depot in the form of a zip file. You can add multiple software depots.
Some software depots will already have one or more image profiles defined, and you can define additional image profiles (usually by cloning an existing image profile). You’ll then have the ability to add software packages (in the form of VIBs) to the image profile you’ve created. Once you’ve finished adding or removing software packages or drivers from the image profile, you can export the image profile (either to an ISO or as a zip file for use as an offline depot).
All image profile tasks are accomplished using PowerCLI, so you’ll need to ensure that you have a system with PowerCLI installed in order to perform these tasks. We’ll describe PowerCLI, along with other automation tools, in more detail in Chapter 14, “Automating VMware vSphere.” I’ll walk you through creating an image profile based on the ESXi 6.0 offline depot zip file available for downloading by registered customers.
Perform the following steps to create an image profile:
1. At a PowerCLI prompt, use the Connect-VIServer cmdlet to connect to vCenter Server.
2. Use the Add-EsxSoftwareDepot command to add the ESXi 6.0 offline depot file:
Add-EsxSoftwareDepot C:\vmware-ESXi-6.0-XXXXXX-depot.zip
3. Repeat the Add-EsxSoftwareDepot command to add other software depots as necessary. The following code adds VMware’s online depot:
Add-EsxSoftwareDepot
https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
4. Use the Get-EsxImageProfile command to list all image profiles in all currently visible depots.
5. To create a new image profile, clone an existing profile (existing profiles are typically read-only) using the New-EsxImageProfile command:
New-EsxImageProfile -CloneProfile "ESXi-6.0-XXXXXX-standard" -Name "My_Custom_Profile"
Once you have an image profile established, you can customize it by adding VIBs, or you can export it. You might want to export the image profile because after you exit a PowerCLI session in which you’ve created image profiles, those image profiles are no longer available when you start a new session. If you export the image profile as a zip-file offline depot, you can easily add it back in when you start a new session.
To export an image profile as a zip file offline depot, run this command:
Export-EsxImageProfile -ImageProfile "My_Custom_Profile" -ExportToBundle -FilePath "C:\path\to\ZIP-file-offline-depot.zip"
When you start a new PowerCLI session to work with an image profile, simply add this offline depot with the Add-EsxSoftwareDepot command.
The final step is establishing deployment rules that link image profiles to servers in order to provision ESXi to them at boot time. I’ll describe how to do this in the next section.
Establishing Deployment Rules
The deployment rules are where the “rubber meets the road” for vSphere Auto Deploy. When you define a deployment rule, you are linking an image profile to one or more hosts. At this point vSphere Auto Deploy will copy all the VIBs defined in the specified image profile up to the Auto Deploy server so they are accessible from the hosts. When a deployment rule is in place, you can actually begin provisioning hosts via Auto Deploy (assuming all the other pieces are in place and functioning correctly, of course).
As with image profiles, deployment rules are managed via PowerCLI. You’ll use the New-DeployRule and Add-DeployRule commands to define new deployment rules and add them to the working rule set, respectively.
Perform the following steps to define a new deployment rule:
1. In a PowerCLI session where you’ve previously connected to vCenter Server and defined an image profile, use the New-DeployRule command to define a new deployment rule that matches an image profile to a physical host:
New-DeployRule –Name"Img_Rule" –Item"My_Custom_Profile"–Pattern"vendor=Cisco","ipv4=10.1.1.225,10.1.1.250"
This rule assigns the image profile named My_Custom_Profile to all hosts with Cisco in the vendor string and that have the IP address 10.1.1.225 or 10.1.1.250. You could also specify an IP range like 10.1.1.225-10.1.1.250 (using a hyphen to separate the start and end of the IP address range).
2. Next, create a deployment rule that assigns the ESXi host to a cluster within vCenter Server:
New-DeployRule –Name"Default_Cluster" –Item"Cluster-1" – AllHosts
This rule puts all hosts into the cluster named Cluster-1 in the vCenter Server with which the Auto Deploy server is registered. (Recall that an Auto Deploy server must be registered with a vCenter Server instance.)
3. Add these rules to the working rule set:
Add-DeployRule Img_Rule
Add-DeployRule Default_Cluster
As soon as you add the deployment rules to the working rule set, vSphere Auto Deploy will, if necessary, start uploading VIBs to the Auto Deploy server in order to satisfy the rules you’ve defined.
4. Verify that these rules have been added to the working rule set with the Get-DeployRuleSet command.
Now that a deployment rule is in place, you’re ready to provision via Auto Deploy. Boot the physical host that matches the patterns you defined in the deployment rule, and it should follow the boot sequence described at the start of this section. Figure 2.9 shows how it looks when a host is booting ESXi via vSphere Auto Deploy.
Figure 2.9 Note the differences in the ESXi boot process when using Auto Deploy versus a traditional installation of ESXi.
By now, you should be seeing the flexibility Auto Deploy offers. If you have to deploy a new ESXi image, you need only define a new image profile (using a new software depot, if necessary), assign that image profile with a deployment rule, and reboot the physical servers. When the servers come up, they will boot the newly assigned ESXi image via PXE.
Of course, there are some additional concerns that you’ll need to address should you decide to go this route:
• The image profile doesn’t contain any ESXi configuration state information, such as virtual switches, security settings, advanced parameters, and so forth. Host profiles are used to store this configuration state information in vCenter Server and pass that configuration information down to a host automatically. You can use a deployment rule to assign a host profile, or you can assign a host profile to a cluster and then use a deployment rule to join hosts to a cluster. We’ll describe host profiles in greater detail in Chapter 3.
• State information such as log files, generated private keys, and so forth is stored in host memory and is lost during a reboot. Therefore, you must configure additional settings such as setting up syslog for capturing the ESXi logs. Otherwise, this vital operational information is lost every time the host is rebooted. The configuration for capturing this state information can be included in a host profile that is assigned to a host or cluster.
In the Auto Deploy Stateless mode, the ESXi image doesn’t contain configuration state and doesn’t maintain dynamic state information; such hosts are therefore considered stateless ESXi hosts. All the state information is stored elsewhere instead of on the host itself.
Ensuring Auto Deploy Is Available
When working with a customer on a vSphere 5.0 Auto Deploy design, I had to ensure that all Auto Deploy components were highly available. This meant that the infrastructure responsible for booting and deploying ESXi hosts was more complicated than normal to design. The PXE and Auto Deploy services and the vCenter VMs were all deployed, in a separate management cluster, on hosts that were not provisioned using Auto Deploy.
As per the Highly Available Auto Deploy best practices in the vSphere documentation, building a separate cluster with a local installation or boot from SAN will ensure there is no chicken-and-egg situation. You need to ensure that in a completely virtualized environment, your VMs that provision ESXi hosts with Auto Deploy are not running on the ESXi hosts they need to build.
Stateless Caching Mode
Unless your ESXi host hardware does not have any local disks or bootable SAN storage, I would recommend considering one of the two other Auto Deploy modes. These modes offer resiliency for your hosts if at any time the Auto Deploy services become unavailable.
To configure Stateless Caching, follow the previous procedure for Stateless with these additions:
1. Within vCenter, navigate to the Host Profiles section: vCenter Home → Host Profiles.
2. Create a new host profile or edit the existing one attached to your host.
3. Navigate to System Image Cache Configuration under Advanced Configuration Settings.
4. Select Enable Stateless Caching On The Host.
5. Input the disk configuration details, using the same disk syntax as listed earlier in the section “Performing an Unattended Installation of VMware ESXi.” By default it will populate the first available disk, as you can see in Figure 2.10.
6. Click Finish to end the Host Profile Wizard.
7. Next you need to configure the boot order in the host BIOS to boot from the network first, and the local disk second. This procedure will differ depending on your server type.
8. Reboot the host so that it loads a fresh Auto Deploy image and the new host profile is attached.
Figure 2.10 Editing the host profile to allow Stateless Caching on a local disk
This configuration tells the ESXi host to take the Auto Deploy image loaded in memory and save it to the local disk after a successful boot. If for some reason the network or Auto Deploy server is unavailable when your host reboots, it will fall back and boot the cached copy on its local disk.
Stateful Mode
Just like Stateless Caching mode, the Auto Deploy Stateful Install mode is configured by editing host profiles within vCenter and the boot order settings in the host BIOS.
1. Within vCenter, navigate to the Host Profiles section: vCenter Home → Host Profiles.
2. Create a new host profile or edit the existing one attached to your host.
3. Navigate to System Image Cache Configuration under Advanced Configuration Settings.
4. Select Enable Stateful Installs On The Host.
5. Input the disk configuration details, using the same disk syntax as listed earlier in the section “Performing an Unattended Installation of VMware ESXi.” By default it will populate the first available disk (see Figure 2.10).
6. Click Finish to end the Host Profile Wizard.
7. Next you need to configure the boot order in the host BIOS to boot from the local disk first, and the network second. This procedure will differ depending on your server type.
8. The host will boot into Maintenance mode, and you must apply the host profile by clicking Remediate Host on the host Summary tab.
9. Provide IP addresses for the host and then reboot the host.
Upon this reboot, the host is now running off the local disk like a “normally provisioned” ESXi host.
vSphere Auto Deploy offers some great advantages, especially for environments with lots of ESXi hosts to manage, but it can also add complexity. As mentioned earlier, it all comes down to the design and requirements of your vSphere deployment.