
Chapter 1
Software-Defined Storage Design
Implementing a Software-Defined Storage Strategy

As a consequence of the ever-increasing cost of enterprise business storage, as outlined previously, more IT industry attention than ever before is focused on new storage architectures and technologies designed to drive down the total cost of ownership associated with storage. This approach aims to reduce both CapEx and OpEx costs by reducing hardware to its bare commodity components and removing the "secret sauce" software from the controllers, placing it instead in a common storage software layer provided by either the hypervisor or a software-defined storage model.

In the past, several attempts have been made to develop a common management system that can transcend storage hardware and software vendors. For example, the Storage Networking Industry Association (SNIA) developed the Storage Management Initiative Specification (SMI-S), and management interfaces based on Representational State Transfer (REST) have also been promoted. However, these have seen only limited adoption by the storage industry. To achieve even limited interoperability and provide a single point of management and support, the only real option for large enterprise IT organizations and cloud service providers has been to deploy homogeneous storage islands from a single hardware vendor, in an attempt to contain operational overhead and therefore reduce OpEx costs.

The theory behind the software-defined storage model is to facilitate management across a common plane by breaking down the barriers to interoperability that exist with proprietary vendor storage hardware. For most IT organizations, storage arrays from different vendors, or even different models of array hardware from the same vendor, form isolated storage islands that are difficult to interoperate, share resources across, or manage from a single pane of glass.

The software-defined storage model aims to provide OpEx cost savings by driving efficient capacity utilization and platform management in a more agile way, typically by providing automation and a common management interface for all of the storage infrastructure. Therefore, the challenge for enterprise IT organizations and cloud service providers is to find the right software-defined storage solution, one that can apply the right centralized software services to the entire infrastructure by using simple, unified operational procedures within a common user interface.

The software-defined storage model also aims to reduce CapEx costs by moving away from proprietary storage hardware, and toward technology that facilitates unified management across all components of the storage infrastructure. When considering hardware solutions to deliver a software-defined storage-based environment, IT executives may be focused on reducing the total cost of ownership of storage resources. The following list provides a buyer’s guide that IT organizations can use when working with their respective storage vendors to establish core storage requirements:

• Which storage solutions can work with the applications, hypervisors, and data that we currently have and are predicting to have going forward?

• Which storage solutions can enhance application performance?

• Which storage solutions best provide the required data availability?

• Which storage solutions can be deployed, configured, and managed quickly and effectively using currently available skills?

• Which storage solutions can provide greater, and if possible, optimal, storage capacity?

• Which storage solutions can best facilitate flexibility (provide the ability to add capacity or performance in the future without impacting the applications)?

• Which storage solutions provide automation and centralized management capabilities?

• Which storage technology will meet the preceding requirements within the available budget?

The approach often taken by IT organizations is to follow the lead of a trusted storage vendor. However, a key challenge for IT decision makers is to see beyond current trends in the industry and to arrive at a strategy that will provide a solution meeting not only today's storage requirements at an acceptable level of cost, but also next year's requirements for the various lines of business, and even the next decade's. This requires an objective and clear-headed evaluation of the options, their costs, and the alternative approaches that could deliver the required storage functionality while optimizing both CapEx and OpEx budgets.

An additional challenge that shouldn't be overlooked is educating decision makers about the intricacies of storage technologies in order to obtain budgetary approval. Enterprise IT executives rarely question the requirement to store and retain their ever-growing volume of business data. However, explaining the differences between various storage products, and their advantages and drawbacks, often requires a transfer of technical knowledge so that decision makers can grasp the concepts and challenges faced by the architect, and how these relate to the storage platform design.

When finances are stretched, as they so often are, a high storage infrastructure expenditure can stand out significantly on an IT executive's annual budget spreadsheet. By examining the storage environment and calculating the total cost of ownership of storage resources, IT organizations can seek to identify new and innovative ways to address CapEx and OpEx expenditures through the software-defined storage model, without compromising application performance, capacity, availability, or other data-related services.

Software-Defined Storage Summary

Just as VMware introduced x86 server virtualization to improve the cost metrics and utilization efficiencies of the compute platform, so too can the software-defined storage model be used to make the most efficient use of storage infrastructure, thereby reducing the total cost of ownership through storage acquisition and operational cost savings.

In the software-defined storage data center, all storage – whether it is directly attached hyper-converged Virtual SAN, or is SAN attached and leveraging Virtual Volumes–enabled arrays – can be used as part of a storage resource pool. This eliminates the requirement to rip and replace all of the storage infrastructure in order to adopt a fully hyper-converged unified storage model as part of a single migration project, and allows the IT organization to spread the costs associated with a full storage infrastructure refresh over a number of years.

This is only one storage strategy. Equally valid is the mixed hybrid approach of employing Virtual Volumes and Virtual SAN as a long-term design, effectively using both solutions for specific use cases and workloads, as illustrated in Figure 1.10.


Figure 1.10 Hybrid Virtual Volumes and Virtual SAN platform


Just as with the classic storage model, large enterprise customers and cloud service providers adopting software-defined storage should typically configure resources into pools, with each pool composed of a different set of characteristics and services.

For instance, a Virtual SAN tier 1 pool may be optimized for performance and business-critical workloads, while a tier 0 pool may comprise all-flash disk groups and provide storage resources to specific I/O-intensive workloads. Following a similar model, high-capacity, low-cost, low-performance disks may be fashioned into a pool intended for the data that is infrequently accessed or updated. With this type of approach to storage provisioning, the software-defined storage model will continue to enable the implementation of a tiered storage strategy in order to provide improved capacity utilization and resource efficiency.

Furthermore, the implementation of a software-defined storage model allows technologies such as thin provisioning, compression, and de-duplication to be applied across an entire storage platform, rather than isolating these features behind specific hardware controllers. This helps to ensure that storage capacity can be used more efficiently, via a global storage policy.

These technologies can help slow the rate at which new capacity must be added to the infrastructure, and help ensure that where appropriate, less-expensive hardware can be deployed. In addition, centralizing this functionality through a single control plane enhances ease of administration, which in turn can also help reduce operational costs and the efforts associated with software maintenance.
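
To put these efficiency gains in perspective, the following back-of-the-envelope calculation works through the effective capacity of a single hyper-converged pool. All of the figures are illustrative assumptions (cluster size, protection level, and a 2:1 space-efficiency ratio), not measurements or vendor guidance:

    # Illustrative capacity estimate for a hyper-converged storage pool.
    # All inputs are assumptions for the sake of the example.
    nodes = 4                       # hosts contributing storage to the pool
    raw_tb_per_node = 10            # raw capacity per host, in TB
    failures_to_tolerate = 1        # data is mirrored: copies = failures_to_tolerate + 1
    dedupe_compression_ratio = 2.0  # assumed 2:1 space-efficiency ratio

    raw_capacity = nodes * raw_tb_per_node
    usable_after_protection = raw_capacity / (failures_to_tolerate + 1)
    effective_capacity = usable_after_protection * dedupe_compression_ratio

    print(f"Raw capacity:               {raw_capacity:.0f} TB")              # 40 TB
    print(f"Usable after mirroring:     {usable_after_protection:.0f} TB")   # 20 TB
    print(f"Effective logical capacity: {effective_capacity:.0f} TB")        # 40 TB

In this sketch, the capacity consumed by mirroring for availability is effectively given back by deduplication and compression, which is why applying these services globally, rather than per array, has such a direct impact on how quickly new hardware must be purchased.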

The software-defined storage model is not an industry standard, and various approaches exist for the design, implementation, and function of the solution stack. Both VMware and independent software vendors (ISVs) have in recent years developed the concepts and product architecture of the software-defined storage platform for its integration into the market’s leading hypervisor, to ensure that software-defined storage can operate within a robust and affordable model. These initiatives, which are the focus of much of this book, include the following:

• The introduction of the hyper-converged infrastructure product Virtual SAN, a bare-bones, hardware-agnostic model with a direct-attached storage configuration. This reduces or removes altogether the requirement for a switched fabric or LAN-attached storage infrastructure to manage, with no more proprietary storage hardware to support.

• The abstraction of advanced storage functions away from the storage vendor's hardware, placing them instead in the hypervisor software and management control plane. This approach simplifies operations, with no more proprietary software licenses and firmware levels to manage, and enables storage services to be applied to all capacity, not just specific hardware.

• The introduction of a single storage service management plane, via a unified user interface. This removes the requirement for third-party tools and specific array element managers to monitor and administer a heterogeneous storage infrastructure.

All of these attributes provide a significant improvement over the ongoing challenges associated with classic storage infrastructures, although they do not address all the problems that make proprietary storage systems expensive to own and operate.

Hyper-Converged Infrastructure and Virtual SAN

The hyper-converged infrastructure (HCI) hardware architecture model uses the hypervisor to deliver compute, networking, and shared storage from a single x86 server platform. This software-driven architecture allows physical storage resources to become part of commodity x86 servers, enabling a building-block approach with a web-scale level of scalability. By adopting this commodity x86 server hardware approach and combining storage and compute hardware into a single entity, IT organizations and cloud service provider data centers can operate with agility on a highly scalable, cost-effective, fully converged platform.

Virtual SAN is VMware's HCI platform, which enables this approach to be taken through the VMware integrated stack of technologies. Virtual SAN aggregates local storage into a unified data plane, which virtual machines can then use. It also provides a fully integrated, policy-driven management layer, which allows each virtual machine's storage to be managed centrally through policies attached to the virtual machine's own settings. These policies can define reliability, redundancy, and performance characteristics that must be honored, independently of all other virtual machines that may reside on the same storage platform.
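
Conceptually, such a policy is nothing more than a named set of rules attached to a virtual machine. The sketch below is purely illustrative, plain Python data rather than the Storage Policy-Based Management API, and the rule names mirror commonly documented Virtual SAN capabilities; real policies are defined and assigned through vCenter, PowerCLI, or the vSphere APIs:

    # Purely illustrative sketch of a per-VM storage policy as a set of rules.
    # This is not the SPBM API; it only shows the shape of a policy.
    tier1_policy = {
        "name": "Tier-1 Business Critical",
        "rules": {
            "hostFailuresToTolerate": 1,  # failures each object must survive
            "stripeWidth": 2,             # capacity devices each replica is striped across
            "proportionalCapacity": 0,    # object space reservation (%); 0 = thin provisioned
            "forceProvisioning": False,   # fail provisioning if the policy cannot be met
        },
    }

    def describe(policy):
        """Print a human-readable summary of a policy sketch."""
        print(policy["name"])
        for rule, value in policy["rules"].items():
            print(f"  {rule} = {value}")

    describe(tier1_policy)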

Virtual SAN is the foundational component of VMware’s hyper-converged infrastructure solution. This model allows the convergence of compute, storage, and networking onto a single integrated layer of software that can run on any commodity x86 infrastructure aligned with the requirements set out on VMware’s hardware compatibility list (HCL). While vSphere abstracts and aggregates compute resources into logical pools, Virtual SAN, embedded into the hypervisor’s VMkernel, can pool together server-attached disk devices to create a high-performance distributed datastore.

This approach can easily meet the storage requirements of the most demanding IT organization or cloud service provider, at a lower cost than legacy monolithic SAN or NAS storage devices. Virtual SAN also allows vSphere and storage administrators to set aside concepts such as RAID sets and LUNs, and instead focus on the specific storage needs of applications. In addition, Virtual SAN can simplify capacity planning by scaling both storage and compute concurrently, allowing for the nondisruptive addition of new nodes without the purchase of costly storage frames or disk shelves. Virtual SAN is addressed in more detail in Chapters 4 through 7.
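
As a simple illustration of this single-datastore view, the following sketch lists the datastores visible to vCenter, along with their type and capacity; a Virtual SAN cluster surfaces here as a single datastore of type "vsan". It assumes the open source pyVmomi SDK, and the vCenter host name and credentials are placeholders:

    # Sketch: list the datastores visible to vCenter, including any Virtual SAN datastore.
    # Assumes the pyVmomi SDK; host, user, and password below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            cap_tb = ds.summary.capacity / (1024 ** 4)
            free_tb = ds.summary.freeSpace / (1024 ** 4)
            # ds.summary.type is typically 'VMFS', 'NFS', 'vsan', or 'VVOL'
            print(f"{ds.name:30} {ds.summary.type:6} {cap_tb:6.1f} TB total, {free_tb:6.1f} TB free")
        view.DestroyView()
    finally:
        Disconnect(si)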

Virtual Volumes

Although it is not part of an HCI architecture strategy, Virtual Volumes is nevertheless an important component in VMware's software-defined storage model. Virtual Volumes uses shared storage devices in a new way, and transforms storage management by enabling full virtual machine awareness in the storage array. Based on a T10 industry standard, Virtual Volumes provides a unique level of integration between vSphere and third-party vendors' storage hardware, which significantly improves the efficiency and manageability of virtual workloads.

Virtual Volumes virtualizes shared SAN and NAS storage devices, which are then presented to vSphere hosts as logical pools of raw disk capacity, called virtual datastores. Virtual Volume objects, which represent virtual disks and other virtual machine entities, then reside natively on the underlying storage, making the object, or virtual disk, the primary unit of data management at the array level, instead of the LUN. As a result, it becomes possible to execute storage operations with virtual-machine, or even virtual-disk, granularity on the underlying storage system, and therefore to provide native array-based data services, such as snapshots or replication, to individual virtual machines.

To facilitate a simplified and unified approach to management, all of this is driven by a common storage-policy mechanism, which brings both Virtual SAN storage resources and Virtual Volumes external storage into a single management plane. Virtual Volumes is covered in more detail in Chapter 8, “Policy-Driven Storage Design with Virtual Volumes.”
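
To make this per-virtual-machine granularity concrete, the sketch below requests a snapshot of a single virtual machine by name. It assumes the same pyVmomi service instance (si) as the earlier example and a hypothetical VM name; when the VM resides on a Virtual Volumes datastore, this operation can be offloaded to the array and applied to just that VM's objects, although the exact behavior depends on the vendor implementation:

    # Sketch: snapshot a single virtual machine by name.
    # Assumes an existing pyVmomi service instance 'si' (see the earlier example);
    # the VM name below is a hypothetical placeholder.
    from pyVmomi import vim

    def find_vm_by_name(si, name):
        """Return the first virtual machine whose name matches, or None."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.VirtualMachine], True)
        try:
            for vm in view.view:
                if vm.name == name:
                    return vm
        finally:
            view.DestroyView()
        return None

    vm = find_vm_by_name(si, "sql-prod-01")  # hypothetical VM name
    if vm is not None:
        # On a VVol datastore the array can service this snapshot natively,
        # at the granularity of this VM's own virtual disks.
        task = vm.CreateSnapshot_Task(name="pre-change",
                                      description="Illustrative per-VM snapshot",
                                      memory=False, quiesce=False)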

Classic and Next-Generation Storage Models

This book refers to storage technologies as either classic or next-generation. Because these terms can have multiple meanings, this section provides an overview of each to clarify.

This book uses classic storage model to describe the traditional shared storage model used by vSphere. This typically includes LUNs, VMFS-based volumes and datastores, or NFS mount points, with a shared storage protocol providing I/O connectivity. Despite its constraints, this model has been successfully employed for years, and will continue to be used for some time by IT organizations and cloud service providers across the industry.

The next-generation storage model refers to VMware’s software-defined solutions, Virtual SAN and Virtual Volumes, which bring about a new era in storage design, implementation, and management.

As addressed earlier in this chapter, the primary aim of VMware's software-defined storage model is to bring simplicity, efficiency, and cost savings to storage resources. The model does this by abstracting the underlying storage in order to make the application the fundamental unit of management across a heterogeneous storage platform. With both Virtual SAN and Virtual Volumes, VMware moves away from the rigid constraints of classic LUNs and volumes, and provides a new way to manage storage on a per-virtual-machine basis, through a more flexible policy-driven approach.

However, before addressing these next-generation storage technologies, you first need to understand the approach taken to storage over the last generation of vSphere-based virtualization platforms, and see how the VMware stack itself interacts with storage resources to provide a flexible, modern virtual data center.

This first chapter has addressed the VMware storage landscape, processes associated with storage design, and challenges faced by vSphere storage administration teams when maintaining complex, heterogeneous storage platforms on a daily basis for enterprise IT organizations and cloud service providers. The next chapter presents many of the essential design considerations based on the classic storage model previously outlined.
