Proven Infrastructure Guide

EMC VSPEX PRIVATE CLOUD

Microsoft Hyper-V and EMC ScaleIO

EMC VSPEX

Abstract

This document describes the EMC® VSPEX® Proven Infrastructure solution for private cloud deployments with Microsoft Hyper-V and EMC ScaleIO technology.

November 2014


Copyright © 2014 EMC Corporation. All rights reserved. Published in the USA.

Published November 2014

EMC believes the information in this publication is accurate as of its publication date.

The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud

Microsoft Hyper-V and EMC ScaleIO Proven Infrastructure Guide

Part Number: H13420


Contents

Chapter 1 Executive Summary 9

Introduction ... 10

Target audience ... 10

Document purpose ... 10

Business needs ... 11

Chapter 2 Solution Overview 13

Introduction ... 14

Virtualization ... 14

Compute ... 14

Network ... 14

Storage ... 15

ScaleIO software ... 15

Overview ... 15

Software components ... 15

Software architecture ... 16

Storage definitions ... 17

Snapshots ... 19

ScaleIO 1.3 ... 19

Chapter 3 Solution Technology Overview 21

Overview ... 22

VSPEX Proven Infrastructures ... 22

Key components ... 23

Virtualization layer ... 23

Overview ... 23

Microsoft Hyper-V ... 24

Microsoft System Center Virtual Machine Manager ... 24

Windows Server Cluster-Aware Updating ... 24

Compute layer ... 24

Network layer ... 27

Storage layer ... 28

Chapter 4 Solution Architecture Overview 31

Overview ... 32


Solution architecture ... 32

Logical architecture ... 32

Key components ... 33

Hardware resources ... 34

Software resources ... 35

Server configuration guidelines ... 35

Overview ... 35

Ivy Bridge series processors... 35

Hyper-V memory virtualization ... 36

Memory configuration guidelines ... 37

Network configuration guidelines ... 38

Overview ... 38

VLANs ... 38

ScaleIO configuration guidelines ... 39

Overview ... 39

Hyper-V storage virtualization ... 39

High-availability and failover ... 41

Overview ... 41

Virtualization layer ... 41

Compute layer ... 41

Network layer ... 42

ScaleIO layer... 42

Chapter 5 Sizing the Environment 45

Overview ... 46

Reference workload ... 46

Scalability ... 47

VSPEX building blocks ... 47

Building block approach ... 47

Validated building block ... 47

Customizing the building block ... 48

Planning for high availability ... 50

Determining the number of building block nodes required ... 51

Example... 51

Configuration sizing guidelines ... 52

Overview ... 52

Using the customer sizing worksheet ... 53

Calculating the building block requirement ... 55

Fine-tuning hardware resources ... 55


Summary ... 56

Chapter 6 VSPEX Solution Implementation 57

Overview ... 58

Pre-deployment tasks ... 58

Deployment prerequisites ... 59

Customer configuration data ... 60

Network implementation ... 60

Preparing the network switches ... 60

Configuring the infrastructure network ... 61

Configuring the VLANs ... 61

Completing the network cabling... 61

Installing and configuring the Microsoft Hyper-V hosts ... 62

Installing the Windows hosts ... 62

Installing Hyper-V and configuring failover clustering ... 62

Configuring Windows host networking ... 62

Planning virtual machine memory allocations ... 63

Installing and configuring Microsoft SQL Server databases ... 63

Overview ... 63

Creating a virtual machine for SQL Server ... 64

Installing Microsoft Windows on the virtual machine ... 64

Installing SQL Server ... 64

Configuring SQL Server for SCVMM ... 64

Deploying the System Center Virtual Machine Manager server ... 65

Overview ... 65

Creating a SCVMM host virtual machine ... 66

Installing the SCVMM guest OS ... 66

Installing the SCVMM server ... 66

Installing the SCVMM Admin Console ... 66

Installing the SCVMM agent locally on a host ... 66

Adding the Hyper-V cluster to SCVMM ... 66

Creating a virtual machine in SCVMM... 66

Performing partition alignment ... 66

Creating a template virtual machine... 67

Deploying virtual machines from the template ... 67

Preparing and configuring the storage ... 67

Preparing the installation worksheet ... 68

Installing the ScaleIO components... 69

Creating and mapping volumes ... 74


Installing the GUI ... 77

Provisioning a virtual machine ... 77

Summary ... 77

Chapter 7 Verifying the Solution 79

Overview ... 80

Post-install checklist ... 81

Deploying and testing a single virtual server ... 81

Verifying the redundancy of the solution components ... 81

Chapter 8 System Monitoring 83

Overview ... 84

Key areas to monitor ... 84

Performance baseline ... 84

Servers ... 85

Networking ... 85

ScaleIO layer... 85

Appendix A Reference Documentation 87

EMC documentation ... 88

Other documentation ... 88

Appendix B Customer Configuration Worksheet 89

Customer configuration worksheet ... 90

Printing the worksheet ... 91

Appendix C Customer Sizing Worksheet 93

Customer sizing worksheet for Private Cloud ... 94


Figures

Figure 1. Layout of SDS and SDC ... 16

Figure 2. Protection domains ... 18

Figure 3. Storage pools ... 19

Figure 4. VSPEX private cloud components ... 22

Figure 5. VSPEX Proven Infrastructures ... 23

Figure 6. Compute layer flexibility examples ... 26

Figure 7. Example of highly available network design ... 28

Figure 8. ScaleIO layout ... 29

Figure 9. ScaleIO SCSI device ... 29

Figure 10. Logical architecture for the solution ... 32

Figure 11. Hypervisor memory consumption ... 37

Figure 12. Required networks for ScaleIO ... 39

Figure 13. Hyper-V virtual disk types ... 40

Figure 14. High availability at the virtualization layer ... 41

Figure 15. Redundant power supplies ... 41

Figure 16. Network layer high availability ... 42

Figure 17. Automatic rebalancing when disks are added... 43

Figure 18. Automatic rebalancing when disks or nodes are removed ... 43

Figure 19. Determine the maximum number of virtual machines that a building block can support ... 50

Figure 20. Required resource from the reference virtual machine pool ... 55

Figure 21. Sample Ethernet network architecture ... 61

Figure 22. Installation Manager Home page ... 70

Figure 23. Manage installation packages ... 70

Figure 24. Upload installation packages ... 71

Figure 25. Upload CSV file... 71

Figure 26. Installation configuration ... 72

Figure 27. Monitor page ... 73

Figure 28. Completed Install Operation ... 74


Tables

Table 1. Recommended 10 Gb switched Ethernet network layer ... 27

Table 2. Key components ... 33

Table 3. Solution hardware ... 34

Table 4. Solution software ... 35

Table 5. Hardware resources for the compute layer ... 36

Table 6. Hardware resources for the network layer ... 38

Table 7. VSPEX Private Cloud workload ... 46

Table 8. Building block node configuration ... 47

Table 9. Maximum number of virtual machines per node, limited by disk capacity ... 48

Table 10. Maximum number of virtual machines per node, limited by disk performance ... 49

Table 11. Redefined building block node configuration example ... 49

Table 12. Example 1 ... 51

Table 13. Example 2 ... 52

Table 14. Customer sizing worksheet example ... 53

Table 15. Reference virtual machine resources ... 54

Table 16. Example worksheet row ... 54

Table 17. Server resource component totals ... 56

Table 18. Deployment process overview ... 58

Table 19. Pre-deployment tasks ... 59

Table 20. Deployment prerequisites checklist ... 59

Table 21. Tasks for switch and network configuration ... 60

Table 22. Tasks for server installation ... 62

Table 23. Tasks for SQL Server database setup ... 63

Table 24. Tasks for SCVMM configuration ... 65

Table 25. Set up and configure a ScaleIO environment ... 67

Table 26. CSV installation spreadsheet ... 68

Table 27. add_volume command parameters ... 75

Table 28. map_volume_to_sdc command parameters ... 76

Table 29. Tasks for testing the installation ... 80

Table 30. Common server information ... 90

Table 31. Hyper-V server information ... 90

Table 32. ScaleIO information ... 90

Table 33. Network infrastructure information ... 91

Table 34. VLAN information ... 91

Table 35. Service accounts ... 91

Table 36. Customer sizing worksheet ... 94


Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction ... 10

Target audience ... 10

Document purpose ... 10

Business needs ... 11


Introduction

EMC® VSPEX® Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk.

This document is a comprehensive guide to the technical aspects of the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution. It describes the solution architecture and key components, and describes how to design, size, and deploy the solution to meet the customer’s needs.

Target audience

Readers of this document must have the necessary training and background to install and configure a VSPEX solution based on the Microsoft Hyper-V hypervisor, EMC ScaleIO, and associated infrastructure, as required by this implementation. External references are provided where applicable, and readers should be familiar with these documents. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Partners selling and sizing a VSPEX Private Cloud with EMC ScaleIO infrastructure should focus on the first five chapters of this guide. After purchase, implementers of the solution should focus on the implementation guidelines in Chapter 6, the solution validation in Chapter 7, and the monitoring guidelines in Chapter 8.

Document purpose

This document includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.

The VSPEX Private Cloud architecture provides customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on a Microsoft Hyper-V virtualization layer. EMC ScaleIO software runs on top of the Hyper-V hypervisor. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment. This document details server capacity minimums for CPU, memory, and network interfaces. The customer can select any server and networking hardware that meets or exceeds the stated minimums.

The VSPEX Private Cloud for ScaleIO with Hyper-V solution described in this document is based on the capacity of the cluster server and on a defined reference workload.

Because not every virtual machine has the same requirements, the document includes methods and guidance to adjust the system to be cost effective as deployed.

A private cloud architecture is a complex system offering. This document provides prerequisite software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component is installed, the validation tests and monitoring instructions ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven technologies to create complete virtualization solutions that allow you to make informed decisions for the hypervisor, server, and networking layers.

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud with Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model. The solution simplifies integration management while maintaining application design and implementation options. It also provides unified administration while still enabling adequate control and monitoring of process separation.

The business benefits of the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO architectures include:

An end-to-end virtualization solution to effectively use the capabilities of the unified infrastructure components

Efficient virtualization of virtual machines for varied customer use cases

A reliable, flexible, and scalable reference design


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction ... 14

Virtualization ... 14

Compute ... 14

Network ... 14

Storage ... 15

ScaleIO software ... 15


Introduction

The VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution provides a complete system architecture capable of supporting virtual machines with a redundant server and network topology and highly available ScaleIO software. The core services provided by the solution include virtualization, compute, networking, and ScaleIO software-defined storage.

Virtualization

Microsoft Hyper-V is a key virtualization platform in the industry, providing flexibility and cost savings by consolidating large, inefficient server farms into nimble, reliable cloud infrastructures. Hyper-V live migration and Dynamic Optimization features make it a solid business choice.

Compute

VSPEX provides the flexibility to design and implement the customer’s choice of server components. The infrastructure must meet the following requirements:

Sufficient cores and memory to support the required number and types of virtual machines

Sufficient network connections to enable redundant connectivity to the system switches

Sufficient capacity to enable the environment to withstand a server failure and failover

Network

VSPEX provides the flexibility to design and implement the customer’s choice of network components. The infrastructure must meet the following requirements:

Redundant network links for the hosts, switches, and storage

Traffic isolation based on industry best practices

Support for link aggregation

IP network switches with a minimum non-blocking backplane capacity sufficient for the target number of virtual machines and their associated workloads; EMC recommends using enterprise-class IP network switches with advanced Quality of Service features.


Storage

Data storage is at the heart of a virtualized data center. Without an effective storage solution, all progress toward virtualization and the efficiency it brings is incomplete.

The virtual server infrastructure must allow storage and compute to be scalable and flexible. It must also deliver enterprise-grade availability, resilience, reliability, and adaptability.

Enterprise-class data centers need robust, large-scale block storage that offers high performance and high availability to support the ever-growing base of business applications, hypervisors, file systems, and databases. The solution must have low total cost of ownership (TCO) and a stable cost/performance ratio to realize the return on investment (ROI) improvement that is expected of modern virtual data centers.

EMC ScaleIO has the architecture and features that make it an ideal storage foundation for virtual data centers.

ScaleIO software

Overview

ScaleIO is a software-only solution that uses existing local disks and LANs to create a virtual storage area network (SAN) that has all the benefits of external storage—but with reduced cost and complexity. ScaleIO software turns existing local internal storage into internal shared block storage that is comparable to, or better than, the more expensive external shared block storage. The lightweight ScaleIO software components are installed on the application hosts and communicate using a standard LAN to handle application I/O requests sent to the ScaleIO block volumes.

An extremely efficient decentralized block I/O flow, combined with a distributed, sliced volume layout, results in a massively parallel I/O system that can scale to thousands of nodes.

ScaleIO is designed and implemented with enterprise-grade resilience. The software features efficient, distributed, auto-healing processes that overcome media and node failures without administrator involvement. Dynamic and elastic, ScaleIO allows administrators to add or remove nodes and capacity as needed. The software immediately responds to the changes, rebalancing the storage distribution and achieving a layout that optimally suits the new configuration. Because ScaleIO is hardware agnostic, the software works efficiently with various types of server storage, networks, and hosts.

Software components

The ScaleIO virtual SAN software consists of three components:

Meta Data Manager (MDM)—Configures and monitors the ScaleIO system. The MDM can be configured in a redundant Cluster Mode, with three members on three servers, or in Single Mode on a single server.

ScaleIO Data Server (SDS)—Manages the capacity of a single server and acts as a back end for data access. The SDS is installed on all servers contributing storage devices to the ScaleIO system.

ScaleIO Data Client (SDC)—A lightweight device driver located on each host whose applications or file system requires access to the ScaleIO virtual SAN block devices. The SDC exposes block devices representing the ScaleIO volumes that are mapped to that host.

Software architecture

Storage and compute convergence

ScaleIO converges the storage and application layers. The hosts that run applications can be the same hosts used to realize shared storage, yielding a wall-to-wall, single layer of hosts. Because the same hosts run applications and provide storage for the virtual SAN, the SDC and SDS are typically both installed on each of the participating hosts, as shown in Figure 1.

Figure 1. Layout of SDS and SDC

The ScaleIO software components are designed and implemented to consume the minimum computing resources required for operation, and to have a negligible effect on the applications running on the hosts.

Pure block storage implementation

ScaleIO implements a pure block storage layout. The architecture and data paths are optimized for block storage access needs. For example, when an application submits a read I/O request to the SDC, the SDC deduces which SDS is responsible for the specified volume address and then interacts directly with that SDS. The SDS reads the data (by issuing a single read I/O request to the local storage, or by fetching the data from the cache in a cache-hit scenario), and returns the result to the SDC. The SDC provides the read data to the application.

This simple implementation consumes as few resources as necessary. The data moves over the network exactly once, and SDS storage receives only one I/O request.

The write I/O flow is similarly simple and efficient. ScaleIO offers optimal I/O efficiency, unlike some block storage systems that run on top of a file system or on top of object storage that runs on top of a local file system.
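
This read path can be illustrated with a short Python sketch (hypothetical names and structures, not ScaleIO source code): the client-side driver holds the map of which data server owns each chunk, so every read goes straight to that server with no central broker in the path.

# Illustrative sketch of a decentralized block read path (not ScaleIO code).
# The client-side driver (SDC role) maps chunk IDs to their owning data
# server (SDS role) and issues the read directly to that server.
CHUNK_SIZE = 1024 * 1024  # assume 1 MB chunks for this example

class DataServer:                     # stands in for an SDS
    def __init__(self):
        self.chunks = {}              # chunk_id -> bytes

    def read(self, chunk_id, offset, length):
        data = self.chunks.get(chunk_id, b"\x00" * CHUNK_SIZE)
        return data[offset:offset + length]

class DataClient:                     # stands in for an SDC
    def __init__(self, chunk_map):
        self.chunk_map = chunk_map    # chunk_id -> DataServer

    def read(self, volume_offset, length):
        # Assumes the read does not cross a chunk boundary.
        chunk_id = volume_offset // CHUNK_SIZE
        owner = self.chunk_map[chunk_id]          # local lookup, no broker
        return owner.read(chunk_id, volume_offset % CHUNK_SIZE, length)

server = DataServer()
client = DataClient({0: server})
print(len(client.read(4096, 512)))    # 512 bytes returned directly by the owning server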

Massively parallel, scale-out I/O architecture

ScaleIO can scale to a large number of nodes, thus breaking the traditional scalability barrier of block storage. Because the SDCs propagate I/O requests directly to the relevant SDSs, there is no central point through which the requests move, and a potential bottleneck is avoided. This decentralized data flow is important to the linearly scalable performance of ScaleIO. A large ScaleIO configuration results in a massively parallel system. The more servers or disks the system has, the greater the number of parallel channels that are available for I/O traffic.

Hardware agnostic

ScaleIO is platform agnostic and works with existing underlying hardware resources. In addition to its compatibility with various types of disks, networks, and hosts, ScaleIO can take advantage of the write buffer of existing local RAID controller cards; it can also run on servers that do not have a local RAID controller card.

ScaleIO supports many options for SDS local storage, including internal disks, directly attached external disks, virtual disks exposed by an internal RAID controller, and partitions within such disks. Partitions are useful for combining system boot partitions with ScaleIO capacity on the same raw disks. If the system already has a large, mostly unused partition, ScaleIO does not require disk repartitioning. The SDS can use a file within that partition as its storage space.

Clustered and striped volume layout

A ScaleIO volume is a block device that is exposed to one or more hosts. It is the equivalent of a logical unit in a SCSI environment. ScaleIO breaks each volume into a large number of data chunks. These data chunks are scattered across the SDS cluster’s nodes and disks in a fully balanced manner. This layout minimizes hot spots across the cluster and enables scaling of the overall I/O performance of the system through the addition of nodes or disks. Furthermore, this layout enables a single application that is accessing a single volume to use the full IOPS of all the cluster’s disks. This flexible, dynamic allocation of shared performance resources is one of the major advantages of converged scale-out storage.
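
The balanced, sliced layout can be illustrated with a short Python sketch (hypothetical helper names; ScaleIO's real placement logic is more involved): chunks are spread round-robin across every disk of every node, so no single disk becomes a hot spot.

# Illustrative sketch only: distribute a volume's chunks evenly across all
# disks of all nodes so the volume's I/O is spread over the whole cluster.
from itertools import cycle
from collections import Counter

def lay_out_volume(num_chunks, nodes):
    """nodes: dict mapping node name -> list of disk names."""
    targets = cycle([(node, disk) for node, disks in nodes.items() for disk in disks])
    return [(chunk_id, *next(targets)) for chunk_id in range(num_chunks)]

nodes = {"node1": ["disk1", "disk2"],
         "node2": ["disk1", "disk2"],
         "node3": ["disk1", "disk2"]}
layout = lay_out_volume(600, nodes)
print(Counter((node, disk) for _, node, disk in layout))
# Each of the six disks receives 100 chunks, so there are no hot spots.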

Volume mapping and volume sharing

The volumes that ScaleIO exposes to the application clients can be mapped to one or more clients running in different hosts, and mapping can be changed dynamically if necessary. ScaleIO volumes can be used by applications that expect shared-everything block access and by applications that expect shared-nothing or shared-nothing-with-failover access.

Storage definitions

When configuring a ScaleIO system, the protection domain and storage pool link the physical layer with the virtualization layer.

Protection domain

A large ScaleIO system can be divided into multiple protection domains, each of which contains a set of SDSs, as shown in Figure 2. ScaleIO volumes are assigned to specific protection domains. Protection domains can be used to mitigate the risk of a dual point of failure in a two-copy scheme, or the risk of a triple point of failure in a three-copy scheme.


Figure 2. Protection domains

For example, if two SDSs in different protection domains fail simultaneously, the data is still available. Just as incumbent storage systems can overcome a large number of simultaneous disk failures if they do not occur within the same shelf, ScaleIO can overcome a large number of simultaneous disk or node failures if they do not occur within the same protection domain.

Storage pool

A storage pool is a subset of physical storage devices in a protection domain. Each storage device belongs to one (and only one) storage pool. Newly generated protection domains have one default storage pool.

When a volume is configured over the virtualization layer, the volume is distributed over all devices residing in the same storage pool. This allows more than one failure in the system without losing data.

Figure 3 shows a protection domain with three storage pools. Because a storage pool can withstand the loss of one of its members, three failures in different storage pools in this configuration do not cause data loss.


Figure 3. Storage pools
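
The failure-tolerance argument can be checked with a small Python sketch (illustrative only, assuming a two-copy mirroring scheme): data is lost only when two members of the same storage pool fail.

# Sketch of the two-copy failure argument (not ScaleIO code): data loss
# requires two failed devices in the SAME storage pool.
from collections import Counter

def data_loss(failed_devices, device_to_pool, copies=2):
    failures_per_pool = Counter(device_to_pool[d] for d in failed_devices)
    return any(count >= copies for count in failures_per_pool.values())

device_to_pool = {"dev1": "pool1", "dev2": "pool1",
                  "dev3": "pool2", "dev4": "pool2",
                  "dev5": "pool3", "dev6": "pool3"}

print(data_loss(["dev1", "dev3", "dev5"], device_to_pool))  # False: one failure per pool
print(data_loss(["dev1", "dev2"], device_to_pool))          # True: two failures in pool1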

Snapshots

ScaleIO enables you to take snapshots of existing volumes. Each snapshot is essentially a volume of its own. For each ScaleIO volume, you can create multiple fully rewritable, redirect-on-write snapshots. The snapshot hierarchy is very flexible. For example, you can create a snapshot of a snapshot; you can also delete a volume and retain its snapshots. ScaleIO fully supports all expected restore functions.

You can use ScaleIO to take a set of consistent snapshots of a given set of volumes across multiple servers. You can also take a snapshot of the entire cluster’s volumes in a consistent manner. If crash consistency is acceptable, there is no need to stop, pause, or freeze I/O traffic to hosts, for any application activities, during snapshot creation.

ScaleIO 1.3

ScaleIO 1.3 includes the following features:

Thick and thin provisioning

ScaleIO 1.3 supports both thick and thin provisioning. Thin provisioning provides on-demand storage provisioning and much quicker setup and startup times.


Fault sets

ScaleIO mirroring ensures high data availability. If an SDS goes down, the mirrored data is immediately available from another SDS. ScaleIO 1.3 enables you to define a fault set, which is a group of SDSs that are likely to go down together. For example, you could configure a fault set for all SDSs that are powered in the same rack, to ensure that mirroring takes place outside of the fault set (see the sketch after this feature list).

RAM read cache

The RAM read cache feature allocates RAM on the SDS servers for caching read data. You can configure RAM cache for an entire storage pool or for individual SDSs. When RAM read cache is enabled at the storage pool level, all SDSs in the storage pool have caching enabled. By default, the RAM cache size is 128 MB on each SDS. You can disable caching, or change the RAM allocation for caching, on a per-SDS basis.

Graphical User Interface (GUI)

The GUI enables you to perform standard configuration and maintenance activities, as well as to monitor the storage system’s health and performance.

You can use the GUI to view the entire system and to drill down to individual elements.

OpenStack support

ScaleIO includes a Cinder driver that interfaces with OpenStack to present volumes to OpenStack as block devices that are available for block storage.

ScaleIO also includes an OpenStack Nova driver for handling compute and instance volume-related operations. The Nova driver executes the volume operations by communicating with the backend ScaleIO MDM through the ScaleIO REST Gateway.
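
Returning to the fault sets feature described earlier, the placement rule can be illustrated with a short Python sketch (hypothetical names, not ScaleIO's actual algorithm): when choosing where to place a mirror copy, any SDS that shares a fault set with the primary copy's SDS is skipped, so both copies never land in the same rack.

# Illustrative mirror-placement sketch: never mirror within the primary's fault set.
def pick_mirror_target(primary_sds, sds_fault_set, candidates):
    primary_set = sds_fault_set[primary_sds]
    for sds in candidates:
        if sds != primary_sds and sds_fault_set[sds] != primary_set:
            return sds
    raise RuntimeError("no SDS available outside the primary copy's fault set")

sds_fault_set = {"sds1": "rack1", "sds2": "rack1", "sds3": "rack2", "sds4": "rack2"}
print(pick_mirror_target("sds1", sds_fault_set, ["sds2", "sds3", "sds4"]))  # sds3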


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview ... 22

VSPEX Proven Infrastructures... 22

Key components ... 23

Virtualization layer ... 23

Compute layer ... 24

Network layer ... 27

Storage layer ... 28


Overview

This chapter provides an overview of the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution and the key technologies used in the solution. The solution has been designed and proven by EMC to provide virtualization, server, network, and storage resources that enable customers to deploy a small-scale architecture and scale it as their business needs change.

Figure 4 shows the solution components.

Figure 4. VSPEX private cloud components

VSPEX Proven Infrastructures

VSPEX Proven Infrastructures are modular, virtualized infrastructures validated by EMC and delivered by EMC partners. They include virtualization, server, network, and storage layers, as shown in Figure 5. Customers can choose the virtualization, server, and network technologies that best fit their environment, while ScaleIO storage systems provide the storage layer.

VSPEX delivers faster deployment, greater simplicity and choice, higher efficiency, and lower risk when compared to the challenges and complexity of building an IT infrastructure from scratch. Validation by EMC ensures predictable performance and eliminates planning, sizing, and configuration burdens.


Figure 5. VSPEX Proven Infrastructures

Key components

The key components of this solution include:

Virtualization layer—Decouples the physical implementation of resources from the applications that use the resources.

Compute layer—Provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud.

Network layer—Connects private cloud users to the resources in the cloud and connects the storage layer to the compute layer.

Storage layer—Provides storage to implement the private cloud. The ScaleIO component implements a pure block storage layout with converged nodes to support storage capacity.

Virtualization layer

Overview

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance. It also allows the physical system to change without affecting the hosted applications. In a server virtualization or private cloud use case, the virtualization layer enables multiple independent virtual machines to share the same physical hardware, instead of being directly implemented on dedicated hardware.

Microsoft Hyper-V

Hyper-V is the hypervisor-based virtualization role of Microsoft Windows Server and provides the virtualization platform for this solution.

Hyper-V live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Hyper-V works with Windows Server 2012 Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure, significantly increasing the availability of virtual machines during planned and unplanned downtime. Configure Failover Clustering on the Hyper-V host to monitor virtual machine health and to migrate virtual machines between cluster nodes.

Hyper-V Replica provides asynchronous replication of virtual machines between two Hyper-V hosts at separate sites. Hyper-V replicas protect business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V snapshots provide consistent point-in-time views of a virtual machine and enable users to revert the virtual machine to a previous point in time if necessary. Snapshots function as the source for backups, test and development activities, and other use cases.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform that enables data center administrators to configure and manage virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

Windows Server Cluster-Aware Updating

Cluster-Aware Updating (CAU) enables updating of cluster nodes with little or no loss of availability. CAU is integrated with Windows Server Update Services (WSUS), and can be automated using PowerShell.

Compute layer

The choice of a server platform for a VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX defines the minimum requirements for the number of processor cores and the amount of RAM.


ScaleIO components are designed to work with a minimum of three server nodes. The physical server node, running Microsoft Hyper-V, can host workloads other than the ScaleIO virtual machine.

For VSPEX systems, EMC recommends a maximum of four virtual CPUs per physical core in a virtual machine environment. For example, a server node with eight physical cores can support up to 32 virtual machines.

Examples 1 and 2 demonstrate how to determine the number of nodes you need to deploy to meet specific CPU resource requirements, based on an 8-core reference server node. Example 3 demonstrates how you can implement the same compute layer requirements using different numbers and types of servers.

Example 1

A customer wants to move a small, custom-built application server into the VSPEX with ScaleIO virtual infrastructure. The server is currently running on a physical system with 24 processors. Using the 8-core reference server node, the compute layer needs the CPUs of three reference server nodes to meet the customer’s requirement.

Example 2

A customer wants to move the database server for a point-of-sale system into the VSPEX with ScaleIO virtual infrastructure. The server is currently running on a physical system with 32 processors. Using an 8-core reference server node, the compute layer needs the CPUs of four reference server nodes to meet the customer’s requirement.

Example 3

The example in Figure 6 shows two different implementations of the same compute layer requirements. The required resources are 25 processor cores and 200 GB RAM.

One customer might want to implement these resources by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might want to use higher-end servers with 12 processor cores and 144 GB of RAM. The first customer needs four servers, while the other customer needs three.

Note: To enable high-availability for the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.


Figure 6. Compute layer flexibility examples
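
The arithmetic behind Examples 1 through 3 can be captured in a small Python helper (a sizing sketch under the stated assumptions, not an official VSPEX tool): the server count is driven by whichever of the CPU or RAM requirements is larger, plus one spare server for high availability.

# Sizing sketch for the compute layer examples above (illustrative only).
import math

def servers_needed(required_cores, required_ram_gb,
                   cores_per_server, ram_gb_per_server, ha_spares=1):
    by_cpu = math.ceil(required_cores / cores_per_server)
    by_ram = math.ceil(required_ram_gb / ram_gb_per_server)
    return max(by_cpu, by_ram) + ha_spares

# Example 3 workload: 25 processor cores and 200 GB RAM.
print(servers_needed(25, 200, 16, 64))   # 5 (4 for the workload plus 1 HA spare)
print(servers_needed(25, 200, 12, 144))  # 4 (3 for the workload plus 1 HA spare)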

Apply the following best practices in the compute layer:

Use identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

When implementing high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades and tolerance for single unit failures.


Within the boundaries of these recommendations and best practices, the VSPEX compute layer can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network layer

The infrastructure network requires redundant network links for each Hyper-V host.

This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution.

The ScaleIO network creates a Redundant Array of Independent Nodes (RAIN) topology between the server nodes, distributing data so that the loss of a single node does not affect data availability. This topology requires ScaleIO nodes to send data to other nodes to maintain consistency.

A high-speed, low-latency IP network is required for this to work correctly. We created the test environment with redundant 10 Gb Ethernet networks. The network was not heavily used during testing at small scale points. For that reason, at small points of scale you can implement the solution using 1 Gb networks. However, EMC recommends a 10 GbE IP network designed for high availability, as shown in Table 1.

Table 1. Recommended 10 Gb switched Ethernet network layer

Nodes    10 Gb switched Ethernet    1 Gb switched Ethernet
3        Recommended                Possible
4        Recommended                Possible
5        Recommended                Possible
6        Recommended                Possible
7        Recommended                Not recommended

Figure 7 shows an example of this highly available network topology.

Note: In this guide, “we” refers to the EMC Solutions engineering team that validated the solution.


Figure 7. Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high-availability, and security.

Storage layer

ScaleIO, as shown in Figure 8, is implemented as a software layer that takes over the existing local storage on a server, combines that with storage from other servers in the environment, and presents LUNs from the aggregated storage for use by the virtual environment. The LUNs are usable as a type of CSV in Hyper-V Failover Cluster Manager environments.


Figure 8. ScaleIO layout

In Hyper-V environments, both the SDS and the SDC sit inside the hypervisor. Nothing is installed at the guest layer and ScaleIO is not dependent on the operating system.

This means that you install ScaleIO in only one location; also, there is only one build to maintain and test. In a Windows environment, a ScaleIO SCSI disk device looks like any other local disk device, as shown in Figure 9.

Figure 9. ScaleIO SCSI device


Chapter 4 Solution Architecture Overview

This chapter presents the following topics:

Overview ... 32

Solution architecture ... 32

Server configuration guidelines ... 35

Network configuration guidelines ... 38

ScaleIO configuration guidelines ... 39

High-availability and failover ... 41


Overview

This chapter provides a detailed guide to the architecture and key components of the VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution, including configuration guidelines for the compute, networking, virtualization, and storage layers.

VSPEX solutions are designed to run on a wide variety of server platforms. VSPEX defines the minimum CPU and memory resources required, but not a specific server type or configuration. You can use any server platform and configuration that meets or exceeds the minimum requirements. EMC validated the specified ScaleIO architecture, together with a system that meets the server and network requirements, to deliver high levels of performance and a highly available architecture for your private cloud deployment.

Solution architecture

Logical architecture

The VSPEX Private Cloud for Microsoft Hyper-V with EMC ScaleIO solution validates the solution infrastructure for various numbers of virtual machines.

Note: VSPEX uses a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference workload to arrive at an appropriate point of scale for your deployment. Refer to VSPEX building blocks for detailed information.

Figure 10. Logical architecture for the solution


The solution uses EMC ScaleIO software and Microsoft Hyper-V to provide the storage and virtualization platforms respectively in a Microsoft Windows Server 2012 R2 environment.

Key components

The solution architecture includes the following key components:

Microsoft Hyper-V—Provides a common virtualization layer to host the server environment. Hyper-V provides a highly available infrastructure through features such as:

Live migration—Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.

Failover Clustering High Availability (HA)—Detects and provides rapid recovery for a failed virtual machine in a cluster.

Dynamic Optimization (DO)—Provides load balancing of computing capacity in a cluster, with support from SCVMM.

ScaleIO—Provides the storage layer to host and store applications.

Microsoft System Center Virtual Machine Manager (SCVMM)—SCVMM is not required for this solution. However, if deployed, SCVMM (or its corresponding functionality in Microsoft System Center Essentials) simplifies provisioning, management, and monitoring of the Hyper-V environment.

Microsoft SQL Server 2012—SCVMM requires a SQL Server database instance to store configuration and monitoring details.

DNS Server—Domain Name Service (DNS) performs name resolution for the various solution components. The solution uses Microsoft DNS Server running on Windows Server 2012 R2.

Active Directory Server—Various solution components require Active Directory services to function properly. The Microsoft Active Directory service runs on Windows Server 2012 R2.

IP network—A standard Ethernet network, with redundant cabling and switching, carries all network traffic. A shared IP network carries user and management traffic.

Table 2 summarizes the key components of the solution.

Table 2. Key components

VSPEX layer: Application layer and virtualization layer
Components: Microsoft Hyper-V virtualization with:
Microsoft Windows 2012 R2 Hyper-V
Microsoft System Center Virtual Machine Manager (SCVMM)
Hyper-V Failover Clustering and High Availability

VSPEX layer: Compute layer
Components: VSPEX defines the minimum amount of compute layer resources required but allows the customer to implement the requirements using any server hardware that meets these requirements.

VSPEX layer: Network layer
Components: VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets these requirements.

VSPEX layer: Storage layer
Components: EMC ScaleIO

Hardware resources

Table 3 lists the hardware used in this solution.

Table 3. Solution hardware

Component: Microsoft Hyper-V servers

CPU: 1 vCPU per virtual machine; maximum of 4 vCPUs per physical core*

Memory: 2 GB RAM per virtual machine; 2 GB RAM for each physical server for the hypervisor

Network: 2 x 10 GbE NICs per server

Note: Add at least one server to the minimum requirements to implement Hyper-V HA functionality and meet the minimum number of nodes requirement.

Component: Network infrastructure

Minimum switching capacity: 2 physical network switches; 2 x 10 GbE ports per Hyper-V server

Note: EMC recommends a 10 GbE network infrastructure for the solution, and we validated the solution with this network infrastructure. 1 GbE is acceptable for a small number of nodes (see Table 1), if bandwidth and redundancy are sufficient to meet the solution’s minimum requirements.

Component: Shared infrastructure

A typical customer environment has already configured infrastructure services such as Active Directory and DNS. The setup of these services is beyond the scope of this guide.

If implementing the solution without existing infrastructure, the minimum requirements are:

2 physical servers

16 GB RAM per server

4 processor cores per server

2 x 1 GbE ports per server

Note: These resources can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

* For Intel Ivy Bridge or later processors, use eight vCPUs per physical core.

Software resources

Table 4 lists the software used in this solution.

Table 4. Solution software

Software: Microsoft Hyper-V

Microsoft Windows Server: Windows Server 2012 R2 Datacenter Edition
Note: Datacenter Edition is necessary to support the number of virtual machines in this solution.

Software: Microsoft System Center Virtual Machine Manager

Windows Server 2012 R2 Standard Edition
Note: Any supported operating system for Microsoft System Center is acceptable.

Software: Microsoft SQL Server

Version 2012 R2 Standard Edition
Note: Any supported database for SCVMM is acceptable.

Software: ScaleIO

ScaleIO virtual machines: ScaleIO 1.3 (MDM/Tie-Breaker, SDS, SDC)

Software: Virtual machines (used for validation, but not required for deployment)

Base operating system: Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute layer of this VSPEX solution, several factors can affect the final purchase. For example:

If a system workload is well understood, then virtualization features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

You can reduce the number of vCPUs if the virtual machine pool does not have a high level of peak or concurrent usage. Conversely, if the deployed applications are highly computational, you might need to increase the number of CPUs and the amount of memory.

Ivy Bridge series processors

Testing on the Intel Ivy Bridge series processors demonstrated significant increases in virtual machine density from the server resource perspective. If your server deployment uses Ivy Bridge processors, EMC recommends increasing the vCPU/pCPU ratio from 4:1 to 8:1. This essentially halves the number of server cores required to host the reference virtual machines.


Current VSPEX sizing guidelines require a maximum virtual CPU core to physical CPU core ratio of 4:1, with a maximum 8:1 ratio for Ivy Bridge or later processors. This ratio is based on an average sampling of CPU technologies available at the time of testing.

As CPU technologies advance, OEM server vendors that are VSPEX partners might suggest different (normally higher) ratios. Follow the updated guidance from the server vendor.

Note: Customers can choose any CPU model that meets the minimum VSPEX requirements. Results will vary depending on the specific workload.

Table 5 lists the hardware resources used for the compute layer.

Table 5. Hardware resources for the compute layer

Component: Microsoft Hyper-V servers

CPU: 1 vCPU per virtual machine; maximum of 4 vCPUs per physical core

Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Microsoft Hyper-V host

Network: 2 x 10 GbE NICs per server

Note: Add at least one server to the minimum requirements to implement Hyper-V HA functionality and meet the minimum number of nodes requirement.

Note: EMC recommends using a 10 GbE network, or an equivalent 1 GbE network infrastructure, provided that the underlying requirements for bandwidth and redundancy are met.

Hyper-V memory virtualization

Microsoft Hyper-V has several advanced features that help maximize performance and overall resource utilization. The most important features relate to memory management. This section describes some of these features, and the items to consider when using these features in a VSPEX environment.

Figure 11 illustrates how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory and Smart Paging can reduce total memory usage and increase consolidation ratios in the hypervisor.


Figure 11. Hypervisor memory consumption

Dynamic Memory and Smart Paging

Dynamic Memory increases physical memory efficiency by treating memory as a shared resource, dynamically allocating it to virtual machines, and reclaiming unused memory from idle virtual machines. Administrators can dynamically adjust the amount of memory used by each virtual machine at any time.

With Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. This introduces the risk that there might not be sufficient physical memory available to restart a virtual machine if required. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement when more memory is required to restart a virtual machine.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multinode technology that enables a CPU to access remote-node memory. Because this type of memory access degrades performance, Windows Server 2012 uses processor affinity, which pins threads to a single CPU, to avoid remote-node memory access. This feature is available to the host and to the virtual machines, where it provides improved performance in symmetrical multiprocessor (SMP) environments.

Memory configuration guidelines

Hyper-V memory overhead

Virtualized memory has some associated overhead, including the memory consumed by the Hyper-V parent partition and additional overhead for each virtual machine. Leave at least 2 GB of memory for the Hyper-V parent partition in this solution.


Virtual machine memory

Each virtual machine in this solution is assigned 2 GB memory in fixed mode.
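
Following these guidelines, the physical memory required per host can be estimated with simple arithmetic, shown here as a short Python sketch (illustrative only; adjust the figures for your own virtual machine sizes).

# Host memory estimate: 2 GB fixed per reference virtual machine plus at
# least 2 GB reserved for the Hyper-V parent partition.
def host_memory_gb(vm_count, vm_memory_gb=2, parent_partition_gb=2):
    return vm_count * vm_memory_gb + parent_partition_gb

print(host_memory_gb(32))  # 66 GB of physical RAM for 32 reference virtual machines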

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider VLANs and the ScaleIO network layer. Table 6 details the network resource requirements.

Table 6. Hardware resources for the network layer

Component: Network infrastructure for block

Minimum switching capacity: IP network with 2 physical LAN switches; two 10 GbE ports per Hyper-V server

Note: The solution can use a 1 GbE network if the underlying bandwidth and redundancy requirements are met.

VLANs

Isolate network traffic so that management traffic, traffic between hosts and storage, and traffic between hosts and clients all move over isolated networks. Physical isolation might be required in some cases for regulatory or policy compliance reasons. Logical isolation with VLANs is sufficient in many cases.

EMC recommends separating the network into two types for security and increased efficiency:

A management network, used to connect and manage the ScaleIO environment. This network is generally connected to the client management network. Because this network has less I/O traffic, EMC recommends a 1 GbE network.

An internal data network, used for communication between the ScaleIO components. This is generally a 10 GbE network.

In this solution, we used one VLAN for client access and one VLAN for management.

Figure 12 depicts the VLANs and the network connectivity requirements for a ScaleIO environment.
