
White Paper

EMC Solutions Group

Abstract

This white paper describes a validated reference architecture for Microsoft's most common business-critical applications, consisting of Microsoft Exchange Server 2010, SharePoint Server 2010, and SQL Server 2012 on the Microsoft Hyper-V virtualization platform and the EMC® Symmetrix® VMAX® 10K storage platform. Continuous availability and multi-site protection are achieved with EMC RecoverPoint/Cluster Enabler, which replicates and manages the entire infrastructure to a remote EMC VNX®5700 array.

June 2012

EMC INFRASTRUCTURE FOR MICROSOFT

APPLICATIONS IN THE PRIVATE CLOUD

EMC Symmetrix VMAX 10K, EMC RecoverPoint/Cluster Enabler,

EMC VNX5700, Microsoft Hyper-V

Automated performance optimization

Centralized and simplified management

Efficient, automated disaster recovery


Copyright © 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All trademarks used herein are the property of their respective owners.

Part Number H10720.1


Table of contents

Executive summary
Business case
Solution overview
Key results
Introduction
Purpose
Scope
Audience
Terminology
Technology overview
Overview
EMC Symmetrix VMAX 10K
EMC VNX5700
EMC RecoverPoint
EMC Cluster Enabler
EMC Storage Integrator
Microsoft Windows Server 2008 R2 with Hyper-V
Microsoft System Center Virtual Machine Manager
Microsoft System Center Operations Manager
Solution architecture and configuration
Configuration overview
Solution architecture
Hardware resources
Software resources
Application design and configuration
Overview
Exchange Server 2010
Exchange Server 2010 user requirement
Exchange Server 2010 building block design
Exchange Server 2010 DAG design
Hyper-V virtual machine design for Exchange Server 2010
SharePoint Server 2010
SharePoint Server 2010 user requirement
SharePoint Server 2010 farm and component design
Hyper-V virtual machine design for SharePoint Server 2010
SQL Server 2012
SQL Server 2012 user requirement
Hyper-V virtual machine design for SQL Server 2012
Hyper-V cluster
FAST VP storage design
Symmetrix FAST VP
VMAX 10K storage and FAST VP design guidelines
FAST VP configuration in this solution
System management design and configuration
Overview
SCVMM 2008 R2
ESI
SCOM 2007 R2
RecoverPoint/CE design and configuration
Overview
VMAX 10K preparation for RecoverPoint/CE
RecoverPoint CRR configuration
RecoverPoint journal sizing
EMC Cluster Enabler configuration
Hyper-V cluster preparation for disaster recovery
Test methodology
Overview
Microsoft Exchange Load Generator
SQL Server TPC-E-like workload
Microsoft SharePoint 2010 VSTS—generated custom workload
How the sample documents were created
VSTS test client and test mechanism
SharePoint user profiles
FAST VP performance test results
Overview
Test objectives
Test scenarios
Hyper-V root servers
Exchange Server 2010
SharePoint Server 2010
SQL Server 2012
VMAX 10K storage
RecoverPoint replication impact
RecoverPoint performance under 25 ms latency
RecoverPoint performance under 85 ms latency
Failover and disaster recovery test results
Overview
Planned failover
Virtual machine live migration time
Live migration impact on SharePoint
Site disaster recovery
Site disaster recovery time
Site disaster recovery impact on SharePoint
Conclusion
Summary
Findings
References
White papers
Product documentation
Other resources


Executive summary

Business case

Integrating Microsoft and EMC technologies can help organizations implement private clouds with increased ease and confidence. The benefits of pairing Microsoft virtualization and management tools with EMC's powerful, trusted, and efficient storage technologies include:

• Faster deployment
  - End-to-end architectural and deployment guidance
  - Streamlined infrastructure planning due to predefined capacity
  - Enhanced functionality and automation through deep knowledge of the infrastructure
  - Integrated management for virtual machine and infrastructure deployment
• Higher availability
  - End-to-end infrastructure protection from the rotating spindles, through the caching boards, to the database instance
  - Automated availability within a site, or between sites with dissimilar infrastructure components, using EMC advanced replication technologies
• Reduced risk
  - Tested, end-to-end interoperability of compute, storage, and network
  - Predefined, out-of-box solutions based on a common cloud architecture that is already tested and validated
  - High degree of service availability through automated load balancing
• Lower cost of ownership
  - A cost-optimized platform and software-independent solution for rack system integration
  - Fully automated and cost-efficient storage tiering
  - High performance and scalability with the Windows Server 2008 R2 operating system, advanced platform editions of Hyper-V technology, and the EMC® Symmetrix® VMAX® 10K storage array

Solution overview

This solution showcases the EMC Symmetrix VMAX 10K platform as a viable, trusted storage array to service a mixed workload of Microsoft applications, including:

• Exchange Server 2010 for messaging
• SharePoint Server 2010 for collaboration
• SQL Server 2012 as the Tier-1 database
• Microsoft Windows 2008 R2 with Hyper-V as the virtualization platform

EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP) provides high performance and efficient storage tiering for the Microsoft applications. In addition, EMC RecoverPoint/Cluster Enabler (CE) enables a common platform for disaster recovery (DR) across all applications with a minimal recovery point objective (RPO) and recovery time objective (RTO). The solution also showcases the ability to enable a more cost-effective DR storage solution by using the EMC VNX® midrange series platform as the target storage.

Furthermore, the solution eases provisioning and simplifies management of the infrastructure environment using:

• Microsoft System Center Operations Manager (SCOM)
• Microsoft System Center Virtual Machine Manager (SCVMM)
• EMC Storage Integrator (ESI)

Key results

The solution offers the following key benefits:

• Desired performance results were achieved for the following Microsoft application profiles on Symmetrix VMAX 10K storage (Enginuity™ version 5875 was used in this solution). FAST VP provides efficient storage tiering and flexible configurations for customers who want custom control for different application requirements.
  - 10,000 concurrent Exchange users at a 0.10 IOPS profile
  - More than 35,000 active SharePoint users at 10 percent concurrency
  - 60,000 SQL Server users configured for a SQL OLTP TPC-E-like environment, with more than 4,000 SQL transactions processed and 9,000 IOPS achieved
• Ease of provisioning and simplified management of the whole infrastructure
• Simplified RecoverPoint implementation with the in-array splitter technology, and disaster recovery standardization across all Microsoft applications.
  - In this solution, RecoverPoint protects the production infrastructure to a remote site over an extended distance, for example, 8,500 km from the west to the east coast of the United States, with an RPO of around three seconds.
  - RecoverPoint achieves a 3x compression ratio to help reduce the network bandwidth needed for data replication.
• Together with RecoverPoint, Cluster Enabler (CE) provides a comprehensive data protection solution for the entire data center, with completely automatic site failover, significantly reducing operational complexity.
  - With a site failure over an extended distance of 8,500 km, RecoverPoint/CE can bring all virtual machines online within five minutes.
  - All application services come back online automatically, with a downtime of no more than 18 minutes (worst case) for users to connect to the Exchange mailboxes, SharePoint farm, and SQL services.


Introduction

Purpose

This white paper presents a validated reference architecture and design guidelines for shared Microsoft application workloads, including Exchange Server 2010, SharePoint Server 2010, and SQL Server 2012, on the EMC Symmetrix VMAX 10K storage platform with FAST VP. Microsoft Windows 2008 R2 with Hyper-V provides the virtualization platform, and ESI, together with Microsoft SCVMM and SCOM, provides ease of provisioning and management.

Building on the base solution, local high availability (HA) was extended to also enable remote site-level recovery in an automated, common platform for applications through EMC Cluster Enabler, integrated with Windows Failover Clustering.

Scope

The scope of this white paper is to document:

• A design methodology for Microsoft applications on Microsoft’s Hyper-V virtualization platform and the VMAX 10K storage array

• System functionality and overall performance under active user loads

• Protection of the infrastructure environment using EMC RecoverPoint integrated with Cluster Enabler

• Simplified management and storage provisioning using ESI

• Discovery and health monitoring of the Microsoft Hyper-V and application environment through Microsoft Systems Center products

Audience

The target audience for this white paper is business executives, IT directors, and infrastructure administrators who are responsible for their company’s storage and Microsoft application landscape. The target audience also includes professional services groups, system integration partners, and other EMC teams tasked with deploying a Microsoft private cloud in a customer environment.

A high-level understanding of Exchange, SharePoint, and SQL Server landscapes is beneficial. Familiarity with virtualization concepts is also beneficial.


Terminology

This paper includes the following terminology.

Table 1. Terminology

Background Database Maintenance (BDM): The process of Exchange 2010 database maintenance that involves checksumming both active and passive database copies.

Continuous remote replication (CRR): CRR supports synchronous and asynchronous replication between remote sites over Fibre Channel (FC) and a wide-area network (WAN). Synchronous replication is supported when the remote sites are connected through FC and provides a zero RPO. Asynchronous replication provides crash-consistent protection and recovery to specific points in time, with a small RPO.

Database availability group (DAG): The base component of the HA and site resilience framework built into Microsoft Exchange Server 2010. A DAG is a group of up to 16 Mailbox Servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual servers or databases.

EFD: Enterprise Flash Drive.

FAST VP: Fully Automated Storage Tiering for Virtual Pools. EMC Symmetrix VMAX 10K FAST VP for virtually provisioned environments automates the identification of thin device extents to reallocate application data across different performance tiers within a single array. FAST VP proactively monitors workloads at the sub-LUN level to identify busy thin device extents that would benefit from being moved to higher-performing drives. FAST VP also identifies less busy extents that could be moved to higher-capacity drives without affecting existing performance.

Pass-through disk: A pass-through disk gives virtual machines direct access to disks. It is only applicable to block devices such as iSCSI or FC.

Recovery point objective (RPO): RPO defines the maximum acceptable time period between the last available consistent image and a disaster or failure.

Recovery time objective (RTO): RTO defines the maximum acceptable time to bring a system, service, or application back to an operational state after a disaster or failure.

Virtual hard disk (VHD): VHD is a publicly available image format specification that allows encapsulation of the hard disk into an individual file for use by the operating system as a virtual disk, in all the same ways that physical hard disks are used. These virtual disks are capable of hosting native Windows file systems (NTFS, FAT, exFAT, and UDFS) while supporting standard disk and file operations.


Technology overview

Overview

The solution is validated with the following hardware and software components:

• EMC Symmetrix VMAX 10K
• EMC VNX5700, part of the EMC VNX family of unified storage platforms
• EMC RecoverPoint
• EMC Cluster Enabler (CE)
• EMC Storage Integrator (ESI)
• Microsoft Windows Server 2008 R2 with Hyper-V
• Microsoft System Center Virtual Machine Manager
• Microsoft System Center Operations Manager

EMC Symmetrix VMAX 10K

EMC Symmetrix is the world’s most trusted storage platform. Enterprise customers have been deploying large mission-critical applications with Symmetrix for over 20 years. The Symmetrix VMAX 10K series provides enterprise storage that delivers an efficient and cost-effective hardware design combined with built-in software and simplified installation, configuration, and management. VMAX 10K brings high-end storage array capabilities to service providers and IT organizations that have demanding virtual computing environments but limited storage expertise and IT resources.

VMAX 10K uses 100 percent internal redundancy to deliver enterprise-class reliability, availability, and serviceability (RAS), and has an integrated write splitter to support RecoverPoint replication. The VMAX 10K series is designed for fast and efficient deployment and includes features that are particularly useful in small or crowded data centers:

• VMAX 10K uses an implementation of the Symmetrix Virtual Matrix Architecture™ that is optimized for rapid deployment and easier management.
• VMAX 10K systems are delivered preconfigured, 100 percent virtually provisioned, and ready for same-day installation and startup.
• High storage densities can be achieved, with a single-rack, single-engine VMAX 10K capable of supporting 120 drives. The system can scale to a six-bay, four-engine system with 960 drives and up to 1.3 PB of usable capacity. This enables customers to cost-effectively grow and upgrade their system to accommodate application and data growth.
• With the VMAX 10K array dispersion capability, bays can be separated up to 10 meters apart, enabling very flexible deployments.


EMC VNX5700

The VNX5700, a member of the EMC VNX family, delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s enterprises.

The VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large businesses while providing market-leading simplicity and efficiency to minimize total cost of ownership.

EMC RecoverPoint

EMC RecoverPoint is an enterprise-scale solution designed to protect application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology, allowing it to protect data both locally and remotely.

Innovative data-change journaling and application integration capabilities enable organizations to address their pressing business, operations, and regulatory data protection concerns. Organizations that implement RecoverPoint see dramatic improvements in application protection and recovery times compared with traditional host and array snapshots or disk-to-tape backup products.

EMC Cluster Enabler

The CE software integrates with Microsoft Failover Cluster software, enabling geographically dispersed cluster nodes to be replicated by RecoverPoint continuous remote replication (CRR). RecoverPoint/CE seamlessly manages all storage system processes necessary to facilitate cluster node failover. RecoverPoint/CE supports Windows Server 2003 and Windows Server 2008 in both Enterprise and Datacenter editions that use Node Majority and Node and File Share Majority quorum modes only.

EMC Storage Integrator

ESI for Windows provides capabilities for viewing and provisioning storage. As part of the viewing capability, it depicts Windows-to-storage resource mapping. As part of storage provisioning, it simplifies the steps of creating a logical unit number (LUN), preparing the LUN through the steps of partitioning and formatting, and creating a drive letter.

ESI also:

• Enables users to create a file share and mount that file share as a network-attached drive in the Windows environment
• Provides virtualization capability by supporting Microsoft Hyper-V
• Supports storage provisioning and discovery using the PowerShell toolkit

ESI for SharePoint provides capabilities for viewing and provisioning storage. To view storage, ESI discovers SharePoint farms, sites, and content databases. It also maps these resources to the underlying storage resources. To provision storage on the SharePoint server, ESI prepares the LUN by partitioning it, formatting it, and assigning it a drive letter, and then provisions the storage to the SharePoint site. ESI also supports File Stream Remote Blob Store.


Microsoft Windows Server 2008 R2 with Hyper-V

Hyper-V is an integral part of Windows Server that enables customers to make the best use of their server hardware investments. Hyper-V consolidates multiple server roles as separate virtual machines running on a single physical machine, and provides a foundational virtualization platform for the transition to the cloud. With Windows Server 2008 R2, it presents a solution for core virtualization scenarios such as production server consolidation, dynamic datacenter, business continuity, Virtual Desktop Infrastructure (VDI), and test and development.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) enables centralized management of physical and virtual IT infrastructure, increased server utilization, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure.

You can use SCVMM to maximize datacenter resources and promote IT agility while leveraging existing skills. With SCVMM, you can:

• Centrally create and manage virtual machines across datacenters
• Easily consolidate multiple physical servers onto virtual hosts
• Rapidly provision and optimize virtual machines
• Dynamically manage virtual resources

Microsoft System Center Operations Manager

Microsoft SCOM is an end-to-end service-management product for Windows environments. It works seamlessly with Microsoft infrastructure servers, such as Windows Server, and application servers, such as Microsoft Exchange, helping you to increase efficiency while enabling greater control of the IT environment.


Solution architecture and configuration

Configuration overview

This solution deploys all Exchange, SQL Server, and SharePoint servers as virtual machines on a Hyper-V cluster across the production site and the DR site. With the built-in RecoverPoint VMAX 10K array splitter, EMC RecoverPoint provides heterogeneous replication from VMAX 10K storage on the production site to a lower tier of the VNX5700 storage array on the DR site, and EMC Cluster Enabler automates the site failover.

Solution architecture

Figure 1 illustrates the solution architecture. Refer to Hyper-V cluster in the Application design and configuration section for detailed information about Hyper-V virtual machine setup and configuration.


Hardware resources

Table 2 details the hardware resources deployed in this solution.

Table 2. Hardware resources

EMC VMAX 10K (quantity 1):
• 2 system bays and 1 storage bay
• 2 engines and 128 GB memory (mirrored)
• Enginuity version: 5875.230.172.e
• EFD, FC, and SATA disks:
  - 200 GB EFD drives: 17
  - 450 GB 15k rpm FC drives: 100
  - 600 GB 10k rpm FC drives: 82
  - 2 TB 7.2k rpm SATA drives: 89

EMC VNX5700 (quantity 1):
• Block OE code: 5.31.000.5.704
• EFD, SAS, and NL-SAS disks

RecoverPoint (quantity 4): 4 RecoverPoint appliances (2 per site)

FC switch (quantity 2): 8 Gb FC switch (1 per site)

GbE network switch (quantity 2): 48-port IP switch

Servers (quantity 11):
• 8 Hyper-V root servers: 4 cores x 4 processors CPU and 128 GB RAM
• 1 management server: 4 cores x 4 processors CPU and 32 GB RAM
• 2 domain controller servers: 4 processors and 4 GB RAM

Software resources

Table 3 details the software resources deployed in this solution.

Table 3. Software resources

Hyper-V cluster nodes: Windows Server 2008 R2 with Service Pack (SP) 1
Virtual machine operating system: Windows Server 2008 R2 with SP1
SQL Server 2012: RTM Enterprise Edition
Exchange Server 2010: Enterprise Edition with SP2
SharePoint Server 2010: Enterprise Edition with SP1 and the February 2012 Cumulative Update
SCVMM: 2008 R2 SP1
EMC PowerPath: Version 5.5 SP1
ESI: Version 1.3
RecoverPoint: Version 3.4.3
Cluster Enabler: Version 4.1.2


Application design and configuration

Overview

The solution was designed for a mixed Microsoft application workload, including Exchange Server 2010, SharePoint Server 2010, and SQL Server 2012, on the Symmetrix VMAX 10K storage array. Microsoft Hyper-V provides the virtualization platform for all Microsoft applications.

The following sections provide the design methodology for all three Microsoft applications, the Hyper-V cluster, and Symmetrix FAST VP.

Exchange Server 2010

Microsoft Exchange Server 2010 introduces the database availability group (DAG) as the new HA mechanism to replace previous, integrated HA technologies. This section provides the virtualization platform, DAG, and disk design for Exchange Server 2010 as deployed in this solution.

Exchange Server 2010 user requirement

The user requirements in this solution are detailed in Table 4.

Table 4. Exchange user requirement

Number of Exchange 2010 users: 10,000
Number of Mailbox Server virtual machines: 4
Number of DAGs and database copies: 1 DAG with 2 copies
Number of users per Mailbox Server: 5,000 total mailboxes (2,500 active and 2,500 passive during normal operating conditions)
User profile (in DAG configuration): 100 messages/user/day (0.10 IOPS)
Read:write ratio: 3:2 in a DAG configuration
Mailbox size: Start at 500 MB, grow to 2 GB
Target average message size: 75 KB
Deleted items retention window: 14 days
Logs protection buffer: 3 days
24 x 7 BDM configuration: Enabled

Exchange Server 2010 building block design

Sizing and configuring storage for use with Exchange Server 2010 can be a complicated process, driven by many variables and factors that vary from organization to organization. Properly configured Exchange storage, combined with a correctly sized server and network infrastructure, can guarantee smooth Exchange operations and the best user experience.

One of the methods that can simplify the sizing and configuration of large Microsoft Exchange Server 2010 environments is to define a unit of measure called a building block. A building block represents the required resources needed to support a specific number of Exchange 2010 users on a single virtual machine. You can derive the number of required resources from a specific user profile type, mailbox size, and disk requirement.

For more information about the EMC building block methodology for Exchange 2010, refer to the EMC white paper Microsoft Exchange 2010: Storage Best Practices and Design Guidance for EMC Storage.

Table 5 shows the detailed building block information for this solution.

Table 5. Exchange building block

Number of Exchange users per Mailbox Server: 5,000 total mailboxes (2,500 active and 2,500 passive during normal operating conditions)
Mailbox size: Start at 500 MB, grow to 2 GB
Number of databases per Mailbox Server: 10 (5 active/5 passive)
User mailboxes per database: 500
Database LUN size: 1.8 TB (sized for a 2 GB mailbox)
Log LUN size: 100 GB

The requirements include starting with a user mailbox size of 500 MB with the ability to seamlessly grow to 2 GB. This can be easily accomplished using the VMAX 10K Virtual Provisioning™ feature.

Symmetrix Virtual Provisioning technology builds on the base thin provisioning functionality, which is the ability to have a large thin device (that is, volume) configured and presented to the host while consuming physical storage from a shared pool only as needed. Symmetrix Virtual Provisioning can improve storage capacity utilization and simplify storage management by presenting the application with sufficient capacity for an extended period of time, reducing the need to provision new storage frequently and avoiding costly allocated but unused storage.
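To make the building block concrete, the following sketch computes the per-database capacity and per-server IOPS from the values in Tables 4 and 5. The overhead and free-space factors are illustrative assumptions, not figures from this paper; use the EMC Exchange 2010 storage best practices guidance for an actual design.

```powershell
# Rough building-block sizing sketch (overhead factors are assumptions, not the paper's values).
$MailboxesPerDatabase = 500        # from Table 5
$MailboxQuotaGB       = 2          # target mailbox size
$IopsPerMailbox       = 0.10       # user profile from Table 4
$DumpsterOverhead     = 0.20       # assumption: deleted items retention overhead
$ContentIndexOverhead = 0.10       # assumption: search index space
$FreeSpaceTarget      = 0.20       # assumption: keep 20 percent free on the database LUN

$DbSizeGB  = $MailboxesPerDatabase * $MailboxQuotaGB * (1 + $DumpsterOverhead)
$LunSizeGB = [math]::Ceiling(($DbSizeGB * (1 + $ContentIndexOverhead)) / (1 - $FreeSpaceTarget))
$IopsPerMailboxServer = 5000 * $IopsPerMailbox   # 2,500 active + 2,500 passive mailboxes

"Database size: {0} GB, database LUN: {1} GB, host IOPS per Mailbox Server: {2}" -f `
    $DbSizeGB, $LunSizeGB, $IopsPerMailboxServer
```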

Exchange Server 2010 DAG design

This solution uses the Exchange 2010 DAG feature to provide HA for Exchange databases. A DAG is a group of up to 16 Mailbox Servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual servers or databases.

We configured each Mailbox Server in this solution with ten databases, five active and five passive. All databases were balanced and distributed between Mailbox Servers within the DAG and between the Hyper-V nodes to eliminate a single point of failure. Figure 2 shows the DAG database distribution.


Figure 2. DAG database distribution
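A DAG and database copies like those shown in Figure 2 would typically be created with the Exchange Management Shell. The following is a minimal sketch only, not the solution's documented procedure; the witness server and database names are illustrative.

```powershell
# Exchange Management Shell sketch: create a DAG and add a passive database copy.
# Witness and database names are illustrative; Mailbox Server names come from Table 14.
New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer FSW01 -WitnessDirectory C:\DAG01FSW

# Add the four Mailbox Server virtual machines to the DAG.
"ExMBX01","ExMBX02","ExMBX03","ExMBX04" |
    ForEach-Object { Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer $_ }

# Create a passive copy of DB01 on a second Mailbox Server.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer ExMBX02 -ActivationPreference 2
```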

Hyper-V virtual machine design for Exchange Server 2010

This solution deploys all the Exchange 2010 servers as Hyper-V virtual machines. We configured four Exchange 2010 Mailbox Servers in a DAG to provide HA for databases. Each Mailbox Server virtual machine was set up on a separate Hyper-V host server for additional redundancy.

For the HUB/CAS servers, the combined HUB/CAS role had a 1:1 CPU core ratio to the Mailbox Server. Therefore, the solution included four HUB/CAS servers as virtual machines, separated onto different Hyper-V hosts. Furthermore, we deployed the Exchange Server 2010 Client Access array and a network load balancer to provide load balancing between the Client Access servers.

For the Exchange virtual machines, we based the memory and CPU requirements on Microsoft best practices. For more information, visit the following websites:

• http://technet.microsoft.com/en-us/library/ee832793.aspx

• http://technet.microsoft.com/en-us/library/ee712771.aspx

Table 6 provides a summary of the Exchange virtual machine configuration.

Table 6. Exchange virtual machine configuration

Virtual machine role Quantity vCPU Memory (GB) Boot disk VHD (GB)

Mailbox 4 4 16 100

HUB/CAS 4 4 8 100

We configured all the database and log LUNs for the Exchange Mailbox Server virtual machine as pass-through disks in Hyper-V. Table 7 provides detailed information on the Exchange virtual machine pass-through disk configuration.

Table 7. Exchange virtual machine disk configuration

Mailbox (4 virtual machines), pass-through disks on each virtual machine:
• 10 x 1.8 TB: database LUNs
• 10 x 100 GB: log LUNs


SharePoint Server 2010

The SharePoint farm was designed for optimized performance, ease of manageability, and growth. This section describes the SharePoint Server 2010 design and configuration in this solution.

SharePoint Server 2010 user requirement

Table 8 outlines the SharePoint 2010 user profile in this solution.

Table 8. SharePoint user requirement

Total user count: 10,000
Usage profiles (%browse/%search/%modify): 80%/10%/10%
User concurrency: 10%
Total data: 4 TB
Content database size: 1 TB
Total site collections count: 20
Sites per site collection: 10
Document size range: 10 KB to 10 MB

SharePoint Server 2010 farm and component design

To meet the user requirements listed in Table 8, we deployed seven SharePoint servers in three different roles:

• Three SharePoint web front-end servers for load balancing, which were also configured as query servers. We scaled the query components out to three partitions. Each query server contained one index partition and a mirror of another partition for better query performance and fault tolerance.
• Two SharePoint application servers with two crawlers on each to improve full and incremental crawl performance.
• Two SharePoint SQL Servers with two content databases on each.

SharePoint SQL Server configuration

Before you deploy SharePoint Server, configure the following SQL Server settings and options:

• Do not enable auto-create statistics on a SQL Server that is supporting SharePoint Server. SharePoint Server configures the required settings upon provisioning and upgrade. Auto-create statistics can significantly change the execution plan of a query from one SQL Server instance to another SQL Server instance. Therefore, to provide consistent support for all customers, SharePoint Server provides coded hints for queries as needed to give the best performance across all scenarios.

• To ensure optimal performance, EMC strongly recommends that you set the max degree of parallelism (MAXDOP) option to 1 on SQL Server instances that host SharePoint Server 2010 databases. For more information, visit: http://technet.microsoft.com/en-us/library/cc298801.aspx
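As a quick illustration of the MAXDOP recommendation above, the setting can be applied with the SQL Server PowerShell module (Invoke-Sqlcmd). The instance name below is one of this solution's SharePoint SQL Server virtual machines; adapt it to your environment.

```powershell
# Sketch: set max degree of parallelism to 1 on a SharePoint SQL Server instance.
# Requires the SQL Server PowerShell module (Invoke-Sqlcmd); instance name is illustrative.
$Tsql = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance "SPSSQL01" -Query $Tsql
```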

SharePoint farm configuration

The SharePoint farm was designed as a collaboration portal. It comprised 4 TB of user content consisting of twenty SharePoint site collections (through the collaboration portal) with four content databases, each populated with 1 TB of random documents.

SharePoint search configuration

SharePoint 2010 search architecture improves scalability for both crawl and query components compared with Microsoft Office SharePoint Server 2007 (MOSS 2007). The search architecture consists of crawler servers that crawl content, propagate the indexes to the query servers, and update the property store on the SQL Server. In SharePoint 2010, the crawler server no longer stores a copy of the index files; they are propagated to the query components during the crawl operation. Because of this, the crawler server is no longer a single point of failure.

The query servers split the content between themselves so that each of the query servers holds only a subset of the content index files and queries. The property store is the authoritative source for all indexed content properties and does not need to be synchronized with the crawler servers.

In this solution, we enabled the query function on the web front-end (WFE) servers. We scaled the query components out to three partitions for load balancing. Each query component also held a mirror of another index partition for fault tolerance. We provisioned two 120 GB LUNs to each query server to store the index partition and its mirror. We also provisioned two 80 GB LUNs on each crawler server to store the temporary index files during the crawl operation.

SQL tempdb configuration

Microsoft SQL Server performance best practices recommend that the number of tempdb datafiles should be the same as the number of core CPUs, and each of the tempdb datafiles should be of the same size.

In this solution, we created four tempdb data files, equal to the number of SQL Server CPU cores. We placed the tempdb data and log files on a dedicated RAID 1 LUN for better performance and utilization. For more information about optimizing tempdb performance, visit: http://msdn.microsoft.com/en-us/library/ms175527.aspx.
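A minimal sketch of this practice follows: it adds one tempdb data file per CPU core, all the same size. The file path and size are illustrative assumptions.

```powershell
# Sketch: add one equally sized tempdb data file per CPU core (file path and size are illustrative).
# tempdb already has one primary data file (tempdev), so add (cores - 1) additional files.
$Cores      = [int]$env:NUMBER_OF_PROCESSORS
$FileSizeMB = 2048          # assumption: choose a size appropriate for your workload
for ($i = 2; $i -le $Cores; $i++) {
    $Tsql = "ALTER DATABASE tempdb ADD FILE (NAME = tempdev$i, " +
            "FILENAME = 'T:\TempDB\tempdev$i.ndf', SIZE = ${FileSizeMB}MB);"
    Invoke-Sqlcmd -ServerInstance "SPSSQL01" -Query $Tsql
}
```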

Hyper-V virtual machine design for SharePoint Server 2010

You can calculate virtual machine and Hyper-V requirements based on the user requirements. The recommended RAM for the computer running SQL Server is determined by the combined size of the content databases. For more information on storage and SQL Server capacity planning and configuration for SharePoint Server 2010, visit: http://technet.microsoft.com/en-us/library/cc298801.aspx.

In the solution, we had two SQL Servers in the SharePoint farm, so 32 GB for each was a good option. Table 9 provides a summary of the SharePoint virtual machine configurations.


Table 9. SharePoint virtual machine configuration

Virtual machine role Quantity vCPU Memory (GB) Boot disk VHD (GB)

SQL 2 4 32 100

WFE 3 4 4 100

App 2 4 8 100

We added all the SharePoint LUNs as pass-through disks. To estimate the required storage for the property and crawl databases, we used the following multipliers suggested by Microsoft:

• Crawl: 0.046 * (sum of content databases) = 0.046 * 4 TB = 188.4 GB
• Property: 0.015 * (sum of content databases) = 0.015 * 4 TB = 61.4 GB

With a 20 percent capacity buffer, we provisioned a 220 GB volume for the crawl database and an 80 GB volume for the property database.
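The following sketch reproduces the estimate above for the 4 TB farm, including the 20 percent buffer.

```powershell
# Sketch: crawl and property database sizing using Microsoft's multipliers.
# The 20 percent capacity buffer is the buffer used in this solution.
$ContentGB  = 4 * 1024              # 4 TB of SharePoint content
$CrawlGB    = 0.046 * $ContentGB
$PropertyGB = 0.015 * $ContentGB
$Buffer     = 1.20                  # 20 percent capacity buffer
"Crawl DB:    {0:N1} GB (provision ~{1:N0} GB)" -f $CrawlGB,    ($CrawlGB * $Buffer)
"Property DB: {0:N1} GB (provision ~{1:N0} GB)" -f $PropertyGB, ($PropertyGB * $Buffer)
```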

Table 10 provides detailed information about the SharePoint virtual machine pass-through disk configuration.

Table 10. SharePoint virtual machine disk configuration

WFE (3 virtual machines):
• 2 x 120 GB per virtual machine: query component and query component mirror

Index (2 virtual machines):
• 2 x 80 GB per virtual machine: index component

SharePoint SQL Server (first virtual machine):
• 2 x 1200 GB: content database data volumes
• 2 x 30 GB: content database log volumes
• 1 x 50 GB: configuration/central administration/search administration database volume
• 2 disks (1 x 80 GB and 1 x 20 GB): SharePoint property database and its log volumes
• 2 disks (1 x 220 GB and 1 x 50 GB): SharePoint crawl database and its log volumes
• 5 x 50 GB: SQL temp database and log volumes

SharePoint SQL Server (second virtual machine):
• 2 x 1200 GB: content database data volumes
• 2 x 30 GB: content database log volumes
• 50 GB: SQL temp database and log volumes


SQL Server 2012

To maintain flexibility and performance, it is necessary to ensure that the storage sizing and virtual machine configuration for SQL Server are optimal. This section provides detailed information about SQL Server user requirements and design.

SQL Server 2012 user requirement

Table 11 shows the user requirement for SQL Server 2012 in this solution.

Table 11. SQL Server user requirements

SQL database capacity and user profile: 3 user databases per SQL Server:
• 1 x 50 GB (5,000 users)
• 1 x 100 GB (10,000 users)
• 1 x 150 GB (15,000 users)
Number of SQL Server instances: 2
Number of user databases for each virtual machine: 3
Number of virtual machines: 2
Total database size: 600 GB
Number of total users: 60,000
Read:write ratio: 85:15 online transaction processing (OLTP)
Concurrent users: Mixed, to simulate hot, warm, and cold workloads across the databases

Hyper-V virtual machine design for SQL Server 2012

In this solution, we deployed two SQL Server virtual machines. The following list shows the Windows and SQL Server 2012 configurations of each virtual machine. Keep the default values for all other settings.

• Grant the Lock pages in memory privilege to the SQL Server startup account. Refer to Pre-Configuration Database Optimizations on the Microsoft website for more information.
• Format the user data devices with a 64 KB NTFS allocation unit size. Refer to the SQL Server Best Practices Article on the Microsoft website for more information.
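As an illustration of the second item, a data LUN can be formatted with a 64 KB allocation unit from an elevated prompt; the drive letter and volume label below are illustrative.

```powershell
# Sketch: format a SQL Server data LUN with a 64 KB NTFS allocation unit size.
# Drive letter and label are illustrative; format may prompt for confirmation on an existing volume.
format F: /FS:NTFS /V:SQLData /A:64K /Q

# Verify the allocation unit size (Bytes Per Cluster should report 65536).
fsutil fsinfo ntfsinfo F:
```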

Table 12 provides a summary of the SQL Server virtual machine configuration.

Table 12. SQL virtual machine configuration

Virtual machine role Quantity vCPU Memory (GB) Boot disk VHD (GB)

SQL 2 4 24 100

As described in Table 11, the total user database size is 600 GB. The user database log and the tempdb log are laid out on a separate LUN for each database. For tempdb data, as suggested by best practice, we created the same number of LUNs for the tempdb data files as the number of CPUs on the SQL Server.

We configured all the database and log LUNs for the SQL Server virtual machine as pass-through disks in Hyper-V. Table 13 provides detailed information about the SQL Server virtual machine pass-through disk configuration.

Table 13. SQL virtual machine disk configuration

SQL (2 virtual machines), pass-through disks on each virtual machine:
• 1 x 250 GB: database LUN for the 150 GB database
• 1 x 125 GB: log LUN for the 150 GB database
• 1 x 200 GB: database LUN for the 100 GB database
• 1 x 100 GB: log LUN for the 100 GB database
• 1 x 150 GB: database LUN for the 50 GB database
• 1 x 75 GB: log LUN for the 50 GB database
• 4 x 100 GB: SQL tempdb data
• 1 x 50 GB: SQL tempdb log

Hyper-V cluster

After we determined the virtual machine design for each individual application, the next step was to consider the Hyper-V cluster configuration.

This solution deploys a Hyper-V cluster consisting of four nodes in the production site and another four nodes in the DR site to increase the availability of virtual machines and applications. Because this solution has a multisite failover cluster with an even number of nodes, we used the Node and File Share Majority quorum configuration.

When determining where to place the virtual machines, it is important to consider load balancing and failure protection. Distribute virtual machines with the same application role across different Hyper-V root servers. For example, this solution separates the Exchange Mailbox Server virtual machines onto different Hyper-V nodes, so if a Hyper-V node fails, only one of the Mailbox Servers is affected. The same rule also applies to SharePoint WFE servers, SharePoint application servers, Exchange HUB/CAS servers, and SQL Servers (including those for SharePoint). Table 14 describes the virtual machine placement on each of the solution's Hyper-V nodes and summarizes the total resources allocated to the virtual machines. A configuration sketch follows the table.


Table 14. Virtual machine distribution in Hyper-V cluster nodes

Production site server: Node 1
• Exchange Mailbox, ExMBX01: 4 vCPU, 16 GB memory
• Exchange HUB/CAS, ExHC01: 4 vCPU, 8 GB memory
• SharePoint WFE, SPSWFE01: 4 vCPU, 4 GB memory
• SharePoint App, SPSAPP01: 4 vCPU, 8 GB memory
• SQL, SQL01: 4 vCPU, 24 GB memory
Total: 20 vCPU, 60 GB memory

Production site server: Node 2
• Exchange Mailbox, ExMBX02: 4 vCPU, 16 GB memory
• Exchange HUB/CAS, ExHC02: 4 vCPU, 8 GB memory
• SharePoint SQL, SPSSQL01: 4 vCPU, 32 GB memory
• SharePoint WFE, SPSWFE02: 4 vCPU, 4 GB memory
Total: 16 vCPU, 60 GB memory

Production site server: Node 3
• Exchange Mailbox, ExMBX03: 4 vCPU, 16 GB memory
• Exchange HUB/CAS, ExHC03: 4 vCPU, 8 GB memory
• SQL, SQL02: 4 vCPU, 24 GB memory
• Domain Controller, DC03: 2 vCPU, 4 GB memory
• SharePoint App, SPSAPP02: 4 vCPU, 8 GB memory
Total: 18 vCPU, 60 GB memory

Production site server: Node 4
• Exchange Mailbox, ExMBX04: 4 vCPU, 16 GB memory
• Exchange HUB/CAS, ExHC04: 4 vCPU, 8 GB memory
• SharePoint SQL, SPSSQL02: 4 vCPU, 32 GB memory
• SharePoint WFE, SPSWFE03: 4 vCPU, 4 GB memory
Total: 16 vCPU, 60 GB memory
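The quorum model and the placement rule described before Table 14 can be expressed with the Failover Clustering PowerShell module. The sketch below is illustrative rather than the solution's documented procedure: the cluster name and witness share are assumptions, and anti-affinity is one possible way to keep same-role virtual machines on different nodes.

```powershell
# Sketch: configure the quorum and spread same-role VMs across hosts.
# Cluster name and file share path are assumptions; VM group names come from Table 14.
Import-Module FailoverClusters

# Node and File Share Majority quorum for the eight-node multisite cluster.
Set-ClusterQuorum -Cluster "HVCluster" -NodeAndFileShareMajority "\\FSW01\ClusterWitness"

# Anti-affinity: discourage the cluster from placing two Mailbox Server VMs on the same node.
$AntiAffinity = New-Object System.Collections.Specialized.StringCollection
$AntiAffinity.Add("ExchangeMailbox") | Out-Null
"ExMBX01","ExMBX02","ExMBX03","ExMBX04" | ForEach-Object {
    (Get-ClusterGroup -Cluster "HVCluster" -Name $_).AntiAffinityClassNames = $AntiAffinity
}
```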

For each virtual machine, a 200 GB boot LUN is provisioned to provide storage for a 100 GB fixed-size virtual hard disk (VHD) for the virtual machine operating system (OS), the virtual machine configuration file, and the virtual machine memory swap file (which is the same size as the memory configured for the virtual machine).

We used the volume GUID to configure the boot LUN for each virtual machine. If the boot LUN was formatted with the NT file system (NTFS) and assigned a drive letter on the Hyper-V node, it could not be configured as a cluster resource disk, and a live migration would fail. By using the volume GUID, the same GUID was replicated from the primary node to all remaining nodes and stayed the same on all nodes, enabling successful virtual machine live migration across all cluster nodes.
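As a brief illustration, the volume GUID path for a boot LUN can be listed with mountvol and then used, instead of a drive letter, in the virtual machine's storage path. The GUID and folder below are illustrative.

```powershell
# Sketch: list volume GUID paths on a Hyper-V node, then use one (not a drive letter)
# as the storage path for a VM's VHD. The GUID shown here is illustrative.
mountvol          # prints each \\?\Volume{...}\ path and any mount points

# Example VHD path built on the volume GUID rather than a drive letter:
$BootVolume = '\\?\Volume{12345678-90ab-cdef-1234-567890abcdef}\'
$VhdPath    = $BootVolume + 'ExMBX01\ExMBX01-OS.vhd'
```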


Figure 3 shows the volume GUID of a virtual machine boot LUN in the Failover Cluster Manager.

Figure 3. Volume GUID

Figure 4 shows the volume GUID configuration for the virtual machine storage path.

Figure 4. Storage path for VM OS VHD file

FAST VP storage design

The storage design of the VMAX 10K platform provides a robust, scalable, and simplified storage infrastructure. We used FAST VP technology in this solution for automated storage tiering to meet changing application needs.

Symmetrix FAST VP

Symmetrix FAST VP operates on Virtual Provisioning thin devices and uses intelligent algorithms to continuously analyze devices at the sub-LUN level. This enables FAST VP to identify and relocate the specific parts of a LUN that are most active and would benefit from being moved to higher-performing storage such as EFD. FAST VP also identifies the least active parts of a LUN and relocates that data to higher-capacity, more cost-effective storage such as SATA, without altering performance.

Data movement between tiers is based on performance measurement and user-defined policies, and is executed automatically and nondestructively by FAST VP. For FAST VP to operate, you need to configure the following three storage elements:

• Storage tiers—A storage tier is a specification of a set of resources of the same disk technology type (EFD, FC, or SATA) combined with a given RAID protection type (RAID 1, RAID 5, or RAID 6).

• Storage groups—A storage group is a logical collection of Symmetrix devices that are to be managed together.


• FAST policies— A FAST policy groups between one and three tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the maximum capacity that a storage group associated with the policy can have while residing on that particular tier.

VMAX 10K storage and FAST VP design guidelines

The following list provides general design guidance for running a mixed Microsoft application workload on a VMAX 10K storage array using FAST VP:

• EMC recommends separating databases and logs onto their own LUNs.

• Balance front-end processor and port utilization across all available VMAX 10K resources intended for the mixed Microsoft application environment. For FC, use port 0 of a given front-end processor (a slice) before also using port 1 of the same processor.

• For the thin devices used for the application data LUN in the solution, we created eight-way striped meta devices for better performance.

• Set FAST VP only for application data LUNs:
  - For Exchange Server, exclude transaction log volumes from the FAST VP policy.
  - For SharePoint and SQL Server, exclude the log and tempdb LUNs from the FAST VP policy.
  - Exclude the Hyper-V virtual machine boot LUNs from the FAST VP policy.

FAST VP configuration in this solution

FAST VP in a VMAX 10K environment provides an easy way to employ the storage service specializations of an array configuration with a mixture of drive types. When configuring FAST VP in this solution, we considered the following:

• We chose the FAST VP tiers for this solution per the application requirements, with all three applications sharing the same tiers. This configuration allowed FAST VP to automatically move data across all disks in this tier for optimal performance.

• We chose a RAID 5 protection type for faster tiers like FC and EFD to yield the best total cost of ownership (TCO), and RAID 1 mirrored protection for SATA to yield the best performance results.

• For the SharePoint Server in this solution, each content database was 1 TB in size. Databases of this size require more IOPS than smaller databases, so we used the FC tier for the majority of the storage devices.


Table 15 lists the FAST VP tiers and disk information used in this solution.

Table 15. FAST VP tiers and disks

EFD tier: 200 GB EFD drives, quantity 12, RAID 5 (3+1)
FC tier: 450 GB 15k rpm FC drives, quantity 80, RAID 5 (3+1)
SATA tier: 2 TB 7.2k rpm SATA drives, quantity 68, RAID 1

The FAST VP policy settings were different for each application since each application has different performance requirements. Table 16 shows the FAST VP tiers and policies used in this solution.

Table 16. FAST VP policy

Exchange: FC 10%, SATA 90%
SharePoint: EFD 10%, FC 80%, SATA 10%
SQL: EFD 25%, FC 65%, SATA 10%


System management design and configuration

Overview

This solution architecture includes the following components to demonstrate a private cloud solution for customers who are looking for enterprise consolidation with management simplicity:

• SCVMM enables rapid deployment of virtual machines.

• ESI provides the ability to provision storage for the Microsoft Hyper-V environment.

• SCOM enables the discovery and health monitoring of Windows, Hyper-V, and Microsoft applications.

SCVMM 2008 R2

This solution uses SCVMM to provide unified management for an environment of Hyper-V servers hosting SQL, SharePoint, and Exchange virtual machines. SCVMM also helps to consolidate physical servers in a virtual environment and monitor all clusters, hosts, and virtual machines in this environment. In addition, administrators can use SCVMM to rapidly provision the virtual machines and to dynamically optimize virtual resources.

Figure 5 shows the virtual machine environment, including the host, CPU average, and memory information of a virtual machine.


ESI

ESI greatly simplifies managing, viewing, and provisioning EMC storage in a Hyper-V environment. As part of storage provisioning, ESI simplifies the steps involved in creating a LUN and then partitioning it, formatting it, and assigning it a drive letter.

ESI makes it easy to add a cluster system when configuring cluster nodes. After you add a cluster node to ESI, ESI shows all the cluster disks and their related information. Figure 6 shows the Hyper-V hosts and VMAX 10K across the entire solution environment and demonstrates that ESI provides insight into cluster disk resources on the storage.

Figure 6. ESI simplified view of Hyper-V hosts and VMAX 10K

ESI also supports PowerShell commands. For large environments, storage administrators can use ESI PowerShell commands to deploy multiple volumes on the Windows platform at the same time. For detailed steps on how to use ESI, refer to the EMC Storage Integrator for Windows Product Guide.

Note: The features described in this white paper are based on the ESI version available at the time of solution validation. EMC constantly improves and updates its products and technology with new features and functionality. Visit http://www.emc.com/ for the latest features and updates.

SCOM 2007 R2

This solution uses SCOM 2007 R2 to discover and monitor the health, performance, and availability of the whole virtual infrastructure across the Exchange, SQL, and SharePoint applications, the operating system, and the hypervisors.

The following management packs are imported into SCOM 2007 R2 to monitor the whole infrastructure:

• SQL Server Management Pack
• Microsoft Exchange Server 2010 Management Pack
• System Center Virtual Machine Manager (SCVMM) 2008 R2 Management Pack
• Microsoft Windows Server Operating System Management Pack

Figure 7 shows SCOM monitoring of the Microsoft Windows and Hyper-V environment deployed in this solution.


RecoverPoint/CE design and configuration

Overview

This section describes the design and configuration of RecoverPoint/CE, as well as the VMAX 10K preparation for RecoverPoint/CE in this solution.

VMAX 10K preparation for RecoverPoint/CE

Although RecoverPoint is qualified with VMAX 10K, and VMAX 10K has an integrated write splitter to support RecoverPoint replication, a few more steps must be completed to fully prepare VMAX 10K for RecoverPoint:

1. Configure the RecoverPoint splitter on the Symmetrix VMAX 10K array by provisioning the following volumes:

  - A repository volume (3 GB) for the RecoverPoint appliance (RPA) cluster. This volume stores configuration information about the RPAs and RecoverPoint consistency groups, which enables a properly functioning RPA to seamlessly assume the replication activities of a failing RPA from the same cluster. We also provisioned a repository volume of the same size on the VNX5700 for the RPA cluster at the DR site.
  - Eight unique gatekeeper volumes each for RPA1 and RPA2.

To provision these volumes, create auto-provisioning masking views that present the volumes to the RPAs. For the solution, we created three masking views for the RecoverPoint cluster, as it consists of two RPAs.

2. When replicating volumes to a Symmetrix splitter, the production and replica LUN sizes must be identical. LUN size faking (the fake size feature) is not supported by Symmetrix splitters. When replicating from a Symmetrix splitter to a different splitter, the replica LUN can be larger than the production LUN. However, as a best practice, always make the replica LUN the same size as the production LUN by using the block count.

3. Enable the write-protect bypass for RPA initiators. The RecoverPoint splitter for VMAX 10K requires the RPA initiators to have special access that enables them to write to write-protected devices. Figure 8 shows this setting.

Figure 8. Write-protect bypass for RPAs

For more information, see EMC RecoverPoint Deploying with Symmetrix Arrays and Splitter Technical Notes.


RecoverPoint CRR configuration

In this solution, RecoverPoint CRR replicates the production environment to the recovery environment using the RecoverPoint splitter.

We used asynchronous replication over an IP WAN for RecoverPoint CRR. We connected the RPAs at the production and DR sites through a trunked network between two switches so that they could communicate with each other. Each RPA was connected to the network switches by two 1 Gb connections. A network distance emulator simulated the network latency between the RPAs' WAN links. We configured the following two scenarios for testing:

• 25 ms network latency (2,500 km round-trip distance)
• 85 ms network latency (8,500 km round-trip distance)

The RecoverPoint management console displays the data flow with more detailed information. Figure 9 shows the data flow on one of the consistency groups implemented in this solution.

Figure 9. RecoverPoint CRR data flow in the management console

RecoverPoint journal sizing

The size of the journal volumes is closely related to your required protection window. Determining the journal size requires administrators to calculate the expected application data change rate in the environment.


As a general rule, EMC RecoverPoint field implementation experts recommend that you size journals at 20 percent of the data being replicated when change rates are not available.

In this solution, the Exchange Mailbox data change rate is 6 Mb/s based on the Exchange user profile. To support a 3-day (72-hour) rollback requirement, the journal size should be 250 GB:

6 Mbps * 259,200 seconds / 0.8 * 1.05 = 2,041,200 Mb (~250 GB)

In the calculation shown above, 259,200 seconds represents a 72-hour rollback window, and 0.8 represents 20 percent for reserved journal space.
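The same calculation can be scripted; the sketch below reproduces the numbers above and treats the 1.05 multiplier as a 5 percent overhead allowance.

```powershell
# Sketch: RecoverPoint journal sizing from the change rate and rollback window.
# Reproduces the calculation above; the 1.05 factor is taken to be a 5 percent overhead allowance.
$ChangeRateMbps = 6             # Exchange Mailbox data change rate
$WindowSeconds  = 72 * 3600     # 3-day (72-hour) rollback requirement
$JournalReserve = 0.20          # 20 percent reserved journal space
$Overhead       = 1.05

$JournalMb = $ChangeRateMbps * $WindowSeconds / (1 - $JournalReserve) * $Overhead
$JournalGB = $JournalMb / 8 / 1024
"Journal size: {0:N0} Mb (~{1:N0} GB)" -f $JournalMb, $JournalGB
```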

For optimal journal performance, ensure that you choose the appropriate RAID and drive types. In this solution, we used 600 GB 10k rpm SAS drives in a RAID 1/0 configuration on the VNX5700 for remote protection, and 600 GB 10k rpm FC drives in a RAID 1 configuration on the VMAX 10K for failback during a site failover.

To meet the protection window in this solution, EMC recommends that you configure the proper journal size for each consistency group, as shown in Table 17. Consult your local RecoverPoint champion to size your journals based on your particular requirements.

Table 17. Journal sizing for each consistency group

Production site (protection window: 1 day):
• Exchange Mailbox Server: 75 GB
• Exchange HUB/CAS: 5 GB
• SQL Server: 75 GB
• SharePoint SQL Server: 90 GB
• SharePoint application server: 5 GB
• SharePoint WFE server: 15 GB
• Domain controller: 5 GB

DR site (protection window: 3 days):
• Exchange Mailbox Server: 250 GB
• Exchange HUB/CAS: 20 GB
• SQL Server: 250 GB
• SharePoint SQL Server: 300 GB
• SharePoint application server: 20 GB
• SharePoint WFE server: 50 GB
• Domain controller: 20 GB

EMC Cluster Enabler configuration

EMC Cluster Enabler for Microsoft Failover Clusters is a software extension of failover cluster functionality. Cluster Enabler allows Microsoft Failover Clusters to operate across multiple connected storage arrays in geographically distributed clusters. Each cluster node connects through a storage network to the supported storage arrays.

In a typical environment, storage data is replicated to another storage array at the DR site to protect the environment against site failure. The data has read-write access on the source array and read-only access on the target array, which adds extra steps to allow read-write access on the remote site after failover when implementing stretched clusters across data centers. EMC Cluster Enabler helps automate these additional steps by providing the CECluRes custom resource type as part of the failover cluster. Figure 10 shows this resource in a cluster's properties in the Failover Cluster Manager.

Figure 10. CE cluster resource

To configure the Cluster Enabler, we installed the CE base component and plug-in for RecoverPoint on each Hyper-V node (requires a reboot). The Hyper-V cluster was configured by Cluster Enabler Manager.

Figure 11 shows the Cluster Enabler Manager managing the current Hyper-V cluster and the disk resources of a virtual machine.

Figure 11. EMC Cluster Enabler Manager Console

When the CE cluster is configured, it creates the CE custom resource for every resource group (a virtual machine is a resource group) that contains disk resources. It then makes all the disk resources in the cluster group dependent on this custom resource. Figure 12 shows the CE custom resource and disk dependency in a SharePoint WFE virtual machine.


Figure 12. EMC CE custom resource

On the RecoverPoint side, there must be one matching consistency group for each virtual machine for Cluster Enabler to work. In this solution, we created 18 consistency groups:

• We named each consistency group the same as each virtual machine name. Figure 13 shows the matching names in the Failover Cluster Manager and the RecoverPoint Management Application.


Figure 13. Virtual machine names in Failover Cluster Manager and consistency group names in RecoverPoint Management Application

• For each consistency group, we set up two journal volumes, one at the production site and the other at the DR site. Each consistency group’s replication set consists of production LUNs and DR replica LUNs. Figure 14 shows the replication sets of an Exchange Mailbox consistency group.


• We set the Stretch Cluster setting in each consistency group's policy to Use RecoverPoint/CE and Group is managed by CE, RecoverPoint can only monitor. Figure 15 shows this setting.

Figure 15. Stretch Cluster setting

For more information about configuring RecoverPoint and CE, refer to the EMC RecoverPoint/Cluster Enabler Plug-in Product Guide.

Hyper-V cluster preparation for disaster recovery

The following list describes the best practices for the Hyper-V cluster and virtual machines to ensure that live migration and site failover are successful:

1. Install the recommended Hyper-V cluster hotfixes from the following links:
  - http://support.microsoft.com/kb/2494036
  - http://support.microsoft.com/kb/2545685

2. Adjust the cluster's default action for moving a virtual machine when a resource becomes unavailable. Change the setting from the default Save to Shut Down, as shown in Figure 16. In this way, the virtual machines do not save and restore the disk state when they are moved or taken offline.

Figure 16. Virtual machine settings when VM is offline

3. Configure each virtual machine to use the correct network for Hyper-V cluster live migration, as shown in Figure 17.


Figure 17. Virtual machine network for live migration

4. Set the Preferred Owners list for each virtual machine in Failover Cluster Manager so that failover is balanced across local and remote servers.

5. In the case of a disaster, virtual machines fail over to a surviving server and restart there. At startup, the Hyper-V synthetic virtual network adapter in a virtual machine may not be ready before the application services (such as Exchange and SQL Server) that are also trying to start. In that case, those services fail because they are network dependent.

To address this issue, you can use either of the following methods (a minimal scripted example of the registry change follows this list):

a. Configure application services, such as the SQL Server service, as Automatic (Delayed Start) so that they start after all system services are ready.

b. Increase the service startup timeout from the default 30 seconds by creating the following registry entry:
• Subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
• Name: ServicesPipeTimeout
• Type: REG_DWORD
• Data: The number of milliseconds that you want to allow for the services to start
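As an illustration only, the following Python lines apply this registry change with the standard winreg module. The 60-second timeout is an assumed example value, the script must run elevated inside the guest, and the new timeout typically takes effect only after the guest restarts:

# Sets the ServicesPipeTimeout value described above (example: 60 seconds).
# Run elevated inside the virtual machine; a restart is normally required
# before the new timeout takes effect.
import winreg

TIMEOUT_MS = 60000  # assumed example: 60 seconds instead of the default 30 seconds

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "ServicesPipeTimeout", 0, winreg.REG_DWORD, TIMEOUT_MS)
winreg.CloseKey(key)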


Test methodology

Overview

To simulate a client workload for Exchange Server, SQL Server, and SharePoint Server, this solution uses the following tools:

• Microsoft Exchange Load Generator
• SQL Server TPC-E-like workload
• Microsoft SharePoint 2010 Visual Studio Team System (VSTS)-generated custom workload

Microsoft Exchange Load Generator

Microsoft Exchange Load Generator (LoadGen) is a simulation tool used to measure the impact of MAPI, OWA, ActiveSync, IMAP, POP, and SMTP clients on Exchange Server 2010 and 2007. LoadGen allows you to test how a server running Exchange Server 2010 or 2007 responds to email loads. These tests send multiple messaging requests to the Exchange server, which generates a mail load. LoadGen is a useful tool for administrators who are sizing servers and validating deployment plans. Specifically, LoadGen helps organizations determine whether each of their servers can handle the load it is intended to carry.

LoadGen requires a full deployment of the Exchange environment for validation testing. Customers should perform all LoadGen validation testing in an isolated lab environment where there is no connectivity to production data.

To get a copy of the LoadGen tool, visit the Microsoft Download Center.

SQL Server TPC-E-like workload

In the test environment, we used the Microsoft Benchcraft TPC-E testing tool to generate a TPC-E-like online transaction processing (OLTP) workload. The test environment included a set of transactional operations that simulated the activity of a brokerage firm, such as managing customer accounts, executing customer trade orders, and other transactions within the financial markets.

This solution used the TPC-E-like application as a typical OLTP workload for SQL Server. Because the workload is adjustable, we configured different user profiles to emulate hot, warm, and cold workloads.

Microsoft SharePoint 2010 VSTS-generated custom workload

How the sample documents were created

To provide a realistic document archive scenario, document uniqueness was critical. We used two separate utilities: the first to create unique documents, and the second to read these files from disk and load them directly into targeted SharePoint Web Applications and document libraries.

Tool to create large numbers of documents

We created documents using the Bulk Loader command-line tool, which was written using the Microsoft .NET 4.0 Framework. This tool uses a dump file of Wikipedia content as input to create up to 10 million unique documents to a disk location. Stock images are used to replace image references from the Wikipedia dumps. This tool is available as source code at:


Tool to load documents into SharePoint

We added documents to SharePoint Server using the LoadBulk2SP command-line tool, which was written in C# with the Microsoft .NET 3.5 Framework to be compatible with SharePoint Server. This tool takes the disk output files from the Bulk Loader tool as input and mimics the same folder and file structure directly in SharePoint Server, using the targeted web applications and document libraries specified in the application configuration. Using this tool, we loaded over 100 million 250 KB documents into SharePoint Server with a peak load rate of 233 documents per second and an overall average load rate of 137 documents per second. This tool is available as source code at:

http://code.msdn.microsoft.com/Load-Bulk-Content-to-3f379974
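As a rough, illustrative sanity check (not part of the validated results), the figures above imply approximately the following corpus size and total load duration:

# Back-of-the-envelope estimate from the figures above (illustrative only).
docs = 100_000_000      # documents loaded
doc_size_kb = 250       # document size in KB
avg_rate = 137          # average documents loaded per second

total_tb = docs * doc_size_kb / 1000**3    # ~25 TB of document content
load_days = docs / avg_rate / 86400        # ~8.4 days of continuous loading
print(f"corpus ~ {total_tb:.0f} TB, load time ~ {load_days:.1f} days")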

VSTS test client and test mechanism

We used VSTS with custom code developed by EMC to simulate the SharePoint load, and built the test environment with the VSTS team test tools, which consist of a single controller and three agents.

SharePoint user profiles

During validation, we used a Microsoft heavy-user load profile to determine the maximum user count that the SharePoint farm could sustain while ensuring that average response times remained within acceptable limits. Microsoft standards state that a heavy user performs 60 requests per hour, that is, one request every 60 seconds.

The user profiles in this testing consisted of three user operations:
• 80 percent browse
• 10 percent search
• 10 percent modify

Note: Microsoft publishes default service-level agreement (SLA) response times for each SharePoint user operation. Common operations (such as browse and search) should complete within 3 seconds, and uncommon operations (such as modify) should complete within 5 seconds. The response times in this testing had to meet these SLAs.
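For illustration only, the heavy-user profile and the operation mix above translate into aggregate request rates as follows; the user count in this sketch is an assumed example, not a figure from this solution:

# Converts the Microsoft heavy-user profile (60 requests per user per hour)
# and the 80/10/10 operation mix into aggregate request rates.
# The user count is an assumed example.
users = 30_000
requests_per_user_per_hour = 60

total_rps = users * requests_per_user_per_hour / 3600
mix = {"browse": 0.80, "search": 0.10, "modify": 0.10}
for op, share in mix.items():
    print(f"{op}: {total_rps * share:.0f} requests/second")
print(f"total: {total_rps:.0f} requests/second")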


FAST VP performance test results

Overview

During the performance test, the tools described in the Test methodology section generated the client workload for Exchange Server, SQL Server, and SharePoint Server simultaneously, and the application performance was monitored. The solution team ran each test for 8 hours to simulate a normal working day. This section provides the detailed performance results.

Notes

• Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

• All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly.

• EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute.

Test objectives

Testing of this solution validates application and VMAX 10K storage performance with FAST VP enabled.

We introduced RecoverPoint CRR replication into the environment to provide remote data protection, and a network emulator device to produce the network latency between the two sites. In this case, the RPAs provided asynchronous replication, and the VMAX 10K array-based splitter split each write and sent it to both the production volumes and the RPA. We conducted tests to verify the impact of RecoverPoint replication on the production environment.

Test scenarios

The solution team used the following scenarios to test the solution:
• Baseline testing without RecoverPoint replication in place

• RecoverPoint CRR protection under 25 ms network latency (2,500 km round trip distance)

• RecoverPoint CRR protection under 85 ms network latency (8,500 km round trip distance)
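The two latency scenarios imply a simple rule of thumb of roughly 1 ms of emulated latency per 100 km of round-trip distance. The following sketch is illustrative only and ignores the equipment, routing, and congestion overhead of a real WAN:

# Rule of thumb implied by the scenarios above: ~1 ms per 100 km of round-trip distance.
# Illustrative only; real WAN latency also depends on equipment, routing, and congestion.
def emulated_latency_ms(round_trip_km: float) -> float:
    return round_trip_km / 100.0

for distance_km in (2_500, 8_500):   # the two round-trip distances tested in this solution
    print(f"{distance_km} km round trip -> ~{emulated_latency_ms(distance_km):.0f} ms latency")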

Hyper-V root servers

Table 18 lists the performance counters that we monitored on the Hyper-V root servers during the baseline test.

• Hyper-V Hypervisor Virtual Processor\% Guest Run Time shows the percentage of virtual processor time spent in guest code for the virtual machines.


• Hyper-V Hypervisor Virtual Processor\% Hypervisor Run Time shows the percentage of processor time spent in hypervisor code for the virtual machines.
• Hyper-V Hypervisor Virtual Processor\% Total Run Time shows the percentage of processor time spent in guest and hypervisor code for the virtual machines.

The processor utilization remained in a healthy state on all Hyper-V nodes.

Table 18. Hyper-V processor usage in baseline test

Performance counter | Target | Node 1 | Node 2 | Node 3 | Node 4
Hyper-V Hypervisor Virtual Processor\% Guest Run Time | <65% | 43.2% | 44.9% | 35.0% | 38.9%
Hyper-V Hypervisor Virtual Processor\% Hypervisor Run Time | <5% | 4.9% | 4.1% | 4.7% | 3.4%
Hyper-V Hypervisor Virtual Processor\% Total Run Time | <70% | 48.1% | 49.0% | 39.7% | 42.3%
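For reference, counters such as these can be captured on each Hyper-V root server with the built-in Windows typeperf utility. The sketch below is illustrative; the output file name is an assumption, and the sample count corresponds to 15-second samples over an 8-hour run:

# Illustrative collection of the Hyper-V virtual processor counters listed above.
# 15-second samples for 8 hours = 1,920 samples; output file name is an example.
import subprocess

counters = [
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time",
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Hypervisor Run Time",
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time",
]

subprocess.run(
    ["typeperf", *counters, "-si", "15", "-sc", "1920", "-f", "CSV", "-o", "hyperv_baseline.csv"],
    check=True,
)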

Exchange Server 2010

Table 19 shows the detailed LoadGen test results for the Exchange Mailbox server and the HUB/CAS server. Performance data was collected at 15-second intervals for the duration of each 8-hour test run. For accuracy, the results from the first and last hours were discarded, and the results were averaged over the remainder of the test.

As shown in Table 19, the performance results were all within the acceptable parameters.

Table 19. Exchange LoadGen results in baseline test

Server | Performance counter | Target | Result
MBX server | Processor\%Processor Time | < 80% | 52.6%
MBX server | MSExchangeIS\RPC Requests | < 70 | 2.1
MBX server | MSExchangeIS\RPC Averaged Latency | < 10 ms | 2.8 ms
MBX server | MSExchange Database\I/O Database Reads (Attached) Average Latency | < 20 ms | 16.7 ms
MBX server | MSExchange Database\I/O Database Reads (Recovery) Average Latency | < 200 ms | 16.8 ms
MBX server | MSExchange Database\I/O Database Writes (Attached) Average Latency | < 20 ms | 4.0 ms
MBX server | MSExchange Database\I/O Database Writes (Recovery) Average Latency | < 200 ms | 4.7 ms
MBX server | MSExchange Database\IO Log Writes Average Latency | < 10 ms | 2.7 ms
HUB/CAS server | Processor\%Processor Time | < 80% | 40.8%
HUB/CAS server | MSExchange RpcClientAccess\RPC

References

Related documents
