Deploying a 48,000-user Exchange Server 2010 Environment with Hitachi Compute Blade 2000 and Hitachi Adaptable Modular Storage 2500


Reference Architecture Guide

Leo Nguyen April 2011


Feedback

Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.


Table of Contents

Key Solution Components
    Hitachi Compute Blade 2000
    Hitachi Adaptable Modular Storage 2500
    Hitachi Dynamic Provisioning Software
    Hitachi Dynamic Link Manager Software
    Hitachi Compute Systems Manager
    Exchange Server 2010
Solution Design
    Hitachi Compute Blade 2000 Chassis Configuration
    Hitachi Compute Blade 2000 Blade Configuration
    Active Directory Design
    Exchange Server Roles Design
    Mailbox High Availability Design
    Site Resiliency Design
    Client Access Namespace Design
    SAN Architecture
    Network Architecture
    Storage Architecture
Engineering Validation
    Exchange Jetstress 2010 Test
    Exchange Loadgen 2010 Test
    Datacenter Switchover Test
    N+1 Cold Standby Switchover and Failover Tests
Conclusion
Appendix A: Bill of Materials


Deploying a 48,000-user Exchange Server 2010 Environment with Hitachi Compute Blade 2000 and Hitachi Adaptable Modular Storage 2500

Reference Architecture Guide

It can be difficult to design, deploy, manage and support Microsoft Exchange 2010 environments with hardware components from multiple vendors. This solution reduces the complexity of Exchange environments by using servers and storage from Hitachi that work together seamlessly.

This white paper describes a reference architecture that provides system reliability and high availability in a Microsoft Exchange 2010 environment by using the Hitachi Compute Blade 2000 with N+1 cold standby and logical partitioning technologies, the Hitachi Adaptable Modular Storage 2000 family, and an Exchange 2010 Database Availability Group (DAG).

Hitachi Compute Blade 2000 is an enterprise-class platform that offers the following features:

Balanced system architecture that eliminates bottlenecks in performance and throughput

Embedded Hitachi logical partitioning (LPAR) virtualization

Unprecedented configuration flexibility

Eco-friendly power-saving features and capabilities

Fast recovery from server failures due to N+1 cold standby design that allows you to replace failed servers within minutes instead of hours or days

With its unique combination of power, efficiency and flexibility, you can now extend the benefits of virtualization to new areas of the enterprise data center — including mission-critical application servers and database servers like Exchange 2010 — with minimal cost and maximum simplicity.

The Hitachi Adaptable Modular Storage 2000 family is ideal for a demanding application like Exchange and delivers enterprise-class performance, capacity and functionality at a midrange price. It’s a midrange storage product with symmetric active-active controllers that provide integrated, automated, hardware-based, front-to-back-end I/O load balancing.

This white paper describes a reference architecture for an Exchange 2010 deployment that supports 48,000 users on two Hitachi Compute Blade 2000 chassis and two Hitachi Adaptable Modular Storage 2500 systems across two datacenters. The reference architecture uses a single stretched DAG containing 12 mailbox servers.


Figure 1 shows the topology of the Exchange implementation across two datacenters. Note that the active and passive databases are shown on the servers to which they are mapped from the Adaptable Modular Storage 2500s.

Figure 1

This reference architecture guide is intended for IT administrators involved in data center planning and design, specifically those with a focus on the planning and design of Microsoft Exchange Server 2010 implementations. It assumes some familiarity with the Hitachi Adaptable Modular Storage 2000 family, Hitachi Storage Navigator Modular 2 software, Microsoft Windows Server 2008 R2 and Exchange Server 2010.


Key Solution Components

The following sections describe the key hardware and software components used to deploy this solution.

Hitachi Compute Blade 2000

The Hitachi Compute Blade 2000 features a modular architecture that delivers unprecedented configuration flexibility, as shown in Figure 2.

Figure 2

The Hitachi Compute Blade 2000 combines all the benefits of virtualization with all the advantages of the blade server format: simplicity, flexibility, high compute density and power efficiency. This allows you to take advantage of the following benefits:

Consolidate more resources

Extend the benefits of virtualization solutions (whether Hitachi logical partitioning, VMware vSphere, Microsoft Hyper-V, or all three)

Cut costs without sacrificing performance

The Hitachi Compute Blade 2000 enables you to use virtualization to consolidate application and database servers for backbone systems, areas where effective consolidation was difficult in the past.

And by removing performance and I/O bottlenecks, the Hitachi Compute Blade 2000 opens new opportunities for increasing efficiency and utilization rates and reduces the administrative burden in your data center.

No blade system is more manageable or flexible than the Hitachi Compute Blade 2000. You can configure and administer the Hitachi Compute Blade 2000 through a web browser over secure, encrypted communications, or leverage the optional management suite to manage multiple chassis using a unified GUI-based interface.


Chassis

The Hitachi Compute Blade 2000 chassis is a 19-inch rack compatible, 10U-high chassis with a high degree of configuration flexibility. The front of the chassis has slots for eight server blades and four power supply modules, and the back of the chassis has six bays for I/O switch modules, eight fan modules, two management modules, 16 half-height PCIe slots and connections for four power cables, as shown in Figure 3.

Figure 3

All modules, including fans and power supplies, can be configured redundantly and hot swapped, maximizing system uptime.

The Hitachi Compute Blade 2000 accommodates up to four power supplies in the chassis and can be configured with mirrored power supplies, providing backup on each side of the chassis and higher reliability. Cooling is provided by efficient, variable-speed, redundant fan modules. Each fan module includes three fans to tolerate fan failures within a module; if an entire module fails, the remaining fan modules continue to cool the chassis.


Server Blades

The Hitachi Compute Blade 2000 supports two blade server options that can be combined within the same chassis. Table 1 lists the specifications for each option.

Table 1. Hitachi Compute Blade 2000 Specifications

Feature | X55A2 | X57A1
Processors (up to two per blade) | Intel Xeon 5600, 4 or 6 cores | Intel Xeon 7500, 6 or 8 cores
Processor cores | 4, 6, 8 or 12 | 6, 8, 12 or 16
Memory slots | 18 | 32
Maximum memory | 144GB (with 8GB DIMMs) | 256GB (with 8GB DIMMs)
Hard drives | Up to 4 | N/A
Network interface cards (on-board) | Up to 2 × 1Gb Ethernet | Up to 2 × 1Gb Ethernet
Other interfaces | 2 USB 2.0 ports and 1 serial port | 2 USB 2.0 ports and 1 serial port
Mezzanine slots | 2 | 2
PCIe 2.0 (x8) expansion slots | 2 | 2

Up to four X57A1 blades can be connected using the SMP interface connector to create a single eight-socket SMP system with up to 64 cores and 1024GB of memory.

I/O Options

The connections from the server blades through the chassis’ mid-plane to the bays or slots on the back of the chassis consist of the following:

The two on-board NICs connect to switch bays one and two.

The optional mezzanine card in the first mezzanine slot connects to switch bays three and four

The optional mezzanine card in the second mezzanine slot connects to switch bays five and six

Two connections to dedicated PCIe slots

The I/O options supported by the optional mezzanine cards and the switch modules are either 1Gbps Ethernet or 8Gbps Fibre Channel connectivity.

Logical Partitioning

The Hitachi logical partitioning (LPAR) virtualization feature is embedded in the firmware of Hitachi Compute Blade 2000 server blades. This is a proven, mainframe-class technology that combines Hitachi LPAR expertise with Intel VT technologies to improve performance, reliability and security.


Unlike emulation solutions, the embedded LPAR virtualization feature does not degrade application performance, and unlike third-party virtualization solutions, it does not require the purchase and installation of additional components, keeping total cost of ownership low. A blade can operate in one of two modes, basic or Hitachi Virtualization Manager (HVM), with two licensing options available in the HVM mode, as follows:

Basic — The blade operates as a standard server without LPAR support.

HVM with essential license — The blade supports two LPARs. No additional purchase is required for this mode.

HVM with enterprise license — The blade supports up to 16 LPARs. The enterprise license is an additional cost.

You can use the embedded LPAR feature alone or combine it with Microsoft Hyper-V, VMware vSphere or both in a single system, providing additional flexibility.

Hitachi Compute Blade 2000 Management Modules

The Hitachi Compute Blade 2000 supports up to two management modules for redundancy. Each module is hot-swappable and supports live firmware updates without the need for shutting down the blades. Each module supports an independent management LAN interface from the data network for remote and secure management of the chassis and all blades. Each module supports a serial command line interface and a web interface. SNMP and email alerts are also supported.

N+1 or N+M Cold Standby Failover

The Hitachi Compute Blade 2000 maintains high uptime levels through sophisticated failover mechanisms. The N+1 cold standby function enables multiple servers to share a standby server, increasing system availability while decreasing the need for multiple standby servers or costly software-based high-availability servers. It enables the system to detect a fault in a server blade and switch to the standby server, manually or automatically. The hardware switching is executed even in the absence of the administrator, enabling the system to return to normal operations within a short time.

The N+M cold standby function has “M” backup server blades for every “N” active server blade, so failover is cascading. In the event of multiple hardware failures, the system automatically detects the fault and identifies the problem by indicating the faulty server blade, allowing immediate failure recovery. This approach can reduce total downtime by enabling the application workload to be shared among the working servers.

Hitachi Adaptable Modular Storage 2500

The Hitachi Adaptable Modular Storage 2500 is a member of the Hitachi Adaptable Modular Storage 2000 family, which provides a reliable, flexible, scalable and cost-effective modular storage system for this solution. Its symmetric active-active controllers provide integrated, automated, hardware-based, front-to-back-end I/O load balancing. Both controllers in a Hitachi Adaptable Modular Storage 2000 family storage system are able to dynamically and automatically assign the access paths from the controller to a logical unit (LU). All LUs are accessible regardless of the physical port or the server from which the access is requested. Utilization rates of each controller are monitored so that a more even distribution of workload between the two controllers can be maintained. Storage administrators are no longer required to manually define specific affinities between LUs and controllers, simplifying overall administration. In addition, this controller design is fully integrated with standard host-based multipathing, thereby eliminating mandatory requirements to implement proprietary multipathing software.


The point-to-point back-end design virtually eliminates I/O transfer delays and contention associated with Fibre Channel arbitration and provides significantly higher bandwidth and I/O concurrency. It also isolates any component failures that might occur on back-end I/O paths.

For more information about the 2000 family, see the Hitachi Adaptable Modular Storage 2000 Family Overview brochure.

Hitachi Dynamic Provisioning Software

On Hitachi Adaptable Modular Storage 2000 family systems, Hitachi Dynamic Provisioning software provides wide striping and thin provisioning that dramatically improve performance, capacity utilization and management of your environment. By deploying your Exchange Server 2010 mailbox servers using volumes from Hitachi Dynamic Provisioning storage pools on the Hitachi Adaptable Modular Storage 2500, you can expect the following benefits:

An improved I/O buffer to burst into during peak usage times

A smoothing effect to the Exchange workload that can eliminate hot spots, resulting in reduced mailbox moves related to performance

Minimization of excess, unused capacity by leveraging the combined capabilities of all disks comprising a storage pool

Hitachi Dynamic Link Manager Software

The Hitachi Dynamic Link Manager software was used for SAN multipathing and was configured with the round-robin multipathing policy. Hitachi Dynamic Link Manager software’s round-robin load balancing algorithm automatically selects a path by rotating through all available paths, thus balancing the load across all available paths and optimizing IOPS and response time.

Hitachi Compute Systems Manager

Hitachi Compute Systems Manager is a suite of programs for centralized management of multiple servers. Compute Systems Manager provides functions for managing servers efficiently, including functions to manage server software configurations and monitor server operating statuses and failures.

The Server Management Module component of the Compute Systems Manager is required to implement an N+1 or N+M cold standby configuration. A license is required to enable this feature.

Exchange Server 2010

Exchange Server 2010 introduces several architectural changes that need to be considered when planning deployments on a Hitachi Adaptable Modular Storage 2000 system.

Database Availability Groups

To support database mobility and site resiliency, Exchange Server 2010 introduced Database Availability Groups (DAGs). A DAG is an object in Active Directory that can include up to 16 mailbox servers that host a set of databases; any server within a DAG has the ability to host a copy of a mailbox database from any other server within the DAG. DAGs support mailbox database replication, database and server switchovers, and failovers. Setting up a Windows failover cluster is no longer necessary for high availability; however, the prerequisites for setting up a DAG are similar to that of a failover cluster.

Hitachi Data Systems recommends using DAGs for high availability and mailbox resiliency.


Databases

In Exchange Server 2010, the changes to the Extensible Storage Engine (ESE) enable the use of large databases on larger, slower disks while maintaining adequate performance. Exchange 2010 supports databases up to approximately 16TB but Microsoft recommends using databases of 2TB or less if DAGs are used.

The Exchange Store’s database tables make better use of the underlying storage system and cache and the store no longer relies on secondary indexing, making it less sensitive to performance issues.

Solution Design

This reference architecture describes a solution designed for 48,000 Exchange 2010 users. Each user has a 1GB mailbox, and the user profile is based on sending and receiving 100 messages per day. This solution has 36 databases, with 1,333 users per database. With a two-datacenter, 12-mailbox-server configuration, each server supports 4,000 users. Each server houses the following:

Three active databases

Three passive databases that are copies of the active databases in the other datacenter

If a datacenter fails, the other datacenter takes control and handles all 48,000 users (see the sketch below).
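
For readers who want to check the arithmetic behind this layout, the following minimal Python sketch reproduces the user and database distribution described above. It is illustrative only and is not part of the original design tooling; all constants come from this paper.

```python
# Arithmetic behind the database layout (constants taken from this paper).
TOTAL_USERS = 48_000
DATABASES = 36
MAILBOX_SERVERS = 12                  # six per datacenter

users_per_database = TOTAL_USERS / DATABASES                        # ~1,333
active_users_per_server = TOTAL_USERS / MAILBOX_SERVERS             # 4,000 during normal operation
switchover_users_per_server = TOTAL_USERS / (MAILBOX_SERVERS // 2)  # 8,000 if a datacenter fails

print(f"{users_per_database:.0f} users/database, "
      f"{active_users_per_server:.0f} active users/server, "
      f"{switchover_users_per_server:.0f} users/server after a site failure")
```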

Table 2 lists the detailed information about the hardware components used in the Hitachi Data Systems lab.

Table 2. Hardware Components

Hardware | Description | Version | Quantity
Hitachi Adaptable Modular Storage 2500 storage system | Dual controller; 16 × 8Gbps Fibre Channel ports; 16GB cache memory; 244 SAS 2TB 7.2K RPM disks | 0897/H-Z | 2
Brocade 5300 switch | 8Gb Fibre Channel ports | FOS 6.4.0E | 4
Hitachi Compute Blade 2000 chassis | 8-blade chassis; 16 × 8Gb/sec dual-port HBAs; 2 × management modules; 2 × 1Gbps LAN switch modules; 8 × cooling fan modules; 4 × power supply modules | A0154-E-5234 | 2
Hitachi Compute Blade 2000 X55A2 blades | Full blade; 2 × 6-core Intel Xeon X5670 2.93GHz; 80GB memory | 58.22 | 16
Hitachi dual-port HBA | Dual-port 8Gb/sec Fibre Channel PCIe card | 4.2.6.670 | 32


Table 3 lists the software components used in the Hitachi Data Systems lab.

Table 3. Software Components

Software | Version
Hitachi Storage Navigator Modular 2 | Microcode dependent
Hitachi Dynamic Provisioning | Microcode dependent
Hitachi Dynamic Link Manager | 6.5
Microsoft Windows Server | 2008 R2 Enterprise
Microsoft Exchange Server | 2010 SP1
Hitachi Compute Systems Manager | 80-90-A

Hitachi Compute Blade 2000 Chassis Configuration

This reference architecture uses two Hitachi Compute Blade 2000 chassis, 16 X55A2 blades, four standard 1Gbps LAN switch modules, and 32 Hitachi dual port 8Gbps HBA cards. Each blade has two on-board NICs, and each NIC is connected to a LAN switch module. Each blade has two PCIe slots available, and all are populated with Hitachi dual port 8Gbps HBA cards. Hitachi HBAs are required for HVM mode. With Hitachi HBAs, when HVM is enabled, eight virtual WWNs are created on each blade.

Figure 4 shows the front and back views of the Hitachi Compute Blade 2000 used in this solution.

Figure 4


Hitachi Compute Blade 2000 Blade Configuration

When designing a server infrastructure for an Exchange 2010 environment, user profiling is the first and most critical piece in the design process. Hitachi Data Systems evaluated the following factors when designing this reference architecture:

Total number of user mailboxes

Mailbox size limit

Total messages sent and received per day

Average message size

Deleted item retention

Single item recovery

With this information, Hitachi Data Systems determined the correct sizing and number of Exchange mailbox roles to deploy using the Exchange 2010 Mailbox Server Role Requirements Calculator, which is an Excel spreadsheet created and supported by the Exchange product group. The goal of the calculator is to provide guidance on the I/O and capacity requirements and a storage design. This complex spreadsheet follows the latest Exchange product group recommendations on storage, memory, and mailbox sizing. Table 4 lists the optimized configurations for the Exchange roles and servers for the environment described in this white paper.

Table 4. Hitachi Compute Blade 2000 Configuration

Site | Blade | LPAR | Server Name | Role | Number of CPU Cores | Memory (GB)
Datacenter A | Blade 0 | N/A | N/A | N+1 failover standby | 12 | 80
Datacenter A | Blade 1 | LPAR1 | BS-DC1 | Active Directory and DNS | 4 | 8
Datacenter A | Blade 1 | LPAR2 | BS-Mgmt1 | Blade Management Server | 2 | 4
Datacenter A | Blade 2 | LPAR1 | BS-MBX1 | Mailbox Server | 6 | 64
Datacenter A | Blade 2 | LPAR2 | BS-CASHT1 | Client Access and Hub Transport Server | 6 | 16
Datacenter A | Blade 3 | LPAR1 | BS-MBX2 | Mailbox Server | 6 | 64
Datacenter A | Blade 3 | LPAR2 | BS-CASHT2 | Client Access and Hub Transport Server | 6 | 16
Datacenter A | Blade 4 | LPAR1 | BS-MBX3 | Mailbox Server | 6 | 64
Datacenter A | Blade 4 | LPAR2 | BS-CASHT3 | Client Access and Hub Transport Server | 6 | 16
Datacenter A | Blade 5 | LPAR1 | BS-MBX4 | Mailbox Server | 6 | 64
Datacenter A | Blade 5 | LPAR2 | BS-CASHT4 | Client Access and Hub Transport Server | 6 | 16
Datacenter A | Blade 6 | LPAR1 | BS-MBX5 | Mailbox Server | 6 | 64
Datacenter A | Blade 6 | LPAR2 | BS-CASHT5 | Client Access and Hub Transport Server | 6 | 16
Datacenter A | Blade 7 | LPAR1 | BS-MBX6 | Mailbox Server | 6 | 64
Datacenter A | Blade 7 | LPAR2 | BS-CASHT6 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 0 | N/A | N/A | N+1 failover standby | 12 | 80
Datacenter B | Blade 1 | LPAR1 | BS-DC2 | Active Directory and DNS | 4 | 8
Datacenter B | Blade 1 | LPAR2 | BS-Mgmt2 | Blade Management Server | 2 | 4
Datacenter B | Blade 2 | LPAR1 | BS-MBX7 | Mailbox Server | 6 | 64
Datacenter B | Blade 2 | LPAR2 | BS-CASHT7 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 3 | LPAR1 | BS-MBX8 | Mailbox Server | 6 | 64
Datacenter B | Blade 3 | LPAR2 | BS-CASHT8 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 4 | LPAR1 | BS-MBX9 | Mailbox Server | 6 | 64
Datacenter B | Blade 4 | LPAR2 | BS-CASHT9 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 5 | LPAR1 | BS-MBX10 | Mailbox Server | 6 | 64
Datacenter B | Blade 5 | LPAR2 | BS-CASHT10 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 6 | LPAR1 | BS-MBX11 | Mailbox Server | 6 | 64
Datacenter B | Blade 6 | LPAR2 | BS-CASHT11 | Client Access and Hub Transport Server | 6 | 16
Datacenter B | Blade 7 | LPAR1 | BS-MBX12 | Mailbox Server | 6 | 64
Datacenter B | Blade 7 | LPAR2 | BS-CASHT12 | Client Access and Hub Transport Server | 6 | 16

For more information about the Microsoft Exchange sizing calculator, see the “Exchange 2010 Mailbox Server Role Requirements Calculator” entry on the Microsoft Exchange Team Blog.

Processor Capacity

With the release of Exchange 2010, Microsoft has new processor configuration recommendations for servers that host the mailbox role due to the implementation of mailbox resiliency. Processor sizing is now based on two factors: whether the server hosts both active and passive database copies, and the number of database copies. A passive database copy requires CPU resources to perform the following tasks:

Check or validate replicated logs

Replay replicated logs into the database

Maintain the content index associated with the database copy


Physical Memory

This reference architecture supports 48,000 users who send and receive 100 messages per day. Based on Table 5, which lists Microsoft's guidelines for database cache and estimated IOPS, the mailbox resiliency estimate is 0.10 IOPS per mailbox; with a 20 percent overhead this becomes 0.12 IOPS, with 6MB of database cache per mailbox.

Table 5. Database Cache Guidelines

Messages Sent and Received per Mailbox per Day | Database Cache per Mailbox (MB) | Standalone Estimated IOPS per Mailbox | Mailbox Resiliency Estimated IOPS per Mailbox
50 | 3 | 0.06 | 0.05
100 | 6 | 0.12 | 0.10
150 | 9 | 0.18 | 0.15
200 | 12 | 0.24 | 0.20
250 | 15 | 0.30 | 0.25
300 | 18 | 0.36 | 0.30
350 | 21 | 0.42 | 0.35
400 | 24 | 0.48 | 0.40
450 | 27 | 0.54 | 0.45
500 | 30 | 0.60 | 0.50

Based on Table 5, Hitachi Data Systems calculated the database cache size as 8,000 × 6MB = 48GB per mailbox server, where 8,000 is the worst-case number of users on a server when all six of its database copies are active after a datacenter failure.

After determining the database cache size of 48GB, Hitachi Data Systems used Table 6, which lists Microsoft's guidelines for determining physical memory capacity, to determine the physical memory required for this solution. Based on this mailbox count and user profile, 64GB is the ideal memory configuration (see the sketch after Table 6).

Table 6. Physical Memory Guidelines

Server Physical Memory (GB) | Database Cache Size (GB)
2 | 0.5
4 | 1.0
8 | 3.6
16 | 10.4
24 | 17.6
32 | 24.4
48 | 39.2
64 | 53.6
96 | 82.4
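
The memory sizing above can be reproduced with a short Python sketch. The per-mailbox cache value and the memory-to-cache mapping are transcribed from Tables 5 and 6; the use of 8,000 users per server reflects the worst case in which all six of a server's database copies are active after a datacenter failure. This is an illustrative check, not Hitachi's or Microsoft's sizing tool.

```python
# Illustrative check of the memory sizing (values transcribed from Tables 5 and 6).
CACHE_PER_MAILBOX_MB = 6              # 100 messages/day profile (Table 5)
WORST_CASE_USERS_PER_SERVER = 8_000   # all six database copies active after a site failure

required_cache_gb = WORST_CASE_USERS_PER_SERVER * CACHE_PER_MAILBOX_MB / 1000  # 48 GB, as in the text

# Table 6: server physical memory (GB) -> usable database cache (GB)
MEMORY_TO_CACHE_GB = {2: 0.5, 4: 1.0, 8: 3.6, 16: 10.4, 24: 17.6,
                      32: 24.4, 48: 39.2, 64: 53.6, 96: 82.4}

# Smallest standard memory size whose database cache allowance covers the requirement.
memory_gb = min(m for m, cache in MEMORY_TO_CACHE_GB.items() if cache >= required_cache_gb)
print(f"required cache = {required_cache_gb:.0f}GB -> {memory_gb}GB physical memory per mailbox server")
```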


N+1 Cold Standby Configuration

Hitachi Compute Blade 2000’s N+1 cold standby feature is used in this solution to enhance the reliability of the entire Exchange environment. When combined with Exchange’s DAG feature, Hitachi Compute Blade 2000’s N+1 cold standby feature allows rapid recovery from server or database failures:

DAG allows recovery within seconds in the event of a failed database or server, but all affected users are then hosted on a single server

N+1 cold standby allows full recovery within minutes in the event of a failed server, with users again distributed across two servers

See the “Engineering Validation” section for relevant test results.

In this reference architecture, blade 0 is allocated as a cold standby blade. Hitachi Compute Systems Manager is required for this feature, and it is installed on the management server on blade 1. Exchange servers are installed on blades 2-7 on the Hitachi Compute Blade 2000 at each site, and they are registered as active servers in Compute Systems Manager. When hardware failure occurs, failover to the standby blade is performed automatically, and hardware related information is automatically transferred using the blade management network during the failover. You do not need to change the SAN or LAN configuration after the failover.

Hitachi Data Systems considered the following requirements when configuring the N+1 cold standby feature for this reference architecture:

The standby blade must be in the same mode (basic or HVM) as the active blades. An active blade in HVM mode cannot fail over to a standby blade in basic mode; likewise, an active blade in basic mode cannot fail over to a blade in HVM mode.

The operating systems installed on active blades must be configured for SAN boot.


Active Directory Design

In a multiple-datacenter design, place two Active Directory domain controllers with DNS and the Global Catalog installed in each site for redundancy. Do not span Active Directory across the WAN link, as doing so increases bandwidth consumption and latency. In the Hitachi Data Systems lab environment, a domain controller was installed on blade 1 LPAR 1 at each datacenter.

For more information about Active Directory, see Microsoft’s TechNet article “Active Directory Services.”

Exchange Server Roles Design

In this reference architecture, the client access server and hub transport server roles are combined on one LPAR, while the mailbox server role is installed on a separate LPAR. The primary reasons are to minimize the number of servers and operating system instances to manage and to optimize performance for planned or unplanned failover scenarios.

Mailbox High Availability Design

In this reference architecture, only two copies of the databases are deployed: one active and one passive. The decision to use a single DAG was based on Microsoft’s recommendation to minimize the number of DAGs and to consider using more than one DAG only if one of the following conditions applies to your environment:

You deploy more than 16 mailbox servers.

You require separate DAG-level administrative boundaries.

You have mailbox servers in separate domains.

Figure 5 shows the layout for two database copies using 12 mailbox servers configured in a single stretched DAG. When using a DAG stretched between two datacenters, it is recommended to pre-configure an alternate witness server for redundancy. If a datacenter fails, the other one takes control and handles all 48,000 users.

Figure 5


Microsoft recommends having at least three database copies (one active and two passive) when deploying with direct-attached storage (DAS) or just a bunch of disks (JBOD). This is because both server failure and storage (hard drive) failure need to be taken into consideration. However, in this reference architecture, only two copies of the databases are deployed (one active and one passive) because the Exchange mailbox databases reside on a Hitachi Adaptable Modular Storage 2500 storage system, which provides high-performance, intelligent, RAID-protected storage that reduces the possibility of a storage failure. Reducing the database copies to two instead of three provides a number of benefits, including these:

Uses less storage

Requires less server resources

Consumes less network bandwidth to replicate passive databases

Reduces the number of databases to manage

The number of database copies required in a production environment also depends on factors such as the use of lagged database copies and the backup and recovery methodologies used.

Site Resiliency Design

In this reference architecture, site resiliency is ensured by placing the second Hitachi Compute Blade 2000 and Hitachi Adaptable Modular Storage 2500 in datacenter B. In the case of a site failure, that Hitachi Compute Blade 2000 can handle all 36 active databases (48,000 users).

For more information about mailbox and site resiliency, see Microsoft’s TechNet article “High Availability and Site Resilience.”

Client Access Namespace Design

Namespace design is considered an important topic when deploying Exchange with multiple datacenters. Each datacenter in a site resilient architecture is considered to be active to ensure successful switchovers between datacenters. Use a split DNS model where the same namespace is used in datacenters A and B.

For more information about client access namespaces, see Microsoft's TechNet article "Understanding Client Access Server Namespaces."

SAN Architecture

For high availability purposes, the storage area network (SAN) configuration for this reference architecture uses two Fibre Channel switches at each site. SAN boot volumes for the OS are connected to two HBA ports. Exchange volumes are connected to four HBA ports. Four redundant paths from the switches to the storage ports are configured for Exchange volumes, and two redundant paths from the switches to the storage ports are used for SAN OS boot volumes. Ports 0A, 1A, 0E and 1E on the Hitachi Adaptable Modular Storage 2500 are used for Exchange database and log volumes, and ports 0C and 1C are allocated for SAN OS boot volumes. Hitachi Dynamic Link Manager software is used for multipathing with the round-robin load balancing algorithm.


Figure 6 shows the SAN design used in this reference architecture.

Figure 6

Network Architecture

When deploying Exchange mailbox servers in a DAG, Hitachi Data Systems recommends having two separate local area network subnets available to the members: one for client access and the other for replication purposes. The configuration is similar to the public, mixed and private networks used in previous versions of Exchange. In Exchange 2010, the two networks have new names:

MAPI network — Used for communication between Exchange servers and client access

Replication network — Used for log shipping and seeding


Using a single network is a Microsoft-supported configuration, but it is not recommended by Hitachi Data Systems. Having at least two networks connected to two separate network adapters in each server provides redundancy and enables Exchange to distinguish between a server failure and a network failure. Each DAG member must have the same number of networks and at least one MAPI network.

This reference architecture uses two virtual NICs from the LAN switch modules in the Hitachi Compute Blade 2000 systems.

The MAPI and blade management networks are connected to NIC1 for blades 0-7. Blade 0 acts as the N+1 cold standby server, blade 1 hosts the domain controller and Compute Systems Manager, and blades 2-7 host the Exchange mailbox, client access and hub transport servers. The replication network is connected to NIC2 for blades 2-7. A baseboard management controller (BMC) is built into each blade and is internally connected to the management module. The management module, BMC and HVM work in coordination to report hardware failures. Compute Systems Manager can monitor the blades through the blade management network.

Figure 7 shows the network configuration for all 16 blades.

Figure 7


Storage Architecture

Sizing and configuring storage for use with Exchange 2010 can be a complicated process, driven by many factors such as I/O and capacity requirements. The following sections describe how Hitachi Data Systems determined storage sizing for this reference architecture.

I/O Requirements

When designing the storage architecture for Exchange 2010, always start by calculating the I/O requirements. You must determine how many IOPS each mailbox needs. This is also known as determining the I/O profile. Microsoft has guidelines and tools available to help you determine this number. Two factors are used to estimate the I/O profile: how many messages a user sends and receives per day and the amount of database cache available to the mailbox. The database cache (which is located on the mailbox server) is used by the ESE to reduce I/O operations. Generally, more cache means fewer I/O operations eventually hitting the storage system. Table 7 lists Microsoft’s guidelines for estimated IOPS per mailbox.

Table 7. Estimated IOPS per Mailbox

Messages Sent and Received per Mailbox per Day | Database Cache per Mailbox (MB) | Standalone Estimated IOPS per Mailbox | Mailbox Resiliency Estimated IOPS per Mailbox
50 | 3 | 0.06 | 0.05
100 | 6 | 0.12 | 0.10
150 | 9 | 0.18 | 0.15
200 | 12 | 0.24 | 0.20
250 | 15 | 0.30 | 0.25
300 | 18 | 0.36 | 0.30
350 | 21 | 0.42 | 0.35
400 | 24 | 0.48 | 0.40
450 | 27 | 0.54 | 0.45
500 | 30 | 0.60 | 0.50

For this reference architecture, an I/O profile of 100 messages a day, or 0.1 IOPS per mailbox, was used. To ensure that the architecture can provide sufficient overhead for periods of extremely high workload, Hitachi adds 20 percent overhead for testing scenarios for a total of 0.12 IOPS.

To calculate the total number of host IOPS for an Exchange environment, Hitachi Data Systems used the following formula:

Number of users × estimated IOPS per mailbox = required host IOPS

For example:

48,000 users × 0.12 IOPS = 5,760 IOPS

Because this reference architecture uses 12 servers, the host IOPS calculation is divided by 12. This means that each of the 12 mailbox servers in this reference architecture must be able to support 480 IOPS.


This calculation provides the number of application IOPS required by the host to service the environment, but it does not calculate the exact number of physical IOPS required on the storage side. Additional calculations were performed to factor in the read-write ratio used by Exchange Server 2010 and the write penalty incurred by the various types of RAID levels.

The transaction logs in Exchange Server 2010 require approximately 10 percent as many I/Os as the databases. After you calculate the transactional log I/O, Microsoft recommends adding another 20 percent overhead to ensure adequate capacity for busier-than-normal periods.
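
The IOPS arithmetic in this section can be summarized in a short Python sketch. The 60/40 read-to-write split and the RAID-1+0 write penalty of 2 used below are illustrative assumptions; the paper states only that the read-write ratio and RAID write penalty were factored in, without giving the exact values.

```python
# Sketch of the IOPS arithmetic. The 60/40 read:write split and the RAID-1+0
# write penalty of 2 are illustrative assumptions; the paper states only that
# these factors were included in the calculation.
USERS = 48_000
IOPS_PER_MAILBOX = 0.12                 # 0.10 profile + 20 percent overhead
MAILBOX_SERVERS = 12

host_iops_total = USERS * IOPS_PER_MAILBOX                 # 5,760
host_iops_per_server = host_iops_total / MAILBOX_SERVERS   # 480

# Transaction log I/O is roughly 10 percent of database I/O, plus 20 percent headroom (per the text).
log_iops_per_server = host_iops_per_server * 0.10 * 1.20   # ~58

# Disk-side database IOPS per server under the assumed read:write split and RAID penalty.
READ_RATIO, WRITE_RATIO = 0.6, 0.4
RAID10_WRITE_PENALTY = 2
disk_iops_per_server = (host_iops_per_server * READ_RATIO
                        + host_iops_per_server * WRITE_RATIO * RAID10_WRITE_PENALTY)  # 672 with these assumptions

print(host_iops_total, host_iops_per_server, log_iops_per_server, disk_iops_per_server)
```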

Log Requirements

As message size increases, the number of logs generated per day grows. According to Microsoft, if message size doubles to 150K, the logs generated per mailbox increases by a factor of 1.9. If message size doubles again to 300K, the factor of 1.9 doubles to 3.8, and so on.

Hitachi Data Systems considered these additional factors when determining transaction log capacity for this reference architecture:

Backup and restore factors

Move mailbox operations

Log growth overhead

High availability factors

The transaction log files maintain a record of every transaction and operation performed by the Exchange 2010 database engine. Transactions are written to the log first then written to the database.

The message size and I/O profile (based on the number of messages per mailbox per day) can help estimate how many transaction logs are generated per day. Table 8 lists guidelines for estimating how many transaction logs are generated for a 75K average message size.

Table 8. Number of Transaction Logs Generated per I/O Profile for 75K Average Message

I/O Profile (Messages per Day) | Transaction Logs Generated per Day
50 | 10
100 | 20
150 | 30
200 | 40
250 | 50
300 | 60
350 | 70
400 | 80
450 | 90
500 | 100
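
The guideline in Table 8, combined with the message-size rule described above (the log count grows by a factor of about 1.9 each time the average message size doubles), can be expressed as a small Python helper. The continuous scaling below is an approximation of that rule for illustration, not a formula from Microsoft or Hitachi.

```python
# Estimate logs generated per mailbox per day: Table 8 gives the count at a
# 75 KB average message size; each doubling of message size multiplies the
# count by roughly 1.9 (per the text). The continuous form is an approximation.
import math

LOGS_PER_DAY_AT_75KB = {50: 10, 100: 20, 150: 30, 200: 40, 250: 50,
                        300: 60, 350: 70, 400: 80, 450: 90, 500: 100}

def estimated_logs_per_mailbox_per_day(messages_per_day, avg_message_kb=75):
    base = LOGS_PER_DAY_AT_75KB[messages_per_day]        # profile must be one of the Table 8 rows
    doublings = math.log2(avg_message_kb / 75) if avg_message_kb > 75 else 0
    return base * (1.9 ** doublings)

print(estimated_logs_per_mailbox_per_day(100))        # 20 logs/day at 75 KB
print(estimated_logs_per_mailbox_per_day(100, 150))   # ~38 logs/day at 150 KB
```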


If you plan to include lag copies in your Exchange environment, you must determine capacity for both the database copy and the logs. The log capacity requirements depend on the delay and usually require more capacity than the non-lagged copy. For this reference architecture, no lag copy was used.

The amount of space required for logs also depends on your backup methodology and how often logs are truncated.

RAID Configuration

To satisfy 48,000 users needing 1GB of mailbox capacity and an I/O profile of 0.12 IOPS, this reference architecture uses two Hitachi Adaptable Modular Storage 2500 systems with a RAID-1+0 (2D+2D) configuration of 2TB 7200 RPM SAS drives. RAID-1+0 is primarily used for overall performance. RAID-1+0 (2D+2D) was used for this solution because Hitachi Data Systems testing shows it is an optimal configuration for this disk size and speed. Three Hitachi Dynamic Provisioning pools were created on each Adaptable Modular Storage 2500 for the SAN OS boot volumes, Exchange database volumes and transaction log volumes. SAN OS boot volumes are stored in DP pool 0, Exchange database volumes are stored in DP pool 1 and log volumes are stored in DP pool 2. Table 9 lists the details of the storage configuration used in this lab.

Table 9. Dynamic Provisioning Configuration for SAN OS Boot and Exchange Databases and Logs

Site | Dynamic Provisioning Pool | Number of RAID Groups | Number of Drives | Usable Pool Capacity (TB)
Datacenter A | 0 | 1 | 4 | 3.5
Datacenter A | 1 | 60 | 240 | 218.3
Datacenter A | 2 | 6 | 24 | 21.3
Datacenter B | 0 | 1 | 4 | 3.5
Datacenter B | 1 | 60 | 240 | 218.4
Datacenter B | 2 | 6 | 24 | 21.3
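
The usable pool capacities in Table 9 follow from the RAID-1+0 (2D+2D) layout: half of each group's four 2TB drives hold data, and the nominal decimal terabytes appear to be reported in binary terabytes. The Python sketch below shows that arithmetic; the small remaining gap for the boot and log pools is assumed to be array formatting overhead.

```python
# Sketch of the usable-capacity arithmetic for the Dynamic Provisioning pools.
# RAID-1+0 (2D+2D): two of the four drives in each group hold data.
DRIVE_TB_DECIMAL = 2.0                 # 2 TB = 2 x 10^12 bytes per drive
TIB = 2**40 / 1e12                     # one binary TiB expressed in decimal TB

def pool_capacity_tib(raid_groups, data_drives_per_group=2):
    raw_data_tb = raid_groups * data_drives_per_group * DRIVE_TB_DECIMAL
    return raw_data_tb / TIB

for name, groups in [("Pool 0 (boot)", 1), ("Pool 1 (databases)", 60), ("Pool 2 (logs)", 6)]:
    print(f"{name}: ~{pool_capacity_tib(groups):.1f} TiB")   # ~3.6, ~218.3, ~21.8
```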


Drive Configuration

Figure 8 shows the drive configuration on the Hitachi Adaptable Modular Storage 2500 used for this solution. The configuration is the same for both datacenters.

Figure 8


Pool Configuration

Pool 1 is used for Exchange databases. Pool 2 is used for Exchange logs. Both pools are connected to storage ports 0A, 1A, 0E and 1E for MPIO. Hitachi Data Systems recommends keeping the database and log on separate pools for performance reasons. Table 10 shows LUN allocation for each of the Dynamic Provisioning pools used in the Hitachi Data Systems lab.

Table 10. LUN Allocation for Exchange Databases and Logs

Site | Host | Purpose | LUNs | Size per LUN (GB) | Dynamic Provisioning Pool
Datacenter A | BS-MBX1 | Database | 200-202 | 2000 | 1
Datacenter A | BS-MBX1 | Log | 203-205 | 100 | 2
Datacenter A | BS-MBX1 | Database copy | 206-208 | 2000 | 1
Datacenter A | BS-MBX1 | Log copy | 209-211 | 100 | 2
Datacenter A | BS-MBX2 | Database | 212-214 | 2000 | 1
Datacenter A | BS-MBX2 | Log | 215-217 | 100 | 2
Datacenter A | BS-MBX2 | Database copy | 218-220 | 2000 | 1
Datacenter A | BS-MBX2 | Log copy | 221-223 | 100 | 2
Datacenter A | BS-MBX3 | Database | 224-226 | 2000 | 1
Datacenter A | BS-MBX3 | Log | 227-229 | 100 | 2
Datacenter A | BS-MBX3 | Database copy | 230-232 | 2000 | 1
Datacenter A | BS-MBX3 | Log copy | 233-235 | 100 | 2
Datacenter A | BS-MBX4 | Database | 236-238 | 2000 | 1
Datacenter A | BS-MBX4 | Log | 239-241 | 100 | 2
Datacenter A | BS-MBX4 | Database copy | 242-244 | 2000 | 1
Datacenter A | BS-MBX4 | Log copy | 245-247 | 100 | 2
Datacenter A | BS-MBX5 | Database | 248-250 | 2000 | 1
Datacenter A | BS-MBX5 | Log | 251-253 | 100 | 2
Datacenter A | BS-MBX5 | Database copy | 254-256 | 2000 | 1
Datacenter A | BS-MBX5 | Log copy | 257-259 | 100 | 2
Datacenter A | BS-MBX6 | Database | 260-262 | 2000 | 1
Datacenter A | BS-MBX6 | Log | 263-265 | 100 | 2
Datacenter A | BS-MBX6 | Database copy | 266-268 | 2000 | 1
Datacenter A | BS-MBX6 | Log copy | 269-271 | 100 | 2
Datacenter B | BS-MBX7 | Database | 109-111 | 2000 | 1
Datacenter B | BS-MBX7 | Log | 112-114 | 100 | 2
Datacenter B | BS-MBX7 | Database copy | 115-117 | 2000 | 1
Datacenter B | BS-MBX7 | Log copy | 118-120 | 100 | 2
Datacenter B | BS-MBX8 | Database | 121-123 | 2000 | 1
Datacenter B | BS-MBX8 | Log | 124-126 | 100 | 2
Datacenter B | BS-MBX8 | Database copy | 127-129 | 2000 | 1
Datacenter B | BS-MBX8 | Log copy | 130-132 | 100 | 2
Datacenter B | BS-MBX9 | Database | 169-171 | 2000 | 1
Datacenter B | BS-MBX9 | Log | 172-174 | 100 | 2
Datacenter B | BS-MBX9 | Database copy | 175-177 | 2000 | 1
Datacenter B | BS-MBX9 | Log copy | 178-180 | 100 | 2
Datacenter B | BS-MBX10 | Database | 133-135 | 2000 | 1
Datacenter B | BS-MBX10 | Log | 136-138 | 100 | 2
Datacenter B | BS-MBX10 | Database copy | 139-141 | 2000 | 1
Datacenter B | BS-MBX10 | Log copy | 142-144 | 100 | 2
Datacenter B | BS-MBX11 | Database | 145-147 | 2000 | 1
Datacenter B | BS-MBX11 | Log | 148-150 | 100 | 2
Datacenter B | BS-MBX11 | Database copy | 151-153 | 2000 | 1
Datacenter B | BS-MBX11 | Log copy | 154-156 | 100 | 2
Datacenter B | BS-MBX12 | Database | 157-159 | 2000 | 1
Datacenter B | BS-MBX12 | Log | 160-162 | 100 | 2
Datacenter B | BS-MBX12 | Database copy | 163-165 | 2000 | 1
Datacenter B | BS-MBX12 | Log copy | 166-168 | 100 | 2


SAN OS Boot Configuration

Each active server blade is divided into two logical partitions, and each LPAR operates as an independent server. To use LPARs on the blades, SAN OS boot is required. This reference architecture uses one Dynamic Provisioning pool on each Hitachi Adaptable Modular Storage 2500 system for the OS boot volumes. Each boot volume is 100GB in size and mapped to storage ports 0C and 1C. Table 11 lists LUN allocations for the SAN OS boot volumes.

Table 11. LUN Allocation for the SAN OS Boot

Site | Server | Server Role | LUN
Datacenter A | Blade 1 LPAR 1 | Active Directory 1 | 10
Datacenter A | Blade 1 LPAR 2 | Compute Systems Manager 1 | 11
Datacenter A | Blade 2 LPAR 1 | Mailbox 1 | 12
Datacenter A | Blade 2 LPAR 2 | Client access 1 / Hub transport 1 | 13
Datacenter A | Blade 3 LPAR 1 | Mailbox 2 | 14
Datacenter A | Blade 3 LPAR 2 | Client access 2 / Hub transport 2 | 15
Datacenter A | Blade 4 LPAR 1 | Mailbox 3 | 16
Datacenter A | Blade 4 LPAR 2 | Client access 3 / Hub transport 3 | 17
Datacenter A | Blade 5 LPAR 1 | Mailbox 4 | 18
Datacenter A | Blade 5 LPAR 2 | Client access 4 / Hub transport 4 | 19
Datacenter A | Blade 6 LPAR 1 | Mailbox 5 | 20
Datacenter A | Blade 6 LPAR 2 | Client access 5 / Hub transport 5 | 21
Datacenter A | Blade 7 LPAR 1 | Mailbox 6 | 22
Datacenter A | Blade 7 LPAR 2 | Client access 6 / Hub transport 6 | 23
Datacenter B | Blade 1 LPAR 1 | Active Directory 2 | 24
Datacenter B | Blade 1 LPAR 2 | Compute Systems Manager 2 | 25
Datacenter B | Blade 2 LPAR 1 | Mailbox 7 | 26
Datacenter B | Blade 2 LPAR 2 | Client access 7 / Hub transport 7 | 27
Datacenter B | Blade 3 LPAR 1 | Mailbox 8 | 28
Datacenter B | Blade 3 LPAR 2 | Client access 8 / Hub transport 8 | 29
Datacenter B | Blade 4 LPAR 1 | Mailbox 9 | 30
Datacenter B | Blade 4 LPAR 2 | Client access 9 / Hub transport 9 | 31
Datacenter B | Blade 5 LPAR 1 | Mailbox 10 | 32
Datacenter B | Blade 5 LPAR 2 | Client access 10 / Hub transport 10 | 33
Datacenter B | Blade 6 LPAR 1 | Mailbox 11 | 34
Datacenter B | Blade 6 LPAR 2 | Client access 11 / Hub transport 11 | 35
Datacenter B | Blade 7 LPAR 1 | Mailbox 12 | 36
Datacenter B | Blade 7 LPAR 2 | Client access 12 / Hub transport 12 | 37

Engineering Validation

The following sections describe the testing used for validating the Exchange environment documented in this guide.

Exchange Jetstress 2010 Test

Exchange Jetstress 2010 is used to verify the performance and stability of a disk subsystem prior to putting an Exchange 2010 server into production. It helps verify disk performance by simulating Exchange database and log file I/O loads. It uses Performance Monitor, Event Viewer and ESEUTIL to ensure that the storage system meets or exceeds your performance criteria. Jetstress generates I/O based on Microsoft’s estimated IOPS per mailbox user profiles.

The test was performed on 12 servers, against three databases for each server, for a total of 36 databases for a 24-hour period. The goal was to verify the storage was able to handle the required I/O load for a long period of time. Table 12 lists the Jetstress parameters used in testing.

Table 12. Jetstress Test Parameters

Parameter | Value
Number of databases | 36
User profile (IOPS per mailbox) | 0.12
Number of users per database | 1,333
Total number of users | 48,000
Mailbox size | 1GB

A 24-hour test was run concurrently against the 12 servers and 36 database instances. All latency and achieved IOPS results met Microsoft requirements and all tests passed without errors.


Table 13 lists the transactional I/O performance for mailbox servers 1 through 12.

Table 13. Transactional I/O Performance

MBX | DB | I/O DB Reads Average Latency (msec) | I/O DB Writes Average Latency (msec) | I/O DB Reads/sec | I/O DB Writes/sec | I/O Log Writes Average Latency (msec) | I/O Log Writes/sec
1 | 1 | 16.340 | 2.694 | 137.668 | 81.679 | 0.619 | 69.111
1 | 2 | 15.138 | 2.973 | 137.769 | 81.728 | 0.631 | 68.709
1 | 3 | 18.805 | 2.701 | 137.941 | 81.722 | 0.630 | 69.369
2 | 4 | 15.685 | 2.784 | 156.703 | 92.652 | 0.586 | 77.481
2 | 5 | 15.045 | 2.610 | 156.199 | 92.340 | 0.632 | 77.258
2 | 6 | 15.167 | 2.742 | 155.766 | 92.058 | 0.600 | 76.863
3 | 7 | 15.302 | 4.009 | 153.049 | 90.633 | 0.637 | 75.676
3 | 8 | 15.089 | 3.713 | 153.761 | 91.048 | 0.587 | 76.475
3 | 9 | 14.086 | 2.710 | 153.141 | 90.677 | 0.780 | 74.761
4 | 10 | 16.661 | 4.735 | 152.283 | 90.092 | 0.691 | 75.059
4 | 11 | 14.199 | 3.462 | 152.506 | 90.302 | 0.646 | 75.008
4 | 12 | 15.068 | 3.221 | 152.182 | 90.113 | 0.586 | 75.699
5 | 13 | 14.915 | 3.709 | 149.777 | 88.758 | 0.562 | 74.325
5 | 14 | 16.121 | 4.451 | 149.784 | 88.737 | 0.722 | 73.731
5 | 15 | 14.634 | 3.141 | 149.998 | 88.883 | 0.543 | 74.535
6 | 16 | 14.989 | 1.444 | 182.674 | 91.487 | 0.533 | 77.188
6 | 17 | 15.220 | 3.370 | 182.612 | 91.797 | 0.395 | 77.958
6 | 18 | 15.044 | 2.487 | 183.152 | 92.050 | 0.400 | 78.085
7 | 19 | 15.762 | 2.738 | 158.289 | 92.845 | 0.593 | 76.147
7 | 20 | 14.597 | 2.669 | 158.164 | 92.839 | 0.598 | 75.630
7 | 21 | 14.634 | 2.459 | 157.916 | 92.681 | 0.634 | 75.901
8 | 22 | 14.651 | 2.646 | 173.283 | 102.214 | 0.659 | 85.038
8 | 23 | 13.284 | 2.477 | 173.278 | 102.246 | 0.594 | 84.761
8 | 24 | 13.770 | 2.634 | 172.998 | 102.055 | 0.647 | 84.688
9 | 25 | 13.922 | 2.713 | 181.327 | 107.008 | 0.575 | 87.558
9 | 26 | 13.256 | 2.586 | 181.731 | 107.190 | 0.601 | 86.933
9 | 27 | 13.001 | 2.501 | 181.803 | 107.243 | 0.625 | 87.355
10 | 28 | 13.938 | 2.594 | 180.219 | 106.297 | 0.658 | 87.794
10 | 29 | 12.943 | 2.500 | 180.192 | 106.231 | 0.594 | 87.510
10 | 30 | 13.382 | 2.745 | 180.470 | 106.363 | 0.593 | 87.652
11 | 31 | 14.222 | 2.722 | 170.152 | 100.393 | 0.583 | 83.435
11 | 32 | 14.142 | 2.671 | 171.032 | 100.856 | 0.602 | 83.152
11 | 33 | 13.945 | 2.539 | 170.170 | 100.308 | 0.645 | 83.386
12 | 34 | 14.263 | 2.652 | 169.105 | 99.715 | 0.569 | 82.818
12 | 35 | 13.782 | 2.459 | 169.259 | 99.778 | 0.596 | 82.836
12 | 36 | 14.234 | 2.728 | 169.523 | 99.926 | 0.599 | 83.068

Table 14 lists the I/O throughput for the 36 databases across mailbox servers 1 through 12.

Table 14. I/O Throughput

MBX | Achieved Transactional I/O per Second | Target Transactional I/O per Second
1 | 658.508 | 480
2 | 745.718 | 480
3 | 732.309 | 480
4 | 727.478 | 480
5 | 715.937 | 480
6 | 741.03 | 480
7 | 752.734 | 480
8 | 826.075 | 480
9 | 866.303 | 480
10 | 859.773 | 480
11 | 812.911 | 480
12 | 807.306 | 480
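
As a quick sanity check, the Python sketch below compares the achieved transactional IOPS in Table 14 with the 480 IOPS per-server target derived earlier; the values are transcribed directly from the table.

```python
# Check the Jetstress throughput results in Table 14 against the 480 IOPS
# per-server target (values transcribed from the table).
TARGET_IOPS = 480
achieved = {1: 658.508, 2: 745.718, 3: 732.309, 4: 727.478, 5: 715.937, 6: 741.03,
            7: 752.734, 8: 826.075, 9: 866.303, 10: 859.773, 11: 812.911, 12: 807.306}

assert all(iops > TARGET_IOPS for iops in achieved.values())   # every server exceeded the target
headroom = {mbx: iops / TARGET_IOPS for mbx, iops in achieved.items()}
print(f"lowest headroom: {min(headroom.values()):.2f}x on MBX{min(headroom, key=headroom.get)}")
```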


Exchange Loadgen 2010 Test

Exchange Load Generator is a pre-deployment validation and stress-testing tool that introduces various types of workloads into a test (non-production) Exchange messaging system. Exchange Load Generator lets you simulate the delivery of multiple MAPI client messaging requests to an Exchange server. To simulate the delivery of these messaging requests, you run Exchange Load Generator tests on client computers. These tests send multiple messaging requests to the Exchange server, which causes a mail load.

The simulation was set for a normal eight-hour test run with 36 databases, 12 mailbox servers and an Outlook 2007 Online Mode profile. The goal was for all tests to achieve a CPU utilization rate of less than 80 percent. All tests passed without errors.

Datacenter Switchover Test

In this test, the objective was to validate that the 12-member DAG design can sustain a single datacenter failure using a switchover. The switchover test was conducted by shutting down the six mailbox servers in datacenter A to determine whether datacenter B could handle the full 48,000 users with CPU utilization of less than 80 percent.

Table 15 lists the DAG test results.

Table 15. DAG Test Results

Test Criteria | Target | Result
Datacenter switchover | N/A | Passed
Switchover CPU utilization | <80% | 73%
Switchover time | N/A | 20 seconds

N+1 Cold Standby Switchover and Failover Tests

The objective of these tests was to demonstrate that in the case of a failure of an active blade, the N+1 blade can take over for the failed blade.

To perform these tests, Hitachi Compute Systems Manager and the Server Management Module were installed on the blade management server (BS-Mgmt1). Each blade server was registered to the Server Management Module, and an N+1 group was created. The standby failover blade and active blades were added into this N+1 group. The manual switchover test was performed from the Server Management Module. The failover test was performed from the management module by issuing a false hardware failure alert.

Table 16 shows the N+1 cold standby test results.

Table 16. N+1 Cold Standby Test Results

Test | Recovery Time
Switchover | 6 minutes, 33 seconds
Failover | 9 minutes, 2 seconds


Conclusion

This reference architecture guide documents a fully tested, highly available Exchange 2010 deployment for 48,000 users. This solution provides simplified planning, deployment and management by using hardware components from a single vendor.

The solution uses the Hitachi Compute Blade 2000, which extends the benefits of virtualization to new areas of the enterprise data center with minimal cost and maximum simplicity. It also allows rapid recovery, within minutes instead of hours or days, in the event of a server failure.

The Hitachi Adaptable Modular Storage 2500 is ideal for a demanding application like Exchange and delivers enterprise-class performance, capacity and functionality at a midrange price. It’s a midrange storage product with symmetric active-active controllers that provide integrated, automated, hardware-based, front-to-back-end I/O load balancing.

Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services web site.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources web site. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT) and virtual instructor-led training (vILT) courses.

For more information, see the Hitachi Data Systems Academy web site.

For more information about Hitachi products and services, contact your sales representative or channel partner or visit the Hitachi Data Systems web site.


Appendix A: Bill of Materials

This bill of materials lists the Hitachi Adaptable Modular Storage 2500 components and Hitachi Compute Blade 2000 components deployed in this solution.

Table 17 shows the part number, description and quantity of the storage system components used for this solution. Note that part numbers are subject to change; verify them before ordering.

Table 17. Hitachi Adaptable Modular Storage 2500 Components

Part Number | Description | Quantity
DF800-RKEH2.P | AMS2500 Base Unit | 1
DF-F800-RKAKX.P | AMS2000 High Density Disk Tray | 5
DF-F800-AWE2KX.P | AMS2000 2TB SAS 7.2K Disk | 244
DF-F800-DKF84.P | AMS2500 FC Interface 8Gbps Adapter | 8
DF-F800-C4GK.P | 4GB Cache Module | 8
043-100209-01.P | Power Cable 125VAC 15A NEMA 5-15P | 22
043-100210-01.P | Power Cable 250VAC 10A IEC320-C14 | 22
7846630.P | 42U AMS2000 Rack 1050mm Deep | 1
7846464.P | 3U Universal Rail Kit | 4

Table 18 shows the part number, description and quantity of the required Hitachi Compute Blade 2000 components.

Table 18. Hitachi Compute Blade 2000 Components

Part Number | Description | Quantity
GVX-SRE2A122NX1.P | Compute Blade 2000 Chassis Type | 1
GVAX55A2-KNNX14X.P | Compute Blade X55A2 Xeon Blade | 8
GVX-EC22932X1EX.P | Compute Blade Xeon X5670 6-Core 2.93GHz | 16
GVX-CC2N8G2X1BX.P | Compute Blade Dual Port 8Gb PCIe Card | 16
GVX-MJ2N4G1X1EX.P | Compute Blade 4GB Memory | 128
GVX-MJ2N8G2X1EX.P | Compute Blade 8GB Memory | 16
GVX-BE2LSW1X1BX.P | Compute Blade 1Gb LAN Switch Module | 2
GVX-BP2PWM1X1BX.P | Compute Blade Power Supply | 4
EZ-WTR-DVD-C24.P | USB DVD/CDRW Drive | 1


Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks and company names mentioned in this document are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation

© Hitachi Data Systems Corporation 2010-2011. All Rights Reserved. AS-080-01 October 2011

Corporate Headquarters: 750 Central Expressway, Santa Clara, California 95050-2627 USA, www.HDS.com

Regional Contact Information:
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com
