
Technical white paper

HP 3PAR StoreServ 7400 reference architecture for Microsoft SQL Server OLTP databases

Storage subsystem reference architecture

Table of contents

Executive summary
Introduction
Overview
Workload structure
Testing
HP 3PAR StoreServ Storage subsystem components and features
Hardware
Software
HP 3PAR StoreServ 7400 array capacity and sizing
Capacity and sizing for HP 3PAR StoreServ 7400 reference architecture
HP 3PAR StoreServ hardware sizing and configuration
Hardware configuration
Storage system physical view
Host environment settings
HP 3PAR StoreServ software configuration
SQL Server 2012 deployment
Database server instance and database deployment
Storage provisioning
Reference architecture testing
Adaptive Optimization
HP 3PAR Recovery Manager for SQL Server
Space reclamation
HP 3PAR StoreServ deployment settings and best practices
Front end port cabling
FC zoning
Common Provisioning Groups (CPGs)
Virtual Volumes (VVs)
Virtual LUNs
Adaptive Optimization
SQL deployment recommendations
Adaptive Optimization
Zero detect
SQL database file space reclamation
Summary
Appendix: bill of materials
Implementing a proof-of-concept


Executive summary

High availability and high performance are key requirements for Microsoft® SQL Server OLTP deployments since data access is considered business-critical for many organizations. For example, storage response time is critical for the performance of database transactions. Efficient storage is yet another key requirement as it reduces complexity in fast growing Microsoft SQL Server environments and eliminates costs of unnecessary capacity. Furthermore, with the massive growth of data management platforms, many Microsoft SQL Server instances are deployed on dedicated and frequently underutilized server hardware. IT organizations have long turned to server virtualization as a consolidation strategy to better utilize resources and reduce the amount of physical hardware running Microsoft SQL Server instances, but there was no simple solution to address data stranded in older storage devices caused by rigid architectures.

Today, HP 3PAR StoreServ Storage arrays extend resource virtualization beyond servers as another key enabling technology that continues to reduce costs, improve agility, and enhance business continuity.

The objective of this storage reference architecture is to address these IT challenges by designing, evaluating and testing an HP 3PAR StoreServ Storage configuration that serves as a tier 1 enterprise SQL Server 2012 storage platform capable of:

• Entry to middle level tier 1 enterprise OLTP performance

• Concurrent hosting of mission-critical and business-critical workloads (QoS)

• High availability consistent with SQL Server 2012

• Balanced performance and capacity efficiency

• Flexible configuration and growth options for multi-host tenancy

• Ease of management and capacity control

Based on an estimated 60,000-120,000 host IOPS OLTP I/O performance typically referenced for entry tier 1 enterprise SQL workloads, the HP 3PAR StoreServ 7400 is chosen as the storage array for this storage reference architecture. The performance rating of this StoreServ system configured with two nodes provides approximately 60,000 to 120,000 host IOPS depending on the RAID type used, leaving additional I/O, disk, and physical space in place for growth, effectively extending hardware refresh cycles. The HP 3PAR StoreServ 7400 can be further upgraded to a four-node configuration that can scale up to 320,000 backend IOPS.

Testing performed with this reference architecture demonstrates how HP 3PAR StoreServ features uniquely deliver a flexible, efficient and always-on storage platform ideal for both physical and virtual SQL Server 2012 deployment configurations.

For example, SQL Server 2012 workloads are tested to analyze array performance-related functionality such as Adaptive Optimization (AO) and capacity management-related features such as thin provisioning to illustrate the array’s ability to balance performance and capacity efficiency. This paper additionally provides best practice guidance for proof-of-concept implementations.

Last but not least, the ease of use of the HP 3PAR StoreServ management console makes it quick and easy to adjust the system to new performance demands or additional capacity requirements. When this is combined with Peer Motion data migration, the configuration can achieve a shorter, lower risk hardware refresh migration to the platform with minimal SQL Server database disruption.

Target audience: This storage reference architecture is intended to familiarize IT decision makers, database and solution architects, and system administrators with the capabilities and features of the HP 3PAR StoreServ 7400 and provide a tested configuration and best practice guidance for deploying SQL Server 2012 in an HP 3PAR StoreServ environment. Technical sections in this document assume a basic understanding of SAN storage concepts and familiarity with SQL Server deployment topologies.

This storage reference architecture is provided as a reference only since customer configurations will vary depending on their requirements or specific needs. The referenced configuration (number of nodes, disk enclosures, disks, etc.) represents a minimum configuration recommended for optimum high availability, although smaller configurations can be deployed. This white paper describes testing performed in May-July 2013.


Introduction

Note

SQL Server value is highlighted throughout this document to make it easier for readers who require high-level technical storage information.

Storage systems are key and high growth elements of today's data driven IT landscape. The addition of flash media options to this environment increases the complexity of managing different storage devices, making a purchase decision or implementation a non-trivial task. These challenges are compounded when multiple hosts and data workloads exist across many storage devices, and there is a large ROI in consolidating multiple systems into fewer, easier to manage ones. To that end, this storage reference architecture has been specifically tested with OLTP database workloads to provide customers in the process of evaluating storage options with a proven and sized storage reference architecture for their Microsoft SQL Server 2012 data storage needs. The sizing and workload tests are based on OLTP workloads with an 80/20 read/write mix.

The term “host IOPS” is used in this document to represent I/O operations per second as measured at the host ports of the StoreServ system, also commonly known as frontend IOPS. Host IOPS represent the aggregate I/O operations per second issued by all the SQL Server hosts connected to the test environment. StoreServ I/O performance chart labels do not specify frontend or backend, so readings must be interpreted according to the point of measurement: host port and virtual volume measurements typically represent host I/O characteristics, while disk port measurements represent backend I/O.

This storage reference architecture is specifically designed to service a range of database workloads that can be characterized as entry to middle level of enterprise storage systems, with host I/O workloads typically ranging from 60k-90k host IOPS for RAID5 configurations to 90k-120k host IOPS for RAID1 configurations.

Overview

The HP storage system chosen to meet these performance, management and reliability requirements is the HP 3PAR StoreServ 7400. In this particular configuration the HP 3PAR StoreServ 7400 is sized to approximately two thirds of the two-node capability of 160k backend IOPS, leaving room for increased workload and capacity growth.

Given the varied nature of performance requirements within an organization, it is not easy or economical to service all applications with a single high performance level of service. In order to meet multi-instance and multi-workload I/O demands HP designed the StoreServ family to provide three I/O service levels (high, middle, and low), based on the use of Solid State (SSD), Fibre Channel (FC), or Nearline (NL) media drives. In this particular configuration, only high and middle performance levels are provided; although, Nearline level of service, typically used for archived SQL data, can be achieved by adding NL media.

A static multi-tier media driven approach to storage aids in consolidating databases with varied performance requirements but it increases the complexity of deploying databases, forcing administrators to manually place files according to their I/O needs. This approach clearly is not easy to manage and does not scale.

The HP 3PAR StoreServ family architecture improves this approach with a highly virtualized provisioning model that provides two adaptive (LUN and sub-LUN) features that automatically move data according to their I/O needs without complex administrative oversight. This virtualization and adaptive data movement approach is used in this configuration to provide variable levels of service and reduces the effort SQL database or SAN administrators would have to perform in optimally deploying SQL Server databases.

In addition to the specific performance and quality of service criteria for this storage reference architecture, the thoughtful hardware and software engineering design of the HP 3PAR StoreServ family provides a feature rich data management environment and console offering several storage management and capacity efficiency functions that, as shown in this paper, can be performed with minimum keystrokes. The storage reference architecture testing documented in this paper did not specifically test the console interface, but the console was used frequently to set up and manage the environment.

To further familiarize the reader with HP 3PAR StoreServ architecture concepts, features in this paper are separated into hardware and software based features. The specific storage array configuration tested represents a recommended minimum configuration that provides the maximum high availability supported by enabling features such as cage level availability. Smaller and larger valid configurations can be used while maintaining many of the same SQL Server benefits outlined in this white paper.

Workload structure

Several SQL Server 2012 databases used to characterize the configuration are divided into two main service levels:

• Premium high performance (mission critical) – set of large OLTP databases deployed on physical servers

• Regular performance (business critical) – set of smaller OLTP and BI databases deployed on virtual machines

The premium database data is serviced by an adaptive virtualized volume that uses a combination of SSD and FC drives while the regular database data is serviced by a regular virtualized volume based only on FC media.

Testing

The reference architecture is set up to host data for multiple Microsoft SQL Server 2012 OLTP database instances. Transactional testing is performed using an OLTP workload that generates slightly more I/O than what is typically found in enterprise OLTP workloads to ensure the system is tested to its capacity.

Note

For an in-depth description of all performance building features, the 3PAR architecture overview white paper is an essential document in providing a detailed description of the 3PAR systems. The link to this document is provided in the “For more information” section of this white paper.

HP 3PAR StoreServ Storage subsystem components and features

To meet and exceed increasing demand on storage, HP 3PAR StoreServ Storage has been architected from the ground up to provide scalable performance, high reliability and efficient capacity management, key requirements of enterprise grade storage platforms.

The following sections provide a high level description of the core hardware and software technology features used or tested as part of this reference architecture project.

Hardware

Gen4 ASICs

Gen4 ASICs are at the core of several innovative hardware features related to data processing. They provide very fast controller node interconnect speeds along with data management features such as mixed workload support, a thin provisioning conversion algorithm, built-in RAID and CRC data integrity calculations, and zero detection, all at very low latencies.

Cache coherency

Cache coherency is an enabler for additional reliability and resiliency functions such as persistent cache, on-demand node cache re-mirroring, and persistent port presentation.

Mixed workload architecture

Mixed workloads are supported by processing data independently from control information through separate hardware paths. Control information is processed through a dedicated control processor that incorporates its own control memory cache while data is processed and cached separately in the HP 3PAR StoreServ’s Gen4 ASICs.


Adaptive cache

The cache adaptation algorithm can dynamically re-allocate up to 100% of the cache for reads during heavy read periods without neglecting write needs. During periods of higher write activity, the cache can re-allocate up to 50% of cache memory for writes helping keep write latencies down.

In a typical OLTP SQL Server environment, the I/O demands placed on the array are not static in terms of the read/write ratio so a single setting may work well at some times of the day but may negatively impact performance at other times. This makes the adaptive cache algorithm a very useful feature for database administrators.

Zero detect

The zero-detect algorithm built into the Gen4 ASICs increases disk space utilization by not writing large amounts of repeating zeroes to disk. This is particularly valuable when used with SSD media, as it keeps costs down by allowing databases to be deployed over smaller capacity SSDs.

SQL Server value

SQL Server 2012 directly benefits from the features implemented in the Gen4 ASICs. Test cases in this reference architecture document identify SQL Server benefits and best practices related to these features, such as:

• Cache Coherency provides service level consistency to SQL Server instances during node downtime.

• Mixed workload architecture keeps SQL transactions from slowing down during large data transfers.

• Zero detect improves SQL disk space utilization when a database is created.

Software

Virtualized provisioning and wide striping

This virtualization approach to provisioning is the foundation for the Dynamic and Adaptive Optimization features of the array.

HP 3PAR StoreServ virtualizes data access by mapping physical disk space using a three-tier mapping approach. First, each physical disk is mapped into fine grained disk allocations called chunklets. Second, the chunklets are mapped to create logical disks, essentially striping data across many disks, ensuring uniform performance levels and eliminating the hot spots associated with older designs. The third mapping layer creates Virtual Volumes (VVs) from fractions of a logical disk or an entire logical disk.

User data in an HP 3PAR StoreServ resides in Virtual Volumes that are exported to hosts as LUNs. These VVs are defined within a virtual construct called a Common Provisioning Group or CPG. The enclosing CPG has a fixed redundancy RAID type and can have other parameters defined such as capacity warnings that will apply to every VV built inside the CPG.

The CPG can only be defined with one type of media, and HP 3PAR StoreServ Storage currently has three media types partitioning the overall provisioning space into three tiers: Solid State (SSD) media, Fibre Channel (FC) media, and Nearline (NL) media. Looking at the logical view of this virtualization, we can see how media types, CPGs, LUNs and disks are related.

Figure 1. Logical view of HP 3PAR StoreServ provisioning virtualization


Service levels

Different data service levels are provided by the StoreServ arrays through two automated service level adjustments available to administrators: Dynamic Optimization (DO), which migrates all the data residing in a LUN, and Adaptive Optimization (AO), which migrates data at a block level (sub-LUN).

Figure 2. Tiered HP 3PAR StoreServ service level architecture

Dynamic Optimization

Dynamic Optimization (DO) of LUN contents is an important performance enabler for SQL Administrators. Once a database is in place, DO allows data to be migrated to a faster or slower service level tier on demand and online without disrupting SQL Server uptime. For example, if a SQL logfile is deployed on a RAID1 FC LUN and it is determined it is better served in a RAID1 SSD LUN due to increased update and write demands, the administrator can tune the LUN to a faster SSD based CPG automatically without disrupting the database operations and without having to copy files.

Adaptive Optimization

Adaptive Optimization (AO) works on a sub-LUN basis and this works very well for SQL OLTP data files as it migrates highly active portions of data files to SSD and migrates less active portions to Nearline drives if desired.

It is important to know that AO is not always necessary and CPGs can exist without an AO policy. In this case, the data would be statically pinned to the underlying media and redundancy type tier of the CPG.

For example, log or tempDB files don’t benefit from AO as much since the log file writes to the entire log file space and tempDB tends to have transient data access that is not recurrent. Those SQL data files are better served in an appropriate fixed CPG without an AO policy, such as a RAID5 SSD CPG for tempDB and a RAID1 FC CPG for logs.

In this paper, we evaluate Adaptive Optimization of a CPG containing SQL database LUNs; the optimization works on the basis of migrating heavily accessed areas to faster media and least accessed areas to slower media.

The three-tier architecture in HP 3PAR StoreServ Storage supports multiple AO policies that optimize data within all three tiers or just within two media type tiers, even if the system has disk drives of all three media types installed.

For example an AO policy can be defined between SSD and FC CPGs to create a VV and export a LUN to host a key SQL data filegroup, while another AO policy can be defined between an FC CPG and an NL CPG to host a filegroup that contains large amounts of archive data that occasionally needs to be accessed such as end of month reporting.

Setting up and using AO is simple and a system can keep more than one Adaptive Optimization policy active at the same time, thus allowing service level adjustments of several different SQL databases according to their own I/O service needs. An AO Policy consists of the two or three CPG tiers that define the optimization along with two operational schedules. First, the system needs to know how often to collect performance metrics on the source CPG (Measurement hours). Second, the system needs to know how often to perform the actual migration of block data from the source CPG to the target CPG (Schedule).

There is flexibility in the performance metric collection by allowing metrics to be taken during scheduled measurement windows. This approach can help isolate adaptation data to peak traffic hours while excluding nightly backup windows, homing the adaptation in on hot data access areas in the array. Figure 3 shows the AO management screen where the AO tier sizes and metric scheduling are configured.


Figure 3. Adaptive Optimization policy settings in System Reporter

Priority Optimization (QoS)

Priority Optimization is a new licensable feature of the StoreServ operating system that provides low overhead data overload protection through IOPS and bandwidth limits that can be set on volume sets. There are many use cases where QoS becomes a key enabler in large enterprise storage systems hosting SQL databases:

• Prevent run-away queries from slowing the rest of the system down (Ad-hoc query sandbox).

• Throttle data loads or sequential workload stressors such as maintenance plans.

• Throttle Peer Motion data migration.

• Safely share a production storage environment with development and test workloads.

The HP 3PAR Priority Optimization implementation provides several benefits:

• Very simple to set up, and easy-to-use.

• Can be enabled or disabled in real-time.

• QoS rules only need four attributes to be defined: name, state, I/O limit, and bandwidth limit.

• Requires no special pre-planning.

• Virtual Volume set-based in the initial release.

• Does not require host agents or drivers.

• VV set overlay allows the implementation of nested QoS rules helpful for multi-tenant scenarios.

• Compatible with Adaptive Optimization and all other HP 3PAR features.

Assigning a volume to multiple volume sets (overlay) allows more than one QoS rule to nest, resulting in finer grained application level QoS for a given tenant. It also establishes the ability to over-provision I/O resources over time while controlling concurrent workload resources to prevent system saturation and cross tenant interference. The nesting capability enables workload optimization under multiple tenant scenarios as shown in Figure 4.

For detailed information regarding Priority Optimization implementation see the HP 3PAR Priority Optimization white paper referenced in the “For more information” section of this document.


Figure 4. Nested Priority Optimization rules in a multi-tenancy scenario

Thin storage technologies

HP 3PAR StoreServ Storage implements several thin data capacity-related features to enable users to create, convert, maintain, and reclaim space in efficient and cost effective ways. Our evaluation tests these features and shows how efficient SQL Server 2012 can be when deployed using these HP 3PAR StoreServ thin technologies.

Thin provisioning. Thin provisioning creates small Virtual Volumes on the array that are presented as fully allocated to Windows hosts. The Virtual Volume resizes as data is written to or deleted from it, providing free space for other thin VVs to use and maximizing the utilization of the media present in the array.

When SQL Server is deployed on thinly provisioned LUNs with zero detect, databases themselves essentially become thin as large database files can be created yet occupy a minimum of disk space until they grow, keeping the storage Virtual Volume thin, further maximizing media utilization and deferring scaling costs.

This is due to the zero detect feature of the Gen4 ASICs which essentially eliminates stranded capacity in newly created database files as they are zero-filled upon creation.
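As a brief illustration (a minimal sketch; the database name, file paths, and sizes below are hypothetical, not taken from the tested configuration), a large pre-allocated database created on a zero detect thin LUN consumes very little physical space on the array:

    -- Hypothetical example: a large, mostly empty database on a thin, zero detect LUN.
    -- With instant file initialization disabled, SQL Server zero-fills both files;
    -- the Gen4 ASIC zero detect keeps the thin Virtual Volume small until real data arrives.
    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data, FILENAME = 'E:\SQLData\SalesDB.mdf', SIZE = 500GB)
    LOG ON
        (NAME = SalesDB_log, FILENAME = 'F:\SQLLogs\SalesDB.ldf', SIZE = 50GB);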

SQL Server benefit

Thin provisioning and Zero Detect are valuable features that can be used together extending thin-like behavior to the SQL Server database provisioning layer by keeping SQL datafiles trim, making more efficient use of storage media.

Thin conversion. Existing SQL database deployments on legacy system volumes can be converted to thinly provisioned volumes during migration to 3PAR by using thin conversion. When combined with zero detect, unused and stranded (zero-filled) space internal to SQL datafiles does not strand capacity in the converted 3PAR volume.

Thin persistence. Using zero detect hardware technology, thin volumes stay thin over time in a SQL Server environment when database internal space is released. For example, after a SQL datafile shrink operation the containing thin volume will shrink and release space for other thin volumes to use.

Thin reclamation. New features such as the UNMAP command implemented in Windows Server 2012 provide additional operating system tools to automate thin space reclamation. For example, when datafiles and databases are deleted, the system automatically reclaims the space without administrator intervention.


SQL Server benefit

The combination of the above thin technologies and hardware-based zero detect results in a congruent approach to capacity management that, when used as shown in the following test sections, increases the capacity efficiency of SQL Server database deployments. These efficiency gains reduce data growth scaling costs and improve ROI. The ease of management and operating system integration also simplify capacity control for administrators.

HP 3PAR StoreServ 7400 array capacity and sizing

The HP 3PAR StoreServ 7400 storage array used in this SQL OLTP reference architecture is based on a two-node HP 3PAR StoreServ 7400 Storage array.

This storage array shares the same hardware and software architecture as other arrays in the HP 3PAR StoreServ family, differing from other models by its node processing power, scaling, cache, and capacity. HP 3PAR StoreServ 10800 Storage is the only other array in the HP 3PAR StoreServ family that scales further up in performance and capacity.

The two-node StoreServ 7400 array is an enterprise class array capable of processing 160,000 backend IOPS, placing the array in a position to serve the high-transaction OLTP workloads commonly found in enterprise environments. This configuration can be upgraded to a four-node configuration capable of 320,000 backend IOPS.

Figure 5. HP 3PAR StoreServ 7400 specifications among HP 3PAR StoreServ family

Capacity and sizing for HP 3PAR StoreServ 7400 reference architecture

This HP 3PAR StoreServ 7400 Storage reference architecture for SQL Server 2012 focuses on mixed SQL Server 2012 database workloads because the array's design is inherently suited for mixed I/O loads.

The database server tier was implemented using HP BladeSystem servers in a combination of both physical servers and virtual servers in order to cover and characterize the configuration performance and efficiency under varied deployment models that typically exist in mixed database integration/consolidation platforms.

Target I/O rates can be derived from performance data gathered at the current storage system host ports, or from the Windows logical disk performance counter called Transfers/sec. This metric can be broken down into reads/sec and writes/sec to better understand the system I/O load under your databases. The configuration uses an 80/20 read/write host I/O ratio as a representative sizing ratio. Solutions with higher read ratios like 90/10 will also work, as more reads with fewer writes can be served under the same total I/O size factor.

Given those metrics considerations, the configuration is sized to safely provide over 70,000 host IOPS.

Table 1 provides aggregate database size and estimated performance metrics for each tier; however, there is no relationship between size and performance. The amount of storage available in each tier will vary depending on the redundancy type (RAID level) chosen for the virtual volumes.

Table 1. Estimated database deployment size and I/O characteristics per tier

Tier                           Total DB size (aggregate)    80/20 R/W Host IOPS @ RAID5

Premium service level (SSD)    4TB                          58,400
Normal service level (SAS)     6TB                          14,600
Total                          10TB                         73,000

HP 3PAR StoreServ hardware sizing and configuration

Capacity planning

Each component in the system is sized to ensure the configuration meets the target I/O requirements for OLTP. Disk drive quantities are sized first as they are the key I/O driving component of the system. Once the optimal disk configuration is established, the correct controller node and drive shelf configurations needed to support it are identified. RAID5 is used for this estimate due to the improved write performance of the HP 3PAR StoreServ RAID5 implementation.

Table 2. SQL OLTP 80/20 host 8k I/O target

SQL host IOPS

8k random SQL host reads                        56,000
8k random SQL host writes                       14,000
SQL OLTP 80/20 aggregate host 8k I/O target     70,000

Table 3. Total backend array IOPS based on RAID5

StoreServ System IOPS

RAID5 reads                      56,000
RAID5 writes (14,000 x 4)        56,000
Total backend IOPS required     112,000


Disk sizing

This I/O load can be serviced using a combination of 88 SAS drives and 32 SSD drives.

Table 4. Total configuration backend IOPS

Configuration backend IOPS

FC tier backend (88 x 260 IOPS)                  22,880
SSD tier backend (32 x 3,000 IOPS)               96,000
Total configuration backend (exceeds target)    118,880

Under RAID5, the 80/20 r/w host I/O workload translates to a 50/50 r/w array workload due to extra RAID5 writes. This estimate assumes 260 IOPS per FC drive and 3000 IOPS per SSD drive.
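To make the translation concrete, here is the arithmetic behind Tables 2 through 4, using the 70,000 host IOPS target and a RAID5 write penalty of four backend I/Os per host write:

Backend reads = 70,000 x 0.80 = 56,000
Backend writes = (70,000 x 0.20) x 4 = 56,000
Backend total = 112,000 IOPS, at a 56,000:56,000 = 50/50 read/write ratio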

Note

This represents a minimum RAID5 I/O rate expectation for this configuration. It does not factor in additional I/O performance derived from controller node cache hits or from the use of RAID1 for SQL logs, making the actual SQL host I/O performance higher than estimated here (see the Adaptive Optimization performance section). Your HP representative can assist in determining I/O sizing estimates for your needs using HP sizing tools.

Controller node sizing

The array configuration needed to support 70k host IOPS would require 112k backend IOPS which is well within the 160k backend IOPS safe limit for the two-node configuration of the 3PAR StoreServ 7400.

Additional drives can be added to this configuration increasing the performance until a maximum of 160k backend IOPS (or 120k+ host IOPS, based on an 80/20 RAID5 workload) is reached. This represents more than 33% backend IOPS headroom within this particular HP 3PAR StoreServ 7400 configuration.

Disk shelf (cage) sizing

Each disk shelf can house a total of 24 SFF drives. This configuration is installed using 24 drives in the two-node controller enclosure and 24 drives in each of four additional drive shelves.

Physical disk configuration

The array is sized with a two-node controller enclosure with 24 drives and four disk cage enclosures with 24 drives per enclosure. Table 5 shows the breakdown of disks used in this configuration.

Table 5. Reference architecture drive configuration

Drive    Type    Speed      Capacity    Quantity

SAS      SFF     15K rpm    300GB       88
SSD      SFF     N/A        100GB       32

Hardware configuration

The following section outlines the hardware configuration, in terms of component size and physical configuration, needed to satisfy the project target OLTP I/O rate of 70k SQL host IOPS in an 80/20 read/write pattern. The configuration presented is based on a SAN approach, although the same performance can be achieved in a direct attach topology between the host servers and the array.

Storage system physical view

The HP 3PAR StoreServ 7400 storage used in our SQL 2012 reference architecture consists of two controller nodes hosted in a single rack. The two controller nodes physically reside in a single 2U enclosure that also houses up to 24 disk drives and is connected to four disk shelves as shown in Figure 6.

Figure 6. HP 3PAR StoreServ 7400 reference architecture


HP blade servers deployed with a mix of both physical and virtual operating systems are used to test the system in a mixed host/mixed service level environment similar to what would typically be found in a consolidation or pre-consolidation production system.

The use of a blade-based configuration in this analysis does not mean the HP 3PAR StoreServ Storage is limited to blade host configurations. For example, similar benefits can be obtained when connecting HP 3PAR StoreServ 7400 Storage to high performance standalone physical servers such as HP ProLiant DL980 enterprise servers.

Figure 7. SQL Server database servers

Host environment settings

HP servers configured with the following RBSU (BIOS) settings:

• Hyper-Threading OFF

• NUMA ON (memory interleave OFF)

• Static High Performance power profile

SQL Server 2012 configured with:

• MAXDOP = 1 (see the sketch after these lists)

Windows Server 2012 configured with:

• Indexing service disabled

• Defrag schedule disabled
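The MAXDOP setting listed above is applied through the standard sp_configure procedure; the following minimal T-SQL sketch shows that step:

    -- Limit each query to a single scheduler, matching the reference architecture setting.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;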

Direct Attach versus SAN network

The HP 3PAR StoreServ 7400 storage array can be deployed using either a SAN or direct attach (DAS) topology. In this reference architecture we used a SAN topology, although the same performance can be achieved in a DAS configuration. The HP Virtual Connect FlexFabric 10Gb/24-port modules provide a seamless and easy to manage way to attach a StoreServ array to BladeSystem c7000 enclosures further reducing costs in SQL architectures compatible with DAS connectivity such as Always On availability groups.

HP 3PAR StoreServ software configuration

The following HP 3PAR software features are licensed and used in this reference architecture:

• Thin Conversion

• Thin Persistence

• Thin Copy Reclamation

• Thin Provisioning

• VSS provider for Microsoft Windows

• Recovery Manager for SQL Server

SQL Server 2012 deployment

The latest releases of SQL Server and Windows Server have new features that further facilitate virtualization and improve capacity management from an operating system perspective. Features such as the SCSI UNMAP (TRIM) function, thin provisioning for Hyper-V, and Offloaded Data Transfer (ODX) support work seamlessly with the HP 3PAR StoreServ 7400 arrays to deliver a thin and virtualization friendly environment ready to efficiently host SQL Server 2012 databases.

The SQL Server 2012 test environment for the reference architecture is set up according to the following host configuration, storage provisioning, and database layout.

Database server instance and database deployment

The c7000 blade enclosure has four BL660c Gen8 blades and four BL460c Gen8 blades.

The four BL660c Gen8 blades are configured as physical SQL Server hosts to drive the premium service level tier of the array simulating dedicated high performance/mission-critical use.

The four BL460c Gen8 blades are configured as Hyper-V SQL Server hosts to drive the normal service-level tier simulating the business-critical database typically consolidated in a virtualized environment.

The host connectivity to the HP 3PAR StoreServ 7400 in this reference architecture is accomplished via a SAN network. After provisioning the array for SQL Server databases, several instances and databases are created and deployed according to the deployment shown in Figure 8.

The allocation of four large OLTP databases for high performance and over 10 smaller OLTP and BI databases for normal performance was chosen to stress the system in a similar way a typical hybrid environment would experience.

The high performance databases are hosted on the high performance BL660c Gen8 servers and have their datafiles deployed on the premium service-level tier defined in the array.

The normal performance databases are hosted on the virtualized BL460c Gen8 servers and have their datafiles deployed on the normal service-level tier in the array.


In addition, we deployed SQL 2012 availability groups to provide a realistic workload that included synchronous secondary mirroring I/O activity concurrent with the primary OLTP workloads.
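Availability groups of this kind are defined with standard SQL Server 2012 T-SQL. The following is a minimal sketch with hypothetical replica names, endpoint URLs, and database name; it assumes the AlwaysOn feature is enabled and database mirroring endpoints already exist on both instances:

    -- One primary and one synchronous-commit secondary, as used for the mirroring I/O.
    CREATE AVAILABILITY GROUP AG_OLTP
    FOR DATABASE OLTP_DB1
    REPLICA ON
        N'SQLBL660-1' WITH (
            ENDPOINT_URL = N'TCP://sqlbl660-1.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC),
        N'SQLBL660-2' WITH (
            ENDPOINT_URL = N'TCP://sqlbl660-2.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);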

The Common Provisioning Groups (CPG) and Adaptive Optimization policies are the constructs used to set up the VVs needed to implement each of the two service levels. Additional service levels can be configured by defining separate sets of CPGs and AO policies, and lower levels of service can be set up by adding Nearline drives. For the purposes of this reference architecture, two levels provide a test environment representative of the project objectives.

The following section describes high level steps used to provision storage on the storage system and identifies the resulting CPGs and VVs.

Premium service-level tier

The premium service-level tier is implemented using a RAID5 FC CPG called Tierable_FC_Data created with an AO policy enabled. The SSD space needed for the data is defined in the AO Policy.

From this CPG two (or more) Virtual Volumes are created and exported to each of the two physical BL660c Gen8 servers for high performance OLTP data LUNs.

A second RAID1 CPG called FC_Logs is defined for SQL Log files. From this CPG two VVs are created and exported to each BL660c Gen8 for SQL Log LUNs.

A separate fully provisioned RAID5 SSD CPG called SSDtempDB is defined for fixed SSD performance needs. From this CPG two VVs are created and exported to each BL660c Gen8 for tempDB LUNs.

Normal service-level tier

The normal service-level tier is simpler in terms of CPG definitions; however, more VVs are created and exported to each Hyper-V host. The guest OS systems then define the disks as pass-through devices, making them available online from each guest. A single RAID5 CPG called FC_Data is defined for data. From this CPG multiple data VVs are created and exported to each BL460c Gen8 Hyper-V host server. Notice how this CPG does NOT have an AO policy, resulting in a fixed level of service at RAID5 Fibre Channel performance.

From the FC_Logs CPG defined above, multiple VVs are created and exported to each BL460c Gen8 Hyper-V host for virtual SQL instance log LUNs.

From the SSDtempDB CPG defined above, multiple tempDB VVs are created and exported to each BL460c Gen8 Hyper-V host for virtual SQL instance tempDB LUNs.

Note

The definition of a CPG does not require a capacity, as this is allocated during Virtual Volume creation. The CPG listing in Figure 9 shows total allocation capacity for each CPG as the total aggregate allocation of all Virtual Volumes created from each CPG.


Reference architecture testing

The following engineering tests verify that the reference architecture meets or exceeds the 70,000 host IOPS performance target it was designed to service. In addition, testing is performed to validate that the configuration provides a dynamic balance between performance and capacity. Finally, the reference architecture is tested to verify that mixed workloads can be serviced concurrently.

During this evaluation, the premium service level is first tuned by enabling Adaptive Optimization. Once the test workload data was optimally migrated to the SSD tier, the array was subjected to normal service level tier workload stressors such as database backups and maintenance jobs in order to evaluate service level interference, if any, at the premium service level tier.

Adaptive Optimization

Configuration

The Adaptive Optimization test is used to set up and verify that the optimization algorithm can adapt data under OLTP workloads.

Four large OLTP databases were deployed on two physical database servers (HP ProLiant BL660c Gen8) and stored on fully provisioned VVs inside a FC Tier CPG. This CPG had an AO policy defined to sample performance data every hour and perform data movement every three hours.

Note that the SSD tier 0 limit drives how much data is migrated to that tier, essentially migrating data right up to that limit. In this case, the four OLTP databases residing in the FC source CPG have an aggregate data file footprint of approximately 4TB and the SSD target CPG was set to 1TB. This effectively improved response times for half of the data accessed under a uniform data access pattern in the data files.

The range of adaptation goes from 0 (all data in the FC tier) to 100% (all data migrated to the SSD tier), depending on the service level requirement and the available SSD space in the array. In this particular test case we used a 25% data file to SSD tier target size ratio.

The AO data migration is captured in the charts that follow. The IOPS in the FC tier drop as data pages migrate to SSD and a proportional amount of IOPS begin to be serviced by the SSD tier.

Although the test OLTP workloads are paced to a fixed I/O rate, the aggregate system wide IOPS observed after data movement increased due to queued I/O requests being serviced with the lower service times achieved in the adaptation. To test the AO functionality, an AO policy was defined and enabled. Once the initial 3 hour sampling period lapsed, the system reevaluated data I/O every hour to determine whether data migration was needed again. This initial adaptation was set up to place less than 30% of actual data onto the SSD tier, excluding log files, which are not part of the AO policy.

This allocation will be used to evaluate the Adaptive Optimization between tiers as the FC tier is loaded with surge workloads from normal business-critical workload databases.

Adaptive Optimization performance

The AO optimization test settled at a 10% SSD to FC disk capacity usage ratio, migrating 400GB of data to tier 0 SSD from a total space utilization of 4TB.

While this is not a benchmark, the workload placed a fairly high load on the array without saturating controllers or SSD/FC disk tiers.


Backend IOPS

Figure 10 shows backend IOPS as measured, with a backend read/write ratio closer to 60/40 and an average of 110,000 backend IOPS. The chart covers the initial AO adaptation, so we see the lower initial IOPS increase over a few AO cycles as frequently accessed data is migrated to the SSD tier. The vertical axis represents I/O operations per second and the horizontal axis represents time.

Figure 10. 8K Backend IOPS measurement

Figure 11 shows how Adaptive Optimization also results in lower service times as frequently accessed data is migrated to the SSD tier. A marked reduction in service time occurs when adaptation is complete.

Figure 11. Service times

Front end IOPS

The front end IOPS are also in line with the estimated 70k IOPS as measured from the Windows hosts.

Estimating I/O performance

From a component perspective, the I/O is limited by the drive configuration. HP has sizing tools to accurately model anticipated performance for a given drive configuration. A rough estimate can also be obtained assuming 260 drive IOPS per SAS drive and 3000 drive IOPS per SSD drive. With those values we add up potential backend drive totals and then calculate the front end IOPS based on RAID5. The I/O performance estimated for the test configuration is shown in Table 6.


Table 6. Higher end of configuration with 88 SAS and 32 SSD drives

Tier                            Backend IOPS

FC SAS tier (88 x 260 IOPS)       22,880
SSD tier (32 x 3,000 IOPS)        96,000
Total tier backend IOPS          118,880

The 50/50 backend read/write ratio yields are shown in Table 7.

Table 7. Read/write ratio

Operation    Backend IOPS

Read         59,440
Write        59,440

Based on this backend ratio and RAID5 provisioning, the front end IOPS disk limit is estimated at:

RAID5 front end IOPS = 59,440 + (59,440 / 4) = 74,300 front end IOPS

In the 3PAR StoreServ cache design, more cache is allocated per GB of SSD than per GB of FC drives. When combined with the Adaptive Cache algorithm, the higher cache allocation for SSD in the system results in larger cache adjustment swings, useful for varying read/write ratios.

Cache adaptability is important for SQL Server workloads that experience daily variations of read/write ratios, such as surges when a report is run (read), or a large sequential data load (write). During database sequential reads the read cache allocation increases and the array internal sequential pre-fetch activates to quickly get ahead of SQL Server physical reads, dynamically lowering response times. During heavy writes, the array adapts and allocates more cache for writes, once again reducing the service times. From a SQL Server perspective this means faster reports and shorter table loads or backups without needing an administrator to change cache settings before nightly backup jobs.

SQL Server benefit

The system is designed to minimize the concurrency impact of large I/O over small I/O. This results in fair isolation between service level tiers, helping sustain level performance despite surges. For SQL Server deployments, this translates into uniform SQL transaction and application response times. This also reduces or eliminates the need for database administrators to manually develop complex SQL Server maintenance job concurrency schedules to avoid degrading the end user experience.

Adaptive Optimization and I/O density

In sizing a storage system that will use Adaptive Optimization, I/O density is another area that needs to be considered in terms of database sizes and the SSD/FC media capacities initially installed. 32 x 100GB SSD drives will provide very similar I/O performance to 32 x 200GB drives, but with half the capacity and at a lower cost. We recommend using the HP Storage Sizer (see link in the “For more information” section of this white paper) to estimate the drive quantities needed to meet IOPS requirements. Once the minimum drive quantities are established, verify the capacity will be sufficient to hold the expected databases.

By using SQL Server data compression and datafile reorganization with lower page fill percentages, more data can be migrated to the SSD tier before having to increase the number of SSD drives.

HP 3PAR Recovery Manager for SQL Server

Recovery Manager for SQL is a backup solution that brings together HP 3PAR StoreServ snapshot technologies and the Microsoft Volume Shadow Copy Service (VSS). This integration provides a reliable framework for taking and managing SQL database snapshots and incorporates storage object features such as thin-awareness and space reclamation. One of the key features of HP 3PAR snapshot technology is its implementation of copy-on-write: unlike other implementations, HP 3PAR snapshots do not suffer performance degradation after multiple snapshots are taken. The white paper HP 3PAR Recovery Manager for Microsoft SQL Server linked in the “For more information” section describes in further detail the benefits and usage of this backup solution.

Recovery Manager for SQL has several components, including the database server backup agent and the backup server. The following screenshot shows the Recovery Manager graphical interface.

Figure 12. HP 3PAR Recovery Manager for SQL interface

Testing was performed to measure snapshot creation and evaluate space reclamation upon snapshot deletion and capacity savings (thin-awareness) of snapshots taken from thinly provisioned volumes.

The performance test consisted of taking 20 consecutive snapshots and verifying no increase in snapshot creation time over the number of copies. The time it took to take each snapshot did not increase or decrease as the number of snapshots stored increased. This consistency is the result of 3PAR’s non-duplicative copy-on-write algorithm.

Autonomic snapshot space reclamation is achieved when snapshots are deleted. The space reclamation is a regularly scheduled background process, so reclaimed space is not observed until some time after the deletion has been made.


Volume and snapshot deletion

Virtual Volumes are sometimes deleted when no longer needed, and SQL Server snapshots taken using Recovery Manager for SQL also use un-exported Virtual Volumes for snapshot storage. When those snapshots are deleted, Virtual Volume space is recovered automatically by the system. Over time, data becomes distributed across different disk areas, and HP recommends running the compactcpg command to ensure data resides in contiguous areas, further improving the reclamation of space.

Data deletion inside a volume

Files and data within files are often deleted, as is the case for SQL Server datafiles and data pages when tables are dropped or rows are deleted. This space can be reclaimed by the Virtual Volume for new data as long as it is zeroed out. Windows Server 2012 now supports the SCSI UNMAP command, so when files are deleted their space is automatically reclaimed by the 3PAR StoreServ. Older versions of Windows require the use of SDELETE to zero fill the deleted files. To reclaim space within a SQL datafile, a datafile shrink operation with page relocation is needed, followed by an index reorganization to ensure the remaining pages are properly sorted in the data file. The following tests show the commands and the space reclaimed within the VV in this specific SQL data reclamation scenario. When data is deleted by SQL Server, a ghost cleanup process marks the page unallocated but the actual data bytes remain in the page, rendering it ineligible for reclamation. SQL Server provides two procedures to zero-fill deleted data in the data file: sp_clean_db_free_space and sp_clean_db_file_free_space.

Table truncation

In this evaluation, identical databases, deployed in normal TPVV and zero detect TPVV, had large tables truncated to verify that space reclamation occurs after executing the sp_clean_db_file_free_space command.

Note

The test databases are overprovisioned in that the SQL datafiles have unused space/zeros in them; in other words, we disable instant file initialization so the files are zero filled.
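A minimal T-SQL sketch of this test flow follows; the table and database names are hypothetical, and file ID 1 is assumed to be the primary data file:

    -- Drop all rows, then zero-fill the now-unallocated page content in data file 1
    -- so the array's zero detect can reclaim the space inside the thin volume.
    USE TestDB;
    TRUNCATE TABLE dbo.LargeTable;
    EXEC sp_clean_db_file_free_space
        @dbname = N'TestDB',
        @fileid = 1;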

Database shrink operation

This test verifies the new SCSI UNMAP implementation in Windows Server 2012 works with the HP 3PAR StoreServ Storage and SQL Server shrink/drop data file operations.

Prior to the UNMAP implementation, space reclamation was possible in Windows only by writing zeros to the file prior to operating system deletion.

In this test, the same databases used for table truncation were shrunk using the DBCC SHRINKFILE command. Figure 13 shows space reclaimed automatically by the array and available for provisioning after a database shrink operation. The reclamation occurred regardless of the zero detect capability of each LUN.

Figure 13. Space utilization after database shrink operation

Both databases were identical in data size, and had very little data in them but had large log files. Once again zero detect is more efficient in storing the uncompressed log portion of the database.
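A minimal sketch of the shrink operation used in this test, with a hypothetical logical file name and target size:

    -- Shrink the data file to roughly 10 GB; Windows Server 2012 then issues SCSI UNMAP
    -- for the freed extents and the StoreServ reclaims them automatically.
    USE TestDB;
    DBCC SHRINKFILE (N'TestDB_data', 10240);  -- target size in MB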


Database drop operation

The last test regarding thin space reclamation in Windows Server 2012 evaluates if space is reclaimed when a database is completely deleted from the host LUN.

Figure 14 shows that both zero detect-enabled VVs and zero detect-disabled VVs reclaim all space after the test databases are deleted from each instance.

Figure 14. Space utilization after database deletion

After the delete, volumes with and without zero detect reclaimed space as expected, due to the implementation of the SCSI UNMAP command.

SQL Server benefits

Zero detect provides space compression from a SQL Server database perspective during initial database creation, and the implementation of standard SCSI UNMAP in Windows Server 2012 keeps SQL databases thin after datafile deletion, without administrator intervention. Over time, typical database defragmentation jobs should still be used to compact data rows and release unused internal space that is no longer zero-filled (by using shrink, or preferably rebuilding indexes in alternate datafiles).

HP 3PAR StoreServ deployment settings and best practices

The following hardware and software configuration settings outline key deployment recommendations for optimum performance and availability of HP 3PAR StoreServ storage systems. Please refer to the HP 3PAR StoreServ Storage best practices guide (see link in the For more information section) for in-depth implementation details.

These recommended settings were used to configure the reference architecture tested. Consult with HP prior to implementing a proof-of-concept or production deployment as these best practices may change in new hardware/software releases.

Front end port cabling

• Each system node should be connected to both fabrics (assumes a redundant fabric topology).

• Odd-numbered ports should be connected to fabric 1 and even-numbered ports connected to fabric 2.

FC zoning

• Single initiator to single target zoning is preferred (1-1 zoning).

• Zoning should be done using 3PAR World Wide Port Numbers (WWPN).

A host needs a minimum of two connections, one to each node of a node pair, for example, 3PAR node ports 0:2:1 and 1:2:2.

Common Provisioning Groups (CPGs)

• Keep the number of CPGs defined in the system to a minimum.

• For data that requires multiple AO policies, use multiple CPGs. (A given CPG can only be associated with one AO policy when Adaptive Optimization is used.)

• SSD-based CPGs need to be defined using RAID5 redundancy with a RAID set size of 3+1.

• FC-based CPGs need to be defined using RAID5 redundancy unless heavy write utilization exceeds 50% and write performance requirements justifies using RAID1.


Virtual Volumes (VVs)

• VVs must be created from CPG structures, not directly from physical disks.

• Virtual Volumes need to have User CPG and Copy CPG checked.

• Zero detect needs to be enabled on TPVVs that are periodically “zeroed out.” (Zero detect is enabled by default in HP 3PAR OS version 3.1.2 and later.)

• Thinly provisioned VVs can have an allocation warning, but must not have an allocation limit, not even 100%.

Virtual LUNs

• Use volume sets when exporting multiple Virtual Volumes to a host or host set.

• Virtual Volumes need to be exported to host objects, not to ports for all hosts.

Adaptive Optimization

When using thin provisioning volumes along with Adaptive Optimization, select a CPG using FC disks for the User CPG of the thin provisioning volumes. This means that when new data is written, it will be on a good performance tier by default.

Key Point

Ensure that the default tier (FC) has enough capacity and performance to accommodate the requirement of new applications until data is migrated to other tiers.

When new data is created (new VVs or new user space for a thin volume), it will be created in the FC tier, and Adaptive Optimization will not migrate regions of data to other tiers until the next time the Adaptive Optimization configuration is executed.

It is therefore important that the FC disks have enough performance and capacity to accommodate the performance or capacity requirements of new applications (or applications that are in the process of being migrated to HP 3PAR StoreServ) until the moment when the regions of data will be migrated to the other tiers.

If SSDs are used in Adaptive Optimization configurations, no thin provisioning volumes should be directly associated with SSD CPGs. The thin provisioning volumes should only be associated with FC CPGs. This will help ensure that SSD capacity is consumed by Adaptive Optimization and will allow this capacity to be safely used to 95 percent or even 100 percent. A different AO configuration and its associated CPGs should be created for every 100 TB of data or so. (Each configuration has a 125TB aggregate limit.)

Key Point

Schedule the different Adaptive Optimization configurations to run at the same time, preferably at night. This method is recommended because Adaptive Optimization will execute each policy in a serial manner but will calculate what needs to be moved at the same time.

It is preferable to not set any capacity limit on the Adaptive Optimization configuration level, or on the CPG (no allocation warning or limit). Enter a value of “999999” (999 TB) for each tier.

SQL deployment recommendations

The following SQL Server deployment considerations are derived from the testing performed and architecture information available at the time of this writing.


Zero detect

In order to maximize zero detect compression benefits over time, do not size the initial datafiles excessively above the size of the actual data. Over time, as data is inserted and deleted from a datafile, zero-filled pre-allocation content is no longer zero and the compression factor will decline. Typical SQL index maintenance rebuilds on alternate datafiles will help restore the original compression ratio.

SQL database file space reclamation

Use the sp_clean_db_free_space and sp_clean_db_file_free_space procedures to zero out deleted SQL data pages internal to database data files. The array will automatically reclaim space shortly after this cleanup is performed.
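A one-line sketch of the database-wide variant (the database name is hypothetical):

    -- Zero-fill deleted record content across all files of the database so the
    -- array can reclaim the space.
    EXEC sp_clean_db_free_space @dbname = N'SalesDB';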

Summary

The HP 3PAR StoreServ 7400 Storage is an excellent storage platform for SQL Server OLTP and mixed workloads. This white paper has expanded on many of the features of the HP 3PAR StoreServ Storage family and the reference architecture tests have shown how key features increase performance of SQL Server deployments while reducing costs.

HP 3PAR Adaptive Optimization has proven easy to use and powerful in managing quality of service for demanding workloads. The ability to use different media and RAID types for different data file types in SQL Server is a very compelling approach that enables administrators to initially deploy and later adjust deployments without costly migration downtime. The new Priority Optimization QoS feature released with StoreServ 3.1.2 MU2 brings a key multi-tenancy performance control function that allows multiple SQL databases to reside in a shared environment without experiencing resource contention and the associated management and planning issues that arise with uncontrollable workloads.

SQL Server 2012 incorporates changes to its scalability model by adding Availability Groups, allowing additional servers to act as read-only secondary availability group nodes. An HP 3PAR StoreServ Storage system can host data for both primary and secondary servers, maximizing performance while minimizing costs. Adaptive Optimization and the adaptive cache work well with a read-only host and a read/write host, allowing administrators to reduce cost by letting Adaptive Optimization migrate infrequently read data to less expensive media, maximizing the utilization of more expensive SSD media for other data portions.

Additionally, we have seen many of the thin technologies enabled by the Gen4 ASICs work extremely well with SQL Server 2012 databases and data files. The zero-filled padding internal to the data files seamlessly thins out with zero detect and is kept thin even during use. When SQL database compression and zero detect are used together, large cost savings related to capacity and media type are achieved.

From a high availability perspective, the array snapshot system is well integrated with SQL Server via the SQL Server remote snapshot management suite. The hardware foundation for snapshots is very effective compared to other solutions in that the copy-on-write algorithm fires only once even if multiple snapshots are defined for the same data, reducing the performance penalty for hardware snapshots.

Finally, from a resiliency view, the HP 3PAR StoreServ Storage offers excellent redundancy at every level of the platform, including cache mirroring and cage level redundancy among others, minimizing risk while providing fast RAID recovery on the Gen4 ASICs.

All these benefits aggregate to provide a flexible enterprise platform that can be easily leveraged to host multiple databases of mixed workload characteristics. The federation capabilities such as Peer Motion further extend these benefits by allowing businesses to migrate from multiple storage arrays as part of either hardware refresh cycles or consolidation initiatives.


Appendix: bill of materials

The following bill of materials represents the parts as used in this HP 3PAR StoreServ system.

Note

Part numbers are current at the time of publication and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HP Reseller or HP Sales Representative for more details. hp.com/large/contact/enterprise/index.html

Table A-1. Bill of materials

Quantity    Part number    Description

1      BW904A    HP 642 1075mm Shock Intelligent Rack
1      QR483A    HP 3PAR StoreServ 7400 2-N Storage Base
8      QR486A    HP 3PAR 7000 4-pt 8Gb/s FC Adapter
88     QR492A    HP M6710 300GB 6G SAS 15K 2.5in HDD
32     QR502A    HP M6710 100GB 6G SAS 2.5in SLC SSD
1      BC795A    HP 3PAR 7400 Reporting Suite LTU
1      BC773A    HP 3PAR 7400 OS Suite Base LTU
120    BC774A    HP 3PAR 7400 OS Suite Drive LTU
4      QR490A    HP M6710 2.5in 2U SAS Drive Enclosure
1      QR516B    HP 3PAR 7000 Service Processor
8      QK734A    HP Premier Flex LC/LC OM4 2f 5m Cbl
1      BW946A    HP 42U Location Discovery Kit
1      BW906A    HP 42U 1075mm Side Panel Kit

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept test environment that matches the planned production environment as closely as possible. In this way, appropriate configuration and solution deployment can be obtained. For help with a proof-of-concept, contact an HP Services representative or your HP partner (hp.com/large/contact/enterprise/index.html).


For more information

HP 3PAR Architecture Overview white paper
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-3516ENW

HP 3PAR StoreServ Storage best practices guide
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-4524ENW

HP Storage Sizer
hp.com/storage/sizer

HP 3PAR Windows Server 2012 and Windows Server 2008 Implementation Guide
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03290621/c03290621.pdf

HP 3PAR Priority Optimization
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-7604ENW

HP 3PAR V400 Reference Configuration
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-6016ENW

HP 3PAR Recovery Manager for Microsoft SQL Server
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03605179/c03605179.pdf

HP SAN design guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

