
This White Paper contains statements that may be related to the future development and direction of Avanade Inc. These statements may represent only current plans or goals of Avanade as of the date of publication and are subject to change without notice based on our technical and business judgment. Any reference to third party companies does not imply any involvement with or endorsement of or by Avanade Inc. Other company, product, and service names mentioned in this document may be trademarks of their respective owners.

2211 Elliott Avenue
Suite 200
Seattle, Washington 98121
seattle@avanade.com
www.avanade.com

A global IT consultancy dedicated to using the Microsoft platform to help enterprises achieve profitable growth. Additional information can be found at www.avanade.com.

NetApp Storage Arrays: A Viable Option for Exchange Server Deployments?

By Patrick Cimprich, Chief Solutions Architect

Additional contribution: Jeffrey Chen, Solutions Engineer

As a Microsoft®-focused systems integrator, Avanade designs and implements a wide variety of storage solutions, from file servers to Exchange Server to SQL Server. Our implementations range from simple to extremely complex, driving high transaction rates as well as relying on advanced features and functionality available in the storage platforms.

Most of Avanade’s large messaging installations involve Microsoft Exchange Server and enterprise-class hardware from leading storage vendors. Our customers and consultants have expressed growing interest in the potential use of Network Appliance® (NetApp) storage solutions in these configurations.

In order to provide the best guidance to our customers, Avanade’s infrastructure experts have performed an in-depth laboratory study of NetApp’s products to validate their suitability as storage platforms for Exchange Server deployments. We devised a series of benchmarks based on the typical storage scenarios we see, focusing primarily on Microsoft Exchange Server configurations but also touching on general file-server scenarios. This white paper describes the tests we performed on NetApp’s mid-range storage array, along with the results and our conclusions. As part of this Exchange Server test suite, we also assessed iSCSI connectivity options for storage transport, and compared blade servers to traditional servers.


Storage as Part of an Integrated Infrastructure

A critical component of any well-designed Microsoft Exchange Server environment is its storage architecture. Exchange is extremely demanding of the storage subsystem, generating punishing I/O demands atypical of most other application systems.

The net performance of your Exchange architecture is directly related to your storage infrastructure, so storage solutions should be a core component of your IT plan, not “bolted on” as your operational needs grow. By considering current and future storage needs as part of your platform strategy, you can build a comprehensive long-term roadmap that will allow you to streamline management, build in desired functionality, maximize connectivity and productivity, and migrate and protect your ever-growing collections of enterprise data—all with a desirable TCO (total cost of ownership).

Test Environment

To assess the suitability of NetApp’s products in Exchange Server 2003 deployments, we studied its mid-range storage array, the FAS3050c (the top-end offering at the time). During our testing, we evaluated various aspects of performance, ease of configuration, and data backup and recovery. We focused primarily on Exchange configurations, but also touched on general file-serving scenarios.

Since we had appropriate test scenarios in place, we also looked at iSCSI connectivity options for storage transport, and assessed blade servers vs. traditional servers.

This storage test suite was conducted over a nine-month period (October 2005 to June 2006) in the Avanade Global Market Development (GMD) Lab in Seattle, WA.

Testing Protocol. We created a rigorous plan to test relevant scenarios and configurations thoroughly and consistently. This included the storage array itself, as well as related testing tools, server platforms, and transports. Our goal was to obtain defendable, repeatable results that accurately reflected the product’s performance. Our guiding principles included:

Following NetApp’s recommended system configurations. We adopted this approach because it’s what our consultants do when introduced to new technology. We wanted to see how things performed when following “vanilla” recommendations.

Using only components and configurations in the Windows® Server Catalog, which lists products that have been successfully tested for Windows Server compatibility.

Configuring all components and systems to be realistic and 100-percent supported by NetApp and Microsoft Product Support Services (PSS).

Using only released products for all elements, with no experimental or beta release code. Again, our goal was to establish and test supported environments that our customers would be likely to implement.

Storage Array. The Network Appliance FAS3050c supplied for testing included:

Two controllers
3 GB RAM per controller
168 x 144 GB 15,000 rpm Fibre Channel drives
8 x 2 Gbps Fibre Channel SAN connections
8 x 1 Gbps Ethernet connections
Data ONTAP 7.0.2 software

Exchange Environment. Our Exchange setup mimicked a realistic enterprise environment:

Single-server configuration
6,000 Exchange users
4 Exchange storage groups, with 5 databases per group (20 databases total)
1 transfer request (I/O) per user per second
Mailbox limit of 150 MB per user

We used a fairly simple Exchange implementation that included a domain controller, a separate Exchange server for public folders, an Exchange mailbox server, and various load-simulation clients. All machines were connected to the lab’s gigabit Ethernet network.
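To put these parameters in context, the following back-of-the-envelope sketch (Python; the arithmetic is ours and not part of the original test plan) shows the aggregate load and capacity the array must sustain:

```python
# Rough sizing math for the test environment described above. The figures
# come from the stated test parameters; the calculation is illustrative
# arithmetic, not a NetApp or Microsoft sizing tool.

users = 6_000
iops_per_user = 1             # 1 transfer request (I/O) per user per second
storage_groups = 4
dbs_per_group = 5
mailbox_limit_mb = 150

total_iops = users * iops_per_user                      # 6,000 IOPS target
total_dbs = storage_groups * dbs_per_group              # 20 databases
users_per_db = users // total_dbs                       # 300 users per database
max_mailbox_data_gb = users * mailbox_limit_mb / 1024   # ~879 GB if every mailbox is full

print(f"Target load: {total_iops} IOPS across {total_dbs} databases "
      f"({users_per_db} users each), up to {max_mailbox_data_gb:.0f} GB of mailbox data")
```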


[Diagram: the test environment. Four Exchange storage groups (SG1–SG4), each with five databases and a log volume, on the Exchange mailbox server; a public folder Exchange server, an Active Directory domain controller, and Loadsim clients, all connected over 1 Gbps Ethernet.]

Servers. We tested on two Windows® servers: the IBM® BladeCenter HS40 (a blade server with four single-core Intel Xeon® MP processors) and the Sun Fire® V40z (a classic rack-mount server with four dual-core AMD Opteron® processors). These servers were chosen based on their popularity with our customers and consultants, and on our desire to test blade servers vs. traditional servers and compare the differing form factors and I/O architectures.

Storage Transport: Because NetApp supports iSCSI as well as Fibre Channel, most tests were performed using multiple connectivity configurations to assess the best transport options. On the IBM HS40, Fibre Channel was compared to iSCSI. On the Sun V40z, Fibre Channel was compared to three different iSCSI options. (For details, see the “Transport Alternatives” section.)

General Performance

The first test pass had two goals: to validate that our environment was configured properly, and to assess overall performance for the NetApp FAS3050c.

Test Setup. We configured 20 drives, per NetApp’s best practices, to yield the optimal performance and maximum useable capacity. The setup consisted of a 20-drive RAID-DP aggregate, configured with 144 GB, 15,000 rpm Fibre Channel drives, which yielded 18 drives of useable space (2,370 GB). A single 1 terabyte (TB) LUN (logical unit number, or disk volume) was created inside this aggregate and presented to the Windows host.
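For illustration, here is a minimal sketch of how that usable figure falls out of the drive count; the per-drive “right-sized” capacity is back-calculated from the 2,370 GB quoted above rather than taken from NetApp documentation:

```python
# RAID-DP reserves two parity drives per RAID group, so a 20-drive
# aggregate leaves 18 data drives. The usable capacity per nominal 144 GB
# drive below is inferred from the 2,370 GB figure in the text; actual
# right-sizing depends on Data ONTAP, so treat this as illustrative.

drives_in_aggregate = 20
parity_drives = 2                      # RAID-DP: double parity
usable_per_drive_gb = 2370 / 18        # ~131.7 GB usable per drive

data_drives = drives_in_aggregate - parity_drives
usable_gb = data_drives * usable_per_drive_gb

print(f"{data_drives} data drives, ~{usable_gb:.0f} GB usable")   # 18 drives, ~2370 GB
```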

To execute the tests, we used Iometer, an I/O stress tool that creates synthetic loads to test disk and network configurations. When testing disk configurations, Iometer operates like a database: It creates a large file on the disk and then executes all read and write operations inside this file. Our overall parameters were:

100 GB database file, fully utilized
Transfer-size requests ranging from 0.5 to 64 KB, resembling a typical file server
I/O queue depth steadily increasing from 1 to 512 requests, to identify the peak throughput for each configuration and to see how it reacted when saturated

We ran all tests three times and averaged the results, then analyzed the Iometer log files to identify patterns or trends.
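The sweep can be pictured as a matrix of transfer sizes and queue depths. The sketch below enumerates one plausible set of test points; the exact transfer-size steps are an assumption, since the text only gives the 0.5 to 64 KB range, and this is not an Iometer configuration file.

```python
# Enumerate the test points implied by the parameters above: transfer
# sizes from 0.5 KB to 64 KB and queue depths rising from 1 to 512
# (power-of-two steps assumed), with each point run three times and
# the results averaged.

transfer_sizes_kb = [0.5, 1, 2, 4, 8, 16, 32, 64]     # assumed steps within the stated range
queue_depths = [2 ** n for n in range(10)]            # 1, 2, 4, ..., 512
runs_per_point = 3

test_points = [(size, depth) for size in transfer_sizes_kb for depth in queue_depths]
print(f"{len(test_points)} test points x {runs_per_point} runs = "
      f"{len(test_points) * runs_per_point} Iometer passes per configuration")
```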

Results. These tests show overall performance for a file server load given a fixed number of disk drives, and also provide a relative comparison of different transport mechanisms. The key indicators are throughput (megabytes per second) and latency, or disk-response time (milliseconds).

The FAS3050c performed admirably throughout. It achieved maximum throughput on the IBM HS40 server, reaching 48.1 MBps, with a respectable response time of 29.1 ms, at a queue depth of 128 requests.

On the Sun V40z server, peak performance at a queue depth of 128 was 47.4 MBps, with a response time of 29.6 ms. Given the Sun server’s greater processing capacity, the slightly better results on the IBM HS40 tell us that CPU performance was not a bottleneck.

Under saturation conditions (a queue of 128+ requests), throughput flattened out and response time increased. Overall peak throughput was 48.7 MBps at a queue depth of 256 requests, and dropped thereafter. Response time rose steeply, from 29.1 ms at 128 requests to 126.8 ms at 512 requests.
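As a sanity check on those figures (our own, not part of the original analysis), Little’s Law relates outstanding requests, latency, and operation rate; applying it to the peak IBM HS40 data point yields an implied average transfer size that falls inside the 0.5 to 64 KB range used in the test:

```python
# Little's Law: outstanding requests = operation rate x response time.
# Applied to the peak IBM HS40 data point (queue depth 128, 29.1 ms,
# 48.1 MBps), it gives the implied average transfer size.

queue_depth = 128
latency_s = 0.0291            # 29.1 ms
throughput_mbps = 48.1        # MB per second

ops_per_second = queue_depth / latency_s                    # ~4,400 operations/s
avg_transfer_kb = throughput_mbps * 1024 / ops_per_second   # ~11 KB per operation

print(f"~{ops_per_second:.0f} ops/s, ~{avg_transfer_kb:.1f} KB average transfer")
```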

Load Stress Testing (Storage Subsystem)

As mentioned earlier, Exchange is extremely demanding of the storage subsystem, with high I/O demands that require an extremely responsive configuration. To discover whether a storage system is configured properly for Exchange, Microsoft provides the Jetstress load-simulation tool, which mimics the I/O profile and load for a specified Exchange configuration. We used Jetstress to see whether the NetApp FAS3050c array would provide adequate disk-response time in our Exchange test environment (single server, 6,000 users, etc.). Our parameters included the array’s hardware storage cache size of 3,000 MB, to ensure the test would overwhelm the cache and provide a true indication of performance under load.

Test Setup. We asked NetApp to specify a storage configuration appropriate for this scenario. The resulting setup contained 64 drives for Exchange data and logs, with 32 drives residing on each filer head (144 GB, 15,000 rpm Fibre Channel drives were used throughout).

We configured four Exchange storage groups with the maximum of 5 databases each (20 databases total). A separate 384 GB data LUN was created for each storage group of five databases, along with a corresponding 115 GB log LUN.

Fifty-two of the drives were configured to house the Exchange data, half on each filer. The 26 data drives on each filer were configured as a single RAID-DP aggregate. Two FlexVols (virtualized disk volumes) were then created inside this aggregate, each with a single LUN to house a storage group. A 6-disk RAID-DP aggregate was created on each filer exclusively for logs, with each aggregate containing 2 log LUNs corresponding to the two storage groups on that filer.

Once the NetApp array was configured per vendor specifications, we executed the Jetstress performance tests for the default two hours. We ran each test three times and averaged the results, then analyzed the Jetstress reports to identify patterns or trends.
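The drive and LUN accounting described above can be summarized in a short sketch; the figures come from the text, and the arithmetic is purely a consistency check on our part:

```python
# Sanity-check of the Jetstress storage layout: 26 data drives and 6 log
# drives per filer head, across two heads, with one 384 GB data LUN and
# one 115 GB log LUN per storage group. Illustrative accounting only.

drives_total = 64
data_drives_per_filer = 26          # one RAID-DP data aggregate per filer
log_drives_per_filer = 6            # one 6-disk RAID-DP log aggregate per filer
filers = 2

storage_groups = 4
data_lun_gb = 384                   # one data LUN per storage group
log_lun_gb = 115                    # one log LUN per storage group

assert filers * (data_drives_per_filer + log_drives_per_filer) == drives_total  # 52 + 12 = 64

provisioned_gb = storage_groups * (data_lun_gb + log_lun_gb)
print(f"{provisioned_gb} GB of LUNs across {drives_total} drives "
      f"({filers * data_drives_per_filer} data, {filers * log_drives_per_filer} log)")
```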

Results. The key factors here are disk-response time, I/Os per second, and CPU utilization.

Disk-response time (seconds/read): Exchange’s maximum read latency is 20 ms, after which it begins to error. All FAS3050c configurations returned passing scores: average disk-response time was 15.5 ms across the six configurations, and no test run exceeded 17 ms. Fibre Channel and iSCSI delivered comparable results. In particular, after taking into consideration the differences in reads/second, iSCSI equaled or bettered Fibre Channel on the IBM HS40 server.

I/Os per second (IOPS): The goal was to achieve close to 6,000 IOPS without going under. (This figure represents the anticipated storage load for 6,000 users running at 1 I/O per second each.) All the FAS3050c configurations successfully exceeded 6,000 IOPS.

CPU utilization: The transport mechanism significantly affected these results, particularly on the IBM HS40 server, which had less CPU capacity. Fibre Channel produced fairly low CPU utilization (5.5%). However, when the IBM HS40 ran iSCSI over its embedded network-interface card (NIC), CPU utilization escalated to 9.6%, which outweighed the greater IOPS produced (7,155, about 10% greater than other configurations). Thus, this configuration may not be appropriate in scenarios with constrained CPU resources. On the Sun V40z, however, CPU utilization remained low regardless of transport type, ranging from 3.5% to 5.7%, thanks to its greater processor capacity.

Load Stress Testing (End-to-End)

To obtain a more thorough understanding of Exchange Server performance in conjunction with the NetApp FAS3050c array, we performed a series of tests with Loadsim, a Microsoft tool designed to stress an Exchange configuration end-to-end by focusing on activities related to email loads.

Test Setup. We used our standard Exchange environment and the vendor-specified storage configurations created for the Jetstress testing (for details, see “Test Setup” in the previous section). We also needed to specify a Loadsim profile, a series of actions that mimic real-world Exchange tasks such as reading, writing, and deleting messages; calendaring activities; etc. For this test suite, we selected the MMB3 profile, which is commonly used in the industry and mimics a fairly heavy load that was closely aligned with our metric of 1 I/O per user per second.

All Loadsim tests were run for 7 “daytime” hours. As with other test waves, multiple transport configurations were tested on both the IBM HS40 and Sun V40z servers. Additionally, the 6,000 users were evenly distributed across the 20 Exchange databases, with 300 users in each.

Results. Loadsim provides a score based on weighted average response time (in milliseconds) across a host of activities, such as the time required to open a mail message or to create a new calendar item. Once again, the key factors are disk-response time, I/Os per second, and CPU utilization.

Average response time: Microsoft defines a passing score as 1,000 ms (1 second), and all configurations handily beat this ceiling, with none exceeding 500 ms. Despite the difference in processing power of the two servers, results were very similar, with five scores in a narrow 375–390 band. Only the IBM HS40 running iSCSI lagged slightly (451 ms). This essentially tells us that the systems were not CPU-constrained.

Database read: As mentioned previously, disk-response time (latency) must remain below 20 ms or Exchange will deliver unpredictable results. All configurations delivered outstanding performance, with none exceeding even 10 ms within a single Exchange storage group (1,500 users across five databases).

Database write: Again, all configurations responded admirably, well under the 20 ms latency limit, with little difference between Fibre Channel and iSCSI setups, particularly on the Sun V40z server. The lowest response time was 2.2 ms, delivered by both the IBM HS40 via Fibre Channel and the Sun V40z via the Alacritech iSCSI card.

Log write: There were no issues related to log performance, and the effect of cached array writes along with sequential writes was very apparent. The lowest latency was 0.5 ms—500 microseconds!—on the IBM HS40 server with Fibre Channel. And the longest latency was a mere 1.6 ms on the Sun server running embedded iSCSI.

CPU utilization: Overall CPU demand was quite low; the highest was 33% on the IBM server running iSCSI. On the larger-capacity Sun server, CPU utilization varied from 13% to 14.8%. This highlights the fact that, while Exchange is very I/O-intensive, modern computers have plenty of CPU capacity for even moderately large configurations such as our 6,000-user setup.


High Availability

For most organizations, server availability is a key concern, particularly for mission-critical applications such as email. This often warrants additional investment to ensure high availability. A common method of implementing Exchange Server in a highly available configuration is to use Microsoft Cluster Server (MSCS) software, which enables multiple servers to work together as one machine.

When a failure occurs on one server within the cluster, the operations that the failing server was hosting will automatically restart themselves on another server in the same cluster. This process of transferring services is called failover.

Test Setup. Our goal in this test wave was to validate cluster performance during failovers for the NetApp FAS3050c array. We used the same Exchange environment and vendor-specified storage configurations as in previous tests, with one change: we tested only a single server configuration, a clustered pair of IBM HS40 blade servers. Connectivity to the FAS3050c cluster was through iSCSI over the HS40’s embedded Ethernet NIC.

Two types of tests were conducted: graceful failovers and hard failovers. To provide a realistic environment, all failovers were executed under load. In each case, 3,000 Loadsim users were running before the failovers were initiated.

A graceful failover was initiated through a Move group action in the Cluster Administrator MMC. This triggers a safe and orderly shutdown of Exchange on the affected server, transitions the cluster resources to the second server, and then brings the Exchange resources back online. Graceful failovers were performed three times each.

A hard failover is produced by a critical system failure, such as an electrical outage, failure of the network, or loss of storage connectivity. We simulated a hard failure by pulling the active HS40 blade server from its chassis. This is obviously not a recommended approach and carries with it the potential for data corruption. A single hard failover was executed.
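The paper does not describe how user outage time was measured during these failovers. One simple approach is to probe a client-facing port in a tight loop and record how long connections fail; the sketch below illustrates that idea with a hypothetical host name and port, and is not the tooling used in these tests.

```python
# Probe a client-facing service (SMTP on port 25 here, a hypothetical choice)
# once per second and report how long each outage lasted. Start this before
# initiating the failover; stop it with Ctrl+C afterwards.
import socket
import time

HOST, PORT = "exchange-mbx01.example.com", 25   # hypothetical server name and port

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

outage_start, longest_outage = None, 0.0
while True:
    now = time.time()
    if is_up(HOST, PORT):
        if outage_start is not None:
            longest_outage = max(longest_outage, now - outage_start)
            print(f"Service restored; outage lasted {now - outage_start:.0f} s "
                  f"(longest so far: {longest_outage:.0f} s)")
            outage_start = None
    elif outage_start is None:
        outage_start = now          # outage begins on the first failed probe
    time.sleep(1)
```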

Results. We captured two data points for both types of failover: cluster move duration and total user downtime.

Graceful failover: In this scenario, move duration was 1 minute, 40 seconds, and user outage time was 2 minutes, 25 seconds.

Hard failover: The cluster recovered quickly (1 minute, 14 seconds), and impact to end users was 2 minutes, 27 seconds. (Hard failovers recover faster than graceful failovers because no time is spent shutting services down.)

Both results are very respectable, and indicate that cluster integration with the FAS3050c operated as expected.

Data Backup and Recovery

So far, we have shown that the NetApp FAS3050c array can meet the storage and processing demands of Microsoft Exchange in a typical enterprise environment. An additional limiting factor—often the greatest constraint for any Exchange Server implementation—is the capability to back up and recover Exchange data within a given maintenance window.

Test Setup. Our final wave of tests explored the array-based backup and recovery features included with the FAS3050c storage array. We used the same Exchange environment as in previous tests, and implemented NetApp’s recommended backup configuration, giving the vendor an opportunity to showcase its products and differentiation. Tests were performed at the storage group level, first for a single group (1,500 users), then for two storage groups (3,000 users).

To obtain meaningful test results, a 5-day test was executed for each configuration, using Loadsim to simulate the user load throughout. No backups were done the first day to provide a performance baseline. Backups were then performed three times per day for the next four days, with a full verify after the evening backup. At any given time, there were seven rolling snapshot copies of 3,000 mailboxes residing in two storage groups along with their logs.


Snapshot Technology. NetApp believes its snapshot technology is a key market differentiator, and indeed, its approach is quite different from that of many other storage vendors. When NetApp’s Data ONTAP software writes data to disk during normal activity, it does not update changed blocks in place; it writes the data, whether new data or updates to existing data, to free disk space. Thus, Data ONTAP is constantly writing to new space, and uses a pointer structure to determine which blocks constitute an object (e.g., a file) on disk. Furthermore, instead of deleting old data, Data ONTAP can leave it in place and create another pointer table that points to the old blocks: an instant snapshot.
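The write-anywhere, pointer-based idea can be illustrated with a toy data structure: a “file” is a table of block pointers, every write allocates a fresh block, and a snapshot is just a frozen copy of the pointer table. The sketch below is a deliberately simplified model of the concept, not Data ONTAP’s actual implementation.

```python
# Toy model of write-anywhere snapshots: data blocks are never updated in
# place; a write allocates a new block and repoints the active table, and
# a snapshot simply preserves the old pointer table. Greatly simplified.

class ToyVolume:
    def __init__(self):
        self.blocks = {}            # block number -> data
        self.next_block = 0
        self.active = {}            # logical offset -> block number
        self.snapshots = {}         # snapshot name -> frozen pointer table

    def write(self, offset, data):
        self.blocks[self.next_block] = data      # always write to free space
        self.active[offset] = self.next_block    # repoint the active table
        self.next_block += 1

    def snapshot(self, name):
        self.snapshots[name] = dict(self.active)  # instant: copy pointers, not data

    def read(self, offset, snapshot=None):
        table = self.snapshots[snapshot] if snapshot else self.active
        return self.blocks[table[offset]]

vol = ToyVolume()
vol.write(0, "mailbox v1")
vol.snapshot("nightly")
vol.write(0, "mailbox v2")                        # old block is left untouched
print(vol.read(0), "/", vol.read(0, "nightly"))   # mailbox v2 / mailbox v1
```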

For integration of these snapshots, as well as creation and execution of snap-based backups and restores, NetApp supplies SnapManager for Microsoft Exchange. Storage space for snapshots must be provided within their respective NetApp volumes, and NetApp provides calculators for use in determining appropriate volume sizes based on factors such as number of Exchange users, mailbox size, rate of change, and number of snapshots to retain.
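NetApp’s calculators are not reproduced here; the sketch below only shows the general shape of such an estimate, with an invented formula and a placeholder change rate, purely for illustration.

```python
# Illustrative (not NetApp's) estimate of snapshot reserve space: each
# retained snapshot must hold roughly the data changed since it was taken.
# The daily change rate is a made-up placeholder value.

users = 3_000                    # two storage groups, as in the backup test
mailbox_mb = 150
daily_change_rate = 0.05         # assumed 5% of mailbox data changes per day
snapshots_retained = 7
snapshots_per_day = 3

active_data_gb = users * mailbox_mb / 1024
changed_per_snapshot_gb = active_data_gb * daily_change_rate / snapshots_per_day
snapshot_reserve_gb = snapshots_retained * changed_per_snapshot_gb

print(f"~{active_data_gb:.0f} GB active data, "
      f"~{snapshot_reserve_gb:.0f} GB snapshot reserve (illustrative)")
```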

Results. NetApp’s snapshot-based approach produced superior results, with no measurable impact on performance. Latency was the key performance indicator, and we measured two types:

Database latency (disk reads and writes): The ceiling was 20 ms, and the FAS3050c handily beat this, remaining essentially unchanged over the test period at 4 ms.

Client latency (end-user impact over the network): The ceiling was 1,000 ms, and again the FAS3050c outperformed, averaging 167 ms.

Transport Alternatives

A secondary goal of these tests was to assess various iSCSI options for storage transport, to determine their fit under different loads in an Exchange environment and to compare their performance to that of Fibre Channel.

Fibre Channel has long been the standard storage interconnect due to its excellent performance characteristics. However, Network Appliance strongly supports the iSCSI protocol, contending that it is a perfectly viable storage transport for many scenarios. iSCSI, a block-level storage communications protocol, has been around for years but traditionally was relegated to low- to mid-tier scenarios. iSCSI uses traditional Ethernet components for its infrastructure, and thus is very cost effective on a per-port basis. Additionally, existing investments in Ethernet management and infrastructure can be leveraged in an iSCSI environment.

Test Setup. To measure transport performance, we executed the Iometer and Exchange tests via both Fibre Channel and several iSCSI setups. Data presented here is taken from our Iometer tests (see the “General Performance” section), where we configured six connectivity options.

On the IBM HS40 blade server, we compared Fibre Channel to iSCSI run over the embedded network-interface card (NIC).

On the Sun V40z server, we evaluated Fibre Channel against three iSCSI setups: Sun’s embedded NIC, a function-built Alacritech iSCSI TOE (TCP offload engine) card, and a QLogic iSCSI HBA (host bus adapter).

For the Fibre Channel tests, we used the popular 2 Gbps format, leveraging QLogic HBAs in both the IBM HS40 and Sun V40z servers. The FAS3050c array has eight 1 Gbps copper Ethernet connections; for the iSCSI tests, four of these were connected to our gigabit Ethernet switching infrastructure.

Results. Comparing the six connectivity configurations required extensive test runs, but yielded interesting insights.

iSCSI and Fibre Channel turned in very similar performances: 48.2 MBps for iSCSI vs. 47.2 MBps for Fibre Channel (see chart in the “General Performance” section). The only reportable difference occurred when the system hit the wall at 256+ requests on the IBM HS40 server; then we saw iSCSI results take a slight dip before leveling out.

For the three iSCSI options on the Sun V40z, CPU utilization varied the most, ranging from 2.0% (QLogic) to 4.1% (native NIC). However, a maximum utilization of 4.1% is still low and quite acceptable for almost any scenario.

The network-interface card (NIC) built into the Sun V40z server was a top performer, equaling or besting the purpose-built iSCSI cards for throughput, response time, and CPU utilization. Remember, this is essentially a free connection! When you consider cost relative to performance (third-party cards cost USD $800–$1,000), the embedded NIC becomes even more appealing for a NetApp setup.

Conclusions

Our goal in this lab comparison was to evaluate the suitability of Network Appliance’s storage arrays for large-scale Microsoft Exchange Server 2003 implementations.

The outcome is clear: The NetApp FAS3050c storage array and its accompanying software passed all tests, many with outstanding results. NetApp’s products are viable options for Exchange Server deployments, and are likely interchangeable with those of other vendors in most scenarios.

In deciding whether to deploy NetApp storage solutions, you should weigh factors such as differentiating features, ease of use, configuration, flexibility, price, and cultural alignment. Throughout the testing, we documented our experiences, best practices, problems, and other considerations. Below are our key takeaways for the NetApp array and for our secondary evaluations of blade servers and iSCSI transport options.

NetApp

Architecture: NetApp’s approach to storage virtualization and configuration is different from that of most other storage vendors. So, if your experience is with traditional storage architectures, you may spend time ramping up on NetApp’s technology.

Consistency: The NetApp FAS3050c array passed all tests and delivered impressive performance marks. However, superior results were interspersed with less impressive numbers; variability could be as much as 10%.

SnapDrive: This client-side software package greatly simplifies many configuration tasks associated with disk setup, and is a significant differentiator for NetApp. If you implement a NetApp storage solution, you should probably include SnapDrive.

SnapManager for Microsoft Exchange: It makes array-based backups easy to configure and operate, and was exceedingly simple to install. During our testing, we had it set up and performing backups within three hours.

Quick and easy configuration: A significant bonus of the NetApp architecture. Once you have a basic understanding of the architecture and configuration options, it’s easy to create large, complex configurations rapidly. Our test setups bore this out; the architecture’s simplicity, coupled with the SnapDrive software, made the process straightforward and quick to execute.

[Chart: NetApp iSCSI variations on the Sun V40z. Throughput (MBps) and CPU utilization (%) at a queue depth of 128 for the Alacritech, QLogic, and native-NIC iSCSI options.]

Single Mailbox Recovery: While not discussed during the tests, this software warrants mention. SMR is an adjunct to SnapManager that leverages Exchange snapshots and can mount them to restore individual users’ mailboxes and even individual messages. Other products can do this, but the integration of SMR with NetApp’s software suite is noteworthy.

Blade vs. Traditional Servers

Comparing blade servers to traditional servers was an important aspect of these trials. Both Avanade and industry experts have debated the viability of blade servers for storage-intensive roles, and these tests seemed a perfect opportunity to find out.

Based on our results, blade servers appear powerful enough for I/O-intensive applications such as Exchange Server. Given many of the other advantages of blade servers—density, power and cooling benefits, simplified connectivity—they should be considered for almost any computing role.

Note that the architectures of the blade and traditional servers used in these tests were very different. The traditional Sun Fire V40z server had AMD Opteron processors, while the IBM BladeCenter HS40 had Intel Xeon processors. No doubt this accounted for some of the differences in results.

iSCSI Connectivity

Going into these tests, we expected Fibre Channel to beat iSCSI as the optimal transport for storage communications. Instead, we found that iSCSI is a perfectly viable protocol given the right situation. With one exception, the data throughput did not exceed the 1 Gbps capacity of our iSCSI Ethernet links, effectively marginalizing any benefit from the superior 2 Gbps throughput of the Fibre Channel transport. In fact, iSCSI bested the Fibre Channel results for both the blade server and traditional server in a number of the NetApp tests.

When you factor in iSCSI’s cost benefits, flexibility, and ease of use, it becomes a very compelling alternative, and one that may warrant in-depth consideration for your storage network.

However, note that we spent some time architecting an optimal iSCSI environment in the lab, approaching its design with the same philosophy and mindset as a Fibre Channel infrastructure. The good news is that iSCSI equipment is cheaper and more readily available than Fibre Channel hardware, and the skills required to build IP-based infrastructures are more common than those required for Fibre Channel architectures.


Avanade’s Expertise

Avanade is the leading technology integrator specializing in the Microsoft enterprise platform. Our people help customers around the world maximize their IT investment and create comprehensive solutions that drive business results.

Avanade consultants have the highest concentration of Microsoft certifications in the industry, and have collectively passed more than 8,000 Microsoft certification exams.

We have 11 professionals who have earned the Microsoft Certified Messaging Architect certification, the most advanced industry credential in IT architecture.

We take a rigorous engineering approach to solutions development, and have a team dedicated to designing, developing, and testing customer solutions.

Our delivery model links onsite and near-shore consultants with a Global Delivery Network of offshore resources to leverage our global expertise and save customers money.

Avanade is passionate about providing the highest value to each customer, and our customer satisfaction has reached 97 percent.

Biography:

Patrick Cimprich, Chief Solutions Architect, Infrastructure & Security, Avanade

Patrick has global responsibility for designing and delivering assets and tools used by Avanade customers and consultants to deliver infrastructure and security solutions.

Patrick has 17 years of experience in the IT industry in systems consulting, software development, and IT operations. His experience includes large-scale application infrastructure environments, storage expertise, SAP systems, and application development with a wide range of technologies including Microsoft Windows, UNIX, and mainframe environments. He has worked within multi-vendor environments with complex systems for a broad range of industry sectors.

Patrick has been with Avanade since 2000 and holds a degree in Management Information Systems from the University of Dayton.

References

General

Iometer: http://www.iometer.org/

Jetstress (Microsoft Exchange Server Jetstress Tool): http://www.microsoft.com/downloads/details.aspx?familyid=94b9810b-670e-433a-b5ef-b47054595e9c&displaylang=en

LoadSim (Microsoft Exchange Server 2003 Load Simulator): http://www.microsoft.com/downloads/details.aspx?familyid=92eb2edc-3433-47ca-a5f8-0483c7ddea85&displaylang=en

Microsoft Exchange Best Practices Analyzer v2.7: http://www.microsoft.com/downloads/details.aspx?familyid=dbab201f-4bee-4943-ac22-e2ddbd258df3&displaylang=en

Network Appliance

NetApp Technical Library: http://www.netapp.com/library/tr/fulllib

RAID-DP: Network Appliance Implementation of RAID Double Parity for Data Protection (PDF file): http://www.netapp.com/library/tr/3298.pdf

FlexClone Volumes: A Thorough Introduction (PDF file): http://www.netapp.com/library/tr/3347.pdf

Introduction to Data ONTAP, Release 7G (PDF file): http://www.netapp.com/library/tr/3356.pdf

SnapManager 3.2 for Microsoft Exchange: Best Practices Guide (PDF file):
