Optimize VDI with Server-Side Storage Acceleration

Eliminate Storage Bottlenecks for Fast, Reliable Virtual Desktop Performance


Virtual Desktop Infrastructures (VDI) give users easy access to applications and data, while giving IT an efficient way to manage desktop resources.

However, Citrix XenDesktop, VMware Horizon View, and other VDI solutions are susceptible to storage performance limitations that can hamper VDI performance. This often makes it difficult to deploy and scale virtual desktops effectively.

To date, the best way to solve this challenge has been to add storage performance by buying more storage capacity. This can be quite costly, substantially increasing the cost per virtual desktop – especially in large deployments. In addition, it may not solve the VDI performance issue, as the storage device is often several hops away from the user and the host devices. Any of the hops between the VM and the data could become a bottleneck, causing overall performance to drop.

PernixData FVP™ software solves the problem of cost-effective storage performance for VDI. It is a 100% software solution that accelerates both reads and writes issued to primary storage by clustering high-speed server resources, such as flash. The result is an 80% reduction in VDI I/O latency at a fraction of the cost of a storage upgrade, making PernixData FVP software strategic to any VDI deployment.

Challenges with VDI

VDI projects fail when the switch from physical to virtual desktops results in a noticeable decrease in responsiveness of the users’ virtualized desktop. This decrease in performance can often be traced to the storage array. Below are some unique VDI issues that exacerbate the situation:

I/O Storms (Boot, Login, Logout, Shutdown)

An I/O storm is a huge spike in I/O load that arises when many end users create a significant amount of storage activity at the same time. One example is a "boot storm", which happens when many users start VDI sessions simultaneously (e.g. when first arriving at work every morning). Other causes of I/O storms include large numbers of concurrent logins, logouts or shutdowns. To put the impact of a boot storm into perspective, a dedicated hard drive (typically a 7.2K rpm SATA drive) in a physical PC can deliver 75 to 100 IOPS, and at boot up all of those IOPS are consumed for a few minutes. Multiplied across hundreds or thousands of virtual desktops, this creates a tremendous load on the shared storage array. The resulting storage bottlenecks slow boot ups and cause desktops to become unresponsive, driving up complaints to the IT help desk.

Companies could "size" their storage array to handle these storms, i.e. buy enough capacity (and the IOPS that come with it) to cover this worst-case scenario. Unfortunately, this is both expensive and inefficient: during normal operations, a typical VDI user requires only a fraction of the IOPS needed at boot time (e.g. < 25 IOPS on average). So, rather than buy IOPS that sit idle most of the day, companies often choose to suffer through poor VDI performance at peak times.
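To make the sizing gap concrete, the rough arithmetic below combines the figures quoted above (roughly 100 IOPS per desktop during boot, under 25 IOPS on average). The 500-desktop count is an illustrative assumption, not a figure from this paper.

```python
# Rough sizing-gap arithmetic using the IOPS figures quoted above.
# The 500-desktop count is an illustrative assumption.
DESKTOPS = 500
BOOT_IOPS_PER_DESKTOP = 100     # upper end of a dedicated 7.2K rpm SATA drive at boot
STEADY_IOPS_PER_DESKTOP = 25    # typical average during normal operation

boot_storm_iops = DESKTOPS * BOOT_IOPS_PER_DESKTOP       # 50,000 IOPS
steady_state_iops = DESKTOPS * STEADY_IOPS_PER_DESKTOP   # 12,500 IOPS

print(f"Boot storm demand:   {boot_storm_iops:,} IOPS")
print(f"Steady-state demand: {steady_state_iops:,} IOPS")
print(f"An array sized for the storm is ~{boot_storm_iops / steady_state_iops:.0f}x "
      "over-provisioned for the rest of the day")
```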

AntiVirus (AV) Scan and Update

Many companies have AV agents installed on virtual desktops that constantly scan for viruses and download new definitions. When the desktop is a physical computer, these processes only tax the resources of that individual machine. When desktops run in a VDI environment, the AV agents tax the shared storage array, thereby impacting the performance of all VMs accessing that array.

KEY BENEFITS

Reduce VDI latency by more than 80%


Read / Write Mix

VDI environments have a mix of both reads and writes. Writes are especially interesting because they can place a substantial burden on storage performance. In RAID environments, for example, a single write from a virtual desktop can result in multiple writes to storage, also known as the RAID penalty for writes. The simplest setup is RAID 1, which is essentially a mirror, so two writes are executed for every write from the VM. Other RAID configurations require even more processing. As writes from the VM increase, the storage is taxed even more, causing slowdowns. The read/write mix with VDI is generally 40% reads and 60% writes, but in practice companies have seen writes account for 80 to 90% of I/O.
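As a rough illustration of how the write penalty multiplies load on the array, the sketch below applies the commonly cited small-write penalties (2 for RAID 1, 4 for RAID 5, 6 for RAID 6) to a 40/60 read/write mix. The per-desktop workload and desktop count are assumptions chosen only for the example.

```python
# Front-end vs. back-end IOPS under a RAID small-write penalty.
# Penalties are the commonly cited values; the workload figures are
# illustrative assumptions, not measurements from this paper.
def backend_iops(frontend_iops: float, write_fraction: float, write_penalty: int) -> float:
    """Each front-end read costs one back-end I/O; each write costs `write_penalty`."""
    reads = frontend_iops * (1 - write_fraction)
    writes = frontend_iops * write_fraction
    return reads + writes * write_penalty

frontend = 500 * 25   # assumed: 500 desktops at 25 IOPS each
for raid, penalty in [("RAID 1", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    total = backend_iops(frontend, write_fraction=0.6, write_penalty=penalty)
    print(f"{raid}: {frontend:,} front-end IOPS -> {total:,.0f} back-end IOPS")
```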

Flash to the Rescue

Many companies are looking at adding flash storage to improve the performance of their VDI deployments. One option is to put flash in the storage array. This provides a substantial performance improvement over spinning disks, but is quite expensive and may not resolve VDI latency issues. This is because all VMs accessing or writing data on a storage array must traverse the storage fabric and the storage processor before reaching the storage media (in this case, flash). Even if flash in the storage array increases the speed of reads from and writes to media, bottlenecks in other areas still exist that cause I/O latency.

An alternative approach is to use server flash to accelerate storage performance. Like flash in the storage array, this approach leverages the massive IOPS of flash to increase storage performance. But unlike flash in the array, server flash is much more cost effective, easy to scale out (along with CPU and memory), and promises better response times by placing the flash in the hosts, closest to the application.

Decoupled Storage Architecture

PernixData FVP™ software optimizes server flash for VDI, delivering the best possible performance in the simplest and most cost-effective manner.

FVP software clusters high-speed server resources, like flash, into a logical pool of resources to accelerate reads and writes to primary storage. This can optimize both read- and write-intensive workloads, making it ideal for VDI desktops. FVP software installs inside the hypervisor in minutes, and requires no changes to VMs, hosts, or the storage array.

Essentially, FVP puts storage intelligence into the server, creating a true scale-out architecture for storage performance. By decoupling storage performance from storage capacity, IT staff can cost-effectively grow their storage environment to meet the requirements of VDI.

Below are examples of how a decoupled storage architecture solves the VDI challenges mentioned above:

I/O Storms (Boot, Login, Logout, Shutdown)


For boot and login storms, the bulk of the I/O is reads of the same operating system and application blocks across many desktops; after the first access, those blocks are served from server flash rather than the array. For logout or shutdown storms, the key issue is writing all the data from the VDI VMs to the storage array. In this case, the FVP acceleration layer absorbs the writes from the VMs as soon as they are issued and then asynchronously destages the data to the storage array in a consistent, predictable manner. In effect, the FVP platform insulates the VDI VMs from the storage array: their writes are acknowledged at the speed of server flash and buffered so the array never gets overwhelmed.
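The sketch below is a minimal, conceptual model of this write-back pattern. It is not PernixData FVP's implementation; the class, names, and rate-limiting scheme are hypothetical, chosen only to illustrate acknowledging writes at local-flash speed and draining them to the array at a capped rate.

```python
# Conceptual model of write-back acceleration with rate-limited destaging.
# NOT PernixData FVP's implementation; names and structure are hypothetical.
import queue
import threading
import time

class WriteBackLayer:
    def __init__(self, array_writer, destage_iops_limit: int):
        self._buffer = queue.Queue()          # stands in for the server flash tier
        self._array_writer = array_writer     # callable that writes to primary storage
        self._interval = 1.0 / destage_iops_limit
        threading.Thread(target=self._destage, daemon=True).start()

    def write(self, block_id, data) -> str:
        """Acknowledge the VM's write as soon as it lands in the local buffer."""
        self._buffer.put((block_id, data))
        return "ACK"                          # VM sees flash-speed latency

    def _destage(self):
        """Drain buffered writes to the array at a predictable, capped rate."""
        while True:
            block_id, data = self._buffer.get()
            self._array_writer(block_id, data)
            time.sleep(self._interval)        # keeps the array from being overwhelmed
```

However fast the VMs burst, the array behind this layer only ever sees writes at the configured destage rate.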

AntiVirus (AV) Scan and Updates

AV agents all point to the same location for updates. The first time this file is accessed, it comes from the storage array. Subsequent access is delivered from server flash, reducing latency by avoiding the trip back to the array.
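A minimal sketch of that read-caching behaviour is shown below, assuming a hypothetical fetch_from_array callable: the first access pays the trip to the array, and every later access on that host is served locally.

```python
# Minimal read-cache sketch for a shared file such as an AV definition update.
# `fetch_from_array` is a hypothetical callable standing in for a read that
# traverses the storage fabric to the array.
flash_cache: dict[str, bytes] = {}

def read_file(path: str, fetch_from_array) -> bytes:
    if path in flash_cache:               # hit: served at server-flash latency
        return flash_cache[path]
    data = fetch_from_array(path)         # miss: one trip to the storage array
    flash_cache[path] = data              # later readers on this host skip the array
    return data
```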

Read / Write Mix

As mentioned earlier, the key to avoiding the RAID penalty for writes is to insulate the VDI VMs from the storage array. FVP software acknowledges each VM write as soon as the data lands in the acceleration tier, then destages the writes to the storage array at a consistent, predictable rate the array can manage. Even if the array uses a RAID level with a large write penalty (e.g. RAID 6, with five additional back-end I/Os for every front-end write), and even if the VDI workload is heavily weighted toward writes, the impact of the RAID penalty on user experience is negligible.

Real World Examples

PernixData FVP software is deployed by companies of all sizes to accelerate the performance of Citrix XenDesktop, VMware Horizon View and other VDI applications. Here are some examples:

Regional Bank

A regional bank was having performance issues as they grew their Citrix XenDesktop deployment. They looked at upgrading their storage array, but it would cost over $100,000 and not guarantee an improvement in VDI latency. Instead, the bank invested in PernixData FVP software. For 1/6 the cost, they got 100K IOPS per host and 1K IOPS per desktop (based on a 100:1 consolidation ratio). This dropped VDI latency from 3 ms to 0.4 ms, enabling the company to grow their VDI deployment to meet end user demand.

FVP software lowered VDI latency from 3 ms to 0.4 ms for a regional bank.


Healthcare Provider

A healthcare provider was having issues delivering adequate VDI performance to over 700 VMware Horizon View desktops. PernixData FVP software solved the company’s VDI performance problems, reducing Windows boot times from 45 seconds to 8 seconds (an 82% reduction in latency). In addition, it saved the provider substantial money, costing $0.10 per IOPS as compared to ~$5.00 per IOPS with a storage upgrade.

[Chart: VDI latency in seconds, without vs. with PernixData FVP software, showing an 82% reduction in latency]

FVP Software lowered Windows boot times from 45 to 8 seconds for a healthcare provider.
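As a quick check of the figures quoted for the healthcare provider (all numbers below are taken directly from the text above):

```python
# Quick arithmetic check of the figures quoted above.
boot_before, boot_after = 45, 8                  # Windows boot time, seconds
reduction = (boot_before - boot_after) / boot_before
print(f"Boot-time reduction: {reduction:.0%}")   # ~82%

cost_fvp, cost_array = 0.10, 5.00                # dollars per IOPS, as quoted
print(f"A storage upgrade costs ~{cost_array / cost_fvp:.0f}x more per IOPS than FVP")
```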

Summary

VMware Horizon View, Citrix XenDesktop, and other VDI applications need responsive storage I/O to meet end user expectations. Given the nature of VDI, this can be difficult to achieve in a cost-effective manner. By clustering high-speed server resources such as flash to accelerate both reads and writes, PernixData FVP software decouples storage performance from storage capacity, delivering fast, predictable virtual desktop performance at a fraction of the cost of a storage upgrade.
