Providing the Best-of-Breed Private Cloud with NetApp and Windows Server 2012 R2










Many organizations have adopted virtualization as a standard for server workloads. They have seen cost savings through reduced hardware, reduced datacenter space, and improvements to server provisioning. But virtualization has not proven to be the “game changer” many organizations had envisioned. IT departments face huge challenges related to server sprawl, which has actually increased with virtualization. In addition, IT departments must deal with increased complexity of the storage and network fabric, a lack of automation, and long provisioning times for new virtual instances because complex processes must be followed to identify the correct compute resources. Although virtualization has brought benefits beyond the physical paradigm of one operating system per server, the truly optimal infrastructure for organizations comes with adopting a private cloud infrastructure.

The Benefits of the Private Cloud

The private cloud brings numerous capabilities to the adopting organization. These capabilities build on a virtualization foundation and add infrastructure optimization, high cost savings through increased resource utilization, and automated provisioning of server instances. Although exact definitions of a private cloud vary, key capabilities that bring the greatest value are universally accepted:

• Pooling resources—even with virtualization most companies must deal with highly siloed environments, with virtualization hosts isolated from other hosts, and highly segregated




storage, which leads to underutilized resources. A private cloud pools all resources together, leading to a much higher utilization of resources.

• Elasticity of services—many organizations are moving beyond individual virtual machines and are focusing on services with workloads that vary at different times of the year but that are limited in scalability because of the siloed nature of resources. With all resources pooled together under a single management infrastructure, the elasticity possible for services is far beyond that of basic virtualized environments. Additional instances of a service can be added as load demands and removed when no longer required.

• Abstraction of fabric resources—one of the biggest pain points in provisioning a new virtual environment is identifying the correct networking configuration at different locations and finding available storage that matches the performance characteristics and location of the virtual machine being created. A private cloud abstracts all elements of the fabric, allowing virtual machines to be created by simply stipulating the storage tier required, the logical networks to be connected to (e.g., “Production”), and a location. The private cloud management infrastructure automatically performs all the actions to meet the requirements.

• Self-service—a major benefit of public cloud services for business units is the easy, and almost instant, provisioning of new virtual environments. The private cloud allows the same end-user self-service capability within the boundaries of defined quotas, resources, and templates on internal resources.

• Showback and chargeback—with server workloads standardized on a private cloud, the ability to show business units the exact resources being used is vital, especially when self-service is utilized by business users. The ability to charge business units for the resources used is invaluable for many organizations.

These capabilities allow business users of a private cloud to have their resource requirements filled in near real time, giving the business great agility in responding to changing requirements and offering new services. The business can also focus on its applications and services without having to consider the underlying infrastructure.

Deploying the Best Private Cloud with Microsoft and NetApp

Windows Server 2012 R2, System Center 2012 R2, and the latest solution offerings from NetApp provide organizations with a best-of-breed private cloud solution that offers the highest level of scalability and functionality for today’s and tomorrow’s IT needs. A number of enhancements have made this possible, and organizations adopting the Microsoft/NetApp partner solutions enjoy an unparalleled integrated solution.

Windows Server 2012 R2 Hyper-V Key Features

Microsoft’s hypervisor, Hyper-V, has added significant features since its initial introduction in Windows Server 2008. Windows Server 2008 R2 introduced features such as Live Migration,


which allowed zero-downtime migration of virtual machines between hosts, and hot-add of disk storage. Windows Server 2008 R2 Service Pack 1 introduced key capabilities such as memory optimization via Dynamic Memory and virtualized GPU capabilities with RemoteFX. Windows Server 2012 Hyper-V represented the greatest technological leap, making Hyper-V an industry-leading, enterprise-ready hypervisor suitable for the largest and most complex workloads. Windows Server 2012 R2 builds on the work done in Windows Server 2012 with a focus on cloud scenarios. Rather than making incremental updates across the entire operating system, Microsoft focused on key areas to bring even more value to customers.

Today’s organizations want to standardize on how server instances are deployed. But a major pain point has been the need to maintain both physical and virtual processes because the virtualization platform is not capable of scaling to large workloads. Windows Server 2012 R2 Hyper-V resolves this issue, enabling the largest workloads to be virtualized with the following new scalability metrics:

• 64 virtual CPUs per virtual machine

• Full NUMA topology pass-through of the physical topology to the guest operating system. This is critical because modern servers have multiple physical processors with banks of memory directly connected to specific processors. The highest performance is achieved when processes use memory directly attached to the processing cores used for computation, which together form a NUMA node. When virtual machines are configured with more virtual processors than can be sourced from a single NUMA node, the NUMA topology pass-through enables NUMA-aware applications within the guest to make optimal resource usage decisions based on the physical server’s NUMA topology.

• 1TB of memory per virtual machine

• A new 64TB VHDX virtual disk format, allowing even the largest NTFS volumes to be virtualized and avoiding the need for pass-through storage because of volume size requirements. Windows Server 2012 R2 introduces the ability to dynamically resize a VHDX file while it is being used by a virtual machine.

• 64-node clusters containing up to 8,000 virtual machines

• A new Generation 2 virtual machine that is UEFI based instead of BIOS based and provides a virtual environment without legacy virtualized hardware such as the IDE controller and legacy network adapter. Generation 2 virtual machines can boot from the SCSI controller and from the synthetic network adapter, in addition to supporting UEFI features such as Secure Boot.

Windows Server 2012 R2 Hyper-V builds on these large scalability capabilities with features enabling organizations to architect a virtual environment that has full mobility for virtual workloads and that can take advantage of new capabilities in the latest generation of NetApp solutions. A key aspect of virtualization is breaking the coupling between the operating system and the underlying hardware, giving mobility to the operating system instance. Traditionally,


the mobility of a virtual machine has been limited to the boundary of a cluster, which has shared storage. While the benefits of a cluster for high availability have not diminished in Windows Server 2012 R2, it is no longer a boundary of mobility. Live Storage Move is a new feature that allows a virtual machine’s storage to be moved between supported media without any interruption to the virtual machine’s running state. For example, a virtual machine’s storage could be moved between different SANs or from local storage to a SAN with no downtime. Organizations strive to increasingly leverage shared storage through SANs, and Live Storage Move enables a completely seamless migration from local to SAN storage as part of a migration strategy. Live Migration was enhanced in Windows Server 2012 R2 to more efficiently utilize network bandwidth and speed up migrations by enabling compression of the migration data, which greatly reduces the time to perform migrations at the expense of some CPU cycles. Additionally, Live Migration can instead take advantage of networking hardware that supports RDMA to provide the fastest possible migrations without requiring compression, and therefore without using additional processor resources.

The ability to mirror a virtual machine’s storage and move it, combined with the existing mirroring technology for virtual machine memory and state present in Live Migration, enables Shared Nothing Live Migration. Shared Nothing Live Migration allows a virtual machine and its storage to be moved between any two Windows Server 2012 Hyper-V hosts with no downtime and without any shared infrastructure. The hosts do not need to be part of a cluster, and they can be connected to completely separate shared storage solutions. The virtual machine’s storage, memory, and state are all replicated as part of the Shared Nothing Live Migration process. This capability enables complete mobility of virtual workloads within the datacenter, allowing virtual machines to be moved with no downtime between stand-alone Hyper-V hosts, between different Hyper-V clusters, and between stand-alone hosts and clusters. As organizations work to balance resources and achieve the highest levels of optimization for both storage and compute, the ability to move workloads between all hosts and all storage subsystems with no interruption to service is a huge benefit.

With the numerous enhancements to storage-related capabilities in Windows Server 2012 R2 Hyper-V, it’s critical to ensure a balanced use of the IOPS available through the storage fabric by the various virtual workloads. Storage QoS is a core Hyper-V capability enabling a maximum IOPS value to be configured at a per-virtual-hard-disk level, in addition to a minimum IOPS value that notifies administrators when required IOPS are not being provided. Resource metering in Windows Server 2012 R2 provides detailed storage metrics to enable reporting on storage utilization.
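The per-disk IOPS cap at the heart of Storage QoS can be pictured as a simple rate limiter. The following is an illustrative sketch only (the class and method names are hypothetical, not Hyper-V code): each virtual hard disk admits at most its configured maximum number of I/Os per one-second window, and anything beyond the cap is deferred.

```python
# Hypothetical sketch of a per-disk IOPS cap in the spirit of Storage QoS.
import time

class DiskQoS:
    """Throttle I/O admission to a maximum IOPS per virtual hard disk."""
    def __init__(self, max_iops, min_iops=0):
        self.max_iops = max_iops
        self.min_iops = min_iops            # threshold for admin alerts
        self.window_start = time.monotonic()
        self.ops_in_window = 0

    def try_issue(self):
        """Return True if an I/O may be issued now under the cap."""
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a new one-second window
            self.window_start = now
            self.ops_in_window = 0
        if self.ops_in_window < self.max_iops:
            self.ops_in_window += 1
            return True
        return False                        # over cap: defer the I/O

qos = DiskQoS(max_iops=500)
issued = sum(qos.try_issue() for _ in range(600))
print(issued)  # 500 of the 600 attempts admitted within the window
```

In the real feature the cap is set per virtual hard disk (not per VM), which is why a noisy disk cannot starve its neighbors on the same storage fabric.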

A key benefit of the Windows Server Datacenter SKU has been that it includes licensing for unlimited Windows Server guest OS instances on that server; however, processes such as KMS or Active Directory-Based Activation had to be in place to activate the virtual machines. Windows Server 2012 R2 introduces Automatic Virtual Machine Activation (AVMA). With AVMA, if a guest operating system is any edition of Windows Server 2012 R2, it will automatically activate when running on an activated Windows Server 2012 R2 Datacenter host. This greatly simplifies key management, and also the mobility of virtual machines between different environments, without concerns over key usage.

It is clear that the compute and storage technologies of Hyper-V are comprehensive, and Windows Server 2012 R2 Hyper-V also brings complete network fabric abstraction through Network Virtualization. Network Virtualization allows tenants of the private cloud to use IP schemes of their choosing, completely abstracted from the physical network fabric, providing maximum flexibility and compatibility. Windows Server 2012 R2 includes an in-box gateway to enable connectivity from the virtual networks to physical corporate networks, to the Internet, and even to different sites.

Windows Server 2012 R2 Key Features

Although the new version of Hyper-V in Windows Server 2012 R2 brings many new features to the hypervisor role, other changes make Windows Server 2012 R2 a key component of a private cloud architecture.

A key element of any infrastructure component is its reliability and serviceability. Windows Server 2012 made the Server Core configuration the default for installation. Server Core installs only a subset of the operating system, which can be used for most infrastructure roles (e.g., Hyper-V) in addition to application server roles. The Server Core configuration has no graphical interface or local management infrastructure, which drastically reduces the number of patches required and, therefore, the associated reboots, keeping the server online longer. Windows Server 2012 built on the Server Core concept by allowing the graphical interface and management infrastructure to be added and removed at any stage of the Windows Server’s life. This gives administrators far more flexibility and enables Server Core to be adopted without concerns about managing a server without a graphical interface. Windows Server 2012 R2 supports almost every role on Server Core, in addition to reducing the Server Core disk footprint by 1GB through the use of compression.

As organizations strive to drive down costs and increase efficiency, three fabric elements of the infrastructure are key: compute, storage, and network. Windows Server 2012 R2 with Server Core and the new version of Hyper-V provides a highly optimized compute platform, and Windows Server 2012 R2 delivers new capabilities and integration with key partners such as NetApp to ensure the most efficient and highest-performing storage and network experience.


Server Message Block (SMB) has been a key file-based protocol in Windows since the earliest versions. However, SMB has not been considered a true, enterprise-ready protocol; it is typically utilized for responsibilities such as hosting user documents or software libraries, and not as a protocol for running enterprise workloads. Windows Server 2012 introduced support for SMB version 3.0, which enables SMB to be used as an enterprise file-based protocol through a number of enhancements. NetApp storage with SMB 3.0 support allows workloads running on Windows Server 2012 and above to fully leverage the capabilities of NetApp storage over existing network infrastructure with lower hardware requirements. In addition to high performance, an enterprise-level protocol needs to ensure resiliency, which SMB 3.0 achieves through two additional capabilities: SMB Transparent Failover and SMB Scale-Out. Typically, important file-share workloads are hosted in a file server cluster, which enables a file share to be hosted on any node in the cluster by moving the LUN associated with the file share between the nodes as required. Traditionally, if a file share moves between nodes in the cluster, handles and locks are lost. SMB Transparent Failover allows a file share to move between nodes in the cluster without any loss of handles or locks, even in unplanned failures, by storing the information on the disk in addition to the server memory and client state. This is a critical capability for enterprise workloads. SMB Transparent Failover also supports a Witness Service, which is fully implemented on NetApp highly available configurations to remove the brownout in connectivity. Typically, a brownout occurs if a TCP/IP client loses connectivity to a server, because the TCP/IP timeout must be exceeded before the client takes action—which could be as long as 30 seconds, not a suitable pause for enterprise workloads.
The Witness Service provides the ability for another node in the cluster to monitor the server being used by an SMB client. If the target server fails, the Witness Service proactively notifies the SMB client, which can then connect to an alternate server and avoid a prolonged brownout. For the most critical enterprise workloads, SMB Scale-Out is utilized. This allows an SMB file share to be served by all nodes in a cluster simultaneously, avoiding the need to move the LUN between nodes. All nodes have access to the LUN concurrently, providing the highest level of availability. Utilizing SMB Scale-Out and SMB Transparent Failover allows workloads such as Hyper-V virtual machines and SQL Server databases to be stored on SMB 3.0 file shares. SMB 3.0 is supported with the NetApp unified storage architecture in Data ONTAP 8.2, providing customers with additional storage access options for services based on Windows Server 2012 and above.
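The witness flow can be sketched conceptually as follows. This is an illustrative model only (the class names are hypothetical and this is not the actual MS-SWN witness protocol): a witness registers interest in the server an SMB client is using and, on failure, notifies the client immediately so it reconnects to another node instead of waiting out a TCP timeout of up to roughly 30 seconds.

```python
# Conceptual sketch of witness-based failover (hypothetical names).
class WitnessService:
    def __init__(self):
        self.registrations = {}   # watched server -> client callbacks

    def register(self, server, on_failure):
        self.registrations.setdefault(server, []).append(on_failure)

    def server_failed(self, server):
        # Proactive push notification: no client-side timeout needed.
        for notify in self.registrations.pop(server, []):
            notify(server)

class SmbClient:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.connected = self.nodes[0]

    def on_failure(self, failed_server):
        # Fail over right away to a surviving node in the cluster.
        self.connected = [n for n in self.nodes if n != failed_server][0]

witness = WitnessService()
client = SmbClient(["nodeA", "nodeB"])
witness.register("nodeA", client.on_failure)
witness.server_failed("nodeA")
print(client.connected)  # -> nodeB
```

The design point is that failure detection moves from the client (slow, timeout-driven) to the cluster (fast, event-driven), which is what removes the brownout.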

The new VHDX virtual hard disk format, mentioned earlier, provides a 64TB virtual drive; however, the new format provides more than just increased scalability. VHDX provides near-native disk performance, removing the need to use pass-through storage, which blocks features of Hyper-V such as Live Migration, Hyper-V snapshots, and Hyper-V


Replica. Additionally, organizations previously had to be concerned about alignment between a VHD file and the physical underlying storage. Misalignment would lead to performance degradation because access to blocks of a VHD file would cause multiple blocks on the underlying storage to be accessed. When created, VHDX files automatically align with the underlying storage, which removes the risk of performance degradation for virtual workloads.

All the Windows Server 2012 and above storage features can be fully realized through the NetApp unified storage architecture, along with additional features available with NetApp’s Data ONTAP storage operating system. The storage subsystem becomes increasingly critical as storage is centralized and the underlying disk actions, such as moving and copying data, become more frequent. To provide the most responsive and efficient solution, the manner in which data is handled is overhauled when Windows Server 2012 utilizes NetApp storage. Traditionally, when Windows needs to copy or move data, the operating system reads the data from disk into its memory and then writes the data to the new location. This uses up bandwidth to the storage subsystem, uses memory, and uses a significant amount of processor cycles on the host, which is a very inefficient process. Windows Server and NetApp enable Offloaded Data Transfer (ODX). ODX lets Windows Server take advantage of the native storage performance of NetApp intelligent storage arrays. When Windows Server, using ODX, wishes to copy data, a token is generated by the storage array that represents a point-in-time view of the data. The token is passed to the server instead of the actual data. The server can use that token as if it were the actual data, moving it to other parts of the storage array, between storage arrays, and sending it to other Windows Server 2012 and above servers. The underlying storage array performs the actual copy and move operations, typically orders of magnitude faster than the host performing the action.
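The token flow described above can be sketched in a few lines. This is a conceptual model only (hypothetical classes, not the Windows offload API): the array hands the host a token for a point-in-time range, the host passes only the token around, and the array materializes the data at the target itself, so no payload crosses the host.

```python
# Illustrative sketch of ODX-style token-based copy (hypothetical model).
import secrets

class StorageArray:
    def __init__(self):
        self.luns = {}        # LUN name -> bytearray of data
        self._tokens = {}     # token -> point-in-time snapshot of a range

    def offload_read(self, lun, offset, length):
        """Return a token representing a point-in-time view of the range."""
        token = secrets.token_hex(8)
        self._tokens[token] = bytes(self.luns[lun][offset:offset + length])
        return token

    def offload_write(self, lun, offset, token):
        """Materialize the tokenized data at the target; the array itself
        moves the blocks, so no data passes through the host."""
        data = self._tokens[token]
        self.luns[lun][offset:offset + len(data)] = data

array = StorageArray()
array.luns["src"] = bytearray(b"template-vhdx-blocks")
array.luns["dst"] = bytearray(20)

t = array.offload_read("src", 0, 20)   # host receives a token, not data
array.offload_write("dst", 0, t)       # array performs the copy internally
print(array.luns["dst"].decode())      # -> template-vhdx-blocks
```

Because the token stands in for the data, the host’s CPU, memory, and fabric bandwidth are left free for the workloads themselves.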

Consider copying a virtual machine template VHDX file. Without ODX the host would be reading and writing the data, using up a lot of host resources, and the operation could take minutes because the host bottlenecks the process. With ODX, the storage array performs the data transfer locally, reducing the total time from minutes to seconds and saving the host resources for their intended purposes. NetApp builds on the ODX premise to enable a level of performance and efficiency unique to NetApp that removes the copying of data at the storage level altogether. NetApp leverages its sub-LUN cloning capability to fulfill ODX requests. Instead of physically copying storage, the NetApp storage array creates two pointers to the same blocks containing the data and enables a write-forward snapshot so changes to the data are stored in a separate space. As data is copied, no additional disk space is utilized. And because data is not actually physically copied, the copy operation finishes near instantaneously. Another unique capability from NetApp is cross-protocol ODX, which enables


the efficient data move and copy capability even when different protocols are used to access the source and target locations; for example, copying data from a source accessed over SMB to a target accessed over iSCSI.
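The pointer-based, write-forward cloning idea can be made concrete with a toy model. This sketch is illustrative only (hypothetical classes, not Data ONTAP internals): a clone shares every physical block with its source, and new space is consumed only when a shared block is overwritten.

```python
# Toy model of pointer-based cloning with write-forward allocation.
class BlockStore:
    def __init__(self):
        self.blocks = []                  # physical blocks on the array

    def alloc(self, data):
        self.blocks.append(data)
        return len(self.blocks) - 1       # physical block number

class Lun:
    def __init__(self, store, pointers=None):
        self.store = store
        self.pointers = list(pointers or [])

    def clone(self):
        # No data is copied: the clone shares every physical block.
        return Lun(self.store, self.pointers)

    def write(self, index, data):
        # Write-forward: the changed data goes to newly allocated space.
        self.pointers[index] = self.store.alloc(data)

store = BlockStore()
template = Lun(store, [store.alloc(b"os"), store.alloc(b"apps")])
vm = template.clone()                     # near-instant, zero extra blocks
blocks_before = len(store.blocks)
vm.write(1, b"vm-specific")               # only now is a new block used
print(len(store.blocks) - blocks_before)  # -> 1
```

This is why the “copy” completes near instantaneously and consumes no space until the new virtual machine begins to diverge from its template.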

This capability is front and center in NetApp’s latest Microsoft Private Cloud Fast Track validated architecture, FlexPod Datacenter with Microsoft Private Cloud, which uses the new Fast File Copy VM deployment mechanism in Virtual Machine Manager 2012 R2. This deployment model replaces the traditional BITS copy with an SMI-S-enlightened ODX copy operation, resulting in extremely fast, efficient virtual machine copies even when using the native in-box tools to manage virtual machine deployments. Furthermore, because VMM is the workhorse of the Microsoft private cloud, those same speed and efficiency capabilities are extended to App Controller, the Service Manager service portal, and even the new Windows Azure Integration Pack and its tenant portal.

Of course these features are native to Windows Server 2012 R2, System Center 2012 R2, and NetApp’s Data ONTAP intelligent storage arrays. For customers looking for a head start in designing or deploying these features today, the latest Fast Track submission supplies such guidance in its FlexPod Datacenter with Microsoft Private Cloud solution, featuring full-featured Microsoft private cloud integration as well as enterprise-class implementation with no single point of failure anywhere in the solution stack.

NetApp Key Intelligent Storage Array Capabilities

NetApp provides industry-leading storage array and network-attached storage technology and, with its partner Cisco, delivers FlexPod, a complete server, storage, and networking solution. By utilizing NetApp solutions, additional capabilities are enabled in a private cloud implementation beyond the inherent capabilities standard with the Microsoft stack, including the highest levels of System Center integration, ease of deployment, and automation available in the market.

NetApp recently unveiled Clustered Data ONTAP, which builds on the intelligent, scalable, and industry-leading Data ONTAP 7 solution but has been re-engineered from the ground up to provide non-disruptive operations for every component of the solution. Clustered Data ONTAP leverages a semi-virtualized architecture that abstracts components, enabling seamless maintenance as well as full isolation of workloads and full quality of service.

With the new scalability limits supported by VHDX, the need to use pass-through storage has been removed. However, scenarios still exist where shared storage is required between virtual machines that cannot be achieved with a VHDX file. A common example: a guest


cluster is required for a number of virtual machines, which requires the virtual machines to all have access to the same NTFS LUNs. Prior to Windows Server 2012, the iSCSI protocol was the only option for accessing an external storage array. With Windows Server 2012 and NetApp solutions you can now leverage Fibre Channel connected storage through the use of virtual Fibre Channel adapters, which each have their own unique World Wide Port Name (WWPN). This allows the highest level of security because only the WWPN of the VM is given access to storage, and not the virtualization host itself. Additionally, using virtual Fibre Channel does not preclude the use of Live Migration, which allows virtual machines to still be moved between hosts with no downtime—even when using virtual Fibre Channel.

Server Message Block (SMB) is a key protocol for Microsoft workloads, including Hyper-V and SQL Server, and NetApp offers full SMB 3.0 protocol support. This enables SMB to be used from Hyper-V directly to NetApp storage, which provides the simplicity of a file-based protocol with the reliability typically associated with block-level protocols. NetApp supports key capabilities of SMB 3.0 that provide seamless and uninterrupted access from Hyper-V, including transparent failover and active-active services between members of the NetApp high-availability pair. The common bottlenecks associated with SMB on a server, related to single-processor utilization, do not apply to NetApp solutions, which fully distribute all incoming traffic over all available resources, even over a single SMB connection. The aforementioned ODX capability also works on NetApp storage over SMB, providing a true no-compromise solution.

Focusing on the core storage array features: because a private cloud promotes the use of centralized storage, it is critical that the storage optimizes the use of available space—and NetApp solutions ensure optimal space usage through a number of mechanisms. NetApp utilizes thin provisioning, enabling the storage array to intelligently assign space as needed and potentially redistribute it to ensure the highest levels of performance. Traditionally, many organizations have faced challenges related to duplicated data. Users store copies of documents that other users have already stored, wasting large amounts of storage. NetApp storage array data deduplication works at a block level to identify duplicate data between files, and even within the same file, to minimize wasted disk space without impacting disk performance. Because centralized storage is heavily used as a key component of a private cloud solution for the storage of virtualized operating systems, this same data deduplication becomes even more critical.

Many organizations standardize on a small set of operating system images, which are then deployed hundreds or thousands of times, all of which contain an almost identical operating system image. NetApp deduplication removes the wasted space associated with many similar operating system virtual machines. This capability can actually improve


disk performance, because the common set of disk blocks between the virtual machines will be cached in the NetApp cache, which provides the highest throughput. You may be familiar with the deduplication capability that was new to Windows Server 2012, and you might wonder whether it could remove the need for array-level deduplication. The reality is that while Windows Server 2012 R2 deduplication is a useful feature for fairly static data, it is not a real-time data deduplication solution. And while Windows Server 2012 R2 can deduplicate in-use data, which is the case for assets such as virtual machine storage, the only supported virtualized workload for deduplication is VDI virtual machines, not other workloads such as server OS virtual machines, SQL instances, and so on. NetApp storage array deduplication offers deduplication of all types of data without incurring performance penalties and should be preferred over Windows deduplication.
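The block-level principle behind this is easy to demonstrate. The sketch below is illustrative only (it is not Data ONTAP’s implementation): fixed-size blocks are hashed, identical blocks across and within files are stored once, and nearly identical VM images therefore collapse to a small set of unique blocks.

```python
# Minimal illustration of block-level deduplication across files.
import hashlib

def dedupe(files, block_size=4096):
    """Return (unique_blocks, logical_blocks) for a dict of name -> bytes."""
    seen = {}
    logical = 0
    for data in files.values():
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            logical += 1
            # Identical blocks hash identically and are stored only once.
            seen.setdefault(hashlib.sha256(block).hexdigest(), block)
    return len(seen), logical

# Two nearly identical "VM images" sharing their base OS blocks:
base = b"A" * 4096 * 3                         # three identical base blocks
files = {"vm1.vhdx": base + b"B" * 4096,
         "vm2.vhdx": base + b"C" * 4096}
unique, logical = dedupe(files)
print(unique, logical)  # -> 3 8  (8 logical blocks stored as 3 physical)
```

In a real array the shared blocks are also shared in cache, which is why deduplicating many similar OS images can improve throughput rather than degrade it.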

While the implementation of a private cloud and its capabilities may be new to many organizations, the concept of virtualization is likely already part of the IT architecture, possibly using alternate hypervisors such as ESX. NetApp provides powerful yet simple solutions to help in the migration of virtual machines between hypervisors. While ESX and Hyper-V use different virtual hard disk formats, the actual content is stored in a similar manner; only the header information of the various formats differs. NetApp provides utilities that allow the in-place conversion between hard disk formats, changing the header information only and providing very fast conversion without the need for additional disk space. This is a hugely beneficial feature for organizations migrating to Hyper-V, greatly reducing the time to perform the conversion and additionally removing resource usage on hosts during the conversion process. This process is reversible because the original VMDK remains untouched.
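The header-only rewrite concept can be sketched as follows. This is a deliberately simplified model (the 8-byte magic headers are invented, not the real VMDK/VHDX on-disk layouts): the conversion touches only the header region and leaves every data block in place, which is why it needs no extra space and finishes quickly.

```python
# Conceptual sketch of an in-place, header-only format conversion.
HEADER_LEN = 8  # hypothetical fixed-size header for this toy model

def convert_header(image: bytearray, new_magic: bytes) -> None:
    """Rewrite only the header; data blocks after it are untouched."""
    assert len(new_magic) == HEADER_LEN
    image[:HEADER_LEN] = new_magic

disk = bytearray(b"VMDKHDR!" + b"<data blocks>")
payload_before = bytes(disk[HEADER_LEN:])

convert_header(disk, b"VHDXHDR!")        # in place: no second copy of data
print(disk[:HEADER_LEN].decode(),
      bytes(disk[HEADER_LEN:]) == payload_before)  # -> VHDXHDR! True
```

Because only the header bytes change, the operation is O(header size) rather than O(disk size), matching the “minutes to seconds” behavior described above.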

Microsoft and NetApp have a video highlighting this better-together approach at mat4shift, which, while a little tongue-in-cheek, really emphasizes the migration capability the joint solution brings.

NetApp sub-LUN cloning was previously mentioned in relation to how NetApp storage arrays enable the most efficient ODX implementation possible. However, sub-LUN cloning is utilized in many other areas to quickly duplicate data. Sub-LUN cloning allows part of a LUN to be cloned to a new LUN, which is typically used in the creation of a new virtual machine from a template. With the write-forward capability previously described, the cloning is also space efficient. With sub-LUN cloning, the management infrastructure simply requests that the NetApp storage array duplicate the part of the LUN containing the virtual machine template to the newly created virtual machine location. This integration with the management infrastructure is enabled through FlexClone, which allows NetApp capabilities such as sub-LUN cloning to be utilized through Microsoft system management components such as System Center 2012 R2 Orchestrator.


NetApp storage intelligence brings additional benefits to Hyper-V environments, specifically around virtual hard disk management. There are a number of different types of virtual hard disk in Windows Server 2012 R2. A common type is a fixed VHD, which allocates all space on disk at creation time and zeroes all the content, which normally breaks any thin provisioning on the SAN. NetApp provides PowerShell cmdlets that create any type of virtual hard disk format, including fixed. However, under the covers the VHD is thin provisioned on the SAN; the cmdlet also creates a volume on the VHD and formats it, all using a single command. This reduces provisioning time from minutes using large amounts of host resources to seconds. NetApp tools also allow the growing and shrinking of virtual hard disk files, automatically growing and shrinking the partitions within the virtual hard disk, giving administrators even more flexibility. The thin provisioning of fixed-size virtual hard disks also avoids IO conflicts with other users of the storage, because the zeroing of every block, which can typically degrade performance on non-NetApp solutions, is removed.
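The idea of a thin-provisioned “fixed” disk can be modeled simply. This is an illustrative sketch only (hypothetical class, not the NetApp implementation): the disk reports its full size up front, but physical space is consumed only when a block is actually written, and reads of untouched blocks return zeros without any upfront zeroing pass.

```python
# Toy model of thin provisioning behind a fixed-size virtual disk.
class ThinDisk:
    def __init__(self, size_blocks, block_size=4096):
        self.size_blocks = size_blocks    # advertised ("fixed") capacity
        self.block_size = block_size
        self.allocated = {}               # block number -> data

    def write(self, block, data):
        self.allocated[block] = data      # space consumed only on write

    def read(self, block):
        # Untouched blocks read as zeros with no zeroing pass at creation.
        return self.allocated.get(block, b"\x00" * self.block_size)

    def physical_blocks(self):
        return len(self.allocated)

disk = ThinDisk(size_blocks=1_000_000)    # "fixed" disk, created instantly
disk.write(42, b"x" * 4096)
print(disk.physical_blocks())             # -> 1 (not 1,000,000)
```

Skipping the zeroing pass is what turns creation from a long, IO-heavy operation into a near-instant metadata update.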

Management Experience with a Microsoft and NetApp Solution

Ultimately, the success of any technology solution pivots on its manageability. A private cloud solution built on Microsoft and NetApp components provides organizations with a consistent manageability platform that embraces modern standards. A key NetApp differentiator is that NetApp focuses on providing storage solutions and integrating storage management with existing management tools, rather than selling management tools. This is a benefit for administrators and the larger organization because simplification and unification reduce management complexity, which is a major cause of problems in an environment. Microsoft System Center 2012 R2 is the principal component that enables tightly integrated management of the private cloud and NetApp storage arrays. System Center Virtual Machine Manager utilizes the new Windows Storage Management API, which can utilize SMI-S (a standard storage management interface) to query and manage storage. NetApp’s Data ONTAP SMI-S Agent fully supports Windows Server 2012’s, and therefore Virtual Machine Manager’s, use of SMI-S, but the integration goes much deeper.

NetApp provides the OnCommand Plug-in for Microsoft (OCPM), which supports integration with the key System Center components: Operations Manager for integrated monitoring; Virtual Machine Manager for rapid provisioning, including extending the Virtual Machine Manager graphical interface; and PowerShell cmdlets and Orchestrator Integration Packs that can be leveraged in Orchestrator runbooks. The PowerShell cmdlets can also be used outside of System Center as part of an organization’s automation processes, or used manually within a PowerShell environment. The Data ONTAP PowerShell Toolkit provides over 1,400 PowerShell cmdlets that enable automation of workflows without having to manually


launch storage-specific management tools. The OnCommand Plug-In for Microsoft also integrates tightly with Service Manager, which brings knowledge of the NetApp storage into the Configuration Manager Database (CMDB). Building on the tight integration, NetApp provides Integration Packs for System Center Orchestrator that expose a number of common NetApp storage operations which simplify including these operations in runbooks compared to utilizing multiple PowerShell Cmdlets. This close integration also allows storage states to trigger actions in System Center Operations Manager; for example, if high IO on the NTAP storage array is detected and reported in SCOM, this could trigger Virtual Machine Manager to auto-remediate the issue.
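The kind of check a runbook or monitoring rule might automate can be sketched with the Data ONTAP PowerShell Toolkit. The cluster name and the 80% threshold are assumptions for the example, and output properties may differ between toolkit versions.

```powershell
# Hedged sketch: a capacity check that an Orchestrator runbook or
# Operations Manager rule could run. Cluster name and threshold
# are placeholders.
Import-Module DataONTAP
Connect-NcController -Name cluster01 -Credential (Get-Credential)

# Report volumes above 80% utilization without opening any
# storage-specific management tool.
Get-NcVol |
    Where-Object { $_.Used -gt 80 } |
    Select-Object Name, Used, Available |
    Format-Table -AutoSize
```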

One of the most critical aspects of any environment is protecting its systems and applications. Typically, Microsoft workloads are protected using the Volume Shadow Copy Service (VSS), which enables application-consistent backups even when initiated at the Hyper-V host through a VSS pass-through process. NetApp provides its own SnapManager solutions for Microsoft applications, including SQL Server, Exchange, SharePoint, and Windows Server Hyper-V, that bring additional capabilities to organizations. A common request from large deployments is the ability to provide fast, frequent backups. Even with optimization, VSS typically takes about 30 seconds per protected virtual machine because of the sequence of communications involved. If a large number of virtual machines must be backed up, the process may not meet the required Service Level Agreements. SnapManager for Hyper-V allows non-quiesced backups of virtual machines, which enables hundreds of virtual machines to be backed up in minutes.
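The back-of-envelope arithmetic behind the SLA concern is easy to verify; the VM count below is an assumed figure, with the ~30 seconds per VM taken from the discussion above.

```powershell
# Back-of-envelope: serialized VSS backups at ~30 s per VM.
# VM count is a placeholder; 30 s per VM is the figure cited above.
$vmCount      = 500
$secondsPerVm = 30
$hours = ($vmCount * $secondsPerVm) / 3600
"{0} VMs x {1}s each = {2:N1} hours of serialized VSS time" -f `
    $vmCount, $secondsPerVm, $hours
```

At 500 VMs the serialized VSS time alone exceeds four hours, which is why array-level, non-quiesced snapshots become attractive at scale.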

NetApp’s Depth of Partnership with Microsoft

Although Microsoft has many partners across different aspects of its business, it is important to understand the depth of the Microsoft and NetApp partnership. NetApp was named Microsoft Private Cloud Partner of the Year in 2012 and Server Platform Partner of the Year in 2013, a testament to NetApp's leadership, expertise, and investment in Microsoft integration. At Microsoft's TechEd 2013 North America event, the NetApp and Cisco FlexPod won Best of TechEd for Systems Management and Operations. NetApp's partnership with Cisco on FlexPod delivers a converged data center stack validated under the Microsoft Private Cloud Fast Track program.

Microsoft Private Cloud Fast Track

Microsoft’s Private Cloud Fast Track validation instills customer confidence and helps accelerate deployment. In addition, reference implementation documentation, including validated design, deployment, and solution guides, helps ensure a smooth deployment. With all of the pre-testing and engineering that goes into a Fast Track validated solution, customers benefit from reduced risk, faster time to value, and standardization of infrastructure for better service levels. The Private Cloud Fast Track reference architecture program elevates the combined technologies to the next level by assuring customers an even greater degree of integration, beyond technology alone. FlexPod Datacenter with Microsoft Private Cloud, based on Windows Server 2012 R2 and System Center 2012 R2, provides customers a best-of-breed solution that delivers on the promised benefits of a private cloud.

Case Studies

To learn more about some of NetApp’s customers, please see the case studies below.


ActioNet

Success Story: ActioNet Sets the Stage for a Versatile FlexPod Datacenter Solution with Microsoft Private Cloud

Hymans Robertson

Success Story: Leading Scotland Pension and Benefits Consultancy Builds Future Growth on NetApp and Microsoft

ING Direct

Success Story: ING Direct Innovates Faster with NetApp, Cisco, and Windows Server Hyper-V

Jack Wolfskin

Technical Case Study: An Adventure in Automated Warehouse Operations

King County, Washington

Success Story: King County Creates Shared Infrastructure on FlexPod with Microsoft Private Cloud and Saves $700,000 Annually