Celerra UBER VSA Guide v1

Celerra UBER Virtual Storage Appliance Guide v1

EMC Corporation

vSpecialist Team February 2011

Table of Contents

1. Introduction
a) System Requirements
b) Initial Deployment
c) Troubleshooting

2. Provisioning for vSphere 4.1
a) NFS provisioning
b) iSCSI provisioning
c) Troubleshooting

3. Deploying a 2nd VSA for replication
a) NFS replication
b) iSCSI replication
c) Troubleshooting

4. Installing and Testing EMC Plugins
a) Installing Virtual Storage Integrator (vCenter Plugin)
b) Using Virtual Storage Integrator

SECTION 1.0 – Introduction

This document is intended as a step-by-step guide for downloading and configuring the EMC Celerra UBER Virtual Storage Appliance (VSA) version 3.0. The “UBER VSA” was developed by Nicholas Weaver, with contributions by Clint Kitson, both from EMC’s virtualization-focused vSpecialist team. It is a modified version of the EMC Celerra VSA provided by EMC Celerra Engineering. This document is intended to be a comprehensive reference and the definitive guide to installing the UBER VSA. The UBER VSA was created for educational purposes and is neither intended nor supported by EMC for use in a production environment. However, the UBER VSA contains all the basic functionality of an actual EMC Celerra running the DART 6.0 operating system.

The first section is a step-by-step guide to help the reader download, install, and operate the UBER VSA as a virtual machine running on VMware Workstation. This document assumes no prior knowledge of the UBER VSA. However, it does assume that the user is familiar with VMware Workstation. At a minimum, the user should have access to VMware Workstation to operate the UBER VSA. Having access to VMware vSphere 4.1 in addition to VMware Workstation is preferred; in fact, VMware vSphere is a requirement for some of the sections that follow the basic deployment section. The sections that follow the initial deployment section (Section 1) provide step-by-step guides for advanced deployments of the UBER VSA:

 Provisioning in a vSphere 4.1 environment (NFS and iSCSI)

 Deploying a 2nd VSA for replication (NFS and iSCSI)

 Installing and Testing the EMC plug-in for vCenter (Virtual Storage Integrator)

 Advanced use cases and further reading for applications using the UBER VSA.

To summarize, the requirements and recommendations for installing the UBER VSA are as follows:

 Microsoft Windows machine running VMware Workstation (recommend Windows 7 running VMware Workstation 7).

 The Celerra VSA can also run on VMware Fusion (Mac OS X) or VMware Player, but this document focuses on VMware Workstation and VMware vSphere.

 At least 1GB RAM dedicated to the UBER VSA (in addition to the RAM required to run the host OS and VMware Workstation). A minimum of 4GB is recommended for the host machine.

 VMware vSphere (to run the UBER VSA on ESX and/or to perform some of the advanced functions detailed in later sections). VMware vSphere v4.1 was used for this document.

 Network connectivity (to connect the VSA to your network, to download the required software, to access the links referenced throughout the document, to function in modern society)

This document was developed as a joint project by members of EMC’s vSpecialist team. The project was completed in February 2011. We would like to thank the following people for their contribution to this document:

 Heather Boegeman, Tarik Dwiek, Asif Khan, Paul Manning, Jase McCarty, Joel Sprouse and Thomas Twyman for writing and editing.

 Nicholas Weaver for managing the development of the Celerra UBER VSA.

SECTION 1.1 - Downloading and Installing the EMC UBER VSA

1. Download the installation file directly from Nick Weaver’s blog:

http://nickapedia.com/category/celerravsa/

NOTE: There is both a Workstation ZIP version and a vSphere OVA version. For the scope of this section, we will focus on installing the VSA on VMware Workstation. Upcoming sections will cover downloading, installing and operating the VSA in a VMware vSphere environment.

 Workstation Version (MD5: a2136179d4d9544e4f8e3b43b7cc182e)

 vSphere Version OVA (MD5: c3d8abfb536aecca34c83d318c2c3e5f)

2. Save the Workstation Version .zip file to your local machine.
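Before extracting, you can optionally verify the download against the MD5 checksums listed above. A minimal sketch, assuming a Linux/Mac shell with md5sum available; the file name is illustrative and may differ from the actual download:

    # Compute the checksum of the downloaded archive (file name is illustrative)
    md5sum CelerraUBER_Workstation.zip
    # Compare the output against a2136179d4d9544e4f8e3b43b7cc182e before extracting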

3. Browse to the saved file and extract the contents of the .zip file.

4. Within VMware Workstation, Select File > Open > Browse to the location of the .vmx file

5. Before powering on the VM, change both of the Network Adapters to `Host Only` if you want to keep the VSA entirely isolated to your laptop/desktop. If you want the VSA to be connected to the network, set the adapters appropriately.

6. Power on the virtual machine.

7. After the VSA is powered on, a script will walk you through the rest of the configuration.

SECTION 1.2 - Configuring the Network

The configuration below is designed to operate the VSA locally on the host running the VSA installation. This allows for a `lab in a box` configuration using VMware Workstation. If you plan to connect the VSA to an external network, configure your network settings appropriately.

8. IP Addresses:

Management IP: 192.168.235.25
Subnet Mask: 255.255.255.0
Management Gateway: 255.255.255.0

9. Network Credentials:

Hostname: PRODVSA1
Domain Name: .local
DNS Server: 192.168.235.1
NTP Server: Keep blank

10. Once the configuration has completed and the VSA has rebooted you will be prompted with the following:

11. After it has booted, open a browser on your local machine to the IP of the VSA listed above. If you do not already have Java, or it is outdated, you will be prompted to install it.

12. Log in to the VSA via Unisphere: Login: nasadmin Password: nasadmin

13. Installation and configuration of the VSA is complete! Pat yourself on the back.

Section 2 - Provisioning Celerra VSA Storage on VMware vSphere 4.1

Presenting IP-based storage from the Celerra VSA consists of a few simple tasks. This document will demonstrate the steps required to provision storage, make that storage available as an NFS export or iSCSI target, and attach that storage to VMware vSphere.

Celerra VSA configuration and administration is managed using EMC Unisphere, EMC’s unified storage management platform. Unisphere has straightforward navigation buttons and links to provide quick and easy access to different functions and features on Celerra systems (as well as most other EMC systems).

SECTION 2.1 - Prerequisites

For the purposes of this section, the Celerra VSA can run on any of the following VMware products:

 VMware vSphere

 VMware Workstation/Fusion/Player

 VMware Server

Additionally, a VMware vSphere server must also be available for the purpose of attaching storage. VMware vSphere may be run as a virtual machine on VMware Workstation, provided the host machine supports VT/AMD-V.

The following instructions will detail how to log into the Celerra VSA. For the purposes of these steps the following will be true:

 The Celerra VSA has a management IP address of 192.168.1.190 with a hostname of ubervsa

 The Celerra VSA will have a storage interface (cge0) with an IP address of 192.168.1.191

 The vSphere host will have a Management IP address of 192.168.1.189. Attaching storage to this host will be done through this management address.

 This configuration will facilitate the process of adding storage, but is not recommended for production systems.

SECTION 2.2 - Provisioning Storage

Before a vSphere host can use storage, it must be provisioned and presented on the Celerra VSA. Once provisioned and presented, storage must be configured on the host(s) that will be accessing it. Log into the Celerra VSA to begin the provisioning process.

Launch Firefox or other supported browser and enter the IP address of the Celerra VSA configured during the initial deployment of the VSA.

NOTE: Different browsers handle untrusted certificates differently. Steps 2-4 demonstrate how to proceed when using Firefox 3.x. The certificate error is presented because the Celerra VSA generates a self-signed certificate; workstations accessing the Unisphere interface cannot verify it against a trusted Certificate Authority.

1. A security exception is presented upon first login. Click “I Understand the Risks”

2. Click Add Exception… to accept the certificate.

3. Click Confirm Security Exception. This will allow the browser to load without presenting a certificate error.

4. The primary page displayed when launching the Celerra VSA opens a pop-up window to manage the Celerra. Most browsers block pop-ups. To allow the Unisphere pop-up, click Options and choose Allow pop-ups for the Celerra VSA (ubervsa in this example).

5. For subsequent launches, the pop-up will launch automatically. Because pop-ups had to be allowed for this session, click Start a new EMC Unisphere session.

6. The first time a browser accesses the Celerra VSA, a User Agreement acceptance window is presented. Click Continue to accept the user agreement. This only occurs the first time a browser accesses the VSA.

7. Enter the user credentials in the Unisphere login window, followed by clicking Login to gain access to the Celerra VSA. This is required for all logins to the Celerra VSA.

8. The default Name and Password combination is nasadmin and nasadmin.

9. Upon successful authentication, the main administration screen (All Systems) will appear. This interface can be used to manage multiple Celerra VSAs, in the same manner as using Unisphere to manage multiple Celerra and CLARiiON systems.

SECTION 2.3 – Configuring Networking for Presenting Storage

To present IP-based storage to vSphere, a network interface must be available to associate with NFS and iSCSI storage resources. The Celerra VSA comes preconfigured with 2 virtual NICs. The primary vNIC is used for Control Station traffic, while the secondary is used for storage traffic.

10. Load the Celerra VSA’s system configuration by selecting the system name or IP address (ubervsa shown) under the All Systems drop down menu. This process selects a specific VSA on which to perform actions.

11. Place the mouse over the System drop down, and several items will become available. Select Network to access the network configuration menu.

12. On the Interfaces tab, select Create from the bottom

13. A pop-up window will be presented to enter appropriate settings for the storage interface

14. Enter the settings and select OK.
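For reference, the same storage interface can also be created from the Celerra Control Station CLI. A hedged sketch using the example cge0 address from Section 2.1; the broadcast address is an assumption for a /24 network, and syntax may vary by DART release:

    # Create an IP interface named cge0 on the cge0 device of Data Mover server_2
    server_ifconfig server_2 -create -Device cge0 -name cge0 \
      -protocol IP 192.168.1.191 255.255.255.0 192.168.1.255
    # Verify the interface configuration
    server_ifconfig server_2 -all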

Section 2.4 - Configuring an NFS file system for use by VMware vSphere

An NFS share/export can be made available to vSphere from the Celerra VSA. NFS storage is file-based storage, and is created and attached to vSphere in a different manner than traditional block-based (iSCSI/FC) storage.

15. Select File Systems from the Storage drop down to begin the process.

16. Because NFS resides on a file system, and none are created by default in the VSA, a file system must be created. Select Create from the bottom of the File Systems tab to create one.

17. To create the file system, either a Storage Pool or Meta Volume can be used. Two Storage Pools are pre-created in the Celerra UBER VSA. Select one of the available Storage Pools, provide a file system name, select the amount of storage to allocate, and use server_2. Click OK to create the file system.
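For reference, a file system can also be created and mounted from the Control Station CLI. A hedged sketch: the file system name matches the example used later in this section, while the pool name and size are illustrative and should be replaced with a pool actually present in your VSA:

    # Create a file system from an existing storage pool (pool name and size are illustrative)
    nas_fs -name FileSystem-NFS -create size=10G pool=clar_r5_performance
    # Mount it on Data Mover server_2 at the path that will be exported
    server_mount server_2 FileSystem-NFS /FileSystem-NFS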

18. The file system has been created, but has not yet been shared by the Celerra VSA. To share the file system, select NFS from the Sharing drop down.

19. Select Create from the bottom of the NFS Exports tab to create an export.

20. Select the file system created in Step 17, and provide a path (/FileSystem-NFS in this case). Note: vSphere hosts require root access to the presented NFS export. Enter the IP address of the vSphere host in the Read/Write, Root, and Access Hosts fields. Click OK to create the export.
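The equivalent export can also be created from the Control Station CLI. A hedged sketch using the example path and the vSphere host address from Section 2.1; option syntax may vary slightly by DART release:

    # Export the mounted file system with read/write and root access for the vSphere host
    server_export server_2 -Protocol nfs \
      -option rw=192.168.1.189,root=192.168.1.189,access=192.168.1.189 /FileSystem-NFS
    # List the current exports on server_2 to confirm
    server_export server_2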

SECTION 2.5 - Mount the created NFS file system to an ESX host

Mounting an NFS datastore to vSphere requires IP connectivity between the Celerra VSA and a vSphere host’s VMkernel and Management Network. The process detailed below is a basic configuration for demonstration purposes only. To configure NFS storage for a production environment, please refer to the following resources:

 VMware: Best Practices for running VMware vSphere on Network Attached Storage

http://vmware.com/files/pdf/techpaper/VMware-NFS-BestPractices-WP-EN.pdf

 EMC: Introduction to Using EMC Celerra with VMware vSphere 4 – Applied Best Practices Guide

http://www.emc.com/collateral/hardware/white-papers/h6337-introduction-using-celerra-vmware-vsphere-wp.pdf

 VirtualGeek: A “Multivendor Post” to help our mutual NFS customers using VMware

http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html

Now let’s continue with the implementation.

21. Log into vCenter Server and select a host to add NFS storage to. This can also be performed directly on an ESX(i) host.

22. Select the Configuration tab, followed by Storage (under Hardware), then click Add Storage.

23. Two types of storage can be added to vSphere hosts: block-based storage (iSCSI/FC) and file-based storage (NFS). Select Network File System and press Next >.

24. Go to the Locate Network File System dialog box.

25. Enter the IP address of the cge0 interface you configured in Section 2.3.

26. Provide the export path created in Step 20 of Section 2.4 (Configuring an NFS file system for use by VMware vSphere).

27. Provide a datastore name (FileSystem-NFS in this case). Press Next.

28. Review the settings entered, and press Next

29. The storage is now visible in the vSphere client and available to the vSphere host.
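The same NFS datastore can also be mounted from the classic ESX service console. A minimal sketch assuming the example addresses and names used in this section:

    # Add the Celerra NFS export as a datastore named FileSystem-NFS
    esxcfg-nas -a -o 192.168.1.191 -s /FileSystem-NFS FileSystem-NFS
    # List NFS datastores to confirm the mount
    esxcfg-nas -l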

Section 2.6 - Configuring an iSCSI LUN on VMware vSphere for a VMFS volume

In contrast to NFS storage, block-based storage can also be presented to vSphere hosts with the Celerra VSA. Because the Celerra VSA is a VM, and Fibre Channel is not available, iSCSI is used to provide block-based storage. vSphere supports both hardware and software iSCSI adapters. vSphere’s native software iSCSI adapter will be used in this demo, and the steps below detail its configuration. To use the native software iSCSI adapter, it must first be enabled.

The process detailed below is a basic configuration for demonstration purposes only. To configure iSCSI storage for a production environment, please refer to the following references:

 VMware: iSCSI SAN Configuration Guide

http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf

 EMC: Introduction to Using EMC Celerra with VMware vSphere 4 – Applied Best Practices Guide

http://www.emc.com/collateral/hardware/white-papers/h6337-introduction-using-celerra-vmware-vsphere-wp.pdf

 VirtualGeek: A “Multivendor Post” on using iSCSI with VMware vSphere

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

Now let’s continue with the implementation.

30. From the vSphere client, select Configuration, then Storage Adapters.

31. Select Configure.

32. Select Enabled.

33. Select Add.
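On classic ESX, the software iSCSI initiator can also be enabled from the service console. A minimal sketch (ESXi uses different tooling, such as esxcli):

    # Enable the software iSCSI initiator
    esxcfg-swiscsi -e
    # Query the current state to confirm it is enabled
    esxcfg-swiscsi -q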

34. Enter the cge0 address as the iSCSI Server (192.168.1.191 in this case), with 3260 as the Port address (default)

35. The iSCSI Server will be visible

36. When prompted to rescan the host bus adapter, select Yes.
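From the classic ESX service console, the same discovery and rescan can be performed on the command line. A hedged sketch: vmhba33 is an illustrative adapter name (check the Storage Adapters view for yours), and flags may differ between ESX builds:

    # Add the Celerra cge0 address as a Send Targets discovery address (adapter name is illustrative)
    vmkiscsi-tool -D -a 192.168.1.191 vmhba33
    # Rescan the adapter so newly presented targets and LUNs are discovered
    esxcfg-rescan vmhba33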

37. Switch to the Unisphere interface, and select the Sharing dropdown.

38. Choose the iSCSI wizard

39. Select server_2 as the Data Mover, and select Next.

40. Click Create Target.

41. Choose server_2 and press Next.

42. Enter a name for the iSCSI target (ubervsa in this case) and press Next.

43. Select the interface, followed by clicking Add, then Next.

44. Press Submit to commit the changes and create the iSCSI target.

45. Upon completion, press Next to continue.

46. Select the iSCSI target created (ubervsa in this case) and press Next.

47. Select Create File System

48. Select the server_2 Data Mover

49. Choose Storage Pool

50. Pick a storage pool (2 are created by default) and press Next.

51. Give the File System a name, the amount of space to utilize, and any additional attributes.

52. Press Submit

53. The Overview/Results page will confirm that the iSCSI filesystem has been created. Press Next.

54. Now that the filesystem has been created, press Next.

55. Enter the LUN information

56. Select Grant to give the ESX(i) host access to the LUN.

57. Press Next

58. If CHAP is to be used, it can be configured here. For the purposes of this demo configuration, CHAP will not be used. Select Next.

59. Select Finish to complete the process.

60. Select Close to close the wizard.

61. In the vSphere client, select Configuration, Storage, and click Add Storage

62. Select Disk/LUN and select Next

63. The iSCSI LUN will be visible. Select it and press Next >.

64. Press Next

65. Give the iSCSI datastore a name and press Next >.

66. Enter the block size and press Next >.

67. Select Finish

68. The datastore will be visible in the vSphere Configuration tab under storage

69. Select Properties to modify Storage I/O Control (SIOC) settings.

70. Enable Storage I/O Control (SIOC) by selecting Enabled

71. Select Close to complete the configuration of iSCSI properties.

SECTION 2.7 - Troubleshooting Celerra VSA storage on VMware vSphere

Troubleshooting Celerra VSA storage that is presented to VMware vSphere is a straightforward process. Because the Celerra VSA only presents IP-based storage (NFS/iSCSI), there are a finite number of critical items to verify.

1. Verify IP connectivity

Because IP connectivity between the vSphere host and the Celerra VSA is critical, the first step is to verify proper IP communication is occurring.

Log into the vSphere host console. On ESX, log into the service console using appropriate credentials. On ESXi, log into the host and press ALT+F1 to get to the troubleshooting console; appropriate credentials are required a second time when using the troubleshooting console.

Ping the storage interface configured in Section 2.3.
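A minimal sketch from the ESX console, using the example cge0 address from this guide:

    # Test basic reachability from the console
    ping 192.168.1.191
    # Test connectivity through the VMkernel interface used for NFS/iSCSI traffic
    vmkping 192.168.1.191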

2. Verify host access (NFS)

For a vSphere host to properly mount an NFS export presented from the Celerra VSA, the host must have appropriate rights assigned to the NFS export.

Log into Unisphere and select the Celerra VSA (ubervsa here)

Select NFS from the Sharing dropdown menu.

Select the NFS Export, and click Properties.

A properties dialog box will open. Ensure that the vSphere host’s IP network is listed in the Read/Write, Root, and Access hosts sections.

3. Verify host access (iSCSI)

For a vSphere host to properly mount an iSCSI volume presented from the Celerra VSA, the host’s iSCSI initiator must be included in the iSCSI volume’s list of approved initiators. This process is very similar to the process of zoning Fibre Channel HBAs on a Fibre Channel SAN.

Log into Unisphere and select the Celerra VSA (ubervsa here)

Select iSCSI from the Sharing dropdown menu.

Select the presented iSCSI volume

Ensure the vSphere host’s iSCSI initiator is connected to the iSCSI volume.

SECTION 3.0 – Deploying a 2nd UBER VSA for Replication

In this section, we will create a pair of VSAs and perform some basic replication sessions between the two virtual devices. First, we will set up NFS replication (file system replication) and second, we will set up iSCSI replication (block-level replication using LUNs). Although not necessary, it would be useful for the reader to get some background on EMC Celerra architecture and its components (such as the function of the Data Movers, Solutions Enabler, Unisphere, etc.).

SECTION 3.1 - Setting up the Replication Environment

Preliminary Tasks

1. Setting up the Source Celerra VSA

Follow the steps outlined in Section 1 to set up your initial (SOURCE) Celerra VSA, using your own IP addresses and naming conventions.

2. Setting up your initial (SOURCE) Volumes

Follow the steps outlined in Section 2 to set up your initial (SOURCE) NFS and iSCSI volumes.

3. Setting up the Target Celerra VSA

Follow the steps outlined in Section 1 to set up your secondary (TARGET) Celerra VSA, using your own IP addresses and naming conventions.

4. Verify Connectivity to the Secondary (Target) Celerra VSA

5. From the Home screen of Unisphere for your source VSA, select "Run Ping Test".

6. Ping the Remote VSA

NOTE: Be sure you can ping both the control station IP address and the data mover IP address and/or the FQDN (fully qualified domain name) of both components.
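A minimal sketch from the source Control Station; both addresses are illustrative placeholders for your second (TARGET) VSA:

    # Verify the remote Control Station responds (address is illustrative)
    ping -c 3 192.168.1.200
    # Verify the remote Data Mover / storage interface responds (address is illustrative)
    ping -c 3 192.168.1.201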

SECTION 3.2 - Creating the Target iSCSI File System for File-System Replication

Setting up the target volumes differs for NFS and iSCSI, depending on which you are setting up.

 NFS - As detailed below, you do not need to set up a target File System for NFS replication. In fact, it is recommended to let the Replication Wizard set it up for you; this will ensure that the target file system is of the appropriate size and the Read Only flag is set.

 iSCSI - For iSCSI, you do need to create the File System and LUN ahead of time. The target LUN must have the Read Only flag set, and must be of the same size as the source.

7. On your TARGET Celerra VSA, use the menus to select Storage > File Systems

9. Enter File System Details

10. Use the dialog box to create the File System that will provide the capacity for the iSCSI LUN.

Note: In the example above, the Storage Capacity has been entered as 5072 MB, or approximately 5 GB. Due to disk geometry, the resulting size of the File System may be smaller than the value you enter here.

11. Click OK.

12. Verify File System Creation

13. Verify your File System was created without errors.

Note the size of the file system in the example above: it is not listed as 5 GB, but as 4.953 GB. This is significant, as the LUN created in the following steps must have the same size as its source LUN. Be sure to note the exact size of the source LUN prior to creating File Systems or LUNs on the remote (TARGET) Celerra VSA.

Section 3.3 - Creating the Target iSCSI LUN for Block-Level Replication

14. Navigate to the iSCSI Panel

15. On the TARGET Celerra VSA, use the menus to select Sharing ---> iSCSI.

17. Enter LUN Details

18. Use the dialog to select the Data Mover and Target.

19. Give your new LUN a unique LUN number ( '0' in the example above), and select the File System that will provide the capacity for the LUN.

Note: The LUN must be marked as "Read Only". Also note that the size entered in MB must be the same as the size of the SOURCE LUN on the SOURCE Celerra VSA.

20. Click OK.

21. Verify LUN Creation

22. Ensure your LUN was created without errors.

SECTION 3.4 - Configuring Replication

Once your File Systems and LUNs are set up, creating replication relationships between Celerra devices is essentially a three-step process.

I. Create the target Celerra network server.

This identifies the remote Celerra network servers you will be working with, and stores your login credentials for administrative tasks on the remote device.

II. Create a Data Mover Interconnect.

This step sets up a network session between the local and remote Data Movers.

III. Create your replication relationship.

During this step, you identify the source volume / LUN / File System you want to replicate, and define the characteristics of the replication.

These three tasks can be performed manually using various dialogs and screens in Unisphere, or alternatively all three can be accomplished using the Replication Wizard. The remainder of this section walks the reader through each step of the process manually. Then, at Section 3.8, we will walk through the process of automatically configuring the parameters using the Replication Wizard.

The manual process is useful for understanding how the replication works. But if you are interested in just getting the replication working, we recommend skipping to Section 3.8.

23. Create the Target Celerra Network Server

NOTE: This process must be completed on both the local and remote Celerra VSA network servers.

24. Navigate to the Replication Panel

25. Using the menus, select Replicas ---> Replications.

26. Click the Tab labeled "Celerra Network Servers". You should see the local Celerra represented in the panel.

28. Enter the Details for the Remote Celerra VSA

29. Click OK.

30. Confirm the Celerra Network Server was Created Successfully

32. You must now manually create an entry for the local Celerra VSA on the Control Station of the remote Celerra VSA.
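For reference, the Celerra Network Server entry can also be created from the Control Station CLI. A hedged sketch: the peer name, IP address, and passphrase are illustrative, and the same command (with roles reversed) must be run on the other Control Station:

    # Register the peer Celerra using a shared passphrase (values are illustrative)
    nas_cel -create TARGETVSA -ip 192.168.1.200 -passphrase replication
    # List the registered Celerra Network Servers to confirm
    nas_cel -list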

SECTION 3.5 - Creating a Data Mover Interconnect

This process must be completed on both the local and remote Celerra VSA network servers.

33. Navigate to the Tab Labeled 'DM Interconnects'. You should see a 'loopback' interconnect already listed.

35. Enter the details of the Interconnect to the Celerra Network Server (the remote Celerra VSA) to which you are connecting.

NOTE: Here are some definitions of the terms used in this exercise

Data Mover Interconnect Name - a name to identify the relationship. Be sure to choose a name that allows you to understand the direction of the relationship.

Data Mover - the local Data Mover establishing the connection.

Interfaces - one or more network interfaces that will participate in the network session.

Name Service Interface Names - A comma-separated list of the name service interface names available for the local side of the interconnect. [This can be left blank.]

Peer Data Mover - the Data Mover on the remote Celerra VSA to which you are establishing a session.

Peer Name Service Interface Names - A comma-separated list of the name service interface names available for the peer side of the interconnect. [This can be left blank.]

36. (Optional) Set a Schedule and Bandwidth limits

NOTE: Setting a Schedule is not necessary - By default, all available bandwidth is used at all times for the interconnect.

37. Click OK.

38. Verify the Interconnect Created Without Errors

39. Repeat the Process in Reverse on the Remote Celerra VSA

NOTE: You must now manually create an interconnect for the local Celerra VSA on the Control Station of the remote Celerra VSA.
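For reference, an interconnect can also be created from the Control Station CLI. This is only a hedged sketch: the interconnect name, peer system name, and IP addresses are illustrative, and the exact option names should be checked against the nas_cel documentation for your DART release:

    # Create a Data Mover interconnect to the peer system (all names/addresses are illustrative)
    nas_cel -interconnect -create PROD_to_TARGET -source_server server_2 \
      -destination_system TARGETVSA -destination_server server_2 \
      -source_interfaces ip=192.168.1.191 -destination_interfaces ip=192.168.1.201
    # List interconnects; a loopback interconnect exists by default
    nas_cel -interconnect -list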

SECTION 3.6 - Initiating File-System Replication (NFS)

40. Navigate to the Replications Tab

41. Click Create

43. Click Continue

45. Enter Replication Details

NOTE: Be sure to enter a name that reminds you of the direction of replication.

46. Select the storage pools on the remote Celerra VSA that will be used to create the target File System. Click Help for a description of any of the fields.

47. Click OK.

49. Verify the Replication on the Remote Celerra VSA

NOTE: If successful up to this point, the replication will appear in the Replications Tab of the remote (TARGET) Celerra VSA as well.

SECTION 3.7 – Initiating Block-Level Replication (iSCSI)

51. Click Create

52. Select 'Replicate an iSCSI LUN'

54. Select the Destination Celerra Network Server

NOTE: By default, the local (SOURCE) Celerra is selected in the dialog. You must change this to the remote (TARGET) Celerra to select your target iSCSI LUN.

55. Enter the Replication Details

56. Be sure to enter a name that informs you of the direction of replication.

Note: The size of the source and destination LUNs must be the same. If the LUNs are of different sizes, or if the target LUN is not set to 'Read Only', the replication will fail to establish.

57. Click Help for a description of any of the fields.

58. Click OK.

59. Verify the Replication Created Without Errors

NOTE: If successful up to this point, the replication will appear in the Replications Tab of the remote (TARGET) Celerra VSA as well.
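The state of both the NFS and iSCSI sessions can also be checked from either Control Station. A minimal sketch; the session name is illustrative:

    # List all configured replication sessions and their state
    nas_replicate -list
    # Show detailed status for a specific session (name is illustrative)
    nas_replicate -info LUN_PROD_to_TARGET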

SECTION 3.8 - Using the Replication Wizard - NFS

1. Navigate to the Replications Tab

3. Select 'File System' for Replication Type.

4. Select Next

5. Select 'Ongoing File System Replication'

7. Create Destination Celerra

8. Select the Destination Celerra Network Server (TARGET)

NOTE: If you have not already created a destination Celerra, you may do so here.

9. Click 'New Destination Celerra’

10. Enter Destination Celerra Details

11. Enter the name and IP address of the destination (TARGET) Celerra in this field.

12. Click Next.

13. Enter NasAdmin Credentials

NOTE: You must store your NasAdmin (or equivalent) credentials here to perform further administrative actions on the destination (TARGET) Celerra.

14. Enter Name of Local Celerra VSA

NOTE: The Wizard will automatically create the appropriate entries for Celerra Network Servers on the remote side; this saves you the time of doing it yourself, as outlined in the previous section(s).

15. The IP address is already filled in for you; simply enter the name of the local Celerra.

16. Click Next.

17. Confirm Your Actions

18. Confirm that you have entered all the data appropriately.

19. Click Submit.

20. Wait for Confirmation of Successful Creation.

22. Select the Remote Celerra Network Server.

23. Click Next

24. Create the Interconnect

NOTE: If not already established, a network session (called an Interconnect) must be established between the local and remote Celerra Network Servers.

26. Provide a Name for the local Interconnect

27. Click Next

28. Provide a Name for the Remote Interconnect

NOTE: As with the Network Servers, the Replication Wizard will also automatically create the Interconnect on the remote server for you.

30. (Optional) Provide a Schedule and Bandwidth Limits

31. Click Next

32. Confirm Your Selections

34. Select Your Local Interconnect

35. Click Next

36. Select the Source and Destination interface(s) that will participate in this replication scheme

38. Enter Replication Details

39. Enter a Name for the replication scheme

40. Be sure to select the correct Source File System

41. Click Next

42. Select the Destination Storage Pool(s)

NOTE: The remote (TARGET) Celerra can create a File System for replication.

44. (Optional) Modify the Update Policy

45. Click Next

46. (Optional) Select Tape Transport

48. Confirm Your Selections

49. Click Finish

50. Verify the Replication Completed

51. Click Close

Section 3.9 - Using the Replication Wizard - iSCSI

1. Navigate to the Replications Tab

3. Select 'iSCSI LUN' for Replication Type

4. Select Next

5. Create Destination Celerra

6. Select the Destination Celerra Network Server (TARGET)

NOTE: If you have not already created a destination Celerra, you may do so here.

8. Enter Destination Celerra Details

9. Enter the name and IP address of the destination (TARGET) Celerra in this field.

10. Click Next.

11. Enter NasAdmin Credentials

NOTE: You must store your NasAdmin (or equivalent) credentials here to perform further administrative actions on the destination (TARGET) Celerra.

12. Enter Name of Local Celerra VSA

13. The Wizard will go ahead and create the appropriate entries for Celerra Network Servers on the remote side; this saves you the time of doing it yourself, as outlined in the previous section(s).

14. The IP address is already filled in for you; simply enter the name of the local Celerra.

15. Click Next and confirm your actions

16. Confirm that you have entered all the data appropriately.

17. Click Submit.

18. Wait for Confirmation of Successful Creation. Click Next

20. Create the Interconnect

NOTE: If not already established, a network session (called an Interconnect) must be established between the local and remote Celerra Network Servers

21. Click 'New Interconnect' and provide a name for the local Interconnect


23. Provide a Name for the Remote Interconnect

NOTE: As with the Network Servers, the Replication Wizard will also automatically create the Interconnect on the remote server for you.

24. Click Next

25. (Optional) Provide a Schedule and Bandwidth Limits


27. Confirm Your Selections and then click Submit


29. Select the Source and Destination Interface(s) that will participate, then click Next

30. Enter Replication Details

31. Enter a Name for the replication

32. Be sure to select the correct Source iSCSI LUN and target

33. Click Next


34. Select the Destination LUN and Target then click Next

NOTE: The remote (TARGET) iSCSI LUN must be of the same size as the local (SOURCE) LUN.


36. Confirm Your Selections then click Finish

37. Verify the Replication Completed then click Close
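As with NFS, the replication session itself can be verified from the Control Station. This is a minimal sketch; substitute the session name you entered earlier for the placeholder below.

    # List all replication sessions on this VSA
    nas_replicate -list

    # Show detailed status for a specific session; <session_name> is a placeholder
    nas_replicate -info <session_name>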


SECTION 3.10 - Troubleshooting

Setting up iSCSI

1. Enable iSCSI

NOTE: Enable iSCSI for the Data Mover using the "Manage Settings" link on the iSCSI screen.

2. Binding Your Target

3. If you create the iSCSI target manually (without using the wizard), you must be sure to enter the IP address to which it will be bound. This is labeled "Network Portal" in the dialog box above.
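If you prefer to enable and check iSCSI from the command line, the equivalent Data Mover commands are sketched below. server_2 is the default Data Mover on the VSA, and it is worth confirming the exact options against the server_iscsi man page on your system.

    # Start the iSCSI service on the Data Mover (equivalent to enabling it in the GUI)
    server_iscsi server_2 -service -start

    # Confirm the service is running
    server_iscsi server_2 -service -status

    # List targets and confirm the expected network portal (IP binding) is shown
    server_iscsi server_2 -target -list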


4. Issues Creating a Replication - Target volume

Setting up the target volume differs between NFS and iSCSI:

 NFS - You do not need to set up a target File System for NFS replication. In fact, it is recommended to let the Replication Wizard set it up for you; this ensures that the target file system is of the appropriate size and that the Read Only flag is set.

 iSCSI - For iSCSI, you do need to create the File System and LUN ahead of time. The target LUN must have the Read Only flag set, and must be of the same size as the source. See below to verify the properties of your source LUN.

1. Navigate to the iSCSI Panel

2. From the Menu, select Sharing --> iSCSI.

3. Select the Source LUN


5. Verify the Properties of the Source LUN

NOTE: Check the Properties of the source LUN to ensure you have recorded its name and size. The target LUN you will set up for replication on the remote Celerra VSA must be exactly the same size, so it is helpful to record the size in your notes as you create the LUN. In the example shown above, the original value used was 4992 MB, so use the same value to create the Target iSCSI LUN for replication.

6. Click Cancel to close the dialog.
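A quick way to capture the exact source LUN size and to stage a matching destination file system is from the Control Station CLI, as sketched below. The pool and file system names are placeholders, the option syntax should be confirmed against the man pages on your VSA, and the destination LUN itself (same size, Read Only flag set) is then created on the TARGET VSA as described above.

    # On the SOURCE VSA: note the LUN number, target and exact size (e.g. 4992 MB)
    server_iscsi server_2 -lun -list

    # On the TARGET VSA: list available pools, then create a file system large enough to hold the LUN
    # ("repl_dst_fs" and "my_pool" are placeholder names)
    nas_pool -list
    nas_fs -name repl_dst_fs -create size=6G pool=my_pool

    # Confirm the new file system's size before creating the destination LUN
    nas_fs -size repl_dst_fs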

SECTION 4.0 - Installing VSI Plug-ins

Download the VSI Storage Viewer (SV) and Unified Storage Management plug-ins from EMC PowerLink. If you do not have a PowerLink account, please create one now so that you can download the necessary components.

1. Uncompress the zip files into a folder and execute the installation.

2. Start the vSphere Client, enable the plug-ins, and configure access to VSA storage.

3. Download the two Virtual Storage Integrator (VSI) packages.

4. Locate the two plug-in packages on EMC Powerlink:

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator


5. Download the Storage Viewer and Unified Storage Management packages and documentation

SECTION 4.1 - Installing Virtual Storage Integrator (VSI)

6. Run the self-extracting executable, extracting to a sub-directory of your choice, and start the install by clicking the auto-install file.


7. At the install screen, click “Next”, accept the license agreement terms, and click “Next” to start the install. Notice that if the Solutions Enabler software is not already installed, it will be installed automatically.


8. Click “Finish” to complete the installation

SECTION 4.2 - Installing the Unified Storage Management Plug-in for VMware


2. At the install screen, click “Next”

3. Choose to accept the License agreement and click “Next”

4. Choose “Install” to initiate the installation.

5. Choose “Finish” to complete the installation.

SECTION 4.3 - Configuring the VSI Plug-in with the UBER VSA

After the VSI plug-in has been installed, you will need to point the plug-in to the management interface that you configured as part of the UBER VSA installation. You will need the following information:


Management (Control Station):

 IP Address

 Username

 Password

1. To configure the UBER VSA, start the vSphere client (this assumes that the plug-in framework is already installed)

2. From the “Home” screen, click the following in the left pane:

Solutions > Applications > EMC > Unified Storage Management

3. Click the Add button to continue.

4. Choose EMC Celerra and click Next


5. Fill in the blanks for Control Station IP, Username, and Password.

6. Click the “Configure DHSM” radio button

7. Select “Create New DHSM User”

NOTE: By clicking the “Create New DHSM User” button, you are telling the system to create a local account that will enable the deduplication and compression feature that is supported by the plug-in.

8. The next screen will prompt you for the DHSM User information as well as an IP address that will be allowed to use the Dedupe and Compress feature. We recommend a username of dhsm_vmware and a password of your choosing.


9. After you click Finish, you will notice the following tasks in your vSphere Client:

10. You will see a screen similar to the following, which indicates that the plug-in has now been configured to work with your UBER VSA installation.

NOTE: The “ID Field” above will fail to show a system Serial Number because the UBER VSA installation does not have a serial number configured. This is expected behavior.


SECTION 4.5 - Using the VSI Plug-in to Provision an NFS-Based Datastore

One of the benefits that the VSI plug-in offers is NFS provisioning directly from the VI Client. The plug-in will create an NFS datastore using the UBER VSA, assign export permissions, reconfigure the ESX kernel to use the EMC recommended settings (optional), and mount the datastore to one or more hosts in the cluster.
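For context, the sketch below shows roughly what the plug-in automates when provisioning over NFS, using the classic ESX 4.x command line. The export path, IP addresses, and datastore label are placeholders, and the export options shown are illustrative rather than the exact options the plug-in applies.

    # On the Celerra Control Station: export the file system to the ESX host (placeholder addresses and path)
    server_export server_2 -Protocol nfs -option root=192.168.1.50,access=192.168.1.50 /nfs_ds01

    # On each ESX host: mount the export as an NFS datastore, then list datastores to confirm
    esxcfg-nas -a -o 192.168.1.60 -s /nfs_ds01 NFS_DS01
    esxcfg-nas -l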

1. We want to create a datastore that is mounted by all hosts in a given resource pool. Right-click the resource pool and choose the following:

EMC -> Unified Storage -> Provision Storage


3. Enter a Datastore Name and then choose the corresponding Control Station, Data Mover, and Network Interface to present the storage from (aside from the Datastore Name, the configuration information will be automatically populated and user selectable).

NOTE: This tool is designed to support multiple storage systems working with a single Virtual Center instance, which is why you have these options.


5. Choose a storage pool from which to create the datastore

6. Select an initial capacity and, if you wish, enable Virtual Provisioning; if you do, you will need to set a maximum capacity up to which the filesystem can grow.

7. Before you click “Finish”, click the Advanced button. This screen lets you modify the NFS export permissions (by default, all hosts in your resource pool will have access, but you may want to use subnet-based export permissions). You will also see the “Set Timeout Settings” checkbox; this updates the ESX kernel settings to match EMC’s Best Practice recommendations for this configuration.
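If you would like to see, or script, the kind of kernel settings the “Set Timeout Settings” checkbox adjusts, they are ordinary ESX advanced settings, as in the sketch below. The values shown are illustrative placeholders rather than EMC’s published recommendations; consult the current EMC Celerra/vSphere best-practice documentation for the actual values.

    # Run on each ESX 4.x host (illustrative values only)
    esxcfg-advcfg -s 64 /NFS/MaxVolumes
    esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
    esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures

    # Read a setting back to confirm the change
    esxcfg-advcfg -g /NFS/MaxVolumes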
