
Reference Architecture Guide

EMC Solutions

July 2015

EMC STORAGE SOLUTIONS WITH RED HAT ENTERPRISE LINUX OPENSTACK PLATFORM

Managing EMC Storage Arrays with OpenStack Juno


Copyright © 2015 EMC Corporation. All Rights Reserved. Published July 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Red Hat and Red Hat Enterprise Linux are trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks used herein are the property of their respective owners.

EMC Storage Solutions with Red Hat Enterprise Linux OpenStack Platform

Part Number H14282


Table of contents

Reference architecture overview
    Document purpose
    Audience
    Solution purpose
    Business challenge
    Technology solution
Key components
    Introduction
    VNX unified storage platform
    XtremIO flash-based storage platform
    ScaleIO storage management software
    OpenStack
OpenStack technology overview
    Introduction
    OpenStack components
        Compute (Nova)
        Object Storage (Swift)
        Block Storage (Cinder)
        Networking (Neutron)
        Dashboard (Horizon)
        Identity Service (Keystone)
        Orchestration (Heat)
        Telemetry (Ceilometer)
        Image Service (Glance)
        Data Processing (Sahara) (optional when deploying OpenStack)
    Red Hat Enterprise Linux
    Red Hat Enterprise Linux OpenStack Platform
Solution architecture
    Architecture diagrams
    Hardware resources
    Software resources
    Storage protocols
Requirements
    Server and networking requirements
    OpenStack requirements
Installation
    Servers
    Networks
    Fabric
    Red Hat Enterprise Linux OpenStack Platform
    OpenStack controller node
    OpenStack compute node
    VNX
        iSCSI requirements
        Fibre Channel requirements
    XtremIO
        iSCSI requirements
        Fibre Channel requirements
    Multipath
        VNX Multipath
        XtremIO Multipath
    ScaleIO
    Post-deployment
Configuration
    VNX
    XtremIO
    ScaleIO
Managing storage volumes
Support
Conclusion
References
    EMC documentation
    Red Hat documentation
    OpenStack documentation


Reference architecture overview

Document purpose

This reference architecture guide describes a solution for managing storage volume life cycles using EMC storage technologies and Red Hat Enterprise Linux OpenStack Platform 6, which is based on the OpenStack Juno distribution. This document introduces the main features and functionality of the solution, the solution architecture and components, and the validated hardware and software environments.

This document describes the reference architecture and provides guidance on integrating the components and functionality of Red Hat Enterprise Linux OpenStack Platform software and EMC storage systems. This document is not a comprehensive guide to every aspect of the solution.

Audience

This reference architecture guide is for cloud architects, cloud operators, and general IT administrators who want to manage EMC storage with Red Hat Enterprise Linux OpenStack Platform. Readers should be familiar with OpenStack, Linux, EMC storage technologies, and general IT functions.

Solution purpose

The purpose of this solution is to build an enterprise-class, scalable, and multitenant cloud infrastructure that integrates EMC storage technologies with Red Hat Enterprise Linux OpenStack Platform software. This solution is built on EMC® VNX®, EMC XtremIO™, and EMC ScaleIO® storage platforms managed by Red Hat Enterprise Linux OpenStack Platform 6.

Business challenge

The difficulty of creating a cloud solution has given rise to several cloud software vendors who have built proprietary technology and business models specifically catering to the requirements of standardization, agility, control, and reliability. Several new open-source technologies are also available to assist in creating a cloud solution, but customers need to know how to best use these technologies to drive standardization, integrate open source and proprietary systems, minimize cost, and support service-level agreements.

Many organizations are also under pressure to provide enterprise-quality service levels without paying enterprise prices. As a result, IT departments must implement cost-effective alternatives to proprietary cloud software and services. These alternatives need to include features such as data protection, disaster recovery, and guaranteed service levels.

This solution enables customers to build an open-source cloud environment and validate the environment for performance, scalability, and functionality. With EMC storage solutions and Red Hat Enterprise Linux OpenStack Platform, customers gain the following benefits:

• A virtual infrastructure that can be deployed quickly with Red Hat Enterprise Linux OpenStack Platform Installer

• Reduced licensing and operating costs

• Compatibility with multiple hardware and software vendors


• Increased cloud solution portability and agility because of reduced dependence on proprietary systems

Technology solution

This solution demonstrates how to use EMC storage systems, Red Hat Enterprise Linux OpenStack Platform, and Cinder block storage drivers to provide the storage resources for a robust OpenStack environment. This solution incorporates the following components:

• EMC VNX
• EMC ScaleIO
• EMC XtremIO

• Red Hat Enterprise Linux OpenStack Platform 6, based on the OpenStack Juno release

• Red Hat Enterprise Linux OpenStack Platform Installer
• Cinder drivers for EMC VNX and XtremIO (part of Juno release)
• Cinder drivers for EMC ScaleIO (available from EMC Support)


Key components

Introduction

This section briefly describes the following key components used in this solution:

• EMC VNX storage platform

• EMC ScaleIO storage management
• EMC XtremIO flash array

• OpenStack cloud computing software platform

VNX unified storage platform

The VNX family delivers a choice of systems ranging from affordable entry-level solutions to high-performance, petabyte-capacity configurations servicing the most demanding application requirements. The VNX family includes the following:

• The VNX series is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. It includes the VNX8000™, VNX7600™, VNX5800™, VNX5600™, VNX5400™, and the VNX5200™ systems.

• The VNXe series is designed for small- and medium-sized businesses. This entry-level series includes the VNXe3200™ system.

XtremIO flash-based storage platform

XtremIO is a scale-out clustered design that grows capacity and performance linearly to meet any requirement. XtremIO arrays are created from building blocks called X-Bricks that are each a high-availability, high-performance, fully active/active storage system with no single point of failure. The XtremIO arrays include the Starter X-Brick, 1 X-Brick, 2 X-Brick cluster, 4 X-Brick cluster, and 6 X-Brick cluster.

ScaleIO storage management software

ScaleIO is a software-only server-based storage area network (SAN) that combines storage and compute resources to form a single-layer, enterprise-grade storage product. ScaleIO storage is elastic and delivers linear scalable performance. Its scale-out storage architecture can grow from a few servers to thousands of servers.

OpenStack

OpenStack is a cloud computing software platform that controls large pools of compute, storage, and networking resources in a data center, all managed through a dashboard that gives administrators control while enabling users to provision resources through a web interface. OpenStack supports several hypervisors, including KVM, and a wide range of hardware.



OpenStack technology overview

Introduction

OpenStack is a cloud computing software platform that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

OpenStack components

The following OpenStack components create a virtual computing environment.

Compute (Nova)

OpenStack Compute (Nova) is a cloud computing fabric controller. It is designed to manage and automate pools of computer resources and can work with widely available virtualization technologies as well as bare metal and high-performance computing (HPC) configurations. Compute can use several hypervisor technologies, including KVM.

Nova's architecture is designed to scale horizontally on standard hardware with no proprietary hardware or software requirements and to integrate with legacy systems and third-party technologies.

Object Storage (Swift)

OpenStack Object Storage (Swift) is a scalable redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster. You can scale storage clusters horizontally simply by adding new servers.

If a server or hard drive fails, OpenStack replicates its content from other active nodes to new locations in the cluster. Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used. Enterprise storage options include EMC Elastic Cloud Storage and the EMC Isilon® platform. Both options provide storage that is compatible with Swift and that can be accessed using the Object Storage API.

Block Storage (Cinder)

OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard, enabling cloud users to manage their own storage needs. VNX and XtremIO drivers are included with Cinder in the Juno release.

In addition to local Linux server storage, OpenStack Block Storage can use storage platforms such as VNX, ScaleIO, and XtremIO. OpenStack Block Storage is appropriate for performance-sensitive scenarios such as database storage and expandable file systems, or for providing a server with access to raw block-level storage. Snapshot management provides powerful functionality for backing up data stored on block storage volumes. Snapshots can be restored or used to create a new block storage volume.
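As a brief illustration of this volume life cycle, the following Juno-era Cinder CLI commands create, list, and delete a 10 GB volume; the volume name demo-vol is a placeholder, not part of this solution:

cinder create --display-name demo-vol 10
cinder list
cinder delete demo-vol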



Networking (Neutron)

OpenStack Networking is a service for managing networks and IP addresses. OpenStack Networking ensures that the network is not a bottleneck or limiting factor in a cloud deployment, and gives users self-service capability, even over network configurations.

OpenStack Networking provides networking models for different applications or user groups. Standard models include flat networks or VLANs that separate servers and traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IP addresses or DHCP. Floating IP addresses let traffic be dynamically rerouted to any resources in the IT infrastructure, so users can redirect traffic during maintenance or in case of a failure.

Users can create their own networks, control traffic, and connect servers and devices to one or more networks. Administrators can use software-defined networking (SDN) technology like OpenFlow to support high levels of multitenancy and massive scale. OpenStack Networking provides an extension framework that can deploy and manage additional network services such as intrusion detection systems, load balancing, firewalls, and virtual private networks.
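As a sketch of this self-service model, the following Juno-era Neutron CLI commands create a tenant network and subnet; the names and address range are placeholders:

neutron net-create demo-net
neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet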

Dashboard (Horizon)

OpenStack Dashboard (Horizon) provides administrators and users a graphical interface to access, provision, and automate cloud-based resources. The design accommodates third-party products and services, such as billing, monitoring, and additional management tools. OpenStack Dashboard is also brandable for service providers and other commercial vendors. OpenStack Dashboard is one of several ways users can interact with OpenStack resources.

Identity Service (Keystone)

OpenStack Identity Service (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud platform and can integrate with existing back-end directory services like LDAP. It supports multiple forms of authentication including standard username and password credentials, token-based systems, and Amazon Web Services (AWS)-style logins.

The Keystone catalog provides a queryable list of all of the services deployed in an OpenStack cloud in a single registry. Users and third-party tools can programmatically determine which resources can be accessed.

Orchestration (Heat)

Heat is a service to orchestrate multiple composite cloud applications using templates, through both a REST API that is native to OpenStack and a Query API that is compatible with AWS CloudFormation.
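To make the template model concrete, the following minimal Heat Orchestration Template (HOT) sketch provisions a single 1 GB Cinder volume; the resource and file names are placeholders:

heat_template_version: 2013-05-23

description: Minimal example that creates one Cinder volume

resources:
  demo_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

A template like this could be launched with heat stack-create -f volume.yaml demo-stack.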

Telemetry (Ceilometer)

The OpenStack Telemetry service aggregates usage and performance data across the services deployed in an OpenStack cloud. This powerful capability provides visibility and insight into the usage of the cloud across dozens of data points and allows cloud operators to view metrics globally or by individual deployed resources.

(10)

Image Service (Glance)

The OpenStack Image Service provides discovery, registration, and delivery services for disk and server images. The ability to copy or snapshot a server image and immediately store it is a powerful capability of the OpenStack cloud operating system. If you are provisioning multiple servers, stored images can be used as a template to get new servers up and running more quickly and consistently than installing a server operating system and individually configuring additional services. The Image Service can store disk and server images in a variety of back ends, including OpenStack Object Storage. It can also be used to store and catalog an unlimited number of backups. The Image Service API provides a standard REST interface for querying information about disk images and lets clients stream the images to new servers.
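For example, a QCOW2 image could be registered with the Juno-era Glance CLI as follows; the image name and file path are placeholders:

glance image-create --name rhel7-guest --disk-format qcow2 --container-format bare --is-public True --file ./rhel7-guest.qcow2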

Data Processing (Sahara) (optional when deploying OpenStack)

OpenStack Sahara enables users to provision Hadoop clusters by specifying parameters such as Hadoop version, cluster topology, and node hardware details. Sahara provides the means to scale an already-provisioned cluster by adding or removing worker nodes on demand.

Sahara provides the following:

• Fast provisioning of Hadoop clusters on OpenStack for development and QA
• Utilization of unused compute power from a general-purpose OpenStack infrastructure-as-a-service cloud
• Analytics as a service for ad-hoc or bursty analytic workloads

The data processing capability introduced in the Juno release automates provisioning and management of big data clusters using Hadoop and Spark. Big data analytics are a priority for many organizations and a popular use case for OpenStack, and this service lets OpenStack users provision resources more quickly.

Red Hat Enterprise Linux

Red Hat Enterprise Linux is a Linux distribution developed by Red Hat for the commercial market. Red Hat Enterprise Linux is released in server versions for x86, x86-64, Itanium, PowerPC, and IBM z Systems, and desktop versions for x86 and x86-64. Red Hat's official support and training, together with the Red Hat Certification Program, focus on the Red Hat Enterprise Linux platform.

Red Hat Enterprise Linux OpenStack Platform

Red Hat Enterprise Linux OpenStack Platform delivers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. Red Hat Enterprise Linux OpenStack Platform combines the world’s leading enterprise Linux and the fastest-growing cloud infrastructure platform to give you the agility to scale and quickly meet customer demands without compromising on availability, security, or performance.

Red Hat Enterprise Linux OpenStack Platform uses components from other Red Hat products. Specific information pertaining to the support of these components is available on the Red Hat Enterprise Linux OpenStack Platform Life Cycle page.


Solution architecture

Architecture diagrams

The diagrams in this section depict the architecture used to deploy Red Hat Enterprise Linux OpenStack Platform with EMC storage systems and Cinder drivers. The deployment involves building an OpenStack environment and integrating it with VNX, XtremIO, or ScaleIO, as well as integrating the features of these systems to provide a highly available, high-performance, and cost-effective storage solution.

Figure 1 depicts the overall physical architecture of the solution.

Figure 1. EMC with Red Hat Enterprise Linux OpenStack Platform reference architecture



Figure 2 depicts the network architecture of the solution.

Figure 2. EMC with OpenStack network architecture

Hardware resources

Table 1 lists the hardware used in this solution.

Table 1. Solution hardware

Hardware                        Quantity  Connectivity                  Firmware version
EMC VNX5800                     1         iSCSI and Fibre Channel (FC)  5.33.006.5.096
EMC XtremIO Generation 2        1         iSCSI and FC                  3.0.0-44
Cisco UCS B200 M2 Blade Server  10        N/A                           N/A
Brocade 6510 Switch             2         FC fabric                     v7.3.1


Software resources

Table 2 lists the OpenStack and cloud infrastructure software used in this solution.

Table 2. OpenStack and cloud infrastructure software

Software                                               Version                                 Description
OpenStack                                              Juno                                    Open-source cloud computing software platform
Red Hat Enterprise Linux OpenStack Platform Installer  6.0 (1:0.5.7-1.el7ost)                  OpenStack deployment and management software
Red Hat Enterprise Linux                               7.1 (kernel 3.10.0-229.1.2.el7.x86_64)  Operating system for the cloud environment
KVM                                                    N/A                                     Hypervisor in the Red Hat Enterprise Linux kernel
DM-Multipath                                           0.4.9-77.el7                            Multipathing software

Table 3 lists the EMC storage software used in this solution.

Table 3. EMC storage software

Software                        Version         Description
EMC Unisphere                   1.3.6.1.0096    Management software for VNX storage
EMC Navisphere CLI (Linux x64)  7.33.3.0.72     CLI for OpenStack Cinder driver
EMC VNX Operating Environment   5.33.006.5.096  Operating environment for VNX block storage
EMC XtremIO                     3.0.0-44        Operating environment for XtremIO
EMC ScaleIO                     1.31.2          Software-defined storage
EMC ScaleIO Data Client (SDC)                   Driver that exposes shared storage as a global block device, serving local I/O requests on each server
EMC VNX Cinder Driver           4.1.0           Block storage driver
EMC XtremIO Cinder Driver       1.0.4           Block storage driver
EMC ScaleIO Cinder Driver       1.31.2          Block storage driver

Storage protocols

Table 4 lists the storage protocols used in this solution.

Table 4. Storage protocols

Protocol       Bandwidth
iSCSI          10 Gb Ethernet
Fibre Channel  8 Gb Fibre Channel


Requirements

This section outlines specific requirements that must be met before you can use Red Hat Enterprise Linux OpenStack Platform with EMC storage solutions. Refer to Capacity Planning for Red Hat Enterprise Linux OpenStack Platform for system requirements.

Server and networking requirements

The server hardware and networking requirements for this OpenStack solution comply with the EMC Simplified Support Matrix. The solution currently supports only IPv4. IPv6 has not been tested. This solution uses standard EMC-supported storage system connectivity options, including the following:

• Management interface: 10 Gb Ethernet
• iSCSI storage network: 10 Gb Ethernet
• 8 Gb/s Fibre Channel

OpenStack requirements

This architecture is designed to be used with Red Hat Enterprise Linux OpenStack Platform 6 running on Red Hat Enterprise Linux 7.

This architecture supports only the local or NFS option for Glance and only the default (LVM) Cinder option in the Red Hat Enterprise Linux OpenStack Platform Installer environment setup. The architecture does not support Ceph. Cinder drivers for both VNX and XtremIO are installed as part of the OpenStack deployment with Red Hat Enterprise Linux OpenStack Platform Installer. ScaleIO drivers are not installed with the Red Hat Enterprise Linux OpenStack Platform deployment and must be obtained from EMC Online Support.



Installation

Refer to the following guides for the latest information regarding deploying and configuring Red Hat Enterprise Linux OpenStack Platform in your environment:

• Capacity Planning for Red Hat Enterprise Linux OpenStack Platform
• Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer)
• Administration Guide for Red Hat Enterprise Linux OpenStack Platform

Servers

Prepare server hardware as detailed in the Red Hat Enterprise Linux OpenStack Platform guides. Additional steps for VNX and XtremIO include verifying network connectivity to the array’s management network and, where iSCSI is being used, to the array’s iSCSI network. For FC, all nodes must have physical access to the FC fabric.

Fabric describes additional FC requirements. The requirements specified in Networks and Fabric are the same for both VNX and XtremIO.

Networks

The Red Hat Enterprise Linux OpenStack Platform guides explain the physical and virtual network preparation. This architecture uses OpenStack tenant VLANs. Follow these guidelines:

• A Public API network is recommended to separate external API traffic from internal API and management traffic.

• In the case of either XtremIO or VNX, network connectivity is required from all OpenStack nodes to the management network of the VNX or XtremIO array.
• For iSCSI on either VNX or XtremIO, the storage network that is created as part of the deployment must be able to reach the iSCSI network(s) that VNX and XtremIO use. To force iSCSI traffic to use these interfaces, static routes must be configured on each node to use the default gateway of the storage network as the next hop for the storage array iSCSI network(s), as shown in the example below.
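A minimal sketch of such a static route on Red Hat Enterprise Linux 7, assuming a hypothetical storage interface eth2, a storage network gateway of 192.168.30.1, and an array iSCSI network of 192.168.40.0/24 (all values are placeholders):

# /etc/sysconfig/network-scripts/route-eth2
# Send traffic for the array iSCSI network through the storage network gateway
192.168.40.0/24 via 192.168.30.1 dev eth2

The same route can also be added at run time with ip route add 192.168.40.0/24 via 192.168.30.1 dev eth2.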

Fabric

FC zoning entries are required between all nodes and the VNX or XtremIO arrays unless the Brocade FC Zone Manager for OpenStack is configured. The Brocade FC Zone Manager provides automatic host-to-array zoning for both OpenStack controller and compute nodes.

Red Hat Enterprise Linux OpenStack Platform

Refer to Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) for deployment steps.

OpenStack controller node

Refer to Capacity Planning for Red Hat Enterprise Linux OpenStack Platform and Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) for controller node deployment steps.

For VNX and XtremIO configuration, all nodes must have a public network address assigned during configuration of the OpenStack environment. Only the Multi-Node HA deployment, which is the recommended deployment option, was tested with Red Hat Enterprise Linux OpenStack Platform.

Note: A simple deployment should also work, but that deployment was not tested.

OpenStack compute node

Refer to Capacity Planning for Red Hat Enterprise Linux OpenStack Platform and Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) for compute node deployment steps. For VNX and XtremIO configuration, all compute nodes must have a public network address assigned during configuration of the OpenStack environment.

VNX

The use of iSCSI requires network connectivity from all OpenStack controller and compute nodes to the VNX management network and the VNX iSCSI network. You must configure VNX storage and storage pools using EMC Unisphere or Naviseccli. You must manually register host iSCSI and FC initiators on the VNX unless the cinder.conf configuration file specifies automatic host initiator registration.

iSCSI requirements

Additional packages are required to enable VNX iSCSI device multipathing on all OpenStack nodes. Type the following to install the multipathing software:

yum -y install device-mapper-multipath device-mapper-multipath-libs

The iscsi-initiator-utils package is not installed on OpenStack controller nodes during Red Hat Enterprise Linux OpenStack Platform deployment. Certain Cinder functions require iscsi-initiator-utils on OpenStack controllers.

1. Type the following command to install the iSCSI initiator utils package:

yum -y install iscsi-initiator-utils

2. Ensure the following value is set in the libvirt section of nova.conf on all compute nodes:

iscsi_use_multipath = True

Fibre Channel requirements

For FC connections, all controller and compute nodes must have zoning entries to the required ports on the VNX array unless the OpenStack Brocade FC Zone Manager is being used. The Brocade FC Zone Manager enables automatic FC zoning of hosts from OpenStack. To enable use of FC LUNs with Cinder, install additional required packages as follows:

1. Type the following command to install the required packages for FC:

yum install sysfsutils sg3_utils

2. Verify that libaio is installed on each node. If it is not, install it by typing the following command:

yum install libaio


Additional packages are required to enable VNX FC device multipathing on all OpenStack nodes. Type the following to install the multipathing software:

yum -y install device-mapper-multipath device-mapper-multipath-libs

XtremIO

Network connectivity is required from the OpenStack controller and compute nodes to the XtremIO management network and, in the case of iSCSI, to the XtremIO iSCSI network.

iSCSI requirements

XtremIO deployments using iSCSI require manual installation of the iscsi-initiator-utils package on all OpenStack controller nodes following the deployment of the OpenStack environment. Do the following to configure an environment for iSCSI multipathing:

1. Type the following command to install the iSCSI initiator utils package:

yum -y install iscsi-initiator-utils

2. Ensure the following value is set in the libvirt section of nova.conf on all compute nodes:

iscsi_use_multipath = True

Additional packages are required to enable XtremIO iSCSI device multipathing on all OpenStack nodes. Type the following to install the multipathing software:

yum -y install device-mapper-multipath device-mapper-multipath-libs

Fibre Channel requirements

For FC connections, all controller and compute nodes must have zoning entries to the required ports on the XtremIO array. To enable use of FC LUNs with Cinder, install additional required packages as follows:

1. Type the following command to install the required packages for FC:

yum install sysfsutils sg3_utils

2. Verify that libaio is installed on each node. If it is not, install it by typing the following command:

yum install libaio

Additional packages are required to enable XtremIO FC device multipathing on all OpenStack nodes. Type the following to install the multipathing software:

yum -y install device-mapper-multipath device-mapper-multipath-libs

Multipath

Multipath requirements apply to VNX and XtremIO for both FC and iSCSI protocols. Multipathing must be installed and configured on all OpenStack nodes to ensure proper operation of the attach and detach Cinder commands with VNX and XtremIO Cinder back ends. Complete the instructions in the following sections to configure multipath for VNX and XtremIO.

VNX Multipath

To configure multipath for VNX systems, edit the /etc/multipath.conf file to include the code specified below.

blacklist {
    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
    }
}
defaults {
    user_friendly_names no
    flush_on_last_del yes
}
devices {
    # Device attributes for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

Once the multipath settings have been saved, reload the multipath service using the following command:

systemctl reload multipathd.service
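To confirm that multipathed VNX devices are assembled correctly after the reload, you can inspect the multipath topology with the standard device-mapper-multipath query:

multipath -ll

Each VNX LUN should typically appear with vendor DGC and priority-based path groups, reflecting the ALUA settings above.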

XtremIO Multipath

To enable multipathing for XtremIO, edit the /etc/multipath.conf file to include the code specified below.

defaults {
    user_friendly_names no
    flush_on_last_del yes
}
devices {
    device {
        vendor XtremIO
        product XtremApp
        path_selector "queue-length 0"
        rr_min_io_rq 1
        path_grouping_policy multibus
        path_checker tur
    }
}

Save the changes and run the following command to reload the multipath service:

systemctl reload multipathd.service

ScaleIO

In this architecture, the ScaleIO management components, including ScaleIO Meta Data Manager, Callhome, Tie-Breaker, and Gateway, are installed on the OpenStack controller nodes. The ScaleIO Data Server (SDS) and ScaleIO Data Client (SDC) components are installed on all OpenStack compute nodes. Additionally, the ScaleIO SDC component must be installed on all OpenStack controller nodes.

ScaleIO SDS nodes require dedicated local storage, either hard-disk drives or flash drives, to provide SDS storage for the ScaleIO cluster.

An additional package is required on all hosts to enable the use of ScaleIO volumes with Cinder. Type this command to install the required package for ScaleIO:

yum install sysfsutils

Note: Ensure that the dedicated disks for ScaleIO SDS storage are not selected by Red Hat Enterprise Linux OpenStack Platform Installer during OpenStack deployment.

EMC recommends using dual 10 GbE networks for all ScaleIO deployments. When deploying ScaleIO with Red Hat Enterprise Linux OpenStack Platform, ensure that both 10 GbE networks are being used for ScaleIO to ensure optimal ScaleIO performance. For all other ScaleIO requirements, refer to the EMC ScaleIO V1.31 User Guide.

The Cinder drivers for ScaleIO are not included in the Red Hat Enterprise Linux OpenStack Platform distribution and must be obtained from EMC. Obtain the ScaleIO software packages and Cinder driver files from EMC Online Support. Refer to the EMC ScaleIO V1.31 User Guide for instructions on installing and configuring ScaleIO in an OpenStack environment.

Note: If you plan to use the scaleio_install.py installation script for the Cinder driver for ScaleIO, download and install the Python ConfigObj config file reader and writer with this command:

pip install configobj

Post-deployment

Once the OpenStack environment has been deployed, the Puppet agents on all compute and controller nodes must be disabled to prevent Puppet from overwriting configuration changes. Run the following command on each node:

puppet agent --disable
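If many nodes are involved, the agents can also be disabled from any host with SSH access to all of them; the node names in this sketch are placeholders:

# Disable the Puppet agent on each node, recording a reason for the lock
for node in controller0 controller1 controller2 compute0 compute1; do
    ssh root@$node "puppet agent --disable 'EMC storage configuration'"
done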

Also ensure that qemu-kvm-rhev is at least version 2.1.2-23.el7_1_1.2, or failures may occur when creating a volume from an image with an XtremIO array.
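The installed version can be verified on each compute node with a standard RPM query:

rpm -q qemu-kvm-rhev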

See Custom Configuration with the Red Hat Enterprise Linux OpenStack Platform Installer for steps to disable custom Puppet classes after deployment is complete.


Note: For highly available Red Hat Enterprise Linux OpenStack Platform deployments, the idle_timeout value for MySQL must be changed. Refer to this page for instructions on configuring the correct idle_timeout value.


Configuration

This section includes links to specific configuration steps required to enable Cinder drivers for VNX, XtremIO, and ScaleIO.

VNX

The Cinder driver for VNX is installed as part of the Red Hat Enterprise Linux OpenStack Platform deployment process. For VNX Cinder driver requirements and configuration steps, refer to the OpenStack Configuration Reference for Juno.

Figure 3 shows the VNX with iSCSI deployment architecture.

Figure 3. VNX with iSCSI deployment architecture


Figure 4 shows the VNX with Fibre Channel deployment architecture.

Figure 4. VNX with Fibre Channel deployment architecture

The VNX Cinder driver provides support for all operations outlined in Table 5. The driver also offers support for the following features:

• Cinder volume support for VNX Fully Automated Storage Tiering (FAST)
• Cinder volume support for VNX FAST Cache

• Creation of thin, thick, compressed, and deduplicated volumes

Note: VNX Block Operating Environment 05.33.006.5.096 is required for OpenStack Cinder support for VNX deduplicated volumes.

• Storage-assisted volume migration
• Creation of read-only Cinder volumes
• Multiple back ends
• Multiple VNX storage pools
• Fibre Channel auto-zoning
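As an illustration only, a VNX iSCSI back end in cinder.conf might resemble the following sketch. The back-end name, addresses, credentials, and pool name are placeholders, and the OpenStack Configuration Reference for Juno remains the authoritative source for option names:

[DEFAULT]
enabled_backends = vnx-iscsi

[vnx-iscsi]
# EMC VNX direct driver for iSCSI, shipped with the Juno Cinder release
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
# VNX storage processor IP and credentials (placeholders)
san_ip = 192.168.1.50
san_login = sysadmin
san_password = sysadmin
# Path to the Navisphere CLI installed earlier in this guide
naviseccli_path = /opt/Navisphere/bin/naviseccli
# Existing VNX storage pool to allocate volumes from (placeholder)
storage_vnx_pool_name = OpenStack_Pool
# Avoids manually registering host initiators on the VNX
initiator_auto_registration = True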

XtremIO

The Cinder driver for XtremIO is installed as part of the Red Hat Enterprise Linux OpenStack Platform deployment process. You can find XtremIO Cinder driver requirements and configuration steps in the EMC XtremIO OpenStack Block Storage driver guide for Juno.


Figure 5 shows the XtremIO with iSCSI deployment architecture.

Figure 5. XtremIO with iSCSI deployment architecture

Figure 6 shows the XtremIO with Fibre Channel deployment architecture.

Figure 6. XtremIO with Fibre Channel deployment architecture


XtremIO offers native thin-provisioned volumes, inline data compression, data deduplication, and full support for multiple Cinder back ends. Table 5 lists all supported Cinder commands.
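Similarly, an XtremIO back end in cinder.conf might look like the following sketch; the back-end name, XMS address, and credentials are placeholders, and the EMC XtremIO OpenStack Block Storage driver guide for Juno is the authoritative reference for driver and option names:

[DEFAULT]
enabled_backends = xtremio-iscsi

[xtremio-iscsi]
# EMC XtremIO iSCSI driver shipped with the Juno Cinder release
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
# XtremIO Management Server (XMS) IP and credentials (placeholders)
san_ip = 192.168.1.60
san_login = admin
san_password = admin-password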

ScaleIO

ScaleIO can be deployed in an existing Red Hat Enterprise Linux OpenStack Platform environment using the OpenStack controller and compute nodes. Servers that provide ScaleIO SDS require additional local disks. Red Hat Enterprise Linux OpenStack Platform Installer assigns the first disk in each node as the operating system disk by default. Any other local hard-disk drives can be used for SDS storage. For ScaleIO driver requirements and configuration steps for OpenStack, refer to the EMC ScaleIO V1.31 User Guide. You can configure ScaleIO to run on the existing networks that are created by Red Hat Enterprise Linux OpenStack Platform Installer during the environment deployment. For optimal performance, EMC recommends using dual 10 GbE networks with ScaleIO.

Figure 7 shows the ScaleIO deployment architecture.

Figure 7. ScaleIO deployment architecture

In addition to the Cinder functions outlined in Table 5, ScaleIO offers the following Cinder functionality:

• Native data obfuscation
• Support for multiple protection domains and storage pools
• Creation of thick and thin-provisioned Cinder volumes


Managing storage volumes

The Managing Red Hat Enterprise Linux OpenStack Platform Environment Administration Guide and the OpenStack Admin User Guide provide the latest detailed information for managing storage volumes in your environment. Table 5 summarizes supported volume operations for each platform.

Table 5. Cinder volume operations for OpenStack Juno

VNX                                        XtremIO                               ScaleIO
Create Cinder volume                       Create Cinder volume                  Create Cinder volume
List Cinder volumes                        List Cinder volumes                   List Cinder volumes
Delete Cinder volume                       Delete Cinder volume                  Delete Cinder volume
Snapshot Cinder volume                     Snapshot Cinder volume                Snapshot Cinder volume
List volume snapshots                      List volume snapshots                 List volume snapshots
Delete volume snapshots                    Delete volume snapshots               Delete volume snapshots
Attach volume                              Attach volume                         Attach volume
Detach volume                              Detach volume                         Detach volume
Create volume from snapshot                Create volume from snapshot           Create volume from snapshot
Copy image to volume                       Clone volume                          Copy volume to image
Copy volume to image                       Extend volume                         Clone volume
Clone volume                               Configure multiple storage back ends  Extend volume
Extend volume                              Get volume stats                      Configure multiple storage back ends
Migrate volume                             Retype volume                         Get volume stats
Create Cinder volume consistency groups
Delete Cinder volume consistency groups
Create Cinder consistency group snapshots
List Cinder consistency group snapshots
Delete Cinder consistency group snapshots


Note: Red Hat Enterprise Linux OpenStack Platform does not support copy image to volume
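As a worked example of several operations in Table 5, the following Juno-era CLI sequence creates a volume, attaches it to an instance, and snapshots it; the volume name, instance name, and IDs are placeholders:

# Create a 10 GB volume and confirm its status
cinder create --display-name demo-vol 10
cinder list

# Attach the volume to a running instance
nova volume-attach demo-instance <volume-id>

# Snapshot the attached volume (--force is required while it is in use)
cinder snapshot-create --display-name demo-snap --force True <volume-id>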


Support

Refer to the EMC Simple Support Matrices for links to tables of hardware and software supported by EMC products. For EMC with OpenStack environments, support is also available through TSANet.org, which allows companies to transfer support cases without a support agreement.

Contact EMC Online Support for issues with EMC storage solutions. Contact Red Hat support for issues with Red Hat Enterprise Linux OpenStack Platform.


Conclusion

Reducing IT operational expenditures while simultaneously increasing the level of security and software capabilities is a top priority for many companies. Cost, security, reliability, and ease of use are often key considerations when an enterprise evaluates new technology solutions.

OpenStack provides inexpensive and flexible storage and virtualization software. Using OpenStack with EMC storage solutions enables IT organizations to reduce costs while maintaining secure, reliable service with a storage system that is easy to deploy and manage. With Red Hat Enterprise Linux OpenStack Platform and EMC storage solutions, customers realize the following benefits:

• Decreased costs associated with environment scalability
• More choices in supported hardware
• Greater flexibility in cloud migration
• Lower operational and maintenance costs


References

EMC documentation

The following documents, located on EMC.com, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

• Introduction to the EMC VNX2 Series: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000—A Detailed Review

• EMC Unisphere: Unified Storage Management Solution for the New VNX Series: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, and VNX8000—A Detailed Review

• EMC VNX2 FAST VP: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000—A Detailed Review

• EMC VNX2 Multicore FAST Cache: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000—A Detailed Review

• EMC VNX2 Deduplication and Compression: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000—Maximizing effective capacity utilization

• Solution Overview: EMC ScaleIO—For Development and Testing

• Solution Overview: EMC ScaleIO—For High-Performance Computing

• Introduction to the EMC XtremIO Storage Array—A Detailed Review

Red Hat documentation

The following Red Hat documents, located on the Red Hat website, also provide useful information:

Getting Started

• Capacity Planning for Red Hat Enterprise Linux OpenStack Platform

• Choosing a Network Back-end for Red Hat Enterprise Linux OpenStack Platform

• Evaluating OpenStack: Deployment Types and Tools

• Evaluating OpenStack: Single-Node Deployment

• Evaluating OpenStack: Simple Networking

• Evaluating OpenStack: Install OpenShift on OpenStack

• Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer)

• Deploying OpenStack: Proof-of-Concept Environments (Packstack)

• Deploying OpenStack: Learning Environments (Manual Setup)

• Deploying OpenStack in a Virtual Machine Environment Using Instack

OpenStack documentation

The following OpenStack documents, located on the OpenStack website, also provide useful information:

• OpenStack Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20


• OpenStack High Availability Guide
