IBM i Virtualization and Open Storage. Mike Schambureck IBM Lab Services Rochester, MN


(1)

IBM i Virtualization and Open Storage

Mike Schambureck

IBM Lab Services

(2)

Partition Virtualization on POWER

IO Virtualization with Dedicated Adapters

[diagram: LPAR A and LPAR B each own a physical PCI adapter and its device driver, connecting through the hypervisor to separate ports on the fabric]

IO Virtualization with a Hosting Server

[diagram: a server LPAR owns the physical PCI adapter and serves LPAR A and LPAR B through server/client virtual adapter pairs over a virtual fabric, increasing adapter bandwidth and LPAR density per slot]

(3)

Partition Virtualization concepts / benefits

Virtualization allows you to use the same physical adapter across

several partitions simultaneously.

– For storage • Disk • Tape • Optical – For Ethernet

Benefits:

– This reduces your hardware costs – Better hardware utilization

(4)

IBM i Host and Client Partitions: Overview

[diagram: an IBM i host with integrated or SAN disks and a DVD drive serves an IBM i client over a virtual SCSI connection – NWSSTG objects appear to the client as DDxx disk units and the DVD as OPTxx – plus a virtual LAN connection (CMNxx)]

DASD
– Hardware assigned to host LPAR in HMC
– Hosting server’s DASD can be integrated or SAN
– DASD virtualized as NWSSTG objects tied to network server descriptions (see the CL sketch below)

Optical
– DVD drive in host LPAR virtualized directly (OPTxx)

Networking
– Network adapter and Virtual Ethernet adapter in host LPAR
– Virtual Ethernet adapter in client
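A minimal CL sketch, run on the IBM i host, of the NWSSTG/NWSD pattern described above. The object names, the 40 GB size, the partition number, and the CTL01 resource are illustrative assumptions, not values from this deck:

  /* Network server description tied to the client partition's      */
  /* virtual SCSI server adapter (resource name is illustrative)    */
  CRTNWSD NWSD(CLIENT1) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS) PTNNBR(2) ONLINE(*NO)
  /* 40 GB storage space that becomes a client disk unit (DDxx)     */
  CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(40960) FORMAT(*OPEN)
  /* Link the storage space to the NWSD                             */
  ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(CLIENT1)
  /* Vary on the NWSD to present the disk to the client partition   */
  VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)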

(5)

VIO Server and Client Partitions: Overview

[diagram: a VIOS host with integrated or SAN disks and a DVD drive serves an IBM i client over a virtual SCSI connection – hdisks appear to the client as DD## disk units and the DVD as OPT## – plus a virtual LAN connection (CMN##)]

DASD
– Hardware assigned to VIOS LPAR in HMC
– DASD can be integrated or SAN
– hdisk# is virtualized as an IBM i DD## device

Optical
– DVD drive in the host VIOS LPAR virtualized directly (OPT##)

Networking
– Network adapter and Virtual Ethernet adapter in the VIOS LPAR
– Virtual Ethernet adapter in the IBM i client LPAR

(6)
(7)

Integrated Server Virtualization concepts / benefits

Virtualization also allows IBM i to host x86 operating systems:
– For storage
  • Disk (also uses network storage spaces)
  • Tape
  • Optical
– For Ethernet

Benefits:
– Take advantage of IBM i ease of use and legendary reliability
– Designed to pool resources and optimize their use across a variety of operating systems
– Centralize storage and server management
– Take advantage of IBM i save/restore interfaces for x86 data
  • Object level (storage space)

(8)

Where Do I Start with Virtualization on IBM i on Power Systems?

• Latest version at: http://www.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
• http://www.ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf

(9)

Virtual SCSI (vSCSI): IBM i hosting IBM i or VIOS hosting IBM i

As of POWER6 with IBM i 6.1.1

[diagram: a hosting server (IBM i or VIOS) with an FC HBA serves IBM i clients on Systems 1, 2, and 3 through the hypervisor; each client sees its disks as device type 6B22]

• Assign storage to the physical adapter in the hosting partition
• Requires 512-byte-per-sector LUNs to be assigned to the host
• Many storage options supported

(10)

vSCSI Storage Mapping

As of POWER6 with IBM i 6.1.1

[diagram: the hosting server's storage adapter presents hdisk1/hdisk2 (or NWSSTG objects tied to an NWSD) through a vSCSI server adapter (vhostXXX) to the client's vSCSI client adapter; the IBM i client sees device type 6B22]

• Storage management and allocation are done from both the external storage and the hosting IBM i/VIOS
• Storage is assigned to the hosting IBM i/VIOS partition
• Within VIOS, you map the hdisk# (LUN) to the vhost corresponding to the client partition (see the sketch below)
• Within an IBM i host, you map storage spaces (NWSSTG) to a network server description (NWSD) tied to the client partition
• Flexible disk sizes
  – Load source requirements
• 16 disks per vSCSI adapter – just increased to 32 in IBM i 7.1 TR8 / 7.2!
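A minimal VIOS command-line sketch of the hdisk-to-vhost mapping described above (device names are illustrative):

  lsdev -type disk                        # list the hdisks the VIOS sees
  mkvdev -vdev hdisk2 -vadapter vhost0    # map hdisk2 to the client behind vhost0
  lsmap -vadapter vhost0                  # verify the resulting virtual target device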

(11)

vSCSI for optical

As of POWER6 with IBM i 6.1.1

[diagram: the hosting server's optical (cd1) and tape (rmt1) devices are presented through a vSCSI server adapter (vhostXXX) to the client's vSCSI client adapter]

• The drive is assigned to the hosting partition
• Within VIOS, you map physical tape, physical optical, or file-backed virtual optical to the vhost corresponding to the client partition (see the sketch below)
• IBM i hosting automatically maps optical and tape resources to the client using the vSCSI adapter
• VIOS has no tape library support with vSCSI adapters – must use VFC adapters
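A VIOS command-line sketch of the optical options described above; the storage pool, ISO file name, and device names are illustrative assumptions:

  mkvdev -vdev cd0 -vadapter vhost0          # map the physical DVD drive to the client
  mkrep -sp rootvg -size 10G                 # one-time: create the virtual media repository
  mkvopt -name install.iso -file /home/padmin/install.iso   # add an ISO image to the repository
  mkvdev -fbo -vadapter vhost0               # create a file-backed optical device (e.g. vtopt0)
  loadopt -disk install.iso -vtd vtopt0      # load the image into the virtual drive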

(12)
(13)

Create the Virtual SCSI Server Adapter

• Specify the IBM i LPAR
• Specify the adapter ID used when creating the client adapter in IBM i

(14)

Assigning VIOS Storage to IBM i – SAN Storage

[diagram: storage volumes become virtual target devices (vtscsiXX/vtscsiYY) on vhost0/vhost1 in VIOS and appear as DDxx devices in IBM i LPAR #1 and LPAR #2 over virtual SCSI connections]

• VIOS: Create virtual SCSI server adapters in VIOS (VIOS partition profile)
• VIOS: Create virtual SCSI client adapters in the client IBM i partition profile
• VIOS: Assign storage volumes to IBM i client partitions (HMC or command line)
• IBM i: Initialize and add disks to an ASP (from SST)

Maximum of 32* virtual devices per connection

(15)

Use HMC Virtual Storage Management to view storage in VIOS

(16)
(17)

Virtual Storage Management – Map Disk to IBM i client

Option 2 – VIOS command line:

  mkvdev -vadapter vhost0 -vdev hdisk1

(18)

IBM i + NPIV (Virtual Fibre Channel (vFC))

As of POWER6 with IBM i 6.1.1

[diagram: a VIOS with an 8 Gb HBA serves IBM i client partitions through the hypervisor; each client has its own virtual FC adapter (virtual address example: C001234567890001)]

• The hypervisor assigns 2 unique WWPNs to each virtual fibre channel adapter
• The host on the SAN is created as an iSeries host type
• Requires 520-byte-per-sector LUNs to be assigned to the iSeries host on DS8K
• Can migrate existing direct-connect LUNs
• DS8100, DS8300, DS8700, DS8800, DS5100, DS5300, V7000, SVC, V3700 and V3500 supported

Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the SAN/tape library to use virtual fibre.

(19)

Requirements for NPIV with VIOS and IBM i Client Partitions

• Must use 8 Gb or 16 Gb Fibre Channel adapters on the Power system and assign them to VIOS partitions
• Must use a Fibre Channel switch to connect the Power system and the storage server
• The Fibre Channel switch must be NPIV-capable
• The storage server must support NPIV as an attachment between VIOS and IBM i
  – Coming up on another slide

(20)

NPIV Configuration – Limitations

• Single client adapter per physical port per partition
  – Intended to avoid a single point of failure
  – Documentation only – not enforced
• Maximum of 64 active client connections per physical port
  – It is possible to map more than 64 clients to a single adapter port
  – May be less due to other VIOS resource constraints
• 32K unique WWPN pairs per system platform
  – Removing an adapter does not reclaim its WWPNs
    • They can be manually reclaimed through the CLI (mksyscfg, chhwres, …) via the “virtual_fc_adapters” attribute – see the sketch after this list
  – If exhausted, you need to purchase an activation code for more
• Device limitations
  – Maximum of 128 visible target ports
    • Not all visible target ports will necessarily be active (redundant paths to a single DS8000 node; device-level port configuration)
    • Inactive target ports still require client adapter resources
  – Maximum of 64 target devices
    • Any combination of disk and tape
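A hedged sketch of the reclaim mentioned above: when re-creating a virtual FC client adapter from the HMC command line, a previously generated WWPN pair can be supplied explicitly rather than letting the hypervisor generate a new one. The system, partition, slot numbers, and WWPNs below are illustrative assumptions:

  chhwres -r virtualio -m SYS1 -o a -p IBMI1 --rsubtype fc -s 4 \
    -a "adapter_type=client,remote_lpar_name=VIOS1,remote_slot_num=16,\"wwpns=c0507603a2a2001c,c0507603a2a2001d\""
  # adds a client VFC adapter in slot 4 that reuses the listed WWPN pair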

(21)

Create VFC Client Adapter in IBM i Partition Profile

Specify VIOS LPAR

Need to check box

(22)

VFC Client Adapter Properties

Virtual WWPNs used to configure hosts on the storage server

(23)

Disk and Tape Virtualization with NPIV – Assign Storage

 Use HMC to assign IBM i LPAR and VFC adapter pair to physical FC port
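The same pairing can also be done from the VIOS command line; a minimal sketch (adapter names are illustrative):

  lsnports                               # list physical FC ports and their NPIV capability
  vfcmap -vadapter vfchost0 -fcp fcs0    # bind the client's virtual FC server adapter to port fcs0
  lsmap -all -npiv                       # verify the virtual FC mappings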

(24)

Disk and Tape Virtualization with NPIV – Configure SAN

• Complete zoning on your switch using the virtual WWPNs generated for the IBM i LPAR
• Configure a host connection on the SAN tied to the virtual WWPNs
• Use the storage or tape library UI and Redbook to assign LUNs or tape

(25)

Redundant VIOS with NPIV

[diagram: an IBM i partition with SYSBAS and IASP LUNs reached through client VFC adapters paired with server VFC adapters in two VIOS partitions, each with its own physical FC connections]

• Step 1: Configure virtual and physical FC adapters
  – Best practice is to make the VIOSs redundant, or to separate individual VIOS partitions so that a single hardware failure cannot take down both VIOS partitions
• Step 2: Configure SAN fabric and storage
  – Zone LUNs to the virtual WWPNs
  – Each DASD sees a path through 2 VIOS partitions

Notes:
• Up to 8 paths per LUN are supported
• Not all paths have to go through separate VIOS partitions

(26)

Connecting IBM i to VIOS storage – vSCSI vs. NPIV

[diagram: with vSCSI, FC HBAs in the VIOS attach to the SAN (DS8000, XIV, DS3500, V7000, EMC) and present generic SCSI disks to IBM i; with NPIV, virtual FC adapters in IBM i map through N_Ports on the VIOS FC HBAs directly to the SAN storage]

vSCSI:
• All storage subsystems* and internal storage supported
• Storage assigned to VIOS first, then virtualized to IBM i

NPIV:
• Some storage subsystems and some FC tape libraries supported
• Storage mapped directly to the virtual FC adapter in IBM i, which uses an N_Port on the FC adapter in VIOS

* See the following charts for the list of IBM supported storage devices

(27)

Support for IBM Storage Systems with IBM i

Table as of April 2014. Unless noted, Rack/Tower and Power Blades entries are IBM i 6.1/7.1 on POWER6/7, and PureFlex/Flex node entries are IBM i 6.1/7.1 on POWER7/7+.

DS3200 / DS3400 / DS3500 / DCS3700
– Rack/Tower: VIOS VSCSI (not DS3200#; DS3500 yes##)
– Power Blades (@, #, ##): VIOS VSCSI
– PureFlex nodes: behind V7000
– Flex nodes: VIOS VSCSI

DS3950 / DS4700 / DS4800 / DS5020
– Rack/Tower: VIOS VSCSI
– Power Blades (BCH): VIOS VSCSI
– PureFlex nodes: behind V7000
– Flex nodes: VIOS VSCSI

SVC / Storwize V7000 / V3700 / V3500
– Rack/Tower: Direct* or VIOS – VSCSI and NPIV%%
– Power Blades (BCH): VIOS VSCSI
– PureFlex nodes: VIOS VSCSI for V7000
– Flex nodes: VIOS VSCSI or NPIV%% for V7000 / V3700 / SVC

DS5100 / DS5300
– Rack/Tower: Direct* or VIOS – VSCSI and NPIV%
– Power Blades (BCH): VIOS VSCSI and NPIV%
– PureFlex nodes: behind V7000
– Flex nodes: Native* or VIOS – VSCSI and NPIV%

XIV
– Rack/Tower: VIOS VSCSI
– Power Blades (BCH): VIOS VSCSI
– PureFlex nodes: behind V7000
– Flex nodes: VIOS VSCSI

DS8100 / DS8300 / DS8700 / DS8800 / DS8870
– Rack/Tower (IBM i 5.4/6.1/7.1, POWER5/6/7): Direct or VIOS – VSCSI and NPIV**
– Power Blades (BCH): VIOS VSCSI and NPIV**
– PureFlex nodes: behind V7000
– Flex nodes: Direct or VIOS – VSCSI and NPIV**

(28)

Support for IBM Storage Systems with IBM i – Notes

– This table does not list more detailed considerations, for example required firmware or PTF levels, or configuration performance considerations
– POWER7 servers require IBM i 6.1 or later
– This table can change over time as additional hardware/software capabilities/options are added

# DS3200 only supports SAS connection; not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower only uses Fibre Channel. Blades in BCH support either SAS or Fibre Channel. Blades in BCS only use SAS.
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support
* Supported with Smart Fibre Channel adapters – NOT supported with IOP-based Fibre Channel adapters
** NPIV requires Machine Code level 6.1.1 or later, and requires NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware released May 2011 or later
%% NPIV requires IBM i 7.1 TR6 (Technology Refresh 6)

For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC. The SSIC should be updated to reflect the above information.

(29)


Set Tagged I/O – Specify Client SCSI Adapter for Load Source

• Load source: the client adapter created on the previous slides – Client SCSI or Client VFC
• Alternate restart device: the physical CD/DVD, or the client SCSI adapter if virtualizing the device

(30)


The PC5250 emulator is used for the IBM i console

 Just like the HMC uses

(31)


6B25 Adapter Look & Feel

 Similar in look & feel to other IOPless storage adapters

 Attached device resources have real hardware CCINs

(32)

Virtual Ethernet

PowerVM Hypervisor Ethernet switch
– Part of every Power server
– Moves Ethernet data between LPARs
– Can separate traffic based on VLANs

Shared Ethernet Adapter
– Part of the VIO server
– Logical device
– Bridges traffic to and from external networks
– VLAN aware
– Link aggregation for external networks
– SEA Failover for redundancy

IBM i Bridge Adapter
– Bridges traffic to and from external networks
– Introduced in i 7.1 TR3

[diagram: a hosting server bridges a physical Ethernet adapter to a virtual adapter on the hypervisor's VLAN-aware Ethernet switch, connecting the virtual CMN adapters of Client 1 and Client 2 to the external Ethernet switch]
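A minimal VIOS sketch of creating the Shared Ethernet Adapter described above; the adapter numbers and PVID are illustrative assumptions:

  mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
  # bridges physical ent0 to virtual ent2; the command creates the SEA device (e.g. ent3)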

(33)

SEA Failover Configuration for Redundant VIOS’s

[diagram: two VIOS partitions, each with an SEA bridging a physical Ethernet adapter to the hypervisor's virtual network; the client partition's virtual Ethernet traffic reaches the external network through the primary SEA, with the secondary taking over on failure]

(34)

SEA Failover and Link Aggregation

• Create a 2nd VIOS
• Each VIOS has an SEA adapter*
• Each VIOS has a link aggregation
• A control channel is created between the 2 VIOSs (see the sketch after this slide)
  – Note: one SEA adapter must have a lower priority at creation**
• Failover and redundancy
  – VIOS 1A could be taken down for maintenance
  – VIOS 1B would take over the network traffic
  – A broken cable or a failed adapter, for example, would not disrupt Ethernet traffic

[diagram: an IBM i client (CMN0, PVID 1) served by VIOS 1A (primary) and VIOS 1B (standby); each VIOS aggregates two 1 Gb physical ports into a link aggregation device feeding its SEA, and a control channel on VLAN 99 (PVID 99) runs between the two VIOSs]

* The HMC must have “Access External Networks” checked for the ent2 virtual adapter on the VIOSs!
** Only one virtual Ethernet adapter used for the SEA (ent2) can have a priority of “1” on the HMC.
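A VIOS command-line sketch of the pieces above, run on each VIOS; the adapter numbers are illustrative assumptions and only loosely follow the diagram:

  mkvdev -lnagg ent0,ent1                  # aggregate the two 1 Gb physical ports (creates e.g. ent5)
  mkvdev -sea ent5 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent4
  # SEA with failover enabled; ent4 is the control-channel virtual adapter on VLAN 99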

(35)

 Note the setting “Access external network”

– Required for Shared Ethernet Adapter

 Client partitions use the same VLAN ID

(36)

Create Virtual Adapter in Client Partition

Needs to match VLAN ID in VIOS

(37)
(38)
(39)

Create Virtual Adapter Control Channel

• A control channel is created to allow a primary VIOS to communicate with a secondary VIOS so that a failover can occur if the primary VIOS is unavailable
• The control channel is a virtual Ethernet adapter pair (one on each VIOS) that is linked to the SEA on that VIOS
• Heartbeat messages are passed from the primary to the secondary VIOS over a separate VLAN (PVID)
• The control channel must be created before the failover SEA is created on the secondary VIOS
  – The operation will fail if the control channel doesn’t exist

(40)
(41)

View of Both VLANs from the HMC

(42)

Dual SEAs

• Another option is to create shared Ethernet adapters (SEAs) in each VIOS and make them peers (not primary/secondary)
  – This is also referred to as “load sharing”
• The HMC does not support this feature yet, so you need to use the VIOS command line
• Set ha_mode=sharing when creating the SEAs from the VIOS command line
• If changing existing SEAs that were previously set to primary/secondary, make sure you change the ha_mode attribute on the primary first:

  chdev -dev entX -attr ha_mode=sharing

(43)

10 Gb Shared Ethernet Adapter Performance

• 10 Gb SEAs put a much greater load on VIOS than 1 Gb SEAs
  – The current recommendation is 2 dedicated processors for VIOS partitions that virtualize 10 Gb SEAs
• Make sure the large send attribute is turned on (at the TCP layer):

  chdev -dev ent2 -attr large_send=yes -perm

• Make sure the flow control attribute is turned on:

  chdev -dev ent2 -attr flow_ctrl=yes -perm

(44)

IBM i bridge adapters

From the IBM i command line interface:
• Create an Ethernet line description for the physical Ethernet resource, and set its Bridge identifier to your chosen bridge name.
• Create an Ethernet line description for the selected virtual Ethernet resource, and set its Bridge identifier to the same bridge name.
  – The VE adapter must have “Use this adapter to access the external network” selected.
• When both line descriptions are varied on, traffic is bridged between the two networks, and any other partitions with virtual Ethernet adapters on the same VLAN as the new line descriptions can use the same bridge to reach the external network.
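A minimal CL sketch of the two line descriptions described above; the line names, CMN resource names, and bridge identifier are illustrative assumptions:

  /* Line description over the physical Ethernet resource */
  CRTLINETH LIND(ETHPHYS) RSRCNAME(CMN03) BRIDGE(BRIDGE1)
  /* Line description over the virtual Ethernet resource  */
  CRTLINETH LIND(ETHVIRT) RSRCNAME(CMN04) BRIDGE(BRIDGE1)
  /* Vary on both to start bridging                        */
  VRYCFG CFGOBJ(ETHPHYS) CFGTYPE(*LIN) STATUS(*ON)
  VRYCFG CFGOBJ(ETHVIRT) CFGTYPE(*LIN) STATUS(*ON)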

(45)

Virtual Ethernet Limits

Description                                                    Limit
Maximum virtual Ethernet adapters per LPAR                     256
Maximum number of VLANs per virtual adapter                    21 (20 VID, 1 PVID)
Virtual adapters per single SEA sharing one physical adapter   16
Maximum number of VLAN IDs                                     4094

(46)

Where do you have to run VIOS hosting IBM i?

(47)
(48)

PowerVM Active Memory Sharing

• Supports over-commitment of logical memory, with overflow going to a paging device
• Intelligently flows memory from one partition to another for increased utilization and flexibility
• Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed, to optimize overall memory usage
• Designed for partitions with variable memory requirements
• PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems
  – Partitions must use VIOS for I/O virtualization
• Make sure it’s a good fit for you!

[diagram: a Power server running dedicated-memory/CPU partitions alongside shared-memory/shared-CPU partitions; the hypervisor's AMS function allocates the shared memory pool and pages overflow to a paging device through the Virtual I/O Server]

(49)

LPAR Suspend/Resume – Customer Value

• Planned CEC outages for maintenance/upgrades
  – Suspend/resume may be used in place of, or in conjunction with, partition mobility
  – Suspend/resume may require less time and effort than a manual database shutdown and restart, for example
• Resource balancing for long-running batch jobs
  – e.g., suspend lower-priority and/or long-running workloads to free resources

Minimum requirements:
• All I/O is virtualized
• HMC Version 7 Release 7.3
• FSP firmware: Ax730_xxx
• IBM i 7.1 TR2
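A hedged sketch of driving suspend and resume from the HMC command line; the system and partition names are illustrative assumptions:

  chlparstate -o suspend -m SYS1 -p IBMI1   # suspend; partition state goes to the reserved storage/paging device
  chlparstate -o resume -m SYS1 -p IBMI1    # resume the partition later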

(50)

[diagram: suspending a partition (supported on POWER7 with IBM i 7.1 TR2) – validate the environment for appropriate resources; ask the partition if it is ready for suspend; suspend the partition; then move its memory and CPU state through the VIOS mover service (VASI) to a reserved storage pool LUN on the storage subsystem]

(51)

PowerVM Live Partition Mobility

• Move a running partition from one system to another with almost no impact to end users
• Requires POWER7 systems or later, PowerVM Enterprise, and all I/O through the Virtual I/O Server
• Requires IBM i 7.1 with TR4 or newer

Potential benefits:
• Eliminate planned outages
• Balance workloads across systems
• Energy savings

[diagram: movement of the OS and applications to a different server with no loss of service, over virtualized storage and network infrastructure]

(52)

Requirements & Planning

• Source and destination must be mobility capable and compatible:
  – Enhanced hardware virtualization capabilities
  – Identical or compatible processors
  – Compatible firmware levels
• Source and destination must be LAN connected
  – Same subnet
• All resources (CPU, memory, I/O adapters) must be virtualized prior to migration
  – The hypervisor handles CPU and memory automatically, as required; virtual I/O adapters are pre-configured, and SAN-attached disks are accessed through the Virtual I/O Server (VIOS)
• Source and destination VIOS must have symmetrical access to the partition's disks
  – e.g., no internal or VIOS LVM-based disks
• The OS is migration enabled/aware
  – Certain tools/application middleware can benefit from being migration aware also

[diagram: the LPAR's boot, paging, and application data reside on the SAN; source and destination systems share the SAN and LAN and are managed by the HMC]
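A hedged sketch of validating and then performing a migration from the HMC command line; the system and partition names are illustrative assumptions:

  migrlpar -o v -m SRCSYS -t DSTSYS -p IBMI1   # validate the mobility environment first
  migrlpar -o m -m SRCSYS -t DSTSYS -p IBMI1   # perform the live migration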

(53)

Live Partition Mobility

[diagram: migration flow between Power7 System #1 and System #2 sharing a storage subsystem, managed by the HMC – validate the environment for appropriate resources; create a shell partition on the target system; create virtual SCSI devices; start migrating memory pages through the mover service (VASI); once enough memory pages have been moved, suspend the source partition; then finish the migration and remove the original LPAR definitions. Partition Mobility is supported on POWER7 with IBM i 7.1 TR4.]

(54)

Native attached Storage to IBM i

• No VIOS involved
• Adapters are cabled to Fibre Channel (FC) switch(es)
• Switches include zoning from the SAN to the IBM i partition
• In the diagram, active paths are solid, passive paths are dotted
• Allows for failover recovery on loss of the primary node
• Requires i 7.1 TR6 or newer
• Supported SANs: DS8000/5100/5300, V7000, V3700, V3500, SVC

(55)

Direct attached Storage to IBM i

• No VIOS involved
• Adapters are cabled directly to the SAN
• In the diagram, active paths are solid, passive paths are dotted
• Allows for failover recovery on loss of the primary node
• This ties up host ports on the SAN (i.e., they can't be shared)
• Requires i 7.1 TR6 or newer
• Supported SANs: DS8000/5100/5300, V7000, V3700, V3500, SVC

(56)

IBM i Virtualization Enhancements

Enhancements by GA date, delivered through IBM i Technology Refreshes (IBM i 7.1 unless noted; some items also apply to IBM i 6.1 with 6.1.1 machine code, as noted):

June 2014 – Technology Refresh 8:
– SR-IOV native Ethernet support (i virtualization)
– Increase in vSCSI disks per host adapter to 32 (i virtualization, VIOS)

March 2013 – Technology Refresh 6:
– NPIV attach of SVC, Storwize V7000, V3700, V3500 (VIOS)

October 2012 – Technology Refresh 5:
– Large receive offload for layer 2 bridging (VIOS)
– PowerVM V2.2 refresh with SSP and LPM updates (VIOS)

May 2012 – Technology Refresh 4:
– IBM i Live Partition Mobility (VIOS)
– HMC Remote Restart PRPQ (VIOS)
– Performance enhancement for zeroing virtual disk (i virtualization)

December 2011 – Technology Refresh 3:
– PowerVM V2.2 refresh with SSP enhancements (VIOS)

(57)

IBM i Virtualization Enhancements (continued)

October 2011 – Technology Refresh 3:
– Ethernet layer-2 bridging (i virtualization)
– Mirroring with NPIV attached storage (VIOS; on IBM i 6.1, client only)
– VPM enhancements to create IBM i partitions (i virtualization)
– PowerVM NPIV attachment for DS5000 for blades (VIOS)
– PowerVM V2.2 refresh with network load balancing (VIOS)

May 2011 – Technology Refresh 2:
– Partition suspend and resume (VIOS)
– IBM i to IBM i virtual tape support (i virtualization; on IBM i 6.1, client only via APAR II14615)
– PowerVM NPIV attachment of DS5000 (VIOS)

December 2010 – Technology Refresh 1:
– PowerVM with shared storage pools (VIOS)

September 2010 – Technology Refresh 1:
– Support for embedded media changers (i virtualization)
– Expanded HBA and switch support for NPIV on blades (VIOS)

(58)

The End

(59)

Trademarks


The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.

The following are trademarks or registered trademarks of other companies.

* All other products may be trademarks or registered trademarks of their respective companies. Notes:

Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.

IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.

This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.

For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market.
