PowerVM and VIOS for IBM i


(1)

PowerVM and VIOS for IBM i

QUSER user group meeting

3/19/2013

Minneapolis, MN

Gottfried Schimunek
Senior Architect, Application Design
IBM STG Software Development Lab Services, IBM ISV Enablement
3605 Highway 52 North, Rochester, MN 55901
Tel 507-253-2367  Fax 845-491-2347
Gottfried@us.ibm.com

(2)

Acknowledgement

Thanks to:

(3)

IBM's History of Virtualization Leadership

A 40+ year tradition continues with PowerVM™ and VM Control™

1967: IBM develops the hypervisor that would become VM on the mainframe
1973: IBM announces the first machines to do physical partitioning
1987: IBM announces LPAR on the mainframe
1999: IBM announces LPAR on POWER™
2004: IBM introduces the POWER Hypervisor™ for System p™ and System i™
2007: IBM announces PowerVM
2008: IBM announces POWER6™ Live Partition Mobility
2009: the tradition continues with PowerVM and VM Control

(4)

PowerVM: Virtualization Without Limits

Sold with more than 70% of Power Systems

Improves IT resource utilization

Reduces IT infrastructure costs

(5)

PowerVM Editions offer a unified virtualization solution for all Power workloads

PowerVM Express Edition

Evaluations, pilots, PoCs

Single-server projects

PowerVM Standard Edition

Production deployments

Server consolidation

PowerVM Enterprise Edition

Multi-server deployments

Cloud infrastructure

PowerVM Editions are tailored to client needs

PowerVM Editions: Express, Standard, Enterprise

Concurrent VMs: Express, 2 per server; Standard, 20 per core** (up to 1000); Enterprise, 20 per core** (up to 1000)

Features compared across the editions: Virtual I/O Server, NPIV, Suspend/Resume, Shared Processor Pools, Shared Storage Pools, Thin Provisioning, Live Partition Mobility, Active Memory Sharing

(6)

Emerging: Virtual Appliances

[Diagram: workload mobility across VM/LPARs with shared storage and shared network, optimized for availability, performance, and energy; evolution toward virtual appliance deployment from an image library through the Virtual I/O Server, virtual networking, and virtual storage]

+ Manually intensive server/storage/network mobility management

+ VM-based Availability/Resilience Mgmt + Storage Pools

+ IO virtualization and virtual switching

+ Hypervisor clustered fs access to virtual storage

[Diagram: virtualized compute, memory, network, and I/O (Virtual I/O Server, vSwitches, VLANs, external virtualized storage and switches), with physical resource discovery/configuration/provisioning/update, system health, OS provisioning, virtual I/O, virtual machine lifecycle management, and dynamic resource optimization within a physical system]

+ Mobility of workloads with automated and integrated server, network & storage provisioning

+ Workload-based Availability/Resilience Mgmt

+ Converged Datacenter Network fabric

+ Storage Pools with advanced capabilities (cloning, snapshot, thin provisioning, …)

+ Managing to QoS Policies (intelligent placement)

+ VM Security Appliance

(7)

Two I/O Server Options

[Diagram: an IBM i client partition served either by an IBM i host partition or by a VIOS partition, in both cases through the POWER6 Hypervisor]

IBM i host partition
• Built into IBM i
• Host Disk, Optical, Tape
• Consolidate Ethernet Traffic
• Same technology as hosting AIX, Linux, and iSCSI

VIOS Server
• Host Disk, Optical, Tape
• Bridge Ethernet Traffic
• Attach external storage

(8)

What is the VIOS?

A special-purpose appliance partition

Provides I/O virtualization

Advanced partition virtualization enabler

First made generally available in 2004

Built on top of AIX, but not an AIX partition

IBM i first attached to VIOS in 2008 with IBM i 6.1

(9)

Why use the VIOS?

I/O Capacity Utilization

Storage Allocation Flexibility

Ethernet Flexibility

Memory Sharing

Suspend/Resume

(10)

[Diagram: I/O bus virtualization with dedicated adapters, where each LPAR owns its own PCI adapter (function and port) and device driver connected through the hypervisor to the fabric, compared with I/O adapter virtualization with a VIO Server, where the VIOS LPAR owns the physical adapter and device driver and client LPARs attach through virtual server/client adapters over a virtual fabric; the result is increasing adapter bandwidth and LPAR density per slot]

(11)

IBM i + VSCSI (Classic)

[Diagram: POWER6 with IBM i 6.1.1; a VIOS partition with an FC HBA hosts three IBM i client partitions (System 1, System 2, System 3) through the hypervisor; each client sees its disks as device type 6B22]

• Assign storage to the physical HBA in the VIOS
• The hostconnect is created as an open storage or AIX host type
• Requires 512-byte-per-sector LUNs to be assigned to the hostconnect
• Cannot migrate existing direct-connect LUNs
• Many storage options supported
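For illustration only: with classic VSCSI, the mapping of a LUN to an IBM i client is created on the VIOS command line as a virtual target device. A minimal sketch, assuming hdisk4 is the LUN zoned to the VIOS HBA and vhost0 is the virtual SCSI server adapter paired with the IBM i client (all device names are illustrative):

  $ lsdev -type disk                                      # confirm the VIOS sees the new LUN (e.g. hdisk4)
  $ lsmap -all                                            # find the vhost adapter that serves the IBM i client
  $ mkvdev -vdev hdisk4 -vadapter vhost0 -dev ibmi_lun0   # create the virtual target device
  $ lsmap -vadapter vhost0                                # verify the mapping

The IBM i client then sees the LUN as a virtual (type 6B22) disk unit that can be added to an ASP like any other unit.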

(12)

Performance – Does Virtualization Perform?

Database ASP

[Chart: response time in ms (0 to 16) vs. operations per second (0 to 60,000), comparing DS5K attached through VIOS with direct-attached (DA) DS5K]

(13)

IBM i + NPIV (Virtual Fibre Channel)

[Diagram: POWER6 with IBM i 6.1.1; a VIOS partition with an 8 Gb HBA hosts three IBM i client partitions through virtual Fibre Channel adapters and the hypervisor]

• The hypervisor assigns 2 unique WWPNs to each virtual Fibre Channel adapter (virtual address example: C001234567890001)
• The hostconnect is created as an iSeries host type
• Requires 520-byte-per-sector LUNs to be assigned to the iSeries hostconnect on DS8K
• Can migrate existing direct-connect LUNs
• DS8100, DS8300, DS8700, DS8800, DS5100, DS5300, SVC, V7000, and V3700 supported

Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the DS8000 to use virtual fiber.

(14)

NPIV Concepts

Multiple VFC server adapters may map to the same physical adapter port.

Each VFC server adapter connects to one VFC client adapter; each VFC client adapter gets a unique WWPN.

The client WWPN stays the same regardless of the physical port it is connected to.

Support for dynamically changing the physical port to virtual port mapping.

Clients can discover and manage physical devices on the SAN.

The VIOS cannot access or emulate storage; it just provides clients access to the SAN.

Support for concurrent microcode download to the physical FC adapter.
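As a hedged sketch of these concepts on the VIOS command line, the virtual FC server adapter (vfchost) is bound to an NPIV-capable physical port with vfcmap, and the result is checked with lsmap; adapter names are illustrative:

  $ lsnports                                # list physical FC ports and whether the attached fabric supports NPIV
  $ vfcmap -vadapter vfchost0 -fcp fcs0     # map the virtual FC server adapter to physical port fcs0
  $ lsmap -all -npiv                        # show vfchost-to-client mappings and the client WWPNs logged in to the fabric

Because the WWPNs belong to the client adapter rather than the physical port, existing zoning and LUN masking follow the client if the mapping is later moved to a different physical port.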

(15)
(16)

NPIV Performance

NPIV vs Direct Attach (DS8300)

[Chart: application response time vs. CPW users (0 to 120), comparing NPIV with direct attach]

(17)

Introducing the VIOS Performance Advisor

What is it?
The VIOS Advisor is a standalone application that polls key performance metrics for minutes or hours, then analyzes the results to produce a report that summarizes the health of the environment and proposes potential actions to address performance inhibitors.

How does it work?
Step 1: Download the VIOS Advisor.
Step 2: Run the executable within the VIOS partition. Only a single executable is required, and it can monitor from 5 minutes up to 24 hours.
Step 3: View the XML file. Open the .xml file in your favorite web browser to get an easy-to-interpret report summarizing your VIOS status.
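On recent VIOS levels the advisor ships built in as the part command (Performance Analysis and Reporting Tool); older levels use the separately downloaded executable described above. A minimal sketch, with the collection interval illustrative:

  $ part -i 30     # collect and analyze performance data for 30 minutes
  # the report (an .xml file plus supporting files, typically packaged as an archive) lands in the
  # padmin user's home directory; copy it to a workstation and open the .xml in a browser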

(18)

Sample screenshot: SEA

OPTIMAL: current condition likely to deliver best performance.

WARNING: current condition deviates from best practices; an opportunity likely exists for better performance.

CRITICAL: current condition likely causing negative performance impacts.

INFORMATIVE: context-relevant data helpful in making adjustments.

Performance interpretation combined with effective visual cues alerts clients to the state of the system and opportunities to optimize.

(19)

Customer Driven Features – Your input matters

More detailed Fibre Channel adapter statistics to aid with resource planning.

Additional customer-driven features:
• Check for updates button
• Option to include timestamps in the report name
• FC adapter command element monitoring
• Server serial number added to the report

(20)

VIOS Investigator

New iDoctor component released in May 2012:
• Combines NMON data and a VIOS-to-IBM i disk mapping process to analyze VIOS performance
• Includes an NPIV data collection and analysis function
• Includes functions to display the VIOS configuration
• PerfPMR data collection and send to IBM support
• Free (except the NPIV analysis functions, which require a JW license)

Future plans:
• V7000 support
• Documentation (see chapter 10)

(21)
(22)

IBM i + NPIV (Virtual Fibre Channel) with PowerHA

[Diagram: POWER6 with IBM i 6.1.1; VIOS 1 with an 8 Gb HBA serves IBM i Client 1 and IBM i Client 2 through the hypervisor; SYSBAS and an IASP are attached through separate virtual FC ports]

• Each port is assigned separate WWPNs by the hypervisor
• Each port is seen as a separate adapter by IBM i, so PowerHA can reset it individually
• Reduces the hardware needed for a single SYSBAS

(23)

PowerHA in the Virtual I/O Environment

With VSCSI
• All logical replication solutions supported, including iCluster
• PowerHA for i: Geographic mirroring
• PowerHA for i: Storwize V7000 Metro and Global Mirror support (4Q2011)

With NPIV
• All logical replication solutions supported, including iCluster
• DS8000 Metro Mirroring
• DS8000 Global Mirroring
• DS8000 LUN level switching
• SVC/V7000 Metro Mirroring
• SVC/V7000 Global Mirroring
• SVC/V7000 LUN level switching

(24)

Redundant VIOS with NPIV

[Diagram: a POWER6 server with two VIOS partitions; the IBM i partition's client VFC adapters connect to server VFC adapters in each VIOS, which own the physical FC connections to SYSBAS and IASP storage]

Step 1: Configure virtual and physical FC adapters.
Best practice is to make the VIOS redundant and to separate the individual VIOS partitions so that a single hardware failure cannot take down both VIOS partitions.

Step 2: Configure the SAN fabric and storage.
Zone the LUNs to the virtual WWPNs. Each DASD then sees a path through 2 VIOS partitions.

Notes:
• Up to 8 paths per LUN are supported
• Not all paths have to go through separate VIOS partitions

(25)

VIOS – Storage attach

Three categories of storage attachment to IBM i through VIOS:

1) Supported (IBM storage)
Tested by IBM; IBM supports the solution, owns resolution, and will deliver the fix.

2) Tested / Recognized (3rd-party storage, including EMC and Hitachi)
IBM / storage vendor collaboration; the solution was tested (by the vendor, IBM, or both). A CSA is in place stating that IBM and the storage vendor will work together to resolve the issue; IBM or the storage vendor will deliver the fix.

3) Other
Not tested by IBM, and possibly not tested at all. No commitment or obligation to provide a fix.

Category 3 (Other) was introduced in the last few years; previously, "other" storage invalidated the VIOS warranty. IBM Service has committed to provide a limited level of problem determination for service requests involving "other" storage, to the extent of isolating the problem to VIOS or IBM i, or external to VIOS or IBM i (i.e. a storage problem). There is no guarantee that a fix will be provided, even if the problem is identified as a VIOS or IBM i issue.

(26)

Notes

• This table does not list more detailed considerations, for example required firmware levels or PTFs, or configuration performance considerations.
• POWER7 servers require IBM i 6.1 or later.
• This table can change over time as additional hardware/software capabilities and options are added.

# DS3200 only supports SAS connection; not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS.
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower only uses Fibre Channel. Blades in BCH support either SAS or Fibre Channel. Blades in BCS only use SAS.
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support.
* Supported with Smart Fibre Channel adapters; NOT supported with IOP-based Fibre Channel adapters.
** NPIV requires a Machine Code level of 6.1.1 or later and requires NPIV-capable HBAs (FC adapters) and switches.
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500.

Table as of Feb 2013

Rack / Tower systems (IBM i 6.1 / 7.1 on POWER6/7 unless noted otherwise):
• DS3200#, DS3400, DS3500##, DCS3700, DS3950, DS4700, DS4800, DS5020: VIOS attach (not DS3200#; yes DS3500##)
• SVC, Storwize V7000, V3700, V3500: VIOS VSCSI and VIOS NPIV%%
• DS5100, DS5300: Direct* or VIOS (VSCSI and NPIV%)
• XIV: VIOS
• DS8100, DS8300, DS8700, DS8800, DS8870 (IBM i 5.4 / 6.1 / 7.1 on POWER5/6/7): Direct or VIOS (VSCSI and NPIV**)

Power Blades (IBM i 6.1 / 7.1 on POWER6/7, BCH and BCS): supported storage per notes @, #, and ## above.

(27)

IBM PowerVM Virtual Ethernet

PowerVM Ethernet switch
• Part of the PowerVM Hypervisor
• Moves data between LPARs

Shared Ethernet Adapter
• Part of the VIO server
• Logical device
• Bridges traffic to and from external networks

Additional capabilities
• VLAN aware
• Link aggregation for external networks
• SEA failover for redundancy

[Diagram: the PowerVM Hypervisor provides a VLAN-aware Ethernet switch; in the Virtual I/O Server, the Shared Ethernet Adapter bridges a physical CMN adapter and a virtual CMN adapter to an external Ethernet switch; Client 1 and Client 2 each use a virtual CMN adapter]
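As a hedged sketch of building the Shared Ethernet Adapter on the VIOS, assuming ent0 is the physical adapter and ent2 is the virtual trunk adapter defined on the hypervisor switch (adapter names and the default VLAN ID are illustrative):

  $ lsdev -type adapter                                          # identify the physical and virtual Ethernet adapters
  $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1   # bridge the virtual adapter onto the physical one
  $ lsmap -all -net                                              # verify the new SEA and its mapping

Client partitions simply define virtual Ethernet adapters on the same VLAN; the SEA forwards their traffic to the external switch.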

(28)

PowerVM Active Memory Sharing

• Supports over-commitment of logical memory, with overflow going to a paging device
• Intelligently flows memory from one partition to another for increased utilization and flexibility
• Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed to optimize overall memory usage
• Designed for partitions with variable memory requirements
• Available with PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems

[Diagram: a POWER server running partitions with dedicated memory and CPU alongside AMS partitions using shared memory and shared CPU; the Virtual I/O Server provides the paging devices through the PowerVM Hypervisor]

(29)

LPAR Suspend/Resume – Customer Value

Resource balancing for long-running batch jobs
• e.g. suspend lower-priority and/or long-running workloads to free resources

Planned CEC outages for maintenance/upgrades
• Suspend/resume may be used in place of, or in conjunction with, partition mobility
• Suspend/resume may require less time and effort than a manual database shutdown and restart, for example

Requirements:
• All I/O is virtualized
• HMC Version 7 Release 7.3
• Firmware Ax730_xxx
• IBM i 7.1 TR2
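Suspend and resume are driven from the HMC. A minimal sketch, assuming the chlparstate command on HMC V7R7.3 or later, with managed system and partition names illustrative:

  $ chlparstate -m MY_POWER_SERVER -p MY_IBMI_LPAR -o suspend   # quiesce the partition and save its state to the reserved storage device
  $ chlparstate -m MY_POWER_SERVER -p MY_IBMI_LPAR -o resume    # restore the partition and continue where it left off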

(30)

Move a running partition from one POWER7 server to another with no application downtime, over a virtualized SAN and network infrastructure:
• Rebalance processing power across servers when and where you need it
• Reduce planned downtime by moving workloads to another server
• Move to a different server with no loss of service

(31)

Requirements

HMC / Firmware
• HMC Version 7 Release 7.5
• Firmware service pack 730_51, 740_40, or later

Software
• PowerVM Enterprise Edition
• VIOS 2.2.1.4
• Supported client operating system: IBM i 7.1 TR4

I/O
• All I/O through the VIOS (VSCSI, NPIV, virtual Ethernet)

External storage
• The same storage visible to both source and destination

Hardware
• Both source and destination must be POWER7 tower / rack servers managed by the same HMC

(32)

Live Partition Mobility (supported on POWER7 with IBM i 7.1 TR4)

[Diagram: two POWER7 systems managed by an HMC and sharing a storage subsystem; each system runs a VIOS with a mover service partition and VASI, virtual SCSI devices (vhost/vtscsi/vscsi), and an SEA bridging the VLAN to the physical network; IBM i Client 1 is suspended on System 1 and recreated as a shell partition on System 2 while memory pages are copied across]

Migration flow:
1. Validate the environment for appropriate resources.
2. Create a shell partition on the target system.
3. Create the virtual SCSI devices.
4. Start migrating memory pages.
5. Once enough memory pages have been moved, suspend the source system.
6. Finish the migration and remove the original LPAR definitions.
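The same flow can be driven from the HMC command line with migrlpar; a hedged sketch, with managed system and partition names illustrative:

  $ migrlpar -o v -m SOURCE_SERVER -t TARGET_SERVER -p IBMI_LPAR   # validate: checks VIOS pairing, storage visibility, and resources on the target
  $ migrlpar -o m -m SOURCE_SERVER -t TARGET_SERVER -p IBMI_LPAR   # migrate: creates the shell partition, copies memory, then completes the switch-over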

(33)

Performance Considerations

Active partition migration involves moving the state of a partition from one system to another while the partition is still running.
• Partition memory state is tracked while transferring memory state to the destination system
• Multiple memory transfer passes are done until a sufficient amount of clean pages have been moved

Memory updates on the source system affect transfer time
• Reduce the partition's memory update activity prior to the migration

Network speed affects the transfer time
• Use a dedicated network, if possible
• At least 1 Gb speed

(34)

Application impacts during migration

In general, applications and the operating system are unaware that the partition has been moved from one system to another. There are some exceptions to this:
• Collection Services: when the partition starts to run on the target system, the Collection Services collector job cycles the collection so that the correct hardware configuration is reflected.

(35)

PowerVM – VIOS Shared Storage Pools

Extending storage virtualization beyond a single system

vSCSI Classic: storage virtualization
• Storage pooled at the VIOS for a single system
• Enables dynamic storage allocation
• Supports local and SAN storage, IBM and non-IBM storage

vSCSI NextGen: clustered storage virtualization
• Storage pool spans multiple VIOSs and servers
• Enabler for federated management
• Location transparency
• Advanced capabilities
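A hedged sketch of the clustered ("NextGen") model on VIOS 2.2: a shared storage pool cluster is created once, and thin-provisioned logical units are then carved out of the pool and mapped to clients. Cluster, pool, disk, and adapter names are illustrative:

  $ cluster -create -clustername ssp_demo -repopvs hdisk2 -spname pool1 -sppvs hdisk3 hdisk4 -hostname vios1
  $ lssp -clustername ssp_demo                                                   # show the pool and its free space
  $ mkbdsp -clustername ssp_demo -sp pool1 20G -bd lu_client1 -vadapter vhost0   # create a 20 GB thin-provisioned LU and map it to the client's vhost adapter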

(36)

Integrated storage virtualization increases platform value

[Diagram: server and storage system administrators manage virtual servers, VIOS-based storage aggregation, and integrated server & storage management across NAS, SAN, and IBM, EMC, Hitachi, and other SAN storage]

Integrated storage capabilities:
• Non-disruptive storage lifecycle management
• Director integration to deliver high-level value
• Storage integrated with server management infrastructures
• Consistent capabilities across different storage
• Storage pooling, migration, copy services, SAN virtualization, file virtualization, caching, geo mirroring, thin provisioning (VIOS 2.2)

Client benefits:
• Automated storage provisioning
• Simplified, integrated Director management
• Advanced image management
• Few interactions between management domains
• Consolidated on-line backup
• Consistent capabilities with different storage

(37)

VMControl virtualization capabilities on PowerVM

VMControl Express Edition: manage resources
• Create/manage virtual machines (x86, PowerVM and z/VM)
• Virtual machine relocation

VMControl Standard Edition: automate virtual images
• Capture/import, create/remove standardized virtual images
• Deploy standard virtual images
• Maintain virtual images in a centralized library

VMControl Enterprise Edition: optimize system pools
• Create/remove system pools and manage system pool resources
• Add/remove physical servers within system pools

(38)

System Pools within IBM Systems Director

Managing a pool of system resources with single-system simplicity

System pools are being integrated as a new type of system within the IBM Systems Director tools, allowing the pool to be managed as a single logical entity in the data center.

A dashboard view for system pools provides an overall view of the health and status of the pool and the deployed workloads.

[Diagram: IBM Systems Director with VMControl managing a system pool, with mobility optimized for availability and performance]

(39)

System Pool support for IBM i Images

Technical overview:
• Supports the IBM i operating system on the POWER platform
• All system pool operations are supported: deploy, capture, relocate, optimize
• It is assumed that the image meets the hardware/PTF requirements when using the GUI to do a deploy
• From the CLI/REST interfaces, you can mark that an image is not relocatable, and it will not be moved

Hardware/software requirements:
• POWER7 hardware at firmware release 740.40 or 730.51
• Managed by IBM Hardware Management Console V7R7.5.0M0 or later
• IBM i image at 7.1 TR4 or later with PTF SI45682

Reference information:
• PTF information: http://www-912.ibm.com/a_dir/as4ptf.nsf/ALLPTFS/SI45682
• Information Center: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM i Technology Updates/page/Live Partition Mobility

Restrictions:

(40)

NPIV Support in System Pools

Technical overview:
• NPIV (N_Port ID Virtualization) is now fully supported in system pools on the POWER platform
• Deploy to a system pool is now supported for single- and multi-disk virtual appliances
• System pools can contain both vSCSI- and NPIV-attached disks, and they are handled appropriately by the relocation and optimization functions
• Virtual appliances can describe disks that are NPIV- and vSCSI-attached (mixed) for both deploy and capture
• Image repositories can be hosted on NPIV-attached storage
• Note: the storage connectivity is not preserved in the virtual appliance during capture

Restrictions:
• NPIV is only supported on SAN storage
• When editing disks for a virtual server you cannot switch from vSCSI to NPIV

(41)

Analyst commentary on PowerVM with POWER7

"A data center scaling out to a cloud-supporting infrastructure, or supporting multiple applications placing varying demands on system resources, would have to purchase, deploy, provision, and maintain a good deal more hardware and software with a VMware-based solution to achieve the same workload productivity possible with PowerVM on POWER7."

(42)

PowerVM Client Success: GHY International

Consolidating infrastructure benefits a midsize business

Business challenge:
Predicting that international trade would increase as economic conditions improved, customs brokerage GHY International wanted to update its IT infrastructure to provide headroom for business growth.

Solution:
GHY International deployed an IBM® Power® 750 running IBM AIX®, IBM i, and Linux® on a single POWER7® system using IBM PowerVM™, plus a separate IBM System x® 3850 and VMware environment for Windows®.

Benefits:
• Enhanced scalability: the IBM Power 750 delivers over four times the capacity of the previous server
• Easy manageability: a four-person IT team now spends just five percent, versus 95 percent, of its time on server management
• Better energy efficiency: reduced electricity and cooling requirements

"With PowerVM, we went from 95 percent to only 5% of our time managing or reacting to our environment. And saved the business hundreds of thousands of dollars in licensing and application fees."
— Nigel Fortlage, vice president of IT and CIO, GHY International

(43)

131%

PowerVM on Power 750 delivers superior scale-up efficiency that outperforms vSphere 5.0 by up to 131%, running the same workloads across virtualized resources.

PowerVM is 103% better than vSphere 4.1 and 131% better than vSphere 5.0. vSphere 5.0 is no better than vSphere 4.1.

PowerVM on POWER7 delivers better scale-up and higher throughput performance than VMware vSphere, and the PowerVM advantage increases as we scale up.

[Chart: AIM7 single-VM scale-up, jobs/min (0 to 600,000) vs. number of vCPUs (1, 2, 4, 8, 16, 32) for PowerVM, vSphere 5, and vSphere 4.1; Power 750, 32 cores (8 cores/chip) vs. HP ProLiant DL580 G7 (Westmere EX), Xeon E7-4870, 40 cores (10 cores/chip); advantages of +103% and +131% at the high end]

(44)

525%

PowerVM on Power 750 outperforms VMware by up to 525% when running multiple VMs and workloads.

PowerVM maximizes workload performance and all system resources. vSphere 5.0 has more cores but still can't compete with PowerVM.

PowerVM on POWER7 delivers better scale-out and higher throughput performance than VMware vSphere.

[Chart: AIM7 multiple-VM scale-out, jobs/min (0 to 600,000) with 8 VMs (32 vCPUs per VM)]

(45)

Client needs: PowerVM vs. VMware vSphere 5.0/5.1

High Performance
• PowerVM: built-in hypervisor means all industry-leading Power Systems benchmarks are fully virtualized
• vSphere: degrades x86 workload performance by up to 30% compared to 'bare metal'

Elastic Scalability
• PowerVM: scales to support the most demanding mission-critical enterprise workloads
• vSphere: imposes constraints that limit virtualization to small/medium workloads

Extreme Flexibility
• PowerVM: dynamically reallocates CPU, memory, storage and I/O without impacting workloads
• vSphere: limited 'hot-add' of CPU and memory, with high risk of workload failures

Maximum Security
• PowerVM: embedded in Power Systems firmware and protected by secure access controls and encryption
• vSphere: downloaded software exposes more attack surfaces, with many published vulnerabilities

Platform Integration
• PowerVM: designed in sync with POWER processor and platform architecture road maps
• vSphere: third-party add-on software utility, developed in isolation from processor or systems

PowerVM and POWER7 deliver a level of integration unmatched by VMware and x86.

(46)

Learn more about PowerVM on the Web

PowerVM resources include http://www.ibm.com/systems/power/software/virtualization
(… or Google 'PowerVM' and click I'm Feeling Lucky)

(47)

Resources and references

Techdocs (presentations, tips & techniques, white papers, etc.)
http://www.ibm.com/support/techdocs

IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
http://www.redbooks.ibm.com/abstracts/sg247940.html?Open

IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
http://www.redbooks.ibm.com/abstracts/sg247590.html?Open

IBM PowerVM Virtualization Active Memory Sharing, REDP-4470
http://www.redbooks.ibm.com/abstracts/redp4470.html?Open

IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194
http://www.redbooks.ibm.com/abstracts/redp4194.html?Open

Power Systems: Virtual I/O Server and Integrated Virtualization Manager commands (iphcg.pdf)

(48)

Trademarks and Disclaimers

© IBM Corporation 1994-2008. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.

Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency which is now part of the Office of Government Commerce. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer Entertainment, Inc., in the United States, other countries, or both and are used under license therefrom.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of

performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
