(1)

Optimizing Storage for Oracle ASM with Oracle Flash-Optimized SAN Storage

Simon Towers
Architect, Flash Storage Systems
October 02, 2014

(2)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(3)

Goals of this Session

Best practice configuration settings for the Oracle Database 12c, ASM, Linux, and FS1 combination

(4)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(5)

Oracle's Complete Storage Portfolio
Engineered for Data Centers. Optimized for Oracle Software.

Storage Software
• Storage Management: FS MaxMan, OEM, ASM, Storage Analytics, ACSLS
• Automated Tiering: FS1 QoS Plus, DB Partitions, SAM QFS, VSM
• Data Reduction: 11g ACO, HCC, RMAN, ZFS Storage Appliance Dedup/Comp
• Data Protection: FS1 MaxRep, FS1 Data Protection Manager, Data Guard, RMAN, OSB
• Security/Encryption: ASO, Oracle Key Manager, Disk/Tape Encryption

Storage Hardware
• Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, Big Data Appliance
• SAN Storage: Oracle FS1, Pillar Axiom 600
• NAS Storage: ZFS Storage Appliances
• Tape and Virtual Tape: SL8500, SL3000, SL150, VSM, LTO, T9840, T10K
• Cloud Storage: deployment options (Private, Public, Hybrid); services (IaaS, PaaS, SaaS); consumption options (Build, Manage, Subscribe)

(6)

Cost-Performance of Storage Technology
The order-of-magnitude differences must be exploited to optimize the solution.

As of January 2014; list prices, approximate net values:

Storage class   $/GB    $/IOP
Cap HDD         0.25    10.00
Perf HDD        1.00     3.00
Cap SSD         4.12     0.31
Perf SSD        7.50     0.13

You cannot find a better technology than Flash if you need performance.
You cannot afford Flash if you don't need the performance.
Auto-tiering lets a system exploit both ends of the range.

(7)

Oracle FS1: QoS Plus
Tuning parameters for the volumes you create

QoS Plus combines:
• QoS set by business priority per volume: Archive, Low, Medium, High, Premium
• Heat maps tracking access frequency, read/write bias, and random/sequential bias
• Fine-grain auto-tiering across four storage classes: Capacity Disk, Performance Disk, Capacity Flash, Performance Flash
• Storage Domains, which provide secure multi-tenancy

(8)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(9)

Physical and Logical Setup

Hardware (physical setup):
• Sun Server X4-2 workload-generator server
• 16 Gbps FC switch
• Oracle FS1-2: two controllers, with Perf SSD, Cap SSD, Perf HDD, and Cap HDD storage tiers

Logical setup:
• Oracle Database 12c on the workload server
• IP load balancing across the two FS1-2 controllers
• ASM disks presented from the FS1-2

(10)
(11)

Swingbench

• Load generator designed to stress test Oracle databases
• Consists of a load generator, a coordinator, and a cluster overview
• Includes four benchmarks: OrderEntry, SalesHistory, CallingCircle, and StressTest

(12)

Orion

• A tool for predicting the performance of an Oracle database without having to install Oracle or create a database
• Designed to simulate Oracle database IO workloads using the same IO software stack as Oracle
• Can also simulate the effect of striping performed by ASM
• Can run tests at different IO loads to measure performance metrics such as MBPS, IOPS, and IO latency
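As an illustration of how Orion is driven (a sketch, not from the slides; the test name and device paths are placeholders), a minimal run against two LUNs looks roughly like this:

```shell
# Hypothetical Orion invocation sketch; flag names follow Oracle's Orion
# documentation, but device paths and the test name are placeholders.

# Orion reads the LUNs to exercise from <testname>.lun, one per line.
cat > mytest.lun <<'EOF'
/dev/mapper/asm_data_lun1
/dev/mapper/asm_data_lun2
EOF

# "-run simple" measures small random IOs and large sequential IOs in turn;
# guarded so the sketch is harmless where the orion binary is absent.
if [ -x ./orion ]; then
  ./orion -run simple -testname mytest -num_disks 2
fi
```

Results land in per-test output files (IOPS, MBPS, and latency summaries) named after the test.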

(13)

[Diagram: test topology. Orion and Swingbench workloads run against Database 12c with IP load balancing; ASM disk groups are built from ASM disks on FS1-2 arrays, each with two controllers and Perf SSD, Cap SSD, Perf HDD, and Cap HDD tiers.]
(14)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(15)

Exec summary: configuring ASM, Linux and FS1 for 12c

ASM Disk Groups
• 3 ASM disk groups: +DATA, +REDO, +FRA
• 2 LUNs per ASM disk group

Storage QoS Plus
• +DATA: multiple storage tiers; OLTP: RAID 10; DSS: RAID 5
• +REDO: Performance Disk, RAID 10
• +FRA: Capacity Disk, RAID 6

Linux IO Scheduler
• Enable large IOs
• Change from the default scheduler for SSDs

Storage Domains and Auto-Tiering
• Isolate ASM disk groups into separate storage domains
• Let auto-tiering work its magic

(16)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(17)

ASM Disk Groups

How many disk groups?
• Standard Oracle recommendation is two: +DATA and +FRA

How many disks per disk group?
• Standard Oracle recommendation for normal and high redundancy is 4 × the number of active IO paths
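The sizing rule above is simple arithmetic; as an illustrative helper (mine, not the deck's) for the normal/high-redundancy case:

```shell
# Illustrative helper for the rule quoted above: for normal or high
# redundancy, the guideline is 4 disks per active IO path.
recommended_asm_disks() {
  echo $(( 4 * $1 ))
}

recommended_asm_disks 4   # 4 active IO paths -> prints 16
```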

(18)

Create disk groups that map to very different IO workloads, to avoid disk contention:

• +DATA: for OLTP this is mainly small random writes; for DSS, large sequential reads
• +REDO: small sequential reads/writes
• +FRA: large sequential reads/writes

ASM disk group   File types                   12c parameter
+DATA            Data, temp                   DB_CREATE_FILE_DEST
+REDO            Redo logs, control files     DB_CREATE_ONLINE_LOG_DEST_1
+FRA             Archive logs & backup sets   DB_RECOVERY_FILE_DEST

Make sure your ASM disk groups are set for External Redundancy.

But … when using a high-end storage controller with varying storage tiers and QoS settings …
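The table above maps to three initialization parameters; a config sketch of setting them (the disk group names are from the deck, while the FRA size is a placeholder that DB_RECOVERY_FILE_DEST requires alongside it):

```sql
-- Config sketch: point each Oracle 12c file destination at its ASM
-- disk group. Run from SQL*Plus as a privileged user; the FRA size
-- below is a placeholder for your environment.
ALTER SYSTEM SET db_create_file_dest         = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1 = '+REDO' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size  = 500G    SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest       = '+FRA'  SCOPE=BOTH;
```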

(19)

But … when using a high-end storage controller with varying storage tiers and QoS settings …

Use 2 LUNs per disk group, or multiples of two (2, 4, 8), balanced across the two FS1-2 controllers.

(20)
(21)

Storage QoS Plus

Match the storage QoS Plus settings to the ASM disk groups and their IO workloads:

Profile name    RAID level     Read ahead    Priority  Stripe width  Writes  Preferred storage classes
ASM_DATA_OLTP   Mirrored       Conservative  High      Auto-select   Back    Perf Disk, Perf SSD
ASM_DATA_DSS    Single parity  Aggressive    High      Auto-select   Back    Perf Disk, Cap SSD
ASM_REDO        Single parity  Normal        Premium   All           Back    Perf Disk
ASM_FRA         Double parity  Aggressive    Archive   Auto-select   Back    Cap Disk

(22)
(23)

Linux IO Scheduler

Linux uses IO scheduling to control the order of block IOs submitted to/from storage. Applications feed IOs into the OS IO queue, and the scheduler decides when they reach storage. Its goals:
• Reorder IOs to minimize disk seek times
• Balance IO bandwidth amongst processes
• Ensure IOs meet deadlines
• Keep the HBA IO queues full

(24)

Linux changes

For SAN controllers with lots of cache memory and SSD drives, you need to push IOs to storage as quickly as possible, so change the defaults under /sys/block/dm-*/queue:

Change the scheduler:
    echo noop > scheduler

Enable large IOs:
    echo 4096 > max_sectors_kb
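A small script (an illustration, not from the deck) that applies both settings across every device-mapper queue; the SYSFS_ROOT override exists only so the logic can be exercised outside a real /sys:

```shell
# Illustrative script: apply both tunings above to every device-mapper
# block device queue. SYSFS_ROOT defaults to /sys/block but can be
# overridden so the logic is testable without touching the real /sys.
tune_dm_queues() {
  sysfs_root="${SYSFS_ROOT:-/sys/block}"
  for queue in "$sysfs_root"/dm-*/queue; do
    [ -d "$queue" ] || continue          # skip if no dm devices match
    echo noop > "$queue/scheduler"       # hand IO ordering off to the array
    echo 4096 > "$queue/max_sectors_kb"  # permit large (4 MB) IOs
  done
}

# Exercise the function against a scratch tree rather than the real /sys:
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/dm-0/queue"
tune_dm_queues
cat "$SYSFS_ROOT/dm-0/queue/scheduler"   # noop
```

Note that sysfs writes do not survive a reboot; in practice you would persist these settings via a udev rule or a boot-time script.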

(25)

Storage Domains and Auto-Tiering

Storage Domains: FS1 software that isolates data in storage "containers"
• Domains are composed of RAID groups within drive enclosures
• RAID groups can be SSD or HDD or any combination thereof: Perf SSD, Cap SSD, Perf HDD (10K rpm), Cap HDD (7.2K rpm)
• Domains physically segregate data, avoiding data co-mingling
• QoS Plus and all major FS software operates uniquely on each storage domain; neither data nor data services can cross a domain boundary
• Up to 64 storage domains per FS1
• Online reallocation of physical storage to domains

(26)

Separate your ASM disk groups into different storage domains (Performance SSDs, Capacity SSDs, Performance HDDs, Capacity HDDs).

(27)

When to stop buying flash: diminishing returns. A rule of thumb: stop when the sum of flash's share of IOPS and its share of capacity (as fractions) reaches 1.
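As a worked example of the rule (my illustration, not the deck's): if the flash tier serves 85% of IOPS while holding 12% of total capacity, the sum is 0.97, still below 1, so adding flash may still pay off.

```shell
# Illustrative check of the rule of thumb above: keep adding flash while
# (flash share of IOPS) + (flash share of capacity) stays below 1.
flash_score() {
  iops_share="$1"      # fraction of IOPS served from flash, e.g. 0.85
  capacity_share="$2"  # fraction of capacity that is flash, e.g. 0.12
  awk -v i="$iops_share" -v c="$capacity_share" \
    'BEGIN { printf "%.2f %s\n", i + c, (i + c < 1) ? "keep-buying" : "stop" }'
}

flash_score 0.85 0.12   # prints: 0.97 keep-buying
flash_score 0.90 0.15   # prints: 1.05 stop
```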

(28)

Program Agenda

1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary

(29)

Exec summary: configuring ASM, Linux and FS1 for 12c

ASM Disk Groups
• 3 ASM disk groups: +DATA, +REDO, +FRA
• 2 LUNs per ASM disk group

Storage QoS Plus
• +DATA: multiple storage tiers; OLTP: RAID 10; DSS: RAID 5
• +REDO: Performance Disk, RAID 10
• +FRA: Capacity Disk, RAID 6

Linux IO Scheduler
• Enable large IOs
• Change from the default scheduler for SSDs

Storage Domains and Auto-Tiering
• Isolate ASM disk groups into separate storage domains
• Let auto-tiering work its magic

(30)
(31)

Oracle Open World 2014 – FS1 Sessions

• CON7789: Optimizing Oracle Data Stores in Virtualized Environments (9/30/14, 10:45 - 11:30, Intercontinental, Intercontinental C)
• CON7830: Solving Data Skew in Oracle Business Applications with Oracle's Flash-Optimized SAN Storage (9/30/14, 15:45 - 16:30, Intercontinental, Intercontinental C)
• CON7792: Optimizing Oracle Data Stores with Oracle Flash-Optimized SAN Storage (9/30/14, 17:00 - 17:45, Intercontinental, Intercontinental C)
• CON7832: Leveraging Oracle's Flash-Optimized SAN Storage in a Cloud Deployment (10/1/14, 12:45 - 13:30, Intercontinental, Intercontinental C)
• CON7841: Maximizing Oracle Database 12c with Oracle's Flash-Optimized SAN Storage (10/2/14, 12:00 - 12:45, Intercontinental, Union Square)
• CON7831: Optimizing Storage for Oracle ASM with an Oracle Flash-Optimized SAN

(32)

Oracle Open World 2014 – FS1 DemoPods and HOL

DemoPods:
• Demo 3691: Leveraging Flash to Improve Latency of Multiple Database Instances (SC-117)
• Demo 3713: Quality of Service-Driven Autotiering (SC-132)
• Demo 3711: Maximizing Database Performance: Data Tiering vs Oracle HCC vs Deduplication (SC-161)
• Demo 3695: Simplifying Storage Management with Oracle Enterprise Manager (SC-162)
• Demo 4766: Hardware Showcase: Oracle FS1 Flash Storage System (SC-133)

Hands On Lab (HOL):
• HOL8687: Oracle Storage System GUI: Faster Database Performance with QoS Enhancements (9/30/14, 18:45 - 19:45, Hotel Nikko, Nikko Ballroom I)
