Optimizing Storage for Oracle ASM with Oracle Flash-Optimized SAN Storage
Simon Towers
Architect, Flash Storage Systems
October 02, 2014
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
Goals of this Session
• Best-practice configuration settings for the Database 12c, ASM, Linux and FS1 combination
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
Oracle's Complete Storage Portfolio
Engineered for Data Centers. Optimized for Oracle Software.

Storage Software
• Storage Management: FS MaxMan, OEM, ASM, Storage Analytics, ACSLS
• Automated Tiering: FS1 QoS Plus, DB Partitions, SAM QFS, VSM
• Data Reduction: 11g ACO, HCC, RMAN, ZFS Storage Appliance Dedup/Comp
• Data Protection: FS1 MaxRep, FS1 Data Protection Manager, Data Guard, RMAN, OSB
• Security/Encryption: ASO, Oracle Key Manager, Disk/Tape Encryption

• Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, Big Data Appliance
• SAN Storage: Oracle FS1, Pillar Axiom 600
• NAS Storage: ZFS Storage Appliances
• Tape and Virtual Tape: SL8500, SL3000, SL150, VSM, LTO, T9840, T10K
• Cloud storage
  – Deployment Options: Private, Public, Hybrid
  – Services: IaaS, PaaS, SaaS
  – Consumption Options: Build, Manage, Subscribe
Cost-Performance of Storage Technology
Order of magnitude difference must be exploited to optimize solution

Storage class | $/GB | $/IOP
Cap HDD       | 0.25 | 10.00
Perf HDD      | 1.00 | 3.00
Cap SSD       | 4.12 | 0.31
Perf SSD      | 7.50 | 0.13

As of January 2014, list prices, approximate net values

You cannot find a better technology than Flash if you need performance.
You cannot afford Flash if you don't need the performance.
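To make the order-of-magnitude gap concrete, an approximate illustration using the January 2014 list figures above: 10,000 GB of cold capacity costs roughly $2,500 on Cap HDD versus $75,000 on Perf SSD, while 100,000 IOPS costs roughly $13,000 on Perf SSD versus $1,000,000 on Cap HDD. Buying GBs from disk and IOPs from flash, and letting auto-tiering place the data accordingly, exploits both ends of the curve.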
Auto-Tiering
Oracle FS1: QoS Plus

[Diagram: volumes assigned Archive, Low, Medium, High or Premium priority are placed across Capacity Disk, Performance Disk, Capacity Flash and Performance Flash, behind controller CPU and cache]

• Set QoS by business priority
• Heat maps drive fine-grain auto-tiering based on:
  – Access frequency
  – Read / write bias
  – Random / sequential bias
• Storage Domains = secure multi-tenancy

QoS Plus: tuning parameters for the volumes you create
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
Environment Setup

Physical Setup
• Hardware: a Sun Server X4-2 workload-generator server connected through a 16Gbps FC switch to an FS1-2
• FS1-2: two controllers and four storage tiers (Perf SSD, Cap SSD, Perf HDD, Cap HDD)

Logical Setup
• Workload generator (Swingbench driving Database 12c, or Orion) with IP load balancing across the two FS1-2 controllers
• ASM disk groups built on the LUNs presented by the FS1-2

Workload Generators

Swingbench
• Load generator designed to stress test Oracle DBs
• Consists of a load generator, a coordinator and a cluster overview
• Includes four benchmarks: OrderEntry, SalesHistory, CallingCircle and StressTest

Orion
• A tool for predicting the performance of an Oracle DB without having to install Oracle or create a DB
• Designed for simulating Oracle DB IO workloads using the same IO software stack as Oracle
• Can also simulate the effect of striping performed by ASM
• Can run tests using different IO loads to measure performance metrics such as MBPS, IOPS, and IO latency (a sample invocation is sketched below)
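A minimal sketch of an Orion run against the FS1-2 LUNs. The LUN paths and test name are hypothetical; Orion reads the devices to exercise from <testname>.lun:

# Hypothetical multipath device names for two FS1-2 LUNs
cat > fs1test.lun <<'EOF'
/dev/mapper/fs1_lun1
/dev/mapper/fs1_lun2
EOF

# Simple workload: small random IOs, then large sequential IOs
./orion -run simple -testname fs1test -num_disks 2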
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
Exec summary: configuring ASM, Linux and FS1 for 12c

ASM Disk Groups
• 3 ASM disk groups: +DATA, +REDO, +FRA
• 2 LUNs per ASM disk group

Storage QoS Plus
• +DATA: multiple storage tiers; OLTP: RAID 10; DSS: RAID 5
• +REDO: Performance Disk, RAID 10
• +FRA: Capacity Disk, RAID 6

Linux IO Scheduler
• Enable large IOs
• Change from the default scheduler for SSDs

Storage Domains and Auto-Tiering
• Isolate ASM disk groups into separate storage domains
• Let auto-tiering work its magic
Program Agenda
1. Goals
2. Flash Storage System
3. Environment setup
4. Exec summary: best practice configuration settings
5. Detailed findings and recommendations
6. Conclusions/summary
ASM Disk Groups

How many disk groups?
• Standard Oracle recommendation is two: +DATA and +FRA

How many disks per disk group?
• Standard Oracle recommendation for normal and high redundancy is 4 * number of active IO paths

Create disk groups that map to very different IO workloads to avoid disk contention:
• +DATA: for OLTP this is mainly small random writes; for DSS, large sequential reads
• +FRA: large sequential read/write
• +REDO: small sequential read/write

But … when using a high-end storage controller with varying storage tiers and QoS settings …
ASM Disk group | File types                 | 12c Parameter
+DATA          | Data, temp                 | DB_CREATE_FILE_DEST
+REDO          | Redo logs, control         | DB_CREATE_ONLINE_LOG_DEST_1
+FRA           | Archive logs & backup sets | DB_RECOVERY_FILE_DEST

Make sure your ASM disk groups are set for External Redundancy: the FS1 storage profiles already provide RAID protection, so ASM mirroring on top of it would be redundant. A sketch of this setup follows.
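A minimal sketch of the corresponding setup. The disk discovery paths, FRA size and use of SCOPE=BOTH are illustrative assumptions, not values from this study:

# Create the disk groups on the ASM instance (hypothetical LUN paths);
# EXTERNAL REDUNDANCY because the FS1 profiles already provide RAID
sqlplus / as sysasm <<'SQL'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/fs1_data*';
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/mapper/fs1_redo*';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/mapper/fs1_fra*';
SQL

# Point the 12c file destinations at the disk groups
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1='+REDO' SCOPE=BOTH;
-- the FRA size must be set before the FRA location
ALTER SYSTEM SET db_recovery_file_dest_size=1024G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest='+FRA' SCOPE=BOTH;
SQL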
But … when using a high-end storage controller with varying storage tiers and QoS settings …
• 2 LUNs (or multiples of two: 2, 4, 8) per disk group, balanced across the two FS1-2 controllers
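One way to sanity-check that balance from the host (output depends on your multipath configuration):

# Show each multipath LUN with its paths; every ASM LUN should have
# active paths through both FS1-2 controllers
multipath -ll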
Storage QoS Plus

Storage Profile Name | Raid level    | Read ahead   | Priority | Stripe width | Writes | Preferred storage classes
ASM_DATA_OLTP        | Mirrored      | Conservative | High     | Auto-select  | Back   | Perf Disk, Perf SSD
ASM_DATA_DSS         | Single parity | Aggressive   | High     | Auto-select  | Back   | Perf Disk, Cap SSD
ASM_REDO             | Single parity | Normal       | Premium  | All          | Back   | Perf Disk
ASM_FRA              | Double parity | Aggressive   | Archive  | Auto-select  | Back   | Cap Disk

Match the storage QoS Plus settings to the ASM disk groups and their IO workloads.
Linux IO Scheduler

• Linux uses IO scheduling to control the order of block IOs submitted to/from storage
• Goals:
  – Reorder IOs to minimize disk seek times
  – Balance IO bandwidth amongst processes
  – Ensure IOs meet deadlines
  – Keep the HBA IO queues full

[Diagram: applications feed IOs into the OS IO queue, which the scheduler drains to storage]

Linux changes (per device, under /sys/block/dm-*/queue):
• Change scheduler: echo noop > scheduler
• Enable large IOs: echo 4096 > max_sectors_kb

BUT for SAN controllers with lots of cache memory and SSD drives, you need to push IOs to storage as quickly as possible; seek-minimizing reordering on the host just adds latency. A sketch for applying this to all devices follows.
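A minimal sketch that applies both settings to every device-mapper block device, assuming the ASM LUNs are multipathed and appear as /sys/block/dm-*. These sysfs settings do not survive a reboot, so re-apply them from a boot script or udev rule:

# Run as root: bypass host-side IO reordering and allow large IOs
for q in /sys/block/dm-*/queue; do
    echo noop > "$q/scheduler"       # the FS1's cache and SSDs make elevator reordering unnecessary
    echo 4096 > "$q/max_sectors_kb"  # allow IOs of up to 4 MB
done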
Storage Domains and Auto-Tiering

Storage Domains: FS1 software that isolates data in storage "containers"
• Domains are composed of RAID Groups within Drive Enclosures
• RAID Groups can be SSD or HDD or any combination thereof
• Domains physically segregate data, avoiding data co-mingling
• QoS Plus and all major FS software operates uniquely on each Storage Domain; neither data nor data services can cross a domain boundary
• Up to 64 Storage Domains per FS1
• Online reallocation of physical storage to domains

[Diagram: Perf SSD, Capacity SSD, Perf HDD (10K-rpm) and Cap HDD (7.2K-rpm) RAID Groups in Drive Enclosures, grouped into Storage Domains]

Separate your ASM disk groups into different storage domains.
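Before mapping disk groups to domains, it helps to confirm which LUN backs which disk group. A standard ASM query (the views are Oracle's; the formatting is illustrative):

sqlplus / as sysasm <<'SQL'
-- One row per ASM disk: the disk group and the OS path of the LUN behind it
SELECT g.name AS disk_group, d.path
FROM   v$asm_diskgroup g
JOIN   v$asm_disk d ON d.group_number = g.group_number
ORDER  BY g.name, d.path;
SQL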