Oracle Database 11gR1 RAC Build Standards
1. Introduction
   1.1. 64-Bit Required RPM's
   1.2. Schematic
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
   2.2. Networking
   2.3. Configuring Kernel Parameters
   2.4. Configuration of the Hangcheck-timer Module
   2.5. Stage the Oracle Software
3. Prepare the shared storage for Oracle RAC
   3.1. Example of Configuring Block Device Storage for Oracle Clusterware
4. Oracle Clusterware Installation and Configuration
   4.1. CVU Pre Oracle Clusterware install check
   4.2. Oracle Clusterware Install
5. Oracle Clusterware patching
6. Oracle ASM Home Software Install
   6.1. CVU check before creating ASM instance
   6.2. Oracle ASM Home Software Install
7. Oracle ASM Software Home Patching
8. Oracle RAC Database Listener Creation
CHAPTER 1
INTRODUCTION
This document describes the installation of Oracle Real Application Clusters (RAC) 11.1.0.7 on RHEL 5.
OS: Linux x86-64 (RHEL 5)
The installation involves the following steps:
Preparation
♦ Pre-requisites, to make sure the cluster is set up correctly.
♦ Stage all the software on one node, typically Node 1.
Establish Oracle Clusterware
• Install the Oracle Clusterware (using the push mechanism to install on the other nodes in the cluster)
• Patch the Clusterware to the latest patchset
Establish ASM
• Install an Oracle software home for ASM
• Patch the ASM home to the latest patchset
• Create the listeners
1.1. 64-Bit Required RPM's
The following packages (or later versions) must be installed:
Red Hat Enterprise Linux 4           | Red Hat Enterprise Linux 5           | SUSE Linux Enterprise Server 10
binutils-2.15.92.0.2                 | binutils-2.17.50.0.6                 | binutils-2.16.91.0.5
compat-libstdc++-33-3.2.3            | compat-libstdc++-33-3.2.3            | compat-libstdc++-5.0.7-22.2
compat-libstdc++-33-3.2.3 (32 bit)   | compat-libstdc++-33-3.2.3 (32 bit)   |
elfutils-libelf-0.97                 | elfutils-libelf-0.125                | libelf-0.8.5
elfutils-libelf-devel-0.97           | elfutils-libelf-devel-0.125          |
gcc-3.4.5                            | gcc-4.1.1                            | gcc-4.1.0
gcc-c++-3.4.5                        | gcc-c++-4.1.1                        | gcc-c++-4.1.0
glibc-2.3.4-2.19                     | glibc-2.5-12                         | glibc-2.4-31.2
glibc-2.3.4-2.19 (32 bit)            | glibc-2.5-12 (32 bit)                | glibc-32bit-2.4-31.2 (32 bit)
glibc-common-2.3.4                   | glibc-common-2.5                     |
glibc-devel-2.3.4                    | glibc-devel-2.5                      | glibc-devel-2.4
glibc-devel-2.3.4 (32 bit)           | glibc-devel-2.5-12 (32 bit)          | glibc-devel-32bit-2.4 (32 bit)
libaio-0.3.105                       | libaio-0.3.106                       | libaio-0.3.104
libaio-0.3.105 (32 bit)              | libaio-0.3.106 (32 bit)              | libaio-32bit-0.3.104 (32 bit)
libaio-devel-0.3.105                 | libaio-devel-0.3.106                 |
libgcc-3.4.5                         | libgcc-4.1.1                         | libgcc-4.1.0
libgcc-3.4.5 (32 bit)                | libgcc-4.1.1 (32 bit)                |
libstdc++-3.4.5                      | libstdc++-4.1.1                      | libstdc++-4.1.0
libstdc++-3.4.5 (32 bit)             | libstdc++-4.1.1 (32 bit)             |
libstdc++-devel-3.4.5                | libstdc++-devel-4.1.1                | libstdc++-devel-4.1.0
make-3.80                            | make-3.81                            | make-3.80
sysstat-5.0.5                        | sysstat-7.0.0                        | sysstat-6.0.2
To determine whether the required packages are installed, enter commands similar to the following:
# rpm -q package_name
If a package is not installed, install it (or a later version) before proceeding.
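If you prefer to check everything in one pass, a short shell loop such as the following can be used; the package list shown is only illustrative, so substitute the full list for your platform from the table above:
for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel libaio libaio-devel \
           libgcc libstdc++ libstdc++-devel make sysstat; do
  rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done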
1.2. Schematic
The following is a schematic of the software & hardware layout of a 2-node RAC cluster. As explained in this
document, the actual number of LUNs required will vary depending on your mirroring requirements.
CHAPTER 2
Prepare the cluster nodes for Oracle RAC
This installation routine presumes that you have a 2-node Linux cluster. There are a number of items that
require checking before the install commences. Getting this bit right will enhance your install experience.
1. User Accounts
2. Networking
3. Time Sync
4. Stage the Oracle Software
5. Run CVU
It is essential that these items are checked and are configured correctly before the install commences.
2.1. User Accounts
2.1.1. Creating the OSDBA (DBA) Group
To determine whether the OSDBA group exists, enter the following command:
# grep OSDBA_group_name /etc/group
If the OSDBA group does not exist or if you require a new OSDBA group, then create it as follows. In the
following command, use the group name dba unless a group with that name already exists.
# /usr/sbin/groupadd dba
2.1.2. Creating an OSOPER Group (Optional)
If you require a new OSOPER group, then create it as follows. In the following command, use the group name
oper unless a group with that name already exists.
# /usr/sbin/groupadd oper
2.1.3. Creating an OSASM Group
To determine whether the OSASM group exists, enter the following command:
# grep OSASM_group_name /etc/group
If the OSASM group does not exist or if you require a new OSASM group, then create it as follows. In the
following command, use the group name asmadmin unless a group with that name already exists.
# /usr/sbin/groupadd asmadmin
2.1.4. Creating an OINSTALL Group
To determine whether the OINSTALL group exists, enter the following command:
# grep OINSTALL_group_name /etc/group
If the OINSTALL group does not exist or if you require a new OINSTALL group, then create it as follows. In
the following command, use the group name oinstall unless a group with that name already exists.
# /usr/sbin/groupadd oinstall
Note: The default OINSTALL group name is oinstall.
2.1.5. Determining Whether an Oracle Software Owner User Exists
To determine whether an Oracle software owner user named oracle exists, enter the following command:
# id oracle
If the oracle user exists, then the output from this command is similar to the following:
uid=440(oracle) gid=200(oinstall) groups=201(dba),202(oper)
If the user exists, then determine whether you want to use the existing user or create another oracle user. If you
want to use the existing user, then ensure that the user's primary group is the Oracle Inventory group and that it
is a member of the appropriate OSDBA and OSOPER groups.
2.1.6. Creating an Oracle Software Owner User
In the following procedure, use the user name oracle unless a user with that name already exists. If the Oracle
software owner user does not exist or if you require a new Oracle software owner user, then create it as
follows:
1. To create the oracle user, enter a command similar to the following:
# /usr/sbin/useradd -g oinstall -G dba[,oper] oracle
In this command:
The -g option specifies the primary group, which must be the Oracle Inventory group, for example
oinstall
The -G option specifies the secondary groups, which must include the OSDBA group and, if required,
the OSOPER group: for example, dba or dba,oper.
2. Set the password of the oracle user:
# passwd oracle
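As a minimal sketch, the complete group and user creation on a node might look like the following. The numeric IDs are illustrative, but whatever IDs you choose must be identical on every node of the cluster:
# Run as root on each cluster node, using the same IDs everywhere
/usr/sbin/groupadd -g 200 oinstall
/usr/sbin/groupadd -g 201 dba
/usr/sbin/groupadd -g 202 oper
/usr/sbin/groupadd -g 203 asmadmin
/usr/sbin/useradd -u 440 -g oinstall -G dba,oper oracle
passwd oracle
id oracle     # verify the uid/gid output matches the other node(s)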
2.1.9. Checking Existing SSH Configuration on the System
To determine if SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID numbers. In the home
directory of the software owner that you want to use for the installation (crs, oracle), use the command ls -al
to ensure that the .ssh directory is owned and writable only by the user.
NOTE: If SSH is not configured properly as per Capital One standards, have the Unix team configure it.
2.1.10. Set the Display Properly
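A quick sanity check of SSH user equivalence and the X display, using the example node names from this document and an X server of your choice:
# From stnsp001 as the software owner; should return the date with no password or host-key prompt
ssh stnsp002 date
# Set DISPLAY for the OUI session (replace workstation with your X server address)
export DISPLAY=workstation:0.0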
2.2. Networking
We need a total of 3 IP addresses per node:
• The public IP address, which should be recorded in the hosts file on each node and, if available, DNS.
  This IP address should be bound to the public adapter before starting the install. It should be a static,
  not DHCP, address.
• The private IP address, which should be from a different subnet than the public IP address. This
  address does not require registering in DNS but you should place an entry in the hosts file on each
  node. This IP address should be bound to the private adapter before starting the install. It should be a
  static, not DHCP, address.
• A VIP address, which should be from the same subnet as the public IP address and should be recorded
  in DNS and the hosts file on each node. This IP address should NOT be bound to the public adapter
  before starting the install; Oracle Clusterware is responsible for binding this address. It should be a
  static, not DHCP, address.
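An illustrative /etc/hosts fragment for the two example nodes follows; the addresses are placeholders, so use your allocated addresses and keep the file identical on both nodes:
# Public
192.168.1.101   stnsp001
192.168.1.102   stnsp002
# Private interconnect
10.0.0.101      stnsp001-priv
10.0.0.102      stnsp002-priv
# Virtual IP (not bound before the Clusterware install)
192.168.1.111   stnsp001-vip
192.168.1.112   stnsp002-vip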
2.2.1. Network Ping Tests
There are a series of 'ping' tests that should be completed, and then the network adapter binding order should
be checked. You should ensure that the public IP addresses resolve correctly and that the private addresses are
of the form 'nodename-priv' and resolve on both nodes via the hosts file.
Public ping test
• Pinging stnsp001 from stnsp001 should return stnsp001's public IP address
• Pinging stnsp002 from stnsp001 should return stnsp002's public IP address
• Pinging stnsp001 from stnsp002 should return stnsp001's public IP address
• Pinging stnsp002 from stnsp002 should return stnsp002's public IP address
Private ping test
• Pinging stnsp001-priv from stnsp001 should return stnsp001's private IP address
• Pinging stnsp002-priv from stnsp001 should return stnsp002's private IP address
• Pinging stnsp001-priv from stnsp002 should return stnsp001's private IP address
• Pinging stnsp002-priv from stnsp002 should return stnsp002's private IP address
VIP ping test
• Pinging the VIP address at this point should fail. VIPs will be activated at the end of the Oracle
  Clusterware install.
2.3. Configuring Kernel Parameters
On all cluster nodes, verify that the kernel parameters shown in the following table are set to values greater
than or equal to the recommended values. The procedure following the table describes how to verify and set
the values.
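The table of recommended values is not reproduced here; the following is only a sketch of how to verify and persist a kernel parameter with sysctl, with the Oracle 11gR1 installation guide as the authority for the actual values:
# Inspect current values of the parameters of interest
/sbin/sysctl kernel.shmmax kernel.shmall kernel.shmmni kernel.sem fs.file-max \
             net.ipv4.ip_local_port_range net.core.rmem_default net.core.rmem_max \
             net.core.wmem_default net.core.wmem_max
# To change a value persistently, add a line to /etc/sysctl.conf and reload, for example:
#   kernel.shmmni = 4096        (example value; confirm against the install guide)
/sbin/sysctl -p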
2.4. Configuration of the Hangcheck-timer Module
Before installing Oracle Real Application Clusters on Linux systems, verify that the hangcheck-timer module
(hangcheck-timer) is loaded and configured correctly. hangcheck-timer monitors the Linux kernel for
extended operating system hangs that could affect the reliability of a RAC node and cause database
corruption. If a kernel/device driver hang occurs, then the module restarts the node in seconds. There are 3
parameters used to control the behavior of the module:
1. The hangcheck_tick parameter: defines how often, in seconds, the hangcheck-timer checks the node
   for hangs. The default value is 60 seconds. Oracle recommends setting it to 1 (hangcheck_tick=1).
2. The hangcheck_margin parameter: defines how long the timer waits, in seconds, for a response from
   the kernel. The default value is 180 seconds. Oracle recommends setting it to 10 (hangcheck_margin=10).
3. The hangcheck_reboot parameter: if the value of hangcheck_reboot is equal to or greater than 1, then
   the hangcheck-timer module restarts the system. If the hangcheck_reboot parameter is set to zero, then
   the hangcheck-timer module will not restart the node. It should always be set to 1 (hangcheck_reboot=1).
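Putting the three recommendations together, the module configuration on RHEL 5 would typically look like this; the file location is the RHEL 5 convention, so verify it for your build:
# Entry in /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
# Load the module now and confirm it is present
/sbin/modprobe hangcheck-timer
/sbin/lsmod | grep hangcheck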
2.5. Stage the Oracle Software
It is recommended that you stage the required software onto a local drive on Node 1 of your cluster.
NOTE: 11gR1 integrates the Oracle Clusterware, Database and Client install into one DVD with one
runInstaller program.
If you download the software from OTN you will not get the integrated installer. You will have separate
downloads for:
• Oracle Clusterware, and
• ASM/Database
2.7. Cluster Verification Utility stage check
Now run the CVU (Cluster Verification Utility) to check the state of the operating system configuration.
CVU can be run from the installation media, but it is recommended to download the latest version from:
http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html
After the hardware and OS have been configured, it is recommended to run CVU to verify the nodes are
configured correctly:
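For example, from the directory where CVU was staged, using the node names used throughout this document:
./runcluvfy.sh stage -post hwos -n stnsp001,stnsp002 -verbose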
CHAPTER 3
Prepare the shared storage for Oracle RAC
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files,
Automatic Storage Management (ASM) and Oracle Real Application Clusters databases (Oracle RAC). You do
not have to use the same storage option for each file type. Oracle Clusterware files include:
• Voting disks, used to monitor cluster node status.
• Oracle Cluster Registry (OCR), which contains configuration information about the cluster.
There are two ways of storing Oracle Clusterware files:
* Block or Raw Devices: Oracle Clusterware files can be placed on either Block or RAW devices based
on shared disk partitions. Oracle recommends using Block devices for easier usage.
NOTE: When you create partitions with fdisk by specifying a device size, such as +256M, the actual device created
may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk
restrictions. Oracle configuration software checks to ensure that devices contain a minimum of 256MB of
available disk space. Therefore, Oracle recommends using at least 280MB for the device size. You can check
partition sizes by using the command syntax fdisk -s partition. For example:
[root@node1]$ fdisk -s /dev/sdb1
281106
As root, now configure storage for the cluster registry, voting disks and database files. You are presented with a
number of disks from the storage array. The output of the fdisk -s /dev/sd[b-e] command may look as follows:
/dev/sdb: 116924416
/dev/sdc: 116924416
/dev/sdd: 116924416
/dev/sde: 116924416
In this example we have four 116GB LUNs:
/dev/sdb 116G
/dev/sdc 116G
/dev/sdd 116G
/dev/sde 116G
3.1. Example of Configuring Block Device Storage for Oracle Clusterware
The procedure to create partitions for Oracle Clusterware files on block devices is as follows:
1. Log in as root.
2. Enter the fdisk command to format a specific storage disk (for example, /sbin/fdisk /dev/sdb).
3. Create a new partition, and make the partition 280 MB in size for both OCR and voting disk partitions.
4. Use the command syntax /sbin/partprobe diskpath on each node in the cluster to update the kernel
   partition table for the shared storage device on each node.
The following is an example of how to use fdisk to create one partition on a shared storage block disk device
for an OCR file:
[root@stnsp001]# /sbin/fdisk /dev/sdb
The number of cylinders for this disk is set to 1024.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024): +280M
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Log in as the root user on the remote nodes and execute the following:
[root@stnsp002]# /sbin/partprobe /dev/sdb1
Note: Oracle recommends that you create partitions for Oracle Clusterware files on physically separate disks.
The user account with which you perform the installation (oracle or crs) must have write permissions to create
the files in the path that you specify.
3.1.1. Example of Creating a Udev Permissions File for Oracle Clusterware
The procedure to create a permissions file to grant oinstall group members write privileges to block devices is
as follows:
1. Log in as root.
2. Change to the /etc/udev/permissions.d directory:
   # cd /etc/udev/permissions.d
3. Start a text editor, such as vi, and enter the partition information where you want to place the OCR
   and voting disk files, using the syntax device_partition:root:oinstall:0640
Note that Oracle recommends that you place the OCR and the voting disk files on separate physical disks. For
example, to grant oinstall members access to SCSI disks to place OCR files on sdb1 and sdc1, and to grant the
Oracle Clusterware owner (in this example crs) permissions to place voting disks on sdb5, sdc5 and sdd5, add
the following information to the file:
# OCR disks
sdb1:root:oinstall:0640
sdc1:root:oinstall:0640
# Voting disks
sdb5:crs:oinstall:0640
sdc5:crs:oinstall:0640
sdd5:crs:oinstall:0640
4. Save the file. On Asianux 3, Enterprise Linux 5, Red Hat Enterprise Linux 5, and SUSE Enterprise Server 10
   systems, save the file as 51-oracle.permissions.
5. Using the following command, assign the permissions in the udev file to the devices:
   # /sbin/udevstart
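A quick check that the permissions have been applied, using the example devices above:
ls -l /dev/sdb1 /dev/sdc1 /dev/sdb5 /dev/sdc5 /dev/sdd5
# Expect root:oinstall 0640 on the OCR partitions and crs:oinstall 0640 on the voting partitions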
Use the procedure above to create additional partitions to use for the OCR, voting, and ASM disks.
Refer to the OS documentation for additional information on using the fdisk command.
3.1.2. Platform Specific Settings
As per Metalink Note:357472.1, on SuSE Linux SLES9 disable HOTPLUG_USE_SUBFS by setting it to
"no" in /etc/sysconfig/hotplug. The default setting of "yes" causes problems for multipath devices.
CHAPTER 4
Oracle Clusterware Installation and Configuration
Oracle Clusterware is an essential component of the Oracle RAC database infrastructure.
4.1. CVU Pre Oracle Clusterware install check
Prior to installing CRS, verify the nodes are configured correctly for the CRS install:
runcluvfy.sh stage -pre crsinst -n stnsp001,stnsp002 -r 11gR1 -verbose
Verify the output. If it is successful, then proceed further.
4.2. Oracle Clusterware Install
Start the installer by running "runInstaller" from the staged installation media:
./runInstaller
Step 1: Notes
♦ The Oracle 11g installer now combines the Oracle Database, Client and Clusterware components.
Actions
♦ Select the Oracle Clusterware radio button
♦ Click Next
Step 2: Notes
♦ The OUI will name the Oracle Clusterware home 'OraCrs11g_home'. If you change this you should make
  sure that the name you use is unique.
Actions
♦ Specify a location for the Oracle Clusterware Home
♦ Click Next
Step 3: Notes
♦ The installer will validate the state of the cluster before continuing. If there are issues you should rectify
  them before continuing.
Actions
♦ Click Next
Step 4: Notes
♦ The installer will validate the state of the cluster before continuing. If there are issues you should rectify
  them before continuing.
Actions
♦ Click Next
Step 5: Notes
♦ Each cluster requires a name; this should be unique within your organisation. The default is a substring
  of the node name followed by _cluster.
♦ This is where you specify details of all the nodes in the cluster. The installer will default names for the
  node it is running on. You must add other nodes manually.
♦ Oracle defaults the names to 'nodename', 'nodename-priv', 'nodename-vip'.
Actions
♦ Confirm the Cluster Name selected is acceptable
♦ Confirm the details for the current node are OK. The defaults are:
  - Public Node Name: must resolve via hosts and/or DNS to the public IP address and must be live
  - Private Node Name: must resolve via hosts to the interconnect IP address and must be live
  - Virtual Host Name: must resolve via hosts and/or DNS to a new IP address and must not be live
♦ If these are not correct, select the node entry and click Edit... to modify, or click Add... to add the
  remaining nodes.
Step 6: Notes
♦ Here you specify the details of the node you wish to add to the cluster node list.
Actions
♦ Enter the new node details:
  - Public Node Name: must resolve via hosts and/or DNS to the public IP address and must be live
  - Private Node Name: must resolve via hosts to the interconnect IP address and must be live
  - Virtual Host Name: must resolve via hosts and/or DNS to a new IP address and must not be live
♦ Click OK to return to the node list for the cluster
Step 7: Notes
♦ If you have more nodes, repeat the Add... cycle.
Actions
♦ Click Next
Step 8: Notes
♦ The installer lists all the network adapters. You should have one adapter correctly identified as type
  'Public' and at least one adapter correctly identified as type 'Private'. The installer will try to guess the
  use of an adapter based on the IP address bound. If it guesses incorrectly you must change the usage.
  Here it has guessed that all adapters are Private, which is incorrect.
Actions
♦ Select the adapter eth0
♦ Click Edit...
Step 9: Notes
♦ The installer lists all the network adapters. You should have one adapter correctly identified as type
  'Public' and at least one adapter correctly identified as type 'Private'. The installer will try to guess the
  use of an adapter based on the IP address bound. If it guesses incorrectly you must change the usage.
  Here it has guessed that all adapters are Private, which is incorrect.
Actions
♦ Select the adapter eth0
♦ Click Edit...
Step 10: Notes
♦ Here you can see we have successfully configured the network adapter usage. Ideally you will have only
  one adapter set as Public and one adapter set as Private; other adapters, if available, are set to 'Do Not
  Use'. If you have multiple public or multiple private adapters it is better to team them at the OS adapter
  driver level before commencing the install.
Actions
♦ Click Next
Step 11: Notes
♦ Here we specify the shared storage devices that will be used by Oracle Clusterware. Ideally you will
  have 2 devices for the OCR; Oracle will mirror to these devices to protect you from a single OCR device
  failure. You will also have 3 voting devices to protect your cluster from the failure of a single voting
  device.
Actions
♦ Select the Normal radio button
♦ Enter the device to be used for the first OCR
♦ Enter the device to be used for the second OCR
♦ Click Next
Step 12: Notes
♦ Next we specify the devices to be used for the Oracle Clusterware voting disks.
Actions
♦ Select the Normal radio button
♦ Enter the device to be used for the first voting disk
♦ Enter the device to be used for the second voting disk
♦ Enter the device to be used for the third voting disk
♦ Click Next
Step 13: Notes
♦ The installer lists a summary of the planned actions.
Actions
♦ Click Install
Step 14: Notes
♦ The installer installs the software onto the local node.
Actions
♦ None required
Step 15: Notes
♦ The installer installs the software onto the remote node.
Actions
♦ None required
Step 16: Notes
♦ The installer requires commands to be run as root on each of the nodes.
Action
♦ On the first node open a root shell window
Example:
bash-3.00# /scratch/11.1.0/crs/root.sh
WARNING: directory '/scratch/11.1.0' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/scratch/11.1.0' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: stnsp001 stnsp001-rac stnsp001
node 2: stnsp002 stnsp002-rac stnsp002
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/sdb5
Now formatting voting device: /dev/sdc5
Now formatting voting device: /dev/sdd5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        stnsp001
Cluster Synchronization Services is inactive on these nodes.
        stnsp002
Local node checking complete.
Run root.sh on remaining nodes to start CRS.
Notes
♦ The output from the root.sh script should be similar to this.
♦ Here the Oracle Clusterware is configured.
♦ This may take some time to run.
Action
♦ Open a root shell on the first node
♦ Run the identified command as root on the first node
♦ You must wait for the command to complete before continuing
bash-3.00# /scratch/11.1.0/crs/root.sh
WARNING: directory '/scratch/11.1.0' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/scratch/11.1.0' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: stnsp001 stnsp001-rac stnsp001
node 2: stnsp002 stnsp002-rac stnsp002
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        stnsp001 stnsp002
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Done.
Step 17: Notes
♦ The output from the root.sh script should be similar to this.
♦ Here the Oracle Clusterware is configured.
♦ This may take some time to run.
Action
♦ Open a root shell on the second node
♦ Run the identified command as root on the second node
♦ You must wait for the command to complete before continuing
Step 18: Notes
♦ You can now continue with the install.
Action
♦ Return to the installer
♦ Click OK
Step 19: Notes
♦ A series of configuration assistants are run at the end of the install.
Actions
♦ None required
Step 20: Notes
♦ The installer will confirm the completed installation and configuration of Oracle Clusterware.
Actions
♦ Click Exit
♦ Click Yes to confirm
4.2.0.1. Verify cluster resources are online
[oracle11@stnsp001 bin]$ cd /scratch/11.1.0/crs/bin
[oracle11@stnsp001 bin]$ ./crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....001.gsd application ONLINE    ONLINE    stnsp001
ora....001.ons application ONLINE    ONLINE    stnsp001
ora....001.vip application ONLINE    ONLINE    stnsp001
ora....002.gsd application ONLINE    ONLINE    stnsp002
ora....002.ons application ONLINE    ONLINE    stnsp002
ora....002.vip application ONLINE    ONLINE    stnsp002
Notes
♦ You can see the resources configured inside Oracle Clusterware.
Action
♦ Change directory to the Oracle Clusterware home bin directory
♦ Run the ./crs_stat -t command
4.2.0.2. Use CVU to verify the Oracle Clusterware install
Verify the Oracle Clusterware installation using the Oracle Cluster Verification Utility:
./cluvfy stage -post crsinst -n stnsp001,stnsp002 -verbose
You should get a message as given below.
CHAPTER 5
Oracle Clusterware patching
At this point we have installed Oracle Clusterware 11.1.0.6. In this section we will patch the Oracle
Clusterware to the latest release of Oracle 11gR1 - 11.1.0.7. The patchset can be downloaded from Metalink.
Before we start we can query the Clusterware versions:
# /scratch/11.1.0/crs/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [stnsp001] is [11.1.0.6.0]
# /scratch/11.1.0/crs/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
Step 1
Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory
where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
Notes
♦ The installer appears.
Actions
♦ Click Next
Step 2: Notes
♦ You should ensure that the Clusterware home is selected in the first drop-down list box.
♦ The installer should default the directory to the correct location.
Actions
♦ Click Next
Step 3: Notes
♦ The installer detects that this is a clustered home and automatically selects all the nodes in the cluster.
Actions
♦ Click Next
Step 4: Notes
♦ Some parameters are validated by the installer.
Actions
♦ Click Next
Step 5: Notes
♦ This screen is a summary of the actions the installer will complete.
Actions
♦ Click Install
Step 6: Notes
♦ The installer stages the patch on all the nodes in the cluster.
Actions
♦ No action required
Step 7: Notes
♦ At the end the installer lists the mandatory steps that must be completed to apply this patch.
Actions
♦ Log in as the root user and enter the following command to shut down the Oracle Clusterware:
  # CRS_home/bin/crsctl stop crs
♦ Run the root111.sh script. It will automatically start the Oracle Clusterware on the patched node:
  # CRS_home/install/root111.sh
Example:
bash-3.00# /scratch/11.1.0/crs/install/root111.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /opt/crs
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
11107 patch successfully applied.
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: stnsp001 stnsp001-priv stnsp001
Creating OCR keys for user 'root'
Continue with these two steps on all nodes, one by one; this will achieve a rolling upgrade of the Oracle
Clusterware. When done, verify the Oracle Clusterware is running on all nodes before exiting the installer:
$ CRS_home/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
# /scratch/11.1.0/crs/bin/crsctl query crs softwareversion
# /scratch/11.1.0/crs/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.7.0]
Once the Oracle Clusterware is running on all nodes, exit the installer.
CHAPTER 6
Oracle ASM Home Software Install
6.1. CVU check before creating ASM instance
Verify the cluster is configured correctly for an instance creation:
./runcluvfy.sh stage -pre dbinst -n stnsp001,stnsp002 -r 11gR1 -verbose
You should get a message like the following:
Result: CRS health check passed.
CRS integrity check passed.
Pre-check for database installation was successful.
6.2. Oracle ASM Home Software Install
Step 1
Start the installer by running "runInstaller" from the staged installation media:
./runInstaller
Notes
♦ Here we will create an ASM software home on all the nodes in the cluster.
Actions
♦ Select the Oracle Database 11g radio button
♦ Click Next
Step 2
Actions
♦ Select the Enterprise Edition radio button
♦ Click Next
Step 3: Notes
♦ Here we specify the location of various components. An 11g install makes more use of the
  ORACLE_BASE. Most logs will be stored in subdirectories under the Oracle base. The Oracle base will
  be common to all installs. Modify as required. Also, this is where you specify the location of the ASM
  software home. We usually modify the Home name and Home Path to include the word ASM. This
  makes it easier to identify later on. If you change the path you should ensure that you do not use the
  exact same path as the Oracle Clusterware home.
Actions
♦ Confirm entries are OK
♦ Click Next
Step 4: Notes
♦ The installer has detected the presence of Oracle Clusterware and uses this to populate this dialog box.
  To build a cluster which includes all nodes you must ensure that there are check-boxes next to the node
  names.
Actions
♦ Click Select All
♦ Click Next
Step 5: Notes
♦ The installer will then complete some product-specific prerequisite checks. These should all pass OK,
  as you have already run the CVU check.
Actions
♦ Click Next
Step 6: Notes
♦ We are going to install a software-only home and then subsequently configure the software.
Actions
♦ Select the Install Software Only radio button
♦ Click Next
Step 7: Notes
♦ Here we can see a summary of the install.
Actions
♦ Click Install
Step 8: Notes
♦ Here the installer copies the software to all nodes in the cluster.
Actions
♦ None required
Step 9: Notes
♦ The installer pauses; some scripts need to be run as root on both nodes of the cluster.
Action
♦ Open a shell window on each node
♦ Run the root.sh script
Example:
bash-3.00# /scratch/product/11.1.0/asm/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle11
    ORACLE_HOME=  /scratch/product/11.1.0/asm
Enter the full pathname of the local bin directory: [/usr/local/bin]:
/usr/local/bin is read only. Continue without copy (y/n) or retry (r)? [y]:
Warning: /usr/local/bin is read only. No files will be copied.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Notes
♦ The output from the root.sh script should be similar to this.
♦ It should only take a few seconds to run on each node.
Action
♦ Run the scripts on all nodes
♦ Then return to the installer and click OK
Step 10: Notes
♦ After the software install completes you will see the End of Installation dialog.
Actions
CHAPTER 7
Oracle ASM Software Home Patching
Once ASM software has been installed, the next step in the process is to apply the 11.1.0.7 patchset to the
ASM ORACLE_HOME.
7.1. Start the OUI for Oracle ASM Software Home Patching
Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory
where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
Step 1: Notes
♦ The Universal Installer screen appears.
Action
♦ On the Welcome screen, click Next.
Step 2: Notes
♦ Specify the name and the location of the ASM home.
Action
♦ Check that the name and location are correct
Step 3: Notes
♦ Here you can specify your Metalink credentials for this install.
♦ If you leave both fields blank you can opt out of notifications (see next screen).
Step 4: Notes
♦ The installer detects that this is a clustered home and automatically selects all the nodes in the cluster.
Actions
♦ Click Next
Step 5: Notes
♦ Some parameters are validated by the installer.
Action
♦ Click Next
Step 6: Notes
♦ This is a summary of the actions the installer will complete.
Actions
♦ Click Install
Step 7: Notes
♦ The installer copies the patch to all the nodes in the cluster.
♦ No action is required until the installer pauses; root.sh needs to be run as root on both nodes of the
  cluster.
Action
♦ Open a shell window on each node and run root.sh (one after the other)
♦ Then return to the installer and click OK
Step 8: Notes
♦ The output from the root.sh script should be similar to this.
♦ It should only take a few seconds to run on each node.
Action
♦ Run the scripts on all nodes
♦ Then return to the installer and click OK
Step 9: Notes
♦ After the software install completes you will see the End of Installation dialog; exit the installer.
Actions
♦ Click Yes
CHAPTER 8
Oracle RAC Database Listener Creation
8.1. Create Node specific network listeners
The Oracle network listeners traditionally run from the ASM home. Here we are going to create the listeners
using netca from the ASM home.
Step 1
Run the Network Configuration Assistant.
Action
♦ Set environment variable ORACLE_HOME to the ASM home location
♦ Change directory to the ASM home bin directory
♦ Run ./netca (see the sketch below)
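For example, using the ASM home path used elsewhere in this document (adjust to your environment):
export ORACLE_HOME=/scratch/product/11.1.0/asm
cd $ORACLE_HOME/bin
./netca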
Notes
♦ Netca detects that the Oracle Clusterware layer is running and offers Cluster or Single Node
  configuration.
Actions
♦ Select the Cluster configuration radio button
♦ Click Next
Step 2: Notes
♦ Netca uses Oracle Clusterware to determine all the nodes in the cluster.
Actions
♦ Click Select all nodes
♦ Click Next
Step 3: Notes
♦ You get various options; we need to configure listeners.
Actions
♦ Select the Listener configuration radio button
♦ Click Next
Step 4: Notes
♦ We need to add a listener.
Actions
♦ Select the Add radio button
♦ Click Next
Step 5: Notes
♦ Here you get the opportunity to name the listener. Do not change this. The listeners will eventually be
  called LISTENER_nodename1 and LISTENER_nodename2. This is important for RAC.
Actions
♦ Click Next
Step 6: Notes
♦ Oracle Net supports various network protocols, although TCP is the most common.
Actions
♦ Ensure the Selected Protocols list includes TCP
♦ Click Next
Step 7: Notes
♦ It is possible to choose a non-default port.
Actions
♦ Ensure the "Use the standard port number of 1521" radio button is selected
♦ Click Next
Step 8: Notes
♦ After configuring the node listeners you get the opportunity to configure more network components.
Actions
♦ Select the No radio button
♦ Click Next
Step 9: Notes
♦ You get the opportunity to configure other networking components.
Action
♦ Click the Finish button to exit the tool
8.2. Verify the Listener resources are online
[oracle11@stnsp001 bin]$ cd /scratch/11.1.0/crs/bin
[oracle11@stnsp001 bin]$ ./crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....01.lsnr application ONLINE    ONLINE    stnsp001
ora....001.gsd application ONLINE    ONLINE    stnsp001
ora....001.ons application ONLINE    ONLINE    stnsp001
ora....001.vip application ONLINE    ONLINE    stnsp001
ora....02.lsnr application ONLINE    ONLINE    stnsp002
ora....002.gsd application ONLINE    ONLINE    stnsp002
ora....002.ons application ONLINE    ONLINE    stnsp002
ora....002.vip application ONLINE    ONLINE    stnsp002
Notes
♦ You can see the listener resources inside Oracle Clusterware.
Action
♦ Change directory to the Oracle Clusterware home bin directory
♦ Run the ./crs_stat -t command
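Optionally, the listeners can also be checked directly from the ASM home; the listener name below follows the LISTENER_nodename convention described in Step 5 and is only an example:
export ORACLE_HOME=/scratch/product/11.1.0/asm
$ORACLE_HOME/bin/lsnrctl status LISTENER_STNSP001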
CHAPTER 9
Oracle ASM Instance and Diskgroup Creation
9.1. Create ASM Instance and add the +DATA and +FLASH diskgroups using the Database Assistant
Step 1
cd /scratch/product/11.1.0/asm/bin
./dbca
Notes
♦ We use the dbca from the ASM install to create the ASM instances.
Actions
♦ Ensure the ORACLE_HOME environment variable is set to the ASM home directory
♦ Run ./dbca from the ASM home bin directory
Notes
♦ dbca detects the Oracle Clusterware layer is running and offers to create either a cluster or a single
  instance database.
Actions
♦ Select the Oracle Real Application Clusters database radio button
♦ Click Next
Step 2
Actions
♦ Select the Configure Automatic Storage Management radio button
♦ Click Next
Step 3: Notes
♦ You need to make sure you create ASM instances on all the cluster nodes.
Actions
♦ Click Select All
Step 4: Notes
♦ Here we need to specify the password for the ASM Oracle SYS user.
Actions
♦ Enter the SYS password
♦ Enter the same password for the Confirm SYS password
Step 5: Notes
♦ dbca will create and start the ASM instances.
Actions
♦ Click OK
Step 6: Notes
♦ ASM requires disks to be grouped together into diskgroups. This section will be used to create 2 disk
  groups: +DATA and +FLASH.
Actions
♦ Click Create New
Step 7: Notes
♦ At the moment no disks are visible.
Actions
♦ Click Change Discovery Path
Step 8: Notes
♦ Here we specify a filter to allow us to see the disks on the shared array.
Action
♦ Enter a filter to allow the installer to see the disks
♦ Click OK
Step 9: Notes
♦ Now we will assign disks to specific disk groups and create the DATA diskgroup.
Actions
♦ In the Disk Group Name field enter DATA
♦ Select the External Redundancy radio button
♦ Select the Show All radio button
♦ Select the 6 disks to be used for the DATA diskgroup
♦ Click OK
Step 10: Notes
♦ Here we can see the DATA diskgroup has been created and is mounted on 2/2 instances. We now need
  to create the FLASH diskgroup.
Actions
♦ Click Create New
Step 11: Notes
♦ We need to allow the installer to see the disks we have reserved for the FLASH disk group.
Action
♦ Click Change Discovery Path
Step 12: Notes
♦ We need to modify the disk discovery string.
Action
♦ Modify the string
♦ Click OK
Step 13: Notes
♦ Now we will assign disks to specific disk groups and create the FLASH diskgroup.
Actions
♦ In the Disk Group Name field enter FLASH
♦ Select the Normal radio button
♦ Select the Show Candidate disks radio button
♦ Select the remaining disks allocated for the FLASH diskgroup
♦ Click OK
Step 14: Notes
♦ Here we can see the DATA and FLASH diskgroups have been created and are mounted on 2/2
  instances. This completes the ASM configuration.
Actions
♦ Click Finish
♦ A confirmation dialog box is displayed
♦ Click No
9.2. Verify ASM instances are online
[oracle11@stnsp001 bin]$ cd /scratch/11.1.0/crs/bin
[oracle11@stnsp001 bin]$ ./crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....SM1.asm application ONLINE    ONLINE    stnsp001
ora....01.lsnr application ONLINE    ONLINE    stnsp001
ora....001.gsd application ONLINE    ONLINE    stnsp001
ora....001.ons application ONLINE    ONLINE    stnsp001
ora....001.vip application ONLINE    ONLINE    stnsp001
ora....SM2.asm application ONLINE    ONLINE    stnsp002
ora....02.lsnr application ONLINE    ONLINE    stnsp002
ora....002.gsd application ONLINE    ONLINE    stnsp002
ora....002.ons application ONLINE    ONLINE    stnsp002
ora....002.vip application ONLINE    ONLINE    stnsp002
Action
♦ Change directory to the Oracle Clusterware bin directory
♦ Run ./crs_stat -t
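As an additional illustrative check, srvctl from the ASM home can report the ASM instance status per node (the home path and node names are this document's examples):
/scratch/product/11.1.0/asm/bin/srvctl status asm -n stnsp001
/scratch/product/11.1.0/asm/bin/srvctl status asm -n stnsp002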
Once this step is complete, please follow the 11g database build standards to continue the installation.