
OIM 11gR2: Schema Backup and Restoration using Data Pump Client Utility [ID 1492129.1]

Modified: Sep 18, 2012    Type: REFERENCE    Status: PUBLISHED    Priority: 3

In this Document

Purpose
  Overview
  Oracle Identity Manager 11g R2 Database Schemas
  OIM 11gR2 Database Backup and Restoration
    Logical Backup of OIM Schema
    Restoration of OIM Schema
    Remote Restoration Vs Local Restoration
  What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)
  Generic High Level steps in Logical Export of OIM 11g R2 Schema(s)
  Generic High Level steps in Logical Import of OIM 11g R2 Schema(s)
Details
  Detailed Steps for Export of OIM 11g R2 Schema(s)
    Pre-Export Steps
    Data Pump Export
    Post-Export Steps
  Detailed Steps for Restoration of OIM Schema(s)
    Pre-Import Steps
    Data Pump Import
    Post-Import Steps
  OIM 11gR2 mid-tier side JDBC Connection configuration
    Data Source Configuration editing
    MBean Configuration changes
ANNEXURE
  FAQs
References

Applies to:

Identity Manager - Version 11.1.2 and later
Information in this document applies to any platform.

Purpose

Overview

The Database Layer (OIM and its dependent Schemas) is an integral part of the OIM Application Infrastructure.

This document provides a detailed, step-by-step process for the Logical Backup and Restoration of the OIM Application Schema(s), focusing on the OIM 11g R2 Database Schema topology and using Oracle's Data Pump utility.

What this Technical Note covers

A) Backup of OIM 11g R2 and its dependent Schemas from a running deployment.

B) Restoration from scratch on a different(1)/same database instance.

C) The methodology depicted is not intended as a Production-level Backup, Restoration, and Recovery solution for an enterprise customer deployment.

(1) a different instance on the same or on a different physical machine.

What this Technical Note does not cover

A) Backup/Restoration methodology for the OIM 11g Upgrade environments

B) Backup/Restoration methodology for T2P scenarios.

A similar Oracle Support tech note published for the OIM 11gR1/PS1 Schemas (Version 11.1.1.3.0 to 11.1.1.5.0 [Release 11g R1/PS1]) is available at support.oracle.com as "OIM 11gR1: Schema Backup and Restoration using Data Pump Client Utility (Document 1359656.1)".

Oracle Identity Manager 11g R2 Database Schemas

From the operational perspective, the OIM 11g R2 Database Schema topology consists of the following FIVE mandatory schemas:

a) Oracle Identity Manager (OIM) Schema [OIM 11gR2]

b) Meta Data Service (MDS) Schema

c) Service Oriented Architecture Infrastructure (SOAINFRA) Schema

d) User Messaging Service (ORASDPM) Schema

e) Oracle Platform Security Services (OPSS) Schema


OIM 11gR2 Database Backup and Restoration

Logical Backup of OIM Schema

For the Logical Backup of the OIM 11g Schema(s) (and their subsequent Restoration), the recommended tool is the Oracle 11g R1/R2 Data Pump Export utility.

Restoration of OIM Schema

For restoration of the Logical Backup (taken using the Oracle 11g/10g Data Pump Export utility), the corresponding Data Pump Import utility must be used.

Following are the possible scenarios of restoration based on the location of restore:

a) Local restoration [Restoration in the same Database Instance post Schema drop]

b) Remote restoration [Restoration in a different Database Instance]

Data Pump is the tool of choice over the conventional export/import utilities: it produces a platform-independent data dump and, owing to its architectural enhancements, is faster.

Data Pump command line clients are available out-of-the-box with Oracle Database Client/Server (Oracle DB 10g and onwards) with no additional licensing cost.

NOTE - Check the compatibility of the Data Pump Client with respect to the Target Database in use. Compatibility between the DB version and the Data Pump Client:

A newer version of the Data Pump Client cannot be used to export data from a lower database version.

For example, to export data from database version 10.2.0.3 or 9.2, the 11.1.0.7 Data Pump Client would not suffice.
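A quick way to compare the two versions before starting (a sketch; both commands are standard, but verify the output format in your environment):

As SYS/SYSTEM on the database:
SQL> SELECT version FROM v$instance;

On the client machine (the banner line of the output shows the Data Pump client version):
expdp help=y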

Remote Restoration Vs Local Restoration

Steps for restoration on Local or Remote DB Instances are almost the same, except for:

a) In Remote Restoration, schemas are imported into a different database instance (which can be on a different machine as well). This database may or may not already have any OIM 11g Schemas, or even any FMW Schema, created on it; depending on this, some additional steps may be required.

b) In the case of Remote DB restoration, the JDBC connection information must be modified as per the steps in the section 'OIM 11gR2 mid-tier side JDBC Connection configuration'.

What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)

a) In the case of export/import of dependent schemas:

1. There is a new dependent schema, OPSS, introduced in OIM 11gR2.

2. MDS has introduced VPD-based access policies for a few tables.

3. SOAINFRA has a new DBMS_AQ job.

b) On the OIM side in 11gR2, the Catalog and ADF features have introduced three DBMS_SCHEDULER jobs in the Database (a sample query to inspect them follows the list):

a) FAST_OPTIMIZE_CAT_TAGS

b) REBUILD_OPTIMIZE_CAT_TAGS

c) PURGE_ADF_BC_TXN_TABLE
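To confirm the jobs exist and see their state, a query along these lines can be used (a sketch; run as the OIM schema user):

SQL> SELECT job_name, enabled, state
       FROM user_scheduler_jobs
      WHERE job_name IN ('FAST_OPTIMIZE_CAT_TAGS','REBUILD_OPTIMIZE_CAT_TAGS','PURGE_ADF_BC_TXN_TABLE');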


NOTE

Order of Export of OIM, MDS, SOAINFRA, ORASDPM, and OPSS Schemas does not matter from the OIM 11g R2 Application functionality perspective.

What to retain along with the exported data dump:

For ease of subsequent restoration, the following details are to be retained for later reference along with the OIM export dump:

1. Names of the OIM/dependent Schemas

2. Default and Temporary Tablespace names of the OIM/dependent schemas, along with names of any other tablespaces involved in OIM/other Schema objects

3. The Data Pump/Conventional Export log file

Generic High Level steps in Logical Import of OIM 11g R2 Schema(s)

NOTE

Steps 2) and 4) can be skipped if restoring on the same DB Instance (using the same schema names).

Details

Detailed Steps for Export of OIM 11g R2 Schema(s)

Pre-Export Steps

1) Database Directory creation [DATA PUMP specific]

Data Pump operations require a directory object to be created; it is a logical mapping in the DB to a disk location on the Oracle DB Server machine. Ensure it has sufficient disk space for intermediate storage of all the dump files.

SQL> CREATE DIRECTORY <directory_name> AS '<absolute path of the dir or folder on the DB m/c>';

All your dump files, log files, etc. will be created in this location, so choose an appropriate location with sufficient disk space.

NOTE - Oracle RDBMS 10gR2 and onwards provide for a default directory out of the box by the name of DATA_PUMP_DIR. Following SQL can be used to see the directory name and its corresponding server file path.

SQL> SELECT directory_name,directory_path FROM dba_directories; [run as SYS/SYSTEM user]
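If the export job will be initiated by a user other than SYS/SYSTEM, that user may additionally need read/write access on the directory object (a sketch):

SQL> GRANT READ, WRITE ON DIRECTORY <directory_name> TO <export user>;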

2) Shutdown OIM and SOA Managed Server Instances:

i) Shutdown the OIM Weblogic Managed Server Instance

ii) Shutdown the SOA Weblogic Managed Server Instance

Either of the following can be used to shut down the managed instances:

a) EM Grid Control

b) Command line shell script

[./stopManagedWebLogic.sh <OIM/SOA Managed Server Instance Name>]

3) Stop the SOA queues

Connect as SOAINFRA user:

SQL> SELECT name, enqueue_enabled, dequeue_enabled
       FROM user_queues
      WHERE queue_type = 'NORMAL_QUEUE';

To stop all the Queues, use the PL/SQL API methods of DBMS_AQADM.

Sample PL/SQL Block -

BEGIN
  DBMS_AQADM.STOP_QUEUE ('B2B_BAM_QUEUE');
  DBMS_AQADM.STOP_QUEUE ('EDN_OAOO_QUEUE');
  DBMS_AQADM.STOP_QUEUE ('EDN_EVENT_QUEUE');
  DBMS_AQADM.STOP_QUEUE ('IP_IN_QUEUE');
  DBMS_AQADM.STOP_QUEUE ('IP_OUT_QUEUE');
  DBMS_AQADM.STOP_QUEUE ('TASK_NOTIFICATION_Q');
END;
/

4) Stop any running DBMS_SCHEDULER jobs (Catalog sync and ADF related).

Connect as OIM user:

Run the following SQLs to identify any running DBMS_SCHEDULER jobs:

SQL> SELECT job_name, session_id, running_instance, elapsed_time
       FROM user_scheduler_running_jobs;

In case of any running jobs, either wait till completion or stop the job 'gracefully' using:

BEGIN
  DBMS_SCHEDULER.stop_job('REBUILD_OPTIMIZE_CAT_TAGS');
END;
/

BEGIN
  DBMS_SCHEDULER.stop_job('FAST_OPTIMIZE_CAT_TAGS');
END;
/

BEGIN
  DBMS_SCHEDULER.stop_job('PURGE_ADF_BC_TXN_TABLE');
END;
/

NOTE - If a job is not running, ORA-27366 is returned on executing the above blocks; this exception can be ignored.

5) MDS has implemented certain VPD-based access policies which cause an export Data Pump job initiated from the SYSTEM user to fail with the ORA-39181 error.

To circumvent this, grant EXEMPT ACCESS POLICY to SYSTEM from the SYS user before initiating the export Data Pump job.

As SYS user:

SQL> GRANT EXEMPT ACCESS POLICY TO SYSTEM;

NOTE

Also see Metalink Support Note for additional information: “ORA-39181:Only Partial Table Data Exported Due To Fine Grain Access Control [ Document 422480.1 ]”
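Optionally, once the export has completed, the grant can be reverted (a sketch; run as SYS):

SQL> REVOKE EXEMPT ACCESS POLICY FROM SYSTEM;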

6) Shutdown/Startup of DB [Optional Step]

After shutting down the OIM and SOA server instances, the SOA queues, and the OIM 11g R2 DBMS_SCHEDULER jobs, do a clean shutdown and then a startup of the OIM Database Instance. This is to eliminate any possibility of a functional inconsistency in the data being exported.

Figure 1 - Sample SQL*Plus command output for SHUTDOWN IMMEDIATE and STARTUP of Database
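In place of the original screenshot, the sequence amounts to the following (a sketch; run in SQL*Plus as SYSDBA):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP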

Data Pump Export

1) Sample command (exporting the OIM, SOA, ORASDPM, MDS, and OPSS Schemas as the SYSTEM user; edit to add the required values):

expdp system/password@<SID> DIRECTORY=<dir name>
  SCHEMAS=<OIM-Schema,SOA-Schema,MDS-Schema,ORASDPM-Schema,OPSS-Schema>
  DUMPFILE=<filename.dmp> PARALLEL=4
  LOGFILE=<log file name> JOB_NAME=<job name> EXCLUDE=STATISTICS

2) Sample command (exporting the SCHEMA_VERSION_REGISTRY view and its underlying table from the SYSTEM Schema):

expdp system/password@<SID> DIRECTORY=<dir name> SCHEMAS=SYSTEM
  DUMPFILE=<filename.dmp>
  INCLUDE=VIEW:"IN('SCHEMA_VERSION_REGISTRY')"
  INCLUDE=TABLE:"IN('SCHEMA_VERSION_REGISTRY$')"
  LOGFILE=<log file name> JOB_NAME=<job name>

Figure 2 - Data Pump Export sample console output

Figure 3 - Data Pump Export sample console output (contd.)

NOTE - Verify the Data Pump Export log file for any significant errors encountered during the operation.
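One quick way to scan the log for errors (a sketch, assuming a Unix-like shell; <directory path> is the server path of the directory object used for the export):

grep -i "ORA-" <directory path>/<log file name>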

Post-Export Steps

After the Export completes successfully, do the following:

1) Document the name of the OIM Schema and the dependent Schemas

2) Document the default and temporary tablespace names of the OIM and dependent Schemas, along with names of any other tablespaces involved in their Schema objects:

SQL> SELECT DISTINCT tablespace_name, owner
       FROM dba_segments
      WHERE owner IN ('<OIM Schema>','<SOA Schema>','<ORASDPM Schema>','<MDS Schema>','<OPSS Schema>');

3) Generate the database schema tablespace and user creation script (should contain the user default tablespace, temporary tablespace etc.).

Following method using DBMS_METADATA can be used to generate the tablespace/user creation script from the data dictionary directly:

SET LONG 10000
SET LINES 200
SET PAGES 400

SQL> SELECT DBMS_METADATA.GET_DDL('TABLESPACE','<Tablespace Name>') FROM DUAL;

Run the above, replacing the tablespace names as per the output of the SQL in point 2) above.

SQL> SELECT DBMS_METADATA.GET_DDL('USER','<Schema Name>') FROM DUAL;


NOTE - Copy the output of the SQLs and append ';' after each statement. Save the output as a .sql file to run later on the destination database.

4) Capture all the System and Object Grants to the OIM, MDS, SOA Schema Users from the data dictionary using DBMS_METADATA Package.

Capturing SYSTEM GRANTS:

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<OIM Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<MDS Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<SOAINFRA Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<ORASDPM Schema Name>') FROM DUAL;

Capturing OBJECT GRANTS:

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<OIM Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<MDS Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<SOAINFRA Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<ORASDPM Schema Name>') FROM DUAL;

NOTE - Copy the output of the SQLs and append ';' after each statement. Save the output as a '.sql' file to run on the destination database (or have DBMS_METADATA emit the terminator itself, as sketched below).
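As an alternative to appending ';' by hand, DBMS_METADATA can be asked to emit the SQL terminator itself, and SQL*Plus can spool the output straight to a file (a sketch; grants.sql is a placeholder name):

SQL> EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
SQL> SPOOL grants.sql
SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<OIM Schema Name>') FROM DUAL;
SQL> SPOOL OFF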

Detailed Steps for Restoration of OIM Schema(s)

Pre-Import Steps

Steps 1) to 4) are all required in case of restoration on a different database instance. In case of restoration in the same database instance, steps 1) to 4) can be skipped; a DROP USER <username> CASCADE; for each schema should suffice.

1) OIM and dependent Schemas' Tablespace creation:

In case of a fresh import of this dump, create the tablespaces on the destination database (where the OIM Schema export dump is to be restored) as captured in 'Post-Export Steps -> Step 2':

SQL> CREATE TABLESPACE <tablespace name> DATAFILE '<file path>/<datafile>.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M;

The datafile path can be fetched as follows (as SYS user):

SQL> SELECT * FROM dba_data_files;

The FILE_NAME column gives the existing data files' location for the current database instance. Create the data and temp tablespaces for all the schemas, i.e. OIM, MDS, SOA, ORASDPM, and OPSS.

2) Check for a count of 2 from each of the following SQLs on the target database where the OIM Schema export dump is to be restored. If the count is < 2, perform step 3); else skip it and go to step 4).

-- To verify the creation of the DBMS_SHARED_POOL package
SQL> SELECT COUNT(*)
       FROM dba_objects
      WHERE owner = 'SYS'
        AND object_name = 'DBMS_SHARED_POOL'
        AND object_type IN ('PACKAGE','PACKAGE BODY');

-- To verify the creation of the v$xatrans$ and v$pending_xatrans$ views
SQL> SELECT COUNT(*)
       FROM dba_objects
      WHERE owner = 'SYS'
        AND object_name LIKE '%XATRANS%';

3) Create the required generic database objects like the DBMS_SHARED_POOL package and the xatrans views.

To create the DBMS_SHARED_POOL package and the xatrans views, run the following scripts from the RDBMS location:

a) For DBMS_SHARED_POOL creation:
SQL> @<$ORACLE_HOME>/rdbms/admin/dbmspool
SQL> @<$ORACLE_HOME>/rdbms/admin/prvtpool.plb

b) For xatrans$ views creation:
SQL> @<$ORACLE_HOME>/rdbms/admin/xaview

Replace $ORACLE_HOME with the corresponding file path.

4) Create the Schema Users with the same default data and temp tablespaces, along with the system and object grants, as per the instructions in 'Post-Export Steps' Steps 3) and 4). A minimal sketch of what that DDL amounts to is shown below.
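The scripts captured via DBMS_METADATA in the Post-Export Steps are authoritative; this sketch only illustrates their shape, and the .sql file names are placeholders for whatever you saved earlier:

SQL> CREATE USER <Schema Name> IDENTIFIED BY <password>
       DEFAULT TABLESPACE <data tablespace>
       TEMPORARY TABLESPACE <temp tablespace>;
SQL> @system_grants.sql
SQL> @object_grants.sql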

Data Pump Import

1) The data dump generated using Data Pump Export can be imported as follows.

Run the sample import data pump command:

impdp system/password@<SID> DIRECTORY=<dir name>
  SCHEMAS=<OIM-Schema,SOA-Schema,MDS-Schema,ORASDPM-Schema,OPSS-Schema>
  DUMPFILE=<filename.dmp> PARALLEL=4
  LOGFILE=<log file name> JOB_NAME=<job name>

2) Import the SCHEMA_VERSION_REGISTRY view in the SYSTEM Schema

Run the sample import data pump command:

impdp system/password@<SID> DIRECTORY=<dir name> SCHEMAS=SYSTEM
  DUMPFILE=<filename.dmp> LOGFILE=<log file name>
  JOB_NAME=<job name> TABLE_EXISTS_ACTION=APPEND

Post successful import, do the following:


SQL> CREATE PUBLIC SYNONYM schema_version_registry FOR system.schema_version_registry;

3) IGNORE the following type of errors if any:

a) Procedure/Package/Function/Trigger compilation warnings

b) DBMS_AQ errors if any

Post-Import Steps

1) Change all the Schema Passwords (preferably to the old passwords)
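For example (a sketch):

SQL> ALTER USER <Schema Name> IDENTIFIED BY <password>;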

2) Compile INVALID Schema Objects

To identify INVALID Schema Objects:

SQL> SELECT owner, object_type, object_name, status
       FROM dba_objects
      WHERE status = 'INVALID'
        AND owner IN ('<Schema Name1>','<Schema Name2>', ...)
      ORDER BY owner, object_type, object_name;

To compile INVALID Schema Objects:

Any appropriate method can be used; one sample method using UTL_RECOMP is shown below.

Execute the block for each of the affected Schemas:

BEGIN
  UTL_RECOMP.recomp_serial('<Schema Name>');
END;
/

NOTE

Rerun the above block for all Schemas to make sure no objects are INVALID

3) Start the SOAINFRA DBMS Queues:

Connect as SOAINFRA user:

SQL> SELECT name, enqueue_enabled, dequeue_enabled
       FROM user_queues
      WHERE queue_type = 'NORMAL_QUEUE';

To start all the Queues, use the PL/SQL API methods of DBMS_AQADM.

Sample PL/SQL Block -

BEGIN
  DBMS_AQADM.START_QUEUE ('B2B_BAM_QUEUE');
  DBMS_AQADM.START_QUEUE ('EDN_OAOO_QUEUE');
  DBMS_AQADM.START_QUEUE ('EDN_EVENT_QUEUE');
  DBMS_AQADM.START_QUEUE ('IP_IN_QUEUE');
  DBMS_AQADM.START_QUEUE ('IP_OUT_QUEUE');
  DBMS_AQADM.START_QUEUE ('TASK_NOTIFICATION_Q');
END;
/

4) Collect the DB Schema stats [optional]: Refer to the Oracle® Fusion Middleware Performance and Tuning Guide 11g Release 2 (11.1.2), Chapter 26, 'Oracle Identity Manager Performance Tuning':
http://docs.oracle.com/cd/E27559_01/doc.1112/e28552/oim.htm#autoId17
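A minimal sketch using DBMS_STATS (the tuning guide above is authoritative for the recommended parameters; cascade => TRUE also refreshes index statistics):

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<Schema Name>', cascade => TRUE);
END;
/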

OIM 11gR2 mid-tier side JDBC Connection configuration

Data Source Configuration editing

1) Login to the Oracle FMW EM:

http://hostname:weblogic_adminconsole_portno/em

2) Go to Weblogic Domain -> base_domain -> JDBC Data Sources:


4) Go to Security -> Credentials

5) Edit the OIMSchemaPassword attribute in case there is a change in the OIM Schema password

MBean Configuration changes


7) Go to the System MBean Browser -> Configuration MBeans -> Security -> myrealmOIMAuthenticationProvider -> Attributes

8) Edit the DBUrl/DBUser for connection-related information

9) Go to the System MBean Browser -> Application Defined MBeans -> oracle.iam -> Server:oim_server1 -> XML Config -> DirectDB

10) Go to the Attributes tab and look for Url and UserName

11) Edit the values for the connection information

12) Bounce the OIM and SOA Managed Servers

13) Connect to the OIM Application

ANNEXURE

FAQs

1) Data Pump Documentation

ANSWER:

http://docs.oracle.com/cd/E11882_01/server.112/e22490/toc.htm#BEGIN

2) Data Pump Performance considerations

ANSWER:

http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_perf.htm#i1005665

3) How can I estimate the export dump file size before initiating the actual Data Pump export process?

ANSWER:

To get a quantitative idea of the dump file size (at least for the first export/import of the OIM Schemas), expdp provides the ESTIMATE_ONLY parameter, which might be of help in this case.

Sample command:

expdp system/somepassword@somedb SCHEMAS=DEV_OIM,DEV_MDS,DEV_SOAINFRA,DEV_ORASDPM,DEV_OPSS ESTIMATE_ONLY=Y PARALLEL=4

4) Why are explicit GRANTs important as a Pre-Import Step? Are they not exported via the Data Pump export process?

ANSWER:

Explicit handling of GRANTs is needed; otherwise quite a few of the Packages and Stored Procedures in the OIM Schema will become INVALID.

This is primarily due to the following: grants are imported when the objects they belong to are imported, not when the schema that the grants were granted to is imported.

For example, when SYS granted SYS.DBMS_SHARED_POOL to DEV_OIM, that grant will only be imported when SYS.DBMS_SHARED_POOL is imported.

Grants are likewise only exported when the object itself is exported. Since SYS.DBMS_SHARED_POOL was not exported, this grant won't be exported when only schema DEV_OIM is exported.

5) What are the steps involved for the exported data dump to be restored on the same DB Instance?

ANSWER: Steps involved:

a) DROP the existing Schema(s).

b) In the case of small-sized OIM setups, the existing tablespaces can be retained; otherwise, drop and recreate the data and temp tablespaces.

c) Restore the OIM and dependent Schemas using the Data Pump Import utility.

d) Change the passwords back to their earlier values if need be; otherwise, the passwords remain the same if the users are created using the "IDENTIFIED BY VALUES .." clause.

Note - In this scenario, all the post-import steps on the OIM Application configuration side can be skipped.

6) What are the steps involved for the exported data dump to be restored on a different DB Instance?

ANSWER:

a) Perform the pre-import steps (Refer to the Section 'Detailed Steps for Restoration of OIM Schema(s)' ->'Pre-Import Steps')

b) Import the OIM Schema and dependent Schemas using Data Pump Import

c) Update the OIM Application Server configuration for JDBC DataStores and MBeans

7) Where can I find the Schema creation scripts executed when RCU runs?

ANSWER:

RCU follows this nomenclature:

RCU_HOME/rcu/integration/<component_name>/ (containing <component_name>.xml and the sql/ scripts directory)


8) What options does Oracle Database 11g/10g offer for Backup?

ANSWER:

Logical Backup (Offline/Online) - involves reading a set of database records and writing them to a file. These records are read independently of their physical location. This is performed by choosing specific logical database structures such as named tables, indexes, and perhaps even schemas. Logical backups cannot be used for recovery; they can be used only for restore.

Physical Backup – involves copying the files that constitute the database.

This is performed on the entire database, tablespaces, or even at the data file level, without regard for the underlying logical data structures such as tables or schemas.

These are also referred to as file system backups because they involve using operating system file backup commands. Such backups are platform dependent.

Oracle supports two different types of physical file backups:

Cold Backup (Offline) - performed with the database completely shut down, so that the data files are in a consistent state and nothing is being accessed or changed.

Hot Backup (Online) - performed while the database is up and end users can remain connected to the database.

The RMAN utility is used to perform all physical backups in a more controlled and configurable manner.

All the physical backup techniques are standard and are typically applied to the full DB Instance (nothing is specific or restricted to the OIM Schemas).

References

OIM 11gR1: Schema Backup and Restoration using Data Pump Client Utility (Document 1359656.1)
ORA-39181: Only Partial Table Data Exported Due To Fine Grain Access Control (Document 422480.1)
Oracle Database Utilities 11g Release 2 - Data Pump: http://docs.oracle.com/cd/E11882_01/server.112/e22490/toc.htm
Oracle Fusion Middleware Performance and Tuning Guide 11g Release 2 (11.1.2), Oracle Identity Manager Performance Tuning: http://docs.oracle.com/cd/E27559_01/doc.1112/e28552/oim.htm#autoId17
