September 11, 2012

Return code from sqlplus session in Shell script

If you execute a SQL*Plus block inside a shell script, you may need to perform different actions based on its return code. However, no matter whether the SQL*Plus session terminates with success or failure, by default its return code always equals 0, which indicates success. This can be demonstrated as follows:

${ORACLE_HOME}/bin/sqlplus "/ as sysdba" >> ${LogFile} << EOF
startup mount restrict;
exit;
EOF
rc=$?
echo returncode=$rc

The output:

SQL*Plus: Release 10.2.0.5.0 - Production on Tue Sep 11 11:32:41 2012
Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
Connected to an idle instance.
SQL> ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '/u01/app/oracle/admin/STDBY10G/pfile/spfileSTDBY10G.ora'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> Disconnected

returncode=0

Solution

“whenever sqlerror exit sql.sqlcode” terminates the SQL*Plus session and returns the SQL error code; similarly, “whenever oserror exit oscode” returns the OS error code. These options have been available since Oracle 9i.
An example:

${ORACLE_HOME}/bin/sqlplus "/ as sysdba" >> ${LogFile} << EOF
whenever oserror exit oscode
whenever sqlerror exit sql.sqlcode
startup mount restrict;
exit;
EOF
echo returncode=$?

The output:


SQL*Plus: Release 10.2.0.5.0 - Production on Tue Sep 11 11:43:57 2012

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> SQL> ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '/u01/app/oracle/admin/STDBY10G/pfile/spfileSTDBY10G.ora'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Disconnected
returncode=54
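
Note that the shell only sees the low byte of the exit status, which is why ORA-01078 comes back as 1078 mod 256 = 54. With the whenever directives in place, the calling script can branch on the return code; a minimal sketch:

${ORACLE_HOME}/bin/sqlplus "/ as sysdba" >> ${LogFile} << EOF
whenever oserror exit oscode
whenever sqlerror exit sql.sqlcode
startup mount restrict;
exit;
EOF
rc=$?
if [ $rc -ne 0 ]; then
    # non-zero: the startup failed, details are in the log
    echo "startup failed (return code $rc), see ${LogFile}" >&2
    exit $rc
fi
echo "startup succeeded"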

September 10, 2012

RMAN-06024: No Backup Or Copy Of The Control File Found To Restore

A full backup of the database, plus archivelogs and a standby controlfile, had just been completed.

The “alter system switch logfile” command was issued after the backup completed, as suggested in the note

RMAN Duplicate For Standby Fails with Rman-06024: No Backup Or Copy Of The Control File Found To Restore [ID 466321.1]

But we still got the RMAN-06024 error, as shown below:

 RMAN> run {
 2> allocate auxiliary channel c1 type disk format '/BACKUP_STAGE/%u';
 3> duplicate target database for standby dorecover;
 4> }

released channel: ORA_DISK_1
 released channel: ORA_AUX_DISK_1
 allocated channel: c1
 channel c1: sid=648 devtype=DISK

Starting Duplicate Db at 2012-09-10 17:34:19

contents of Memory Script:
 {
 set until scn  8223255;
 restore clone standby controlfile;
 sql clone 'alter database mount standby database';
 }
 executing Memory Script

executing command: SET until clause

Starting restore at 2012-09-10 17:34:32

released channel: c1
 RMAN-00571: ===========================================================
 RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
 RMAN-00571: ===========================================================
 RMAN-03002: failure of Duplicate Db command at 09/10/2012 17:34:32
 RMAN-03015: error occurred in stored script Memory Script
 RMAN-06026: some targets not found - aborting restore
 RMAN-06024: no backup or copy of the control file found to restore

Solution

This matches RMAN Duplicate For Standby Failing With RMAN-06024 [MetaLink Doc ID 881674.1].
The solution is to run RMAN without the DORECOVER option:


RMAN> run {
 2> allocate auxiliary channel c1 type disk format '/BACKUP_STAGE/%u';
 3> duplicate target database for standby;
 4> }

After a successful duplication of the standby database, run


SQL> recover managed standby database using current logfile disconnect;

December 30, 2011

Execution of dbms_logstdby.instantiate_table fails with ORA-31634: job already exists

I got this error when recreating a table on a logical standby database.


SQL> alter database stop logical standby apply;

SQL> alter database guard standby;

SQL> exec dbms_logstdby.unskip('DML', 'FATTURATO', 'ANALISI_ORDINI_PWH');

SQL> exec dbms_logstdby.instantiate_table('FATTURATO', 'ANALISI_ORDINI_PWH', 'PRIM_DBLINK');
BEGIN dbms_logstdby.instantiate_table('FATTURATO', 'ANALISI_ORDINI_PWH', 'PRIM_DBLINK'); END;

*
ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 636
ORA-06512: at line 1

It seems Oracle only allows 100 Data Pump master tables to exist in a database; if this number is exceeded, you get the error “ORA-31634”.

Get a list of Master Tables


select job_name, state from dba_datapump_jobs;

Solution

Drop the tables sys.SYS_IMPORT_TABLE_nn, where nn is 0 .. 99, and restart instantiate_table.
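
A query like the following (a sketch; verify each job is really finished before dropping its master table) generates the DROP statements from the job list:

select 'drop table ' || owner_name || '.' || job_name || ' purge;'
from dba_datapump_jobs
where state = 'NOT RUNNING';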

December 19, 2011

Standby: V$ARCHIVE_DEST_STATUS.APPLIED_SEQ# has incorrect value

Problem

SQL> select dest_id, RECOVERY_MODE, DATABASE_MODE, PROTECTION_MODE, ARCHIVED_SEQ#, APPLIED_SEQ#  from v$archive_dest_status where dest_id=2;

DEST_ID RECOVERY_MODE           DATABASE_MODE   PROTECTION_MODE      ARCHIVED_SEQ# APPLIED_SEQ#
------- ----------------------- --------------- -------------------- ------------- ------------
      2 MANAGED REAL TIME APPLY MOUNTED-STANDBY MAXIMUM PERFORMANCE          84671        84671

SQL> select sequence#, applied, archived from v$archived_log;
...

 SEQUENCE# APP ARC
---------- --- ---
...
84696 YES YES
84696 YES YES
84697 YES YES
84697 YES YES
84698 NO  YES
84698 YES YES
84699 NO  YES
84699 YES YES
84700 YES YES
84700 YES YES
84701 YES YES
84701 YES YES

Note that 84701 > 84671: sequences beyond the reported APPLIED_SEQ# have in fact already been applied.
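
The highest applied sequence can instead be obtained reliably from V$ARCHIVED_LOG itself:

SQL> select max(sequence#) from v$archived_log where applied = 'YES';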

Cause

Bug 8822832 – V$ARCHIVE_DEST_STATUS has incorrect value for APPLIED_SEQ# [ID 8822832.8]
Modified 17-JUN-2011     Type PATCH     Status PUBLISHED

Affects:

Product (Component): Oracle Server (Rdbms)
Range of versions believed to be affected: >= 9 but below 12.1
Versions confirmed as being affected: 11.2.0.1, 11.1.0.7, 10.2.0.4
Platforms affected: Generic (all / most platforms affected)

Fixed:

This issue is fixed in

12.1 (Future Release)
11.2.0.2 (Server Patch Set)
11.2.0.1.2 (Patch Set Update)
11.2.0.1 Bundle Patch 6 for Exadata Database
11.2.0.1 Patch 2 on Windows Platforms

December 15, 2011

Creating an 11.2.0.2 database on ASM using dbca crashes existing instances

During creation of a new instance on ASM with Oracle 11.2.0.2, some existing instances running from the same ORACLE_HOME suddenly crashed! We got this error message in the alert log:

*** 2011-11-22 17:36:00.962
Unexpected error 27140 in job slave process
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 1000 (oinstall), current egid = 1200 (dba)

On the same machine we have installed the 11.2.0.2 grid infrastructure. The clusterware (owned by grid) manages all instances of the 11.2.0.2 server as cluster resources in active/passive mode. It is worth pointing out that this problem never happens when we use a local or clustered file system as storage in place of ASM.

Analysis

We realized that the ownership of the binary file ‘oracle’ in $ORACLE_HOME/bin had been changed from “oracle:oinstall” (its ownership when the current instances started) to “oracle:dba” during creation of the new instance, which made the running instances crash.

Cause

This is identified as Bug 9786198 [http://bug.oraclecorp.com/pls/bug/webbug_edit.edit_info_top?rptno=9786198] – SRVCTL START DATABASE ORA-0178 FAILURE IN PROCESSING SYSTEM PARAMETERS

This happens when you choose “dba” for the “ASM Database Administrator”, “ASM Instance Administration Operator” and “ASM Instance Administrator” groups when installing the grid infrastructure.

If all other database instances start up fine on all nodes, there is no need to change setasmgid. The change below is only needed if you hit the same issue again.

Solution

Check the setasmgidwrap script under the grid home. It should contain an entry like:

SETASMGID_LOC=<directory>

Check whether setasmgid exists under SETASMGID_LOC; if so, rename $SETASMGID_LOC/setasmgid to $SETASMGID_LOC/setasmgid.orig.

Then restore the ownership of the oracle binary to oinstall.
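
A sketch of the commands (assuming the standard oinstall group and the usual 6751 permission mask on the oracle binary):

# verify the current ownership and permissions
ls -l $ORACLE_HOME/bin/oracle
# restore the original owner and group
chown oracle:oinstall $ORACLE_HOME/bin/oracle
# reinstate the setuid/setgid bits (-rwsr-s--x)
chmod 6751 $ORACLE_HOME/bin/oracle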

December 12, 2011

Datapump export on a logical standby database failed with errors ORA-39068 and ORA-39097

ORA-39006, ORA-39068, ORA-16224, ORA-39097 Errors Raised During Datapump Export With Logical Standby Dataguard Guard_Status Set to ‘ALL’ [ID 1236184.1]


Applies to:

Oracle Server – Enterprise Edition – Version: 10.2.0.4 and later   [Release: 10.2 and later ]
Information in this document applies to any platform.

Symptoms

The following errors are seen during execution of Datapump export:
ORA-39006: internal error
ORA-39068: invalid master table data in row with PROCESS_ORDER=-3
ORA-16224: Database Guard is enabled
ORA-39097: Data Pump job encountered unexpected error -16224

The logical standby Dataguard guard_status has a value of ‘ALL.’
This value can be confirmed by the following query:

     SQL> select guard_status from v$database;

Cause

This is expected behavior when a Datapump export is executed on a logical standby database whose Data Guard guard_status is ‘ALL’. Datapump must be able to create a master table to store metadata for the job; when guard_status is set to ‘ALL’, no updates are allowed, including creation of the master table that Datapump requires.

Solution

Alter the database so that the Dataguard guard_status is set to a value of either ‘STANDBY’ or ‘NONE’ then repeat the Datapump export task.
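
For example (restore GUARD ALL afterwards if that is your standard configuration):

SQL> alter database guard standby;
-- or, to disable the guard entirely:
SQL> alter database guard none;
SQL> select guard_status from v$database;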

November 20, 2011

ORA-00845: MEMORY_TARGET not supported on this system

Problem

SQL> startup nomount pfile=/tmp/initpriman.ora.standby.tmp
ORA-00845: MEMORY_TARGET not supported on this system

Cause

ORA-00845 occurs when shared memory is not mapped to /dev/shm, or when /dev/shm does not have enough free space to hold MEMORY_TARGET.

1) check current memory_target setting

[oracle@host01 dbs]$ more initpriman.ora |grep -i memory_target
*.memory_target=629145600

if spfile is used

[oracle@host01 dbs]$ strings spfilepriman.ora |grep -i memory_target
*.memory_target=629145600

2) current available shared memory


[oracle@host01 dbs]$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   36533824  21684528  12963500  63% /
/dev/sda1               101086     19374     76493  21% /boot
tmpfs                  1029756    717532    312224  70% /dev/shm
/dev/sdf1              2062336    319160   1743176  16% /ocfs2/clusterware

Since the space available in /dev/shm (about 312 MB) is less than the required memory_target (600 MB), ORA-00845 is raised.

Solution

Enlarge /dev/shm by remounting it with a larger size (as root):

# mount -o remount,size=2g /dev/shm

Modify the corresponding line in /etc/fstab to make the change persistent across reboots:

tmpfs                   /dev/shm                tmpfs   size=2g        0 0
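
Then verify the new size:

$ df -h /dev/shm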

September 3, 2011

ORA-15077 and ORA-15001

Problem ORA-15077 and ORA-15001
================================

RMAN> restore controlfile from '/app/oracle/oradata/rac10g/control01.ctl';

Starting restore at 02-SEP-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/02/2011 18:05:34
ORA-19504: failed to create file "+DATA"
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15077: could not locate ASM instance serving a required diskgroup
ORA-19600: input file is control file  (/app/oracle/oradata/rac10g/control01.ctl)
ORA-19601: output file is control file  (+DATA)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Analysis
=========

1) check if CRS is running on the node

[oracle@guang ]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

2) check if ASM instance is running

sys@+asm> select status from v$instance;

STATUS
------------
STARTED

An ASM instance is always in “STARTED” mode; this differs from a normal instance, which can be in “STARTED”, “MOUNT”, or “OPEN” mode.

3) check diskgroup status

sys@+asm> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           DISMOUNTED
FRA                            DISMOUNTED

Problem is here !!!

sys@+asm> show parameter asm_diskgroups;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string

Cause
======

The diskgroups were dismounted when we tried to restore the control file to them. Because the asm_diskgroups parameter is empty, the diskgroups are not mounted automatically at instance startup.

Solution
=========

1) manually mount the diskgroups

alter diskgroup data mount;
alter diskgroup fra mount;

2) set asm_diskgroups so that the ASM instance will mount DATA and FRA automatically at the next startup

alter system set asm_diskgroups='DATA','FRA' sid='*';
shutdown immediate; -- shut down any connected database instances first
startup
select name, state from v$asm_diskgroup;

September 3, 2011

Converting a single-instance database to RAC Manually

Working environment

single-instance database 10.2.0.5 running on CentOS 5.5
ORACLE_HOME=/app/oracle/product/10.2.0/db
ORACLE_SID=rac10g
Datafiles, control files, redo logs are located on local file system: /app/oracle/oradata/rac10g

Steps to convert single-instance to RAC

Step 1) install clusterware 10.2.0.5 on all nodes of the cluster

Refer to the instructions in chapters 4 and 5 of “Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide”. You can also follow the detailed article “Building an Inexpensive Oracle RAC 10gR2 on Linux – (RHEL 5.3 / iSCSI)” by Jeff Hunter.

The clusterware version has to be equal to or higher than the RDBMS version (10.2.0.5). Patch the clusterware to the latest patchset; refer to MetaLink patch 8202632.

Step 2) install the rdbms 10.2.0.5 RAC version (software only) on all nodes of the cluster.

You need to install it into a different directory from the original ORACLE_HOME.

the new ORACLE_HOME is located at

/app/oracle/product/10.2.0/racdb

Refer to patch 8202632 for patching to 10.2.0.5.

Step 3) create an ASM instance from the new ORACLE_HOME and mount the shared diskgroups DATA and FRA.
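
Once the ASM instance is up, verify that both diskgroups are mounted:

SQL> select name, state from v$asm_diskgroup;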

Step 4) Make a full backup of the single-instance database.
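
For example, a minimal RMAN sketch (assuming disk-based backups):

RMAN> backup database plus archivelog;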

Step 5) Move the single-instance database’s datafiles, tempfiles, control files and redo logs from the local file system to clustered ASM.

Refer to another note, “Move database from local file system to shared ASM diskgroups”.

After completing this step:

control files


+DATA/rac10g/controlfile/current.258.760834963
 +DATA/rac10g/controlfile/current.257.760834965
 +FRA/rac10g/controlfile/current.256.760834967

datafiles and tempfiles


+DATA/rac10g/datafile/system.256.760835015
 +DATA/rac10g/datafile/undotbs1.259.760835071
 +DATA/rac10g/datafile/sysaux.260.760835115
 +DATA/rac10g/datafile/users.261.760835151
 +DATA/rac10g/tempfile/temp.263.760835669

redo logs


GROUP# MEMBER
 ---------- --------------------------------------------
 1 +DATA/rac10g/onlinelog/group_1.264.760836295
 1 +FRA/rac10g/onlinelog/group_1.257.760836301
 2 +FRA/rac10g/onlinelog/group_2.258.760837077
 2 +DATA/rac10g/onlinelog/group_2.265.760837071
 3 +FRA/rac10g/onlinelog/group_3.259.760837185
 3 +DATA/rac10g/onlinelog/group_3.266.760837179

spfile


+DATA/rac10g/parameterfile/spfilerac11g.ora

pfile


spfile='+DATA/rac10g/parameterfile/spfilerac11g.ora'

Step 6)  create a new pfile with entries about all rac instances


SQL> create pfile='/tmp/initrac10g.ora' from spfile;

Insert these lines into /tmp/initrac10g.ora:


*.cluster_database = TRUE
 *.cluster_database_instances = 2
 *.undo_management=AUTO
 <SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
 <SID1>.instance_name=<SID1>
 <SID1>.instance_number=1
 <SID1>.thread=1
 <SID1>.local_listener=<LISTENERNAME>_<SID>
 <SID1>.remote_listener=LISTENERS_<DB_UNIQUE_NAME>
 <SID2>.instance_name=<SID2>
 <SID2>.instance_number=2
 <SID2>.local_listener=<LISTENERNAME>_<SID>
 <SID2>.remote_listener=LISTENERS_<DB_UNIQUE_NAME>
 <SID2>.thread=2
 <SID2>.undo_tablespace=UNDOTBS2
 <SID2>.cluster_database = TRUE
 <SID2>.cluster_database_instances = 2

In my case it has these entries:


*.cluster_database = TRUE
 *.cluster_database_instances = 2
 *.undo_management=AUTO
 rac10g1.undo_tablespace=UNDOTBS1
 rac10g1.instance_name=rac10g1
 rac10g1.instance_number=1
 rac10g1.thread=1
 rac10g1.local_listener=listener_rac10g1
 rac10g1.remote_listener=listeners_rac10g
 rac10g2.instance_name=rac10g2
 rac10g2.instance_number=2
 rac10g2.local_listener=listener_rac10g2
 rac10g2.remote_listener=listeners_rac10g
 rac10g2.thread=2
 rac10g2.undo_tablespace=UNDOTBS2
 rac10g2.cluster_database = TRUE
 rac10g2.cluster_database_instances = 2

Step 7) create a new spfile in shared ASM storage, and a pfile in the new ORACLE_HOME that references it


export ORACLE_SID=rac10g1

sqlplus / as sysdba

SQL> create spfile='+DATA/rac10g/parameterfile/spfilerac11g.ora' from pfile='/tmp/initrac10g.ora';

In the new ORACLE_HOME/dbs, create a pfile ‘initrac10g1.ora’ containing a reference to the spfile:


[oracle@guang ~]$ export ORACLE_HOME=/app/oracle/product/10.2.0/racdb
 [oracle@guang ~]$ cd $ORACLE_HOME/dbs
 [oracle@guang dbs]$ vi initrac10g1.ora

spfile='+DATA/rac10g/parameterfile/spfilerac11g.ora'

Step 8) create a password file for instance 1 in the new ORACLE_HOME


[oracle@guang dbs]$ orapwd file=orapwrac10g1 password=oracle

Step 9) Set up cluster listeners and tnsnames entries for local_listener and remote_listener.

Each cluster node is configured with a listener LISTENER_<NODE>.

listener.ora on node 1


LISTENER_GUANG =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = guang-vip.localdomain)(PORT = 1522)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.101)(PORT = 1522)(IP = FIRST))
)
)

SID_LIST_LISTENER_GUANG =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /app/oracle/product/10.2.0/racdb)
(PROGRAM = extproc)
)
)

listener.ora on node 2


LISTENER_YANGMEI =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = yangmei-vip.localdomain)(PORT = 1522)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.102)(PORT = 1522)(IP = FIRST))
)
)

SID_LIST_LISTENER_YANGMEI =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /app/oracle/product/10.2.0/racdb)
(PROGRAM = extproc)
)
)

tnsnames.ora on node 1


# connection to database rac10g
rac10g =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = guang-vip)(PORT = 1522))
(ADDRESS = (PROTOCOL = TCP)(HOST = yangmei-vip)(PORT = 1522))
)
(CONNECT_DATA =
(SERVICE_NAME = rac10g)
)
)

# connection to only instance 1
rac10g1 =
(description=
(address=(protocol=tcp)(host=guang-vip)(port=1521))
(connect_data=
(service_name=rac10g)
(instance_name=rac10g1)))

# connection to only instance 2
rac10g2 =
(description=
(address=(protocol=tcp)(host=yangmei-vip)(port=1521))
(connect_data=
(service_name=rac10g)
(instance_name=rac10g2)))

# setting for local_listener of instance 1
listener_rac10g1 = (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=guang-vip)(PORT=1522)))

# Setting for remote_listener of instance 2
listeners_rac10g =
(address_list=
(address=(protocol=tcp)(host=guang-vip)(port=1522))
(address=(protocol=tcp)(host=yangmei-vip)(port=1522)))

tnsnames.ora on node 2


# connection to database rac10g
rac10g =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = guang-vip)(PORT = 1522))
(ADDRESS = (PROTOCOL = TCP)(HOST = yangmei-vip)(PORT = 1522))
)
(CONNECT_DATA =
(SERVICE_NAME = rac10g)
)
)

# connection to only instance 1
rac10g1 =
(description=
(address=(protocol=tcp)(host=guang-vip)(port=1521))
(connect_data=
(service_name=rac10g)
(instance_name=rac10g1)))

# connection to only instance 2
rac10g2 =
(description=
(address=(protocol=tcp)(host=yangmei-vip)(port=1521))
(connect_data=
(service_name=rac10g)
(instance_name=rac10g2)))

# setting for local_listener of instance 2
listener_rac10g2 = (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=yangmei-vip)(PORT=1522)))

# Setting for remote_listener of instance 2
listeners_rac10g =
(address_list=
(address=(protocol=tcp)(host=guang-vip)(port=1522))
(address=(protocol=tcp)(host=yangmei-vip)(port=1522)))

NOTE: the values for local_listener and remote_listener must match the settings in step 6!

For the syntax of local_listener and remote_listener, refer to Chapter 9, “Understanding the Oracle Real Application Clusters Installed Configuration”, in the Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide.

Step 10) Start instance 1 in mount state

Start instance 1 using the pfile created in step 7:

export ORACLE_SID=rac10g1
export ORACLE_HOME=/app/oracle/product/10.2.0/racdb
sqlplus / as sysdba
SQL> startup mount

Step 11) Add a redo log thread for instance 2

Currently there are 3 redo log groups in thread 1, with 2 members each:

select l.group#, l.thread#, member, l.bytes, l.status from v$log l, v$logfile lf where lf.group# = l.group# order by group#;

GROUP#    THREAD# MEMBER                                                  BYTES STATUS
---------- ---------- -------------------------------------------------- ---------- ----------------
1          1 +DATA/rac10g/onlinelog/group_1.264.760836295         52428800 INACTIVE
1          1 +FRA/rac10g/onlinelog/group_1.257.760836301          52428800 INACTIVE
2          1 +FRA/rac10g/onlinelog/group_2.258.760837077          52428800 CURRENT
2          1 +DATA/rac10g/onlinelog/group_2.265.760837071         52428800 CURRENT
3          1 +FRA/rac10g/onlinelog/group_3.259.760837185          52428800 INACTIVE
3          1 +DATA/rac10g/onlinelog/group_3.266.760837179         52428800 INACTIVE

6 rows selected.

I will add 3 groups for thread 2:


alter database add logfile thread 2
group 4 ('+DATA',
         '+FRA') size 50M,
group 5 ('+DATA',
         '+FRA') size 50M,
group 6 ('+DATA',
         '+FRA') size 50M;
-- alter database enable public thread 2; (run after the database is opened, see step 12)

Step 12) open the database and enable redo log thread 2

alter database open;
alter database enable public thread 2;

Step 13) create a new undo tablespace for the second instance


create undo tablespace undotbs2 datafile '+DATA' size 100M;

The undo tablespace name ‘UNDOTBS2’ must match the setting of <SID2>.undo_tablespace in step 6.

Step 14) create cluster-specific views


SQL> spool create_cluster_view.log
SQL> @?/rdbms/admin/catclust.sql
SQL> spool off

Step 15) add instance to /etc/oratab on node 1 and node 2

on node 1:
rac10g1:/app/oracle/product/10.2.0/racdb:N

on node 2
rac10g2:/app/oracle/product/10.2.0/racdb:N

Set the Oracle environment on node 2:

. .oraenv

Step 16) create a pfile in $ORACLE_HOME/dbs on node 2

vi initrac10g2.ora
spfile='+DATA/rac10g/parameterfile/spfilerac11g.ora'

Step 17) create password file in $ORACLE_HOME/dbs on node 2

orapwd file=orapwrac10g2 password=oracle

Step 18) on node 2, create the same directories for trace, audit, and dump files as on node 1

## on node 1
tree -d /app/oracle/admin/rac10g
/app/oracle/admin/rac10g
|-- adump
|-- bdump
|-- cdump
|-- dpdump
|-- pfile
`-- udump

## on node 2
cd /app/oracle/admin
mkdir rac10g
cd rac10g
mkdir adump bdump cdump dpdump pfile udump

Step 19) start up the second instance


. oraenv
ORACLE_SID = [rac10g2]

sqlplus / as sysdba
SQL> startup

Step 20) register database and instances to clusterware

on one of the nodes


srvctl add database -d rac10g -o /app/oracle/product/10.2.0/racdb -p '+DATA/rac10g/parameterfile/spfilerac11g.ora'
srvctl add instance -d rac10g -i rac10g1 -n guang
srvctl add instance -d rac10g -i rac10g2 -n yangmei

-- add ASM dependency to instances
srvctl modify instance -d rac10g -i rac10g1 -s +ASM1
srvctl modify instance -d rac10g -i rac10g2 -s +ASM2
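
From now on the database can be managed through the clusterware, for example:

srvctl start database -d rac10g
srvctl status database -d rac10g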

September 2, 2011

Move database from local file system to shared ASM diskgroups

We have a single-instance database ‘rac10g’ created on a local file system and plan to migrate it to a RAC system using ASM.

At this point, we’ve installed clusterware 10.2.0.5 on all cluster nodes and a shared ASM system with diskgroups DATA and FRA.

This article demonstrates the steps to migrate a database from a local file system to clustered ASM storage.

Step 1) Create a pfile and locate control file to ASM diskgroups

SQL> create pfile from spfile;

vi initrac10g.ora

*.control_files='+DATA','+DATA','+FRA'   # 2 copies in DATA and 1 copy in FRA

Step 2) startup nomount

SQL> startup nomount pfile='initrac10g.ora'

Step 3) connect target db from RMAN

RMAN> connect target /

Step 4) Restore control file to shared ASM storage

RMAN> restore controlfile from '/app/oracle/oradata/rac10g/control01.ctl';

channel ORA_DISK_1: copied control file copy
output filename=+DATA/rac10g/controlfile/current.256.760821943
output filename=+DATA/rac10g/controlfile/current.257.760821947
output filename=+FRA/rac10g/controlfile/current.256.760821953

Note: if you meet error ORA-15077 as I did, refer to the troubleshooting section at the end of this article:
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15077: could not locate ASM instance serving a required diskgroup

Step 5) Mount the database

RMAN> sql 'alter database mount';

Step 6) Copy datafiles to ASM diskgroup ‘DATA’

RMAN> backup as copy database format '+DATA';

Step 7)  Update controlfile with datafile copies

RMAN> switch database to copy;

Step 8) Update controlfile with tempfile copies

RMAN> run {
set newname for tempfile 1 to '+DATA';
set newname for tempfile … to '+DATA';
switch tempfile all;
}

executing command: SET NEWNAME

renamed temporary file 1 to +DATA in control file
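
To list the tempfile numbers to use in the SET NEWNAME commands:

SQL> select file#, name from v$tempfile;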

Step 9) open the database

RMAN> sql 'alter database open';

Step 10) Move redo log files to ASM diskgroups

sys@rac10g> select member, l.status, l.group# from v$log l, v$logfile lf where lf.group# = l.group#;

MEMBER                                   STATUS               GROUP#
---------------------------------------- ---------------- ----------
/app/oracle/oradata/rac10g/redo03.log    CURRENT                   3
/app/oracle/oradata/rac10g/redo02.log    INACTIVE                  2
/app/oracle/oradata/rac10g/redo01.log    INACTIVE                  1

-- add logfile members in '+DATA' and '+FRA' to each group

SQL> alter database add logfile member '+DATA', '+FRA' to group 1;

sys@rac10g> select member, l.status, l.group#, l.archived from v$log l, v$logfile lf where lf.group# = l.group#;

MEMBER                                             STATUS               GROUP# ARC
-------------------------------------------------- ---------------- ---------- ---
/app/oracle/oradata/rac10g/redo03.log              CURRENT                   3 NO
/app/oracle/oradata/rac10g/redo02.log              INACTIVE                  2 YES
/app/oracle/oradata/rac10g/redo01.log              INACTIVE                  1 YES
+DATA/rac10g/onlinelog/group_1.264.760836295       INACTIVE                  1 YES
+FRA/rac10g/onlinelog/group_1.257.760836301        INACTIVE                  1 YES

-- drop old logfile member from local filesystem

sys@rac10g> alter database drop logfile member '/app/oracle/oradata/rac10g/redo01.log';
alter database drop logfile member '/app/oracle/oradata/rac10g/redo01.log'
*
ERROR at line 1:
ORA-00362: member is required to form a valid logfile in group 1
ORA-01517: log member: '/app/oracle/oradata/rac10g/redo01.log'

sys@rac10g> alter database clear logfile group 1;

Database altered.

sys@rac10g> alter database drop logfile member '/app/oracle/oradata/rac10g/redo01.log';

Database altered.

Repeat step 10 for all the other logfile groups; if necessary, force log switches and checkpoints:

SQL> alter system switch logfile;
or
SQL> alter system checkpoint;
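
For example, for groups 2 and 3 (a sketch; a group must not be CURRENT when you clear it, hence the switch and checkpoint first):

SQL> alter database add logfile member '+DATA', '+FRA' to group 2;
SQL> alter database add logfile member '+DATA', '+FRA' to group 3;
SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database clear logfile group 2;
SQL> alter database drop logfile member '/app/oracle/oradata/rac10g/redo02.log';
SQL> alter database clear logfile group 3;
SQL> alter database drop logfile member '/app/oracle/oradata/rac10g/redo03.log';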

Finally, we have all datafiles, tempfiles, controlfiles and redo log files on ASM:


sys@rac10g> select file_name from dba_data_files
2    union all
3* select file_name from dba_temp_files

FILE_NAME
------------------------------------------------------------
+DATA/rac10g/datafile/users.261.760835151
+DATA/rac10g/datafile/sysaux.260.760835115
+DATA/rac10g/datafile/undotbs1.259.760835071
+DATA/rac10g/datafile/system.256.760835015
+DATA/rac10g/tempfile/temp.263.760835669

sys@rac10g> show parameter control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      +DATA/rac10g/controlfile/curre
nt.258.760834963, +DATA/rac10g
/controlfile/current.257.76083
4965, +FRA/rac10g/controlfile/
current.256.760834967

sys@rac10g> select member, l.status, l.group#, l.archived from v$log l, v$logfile lf where lf.group# = l.group# order by group#;

MEMBER                                             STATUS               GROUP# ARC
-------------------------------------------------- ---------------- ---------- ---
+DATA/rac10g/onlinelog/group_1.264.760836295       CURRENT                   1 NO
+FRA/rac10g/onlinelog/group_1.257.760836301        CURRENT                   1 NO
+FRA/rac10g/onlinelog/group_2.258.760837077        ACTIVE                    2 YES
+DATA/rac10g/onlinelog/group_2.265.760837071       ACTIVE                    2 YES
+FRA/rac10g/onlinelog/group_3.259.760837185        ACTIVE                    3 YES
+DATA/rac10g/onlinelog/group_3.266.760837179       ACTIVE                    3 YES

Step 11) replace the control_files setting with the real controlfile names and create the spfile

vi initrac10g.ora

control_files='+DATA/rac10g/controlfile/current.258.760834963', '+DATA/rac10g/controlfile/current.257.760834965', '+FRA/rac10g/controlfile/current.256.760834967'

Alternatively, you can create aliases for the controlfiles.

SQL> create spfile='+DATA/RAC10G/PARAMETERFILE/spfilerac11g.ora' from pfile;

vi initrac10g.ora

spfile='+DATA/RAC10G/PARAMETERFILE/spfilerac11g.ora'

SQL> shutdown immediate;
SQL> startup

Done!

Troubleshooting

Problem ORA-15077 and ORA-15001
================================

At step 4), when restoring the controlfile to ASM, I got this error:

RMAN> restore controlfile from '/app/oracle/oradata/rac10g/control01.ctl';

Starting restore at 02-SEP-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/02/2011 18:05:34
ORA-19504: failed to create file "+DATA"
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15077: could not locate ASM instance serving a required diskgroup
ORA-19600: input file is control file  (/app/oracle/oradata/rac10g/control01.ctl)
ORA-19601: output file is control file  (+DATA)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Analysis
=========

1) check if CRS is running on the node

[oracle@guang ]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

2) check if ASM instance is running

sys@+asm> select status from v$instance;

STATUS
------------
STARTED

An ASM instance is always in “STARTED” mode; this differs from a normal instance, which can be in “STARTED”, “MOUNT”, or “OPEN” mode.

sys@+asm> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           DISMOUNTED
FRA                            DISMOUNTED

Problem is here !!!

sys@+asm> show parameter asm_diskgroups;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string

Cause
======

The diskgroups were dismounted when we tried to restore the control file to them. Because the asm_diskgroups parameter is empty, the diskgroups are not mounted automatically at instance startup.

Solution
=========

1) manually mount the diskgroups

alter diskgroup data mount;
alter diskgroup fra mount;

2) set asm_diskgroups so that the ASM instance will mount DATA and FRA automatically at the next startup

alter system set asm_diskgroups='DATA','FRA' sid='*';
shutdown immediate; -- shut down any connected database instances first
startup
select name, state from v$asm_diskgroup;