Channel: Doyensys Allappsdba Blog..

Steps to Flashback to Particular Restore Point

set pagesize 1000

col name format a70

col scn format 9999999999999999999

col time format a50

select SCN,NAME,TIME,STORAGE_SIZE from v$restore_point;
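For reference, the restore point being flashed back to would typically have been created before the deployment; a minimal sketch (the name before_deploy is an assumption):

```sql
-- Hypothetical restore point created before a risky change;
-- GUARANTEE retains the flashback logs until the restore point is dropped
CREATE RESTORE POINT before_deploy GUARANTEE FLASHBACK DATABASE;
```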

shutdown immediate

startup mount

exit



Flashback the database to before deployment

--------------------------------------------

rman target /



flashback database to restore point xxxxxxxxxxx;

(In SQL*Plus the restore point name is given without quotes, as above; in RMAN the name must be enclosed in single quotes: flashback database to restore point 'xxxxxxxxxxx';)

alter database open resetlogs;



exit



Drop all restore points, including the one used for the flashback

-------------------------------------------------------------------------

sqlplus / as sysdba



set pagesize 1000

select 'drop restore point '||NAME||';' from v$restore_point;
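The statements generated above can be spooled into a script and run in one pass; a sketch (the script name is an assumption):

```sql
set heading off feedback off
spool drop_restore_points.sql
select 'drop restore point '||NAME||';' from v$restore_point;
spool off
-- review the generated script, then run it
@drop_restore_points.sql
```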

RETRIEVING DROPPED TABLE IN ORACLE USING RMAN AND FLASHBACK TECHNOLOGY



Step 1: Check whether DB has recyclebin on or off

SYS@xxxx> sho parameter recyclebin;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      ON


SELECT * FROM RECYCLEBIN;

ALTER SESSION SET recyclebin = OFF; 

ALTER SYSTEM SET recyclebin = OFF;

ALTER SESSION SET recyclebin = ON;

ALTER SYSTEM SET recyclebin = ON;



Step 2:  drop table oracle;

Table dropped.

SQL> select original_name from dba_recyclebin; or show recyclebin;

ORIGINAL_NAME 
-------------------------------- 
oracle 

SQL> flashback table oracle to before drop;

Flashback complete.

SQL> select * from oracle;

ID 
---------- 

6
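If a new table with the original name was created after the drop, the recycle-bin copy can be restored under a different name; a sketch (oracle_restored is an assumed name):

```sql
FLASHBACK TABLE oracle TO BEFORE DROP RENAME TO oracle_restored;
```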

-------------------------------------------------------------------------------------

Recover the tables PMP and DEPTER using the following clauses of the RECOVER command: DATAPUMP DESTINATION, DUMP FILE, REMAP TABLE, and NOTABLEIMPORT.
The following RECOVER command recovers the PMP and DEPTER tables. (Here the UNTIL TIME clause points one day before SYSDATE; you can also use UNTIL SEQUENCE.)

RECOVER TABLE SCOTT.PMP, SCOTT.DEPTER
UNTIL TIME 'SYSDATE-1'
AUXILIARY DESTINATION '/tmp/oracle/recover'
DATAPUMP DESTINATION '/tmp/recover/dumpfiles'
DUMP FILE 'pmp_depter_exp_dump.dat'
NOTABLEIMPORT;
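With NOTABLEIMPORT the tables are only exported to the dump file, not imported back. To import them directly under new names instead, the REMAP TABLE clause mentioned above can be used; a sketch (the _RECOVERED names are assumptions):

```sql
RECOVER TABLE SCOTT.PMP, SCOTT.DEPTER
UNTIL TIME 'SYSDATE-1'
AUXILIARY DESTINATION '/tmp/oracle/recover'
REMAP TABLE SCOTT.PMP:PMP_RECOVERED, SCOTT.DEPTER:DEPTER_RECOVERED;
```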




Steps to recover the OCR & Voting Disk when they are corrupted

When the OCR & voting disks are lost or corrupted, we have to follow the procedure below to bring them back.

When using an ASM disk group for CRS, there are typically three different types of files located in the disk group that may need to be restored or recreated for the cluster to function:
Oracle Cluster Registry file (OCR)
Voting files
Shared SPFILE for the ASM instances
In this scenario, we are restoring the corrupted OCR and voting disk from backup.


Step #1 Stop the cluster on each node (root user).

# crsctl stop crs -f

Step #2 Start the cluster in exclusive mode (root user)

As root, start GI in exclusive mode on one node only.
In 11.2.0.1 RAC, we have to use the below option to start the cluster in exclusive mode:
# crsctl start crs -excl

In 11.2.0.2 RAC, we have to use the below option to start the cluster in exclusive mode:
# crsctl start crs -excl -nocrs

Note: A new option '-nocrs' has been introduced with  11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.


If the OCR disk group no longer exists, create it; otherwise move on to restoring the OCR.


Step #3 OCR RESTORE

To know the OCR location in the cluster environment:
$ cat /etc/oracle/ocr.loc   -- on Linux

To check whether the OCR is corrupted:

# ocrcheck

Check whether ocrcheck is able to complete successfully.

OCR CHECK Ex
# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4404
         Available space (kbytes) :     257716
         ID                       : 1306201859
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded

         Logical corruption check succeeded
     

Note: 1) Check whether the cluster registry integrity check is successful.
          2) When you run ocrcheck as the oracle user, the logical corruption check will be bypassed; you can see this at the end of the "ocrcheck" output.
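If the OCR content itself must be restored (the disk group exists but the registry is lost), the restore comes from one of the automatic backups maintained by the clusterware; a sketch as root (the backup path shown is an assumption):

```shell
# List the automatic OCR backups kept by the cluster
ocrconfig -showbackup

# Restore the OCR from a chosen backup (hypothetical path)
ocrconfig -restore /u01/app/11.2.0/grid/cdata/mycluster/backup00.ocr
```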

Step #4 Recreate the Voting file (root user)
The Voting file needs to be initialized in the CRS disk group
# crsctl replace votedisk +OCR_DISK
Note: 1) The above command re-creates/moves the voting disk in the specified ASM disk group; if you then query the voting disks, the location shown will be in that disk group.
2) Taking a manual backup of the voting file with dd is no longer supported; instead, the voting file is backed up automatically into the OCR.

Query Voting Disk location

# $GRID_HOME/bin/crsctl query css votedisk

Note: You cannot create more than one voting disk in the same or in a different disk group when using external redundancy in 11.2. The rules are as follows:
External = 1 voting disk
Normal = 3 voting disks
High = 5 voting disks

Step #5 Stop & start the cluster

Shutdown CRS -> since CRS is running in exclusive mode, it needs to be shut down (root user).

# crsctl stop crs -f

Start CRS -> start CRS on one node; if everything is OK, then start CRS on the other nodes (root user).

# crsctl start crs

CRS Status -> once it is started, you can check the status of CRS (root / oracle user)

# crsctl stat res -t -init     -> if you are checking one node
# crsctl check cluster -all    -> if you are checking the entire cluster

Steps to recreate the Oracle Inventory

1)  No downtime is required for recreating the global inventory (oraInventory).
2)  If you have a corrupted or inconsistent Oracle inventory, you can rename the directory to avoid confusion:

      mv oraInventory oraInventory_orig

Central Inventory
-----------------
Central Inventory contains the information relating to all Oracle products
installed on a host. The central inventory (oraInventory) is an inventory that
lists the ORACLE_HOMEs installed on the system using the inventory.xml file.
Each central inventory consists of a file called inventory.xml, which
contains the list of Oracle homes installed.

Local Inventory
---------------
Oracle home inventory or local inventory is present inside each Oracle home.
It contains information relevant to the particular Oracle home only.
This inventory contains, among other things, a file called comps.xml,
which contains all the components  as well as patchsets or interim patches
installed in the ORACLE_HOME.

To determine where oraInventory is located

/var/opt/oracle/oraInst.loc or /etc/oraInst.loc, depending upon the platform.

Sample oraInst.loc file

/var/opt/oracle/oraInst.loc
inst_group=dba
inventory_loc=/apps/oracle/product/oraInventory

ORACLE_HOME/oraInst.loc
inst_group=dba
inventory_loc=/apps/oracle/product/oraInventory




To find the ORACLE_HOME & ORACLE_HOME_NAME

if you have oracle old inventory, then you can view from
/apps/oracle/product/oraInventory/ContentsXML/inventory.xml
you can see ORACLE_HOME & ORACLE_HOME_NAME

SAMPLE OUTPUT

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 2009 Oracle Corporation. All rights Reserved -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>10.2.0.5.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="orahome_102" LOC="/apps/oracle/product/10.2.0.2" TYPE="O" IDX="1"/>
<HOME NAME="agent10g" LOC="/apps/oracle/product/agent10g" TYPE="O" IDX="2"/>
</HOME_LIST>
</INVENTORY>


Go to Oracle Universal installer location for creating Oracle Inventory
$ORACLE_HOME/oui/bin

If you have more than one Oracle product, you have to update the inventory for each Oracle home.
Ex: Oracle DB home (including different versions)
    Oracle Agent

Note: when you are running this for different homes, you have to run the OUI from each respective home only.
   
Ex: output for ORACLE_HOME(run it in $ORACLE_HOME/oui/bin)

SAMPLE OUTPUT

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME="/apps/oracle/product/10.2.0.2" ORACLE_HOME_NAME="orahome_102"
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

>>> Ignoring required pre-requisite failures. Continuing...

The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /apps/oracle/product/oraInventory
'AttachHome' was successful.

Ex: output for AGENT_HOME(run it in $AGENT_HOME/oui/bin)




SAMPLE OUTPUT

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME="/apps/oracle/product/agent10g" ORACLE_HOME_NAME="agent10g"
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

>>> Ignoring required pre-requisite failures. Continuing...

The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /apps/oracle/product/oraInventory
'AttachHome' was successful.



We can apply a patch with an explicit Oracle inventory location using the option below.

-- Apply the Patch with the address of the Oracle Inventory

$ opatch apply -invPtrLoc /app/oracle/oraInst.loc

$ opatch lsinventory -invPtrLoc /app/oracle/oraInst.loc

STEPS TO RENAME THE UNNAMED FILE IN THE STANDBY DATABASE

ALERT LOG:-
File #45 added to control file as 'UNNAMED045' because
the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
The file should be manually created to continue.
Errors with log /oracle/Doyen/archive/10G_Doyen_660003944_1_257734.arc
MRP0: Background Media Recovery terminated with error 1274
Sat Sep 25 07:22:47 2010
Errors in file /oracle/Doyen/home/admin/Doyen/bdump/Doyen_mrp0_24491.trc:
ORA-01274: cannot add datafile '/oracle/Doyen/user02.dbf' - file could not be created
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Sat Sep 25 07:22:50 2010
Errors in file /oracle/Doyen/home/admin/Doyen/bdump/Doyen_mrp0_24491.trc:
ORA-01274: cannot add datafile '/oracle/Doyen/user02.dbf' - file could not be created
Sat Sep 25 07:22:50 2010
MRP0: Background Media Recovery process shutdown (Doyen)


----File #45 added to control file as 'UNNAMED045' because

select FNNAM,FNONM from x$kccfn where FNFNO=45;
/oracle/Doyen/home/product/10.2.0.4/dbs/UNNAMED045
/oracle/Doyen/user02.dbf


alter database create datafile '/oracle/Doyen/home/product/10.2.0.4/dbs/UNNAMED045' as '/oracle/Doyen/user02.dbf';

alter database datafile 45 online;

If you set standby_file_management to AUTO in the init parameter, Oracle will use the
db_file_name_convert setting to place new files in the corresponding directories on the standby database.
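The parameter changes just described can be sketched as follows (the convert paths are assumptions; db_file_name_convert is not dynamic, so it needs SCOPE=SPFILE and a restart):

```sql
-- On the standby: let Oracle manage new datafiles automatically from now on
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH;

-- Hypothetical path mapping from primary to standby (takes effect after restart)
ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/oracle/primary/','/oracle/standby/' SCOPE=SPFILE;
```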

Article 1

              Warning: Missing charsets in String to FontSet conversion




PROBLEM:

[oracle@rac1 Desktop]$ xclock
Warning: Missing charsets in String to FontSet conversion
Warning: Unable to load any usable fontset

Before starting the 12c installation, I was pre-checking the status at the Unix (RHEL 6) level to make sure the installation would go smoothly, and found that xclock was not working:
# xclock
-bash: xclock: command not found

I installed it using: $ yum install xorg-x11-apps.x86_64
After the installation I was getting the charset warning above, and I did not find a solution on any website.


SOLUTION:
After going through several Unix references, I found the solution:

Add LC_ALL=en_US; export LC_ALL=en_US in .bashrc or in .bash_profile, then source it:
[oracle@rac1 ~]$ . .bashrc


Issue fixed.

Article 0




                              Query to find which concurrent processes are holding locks in RAC


This query is used to find concurrent processes that are holding locks while bouncing them in a RAC environment. If it returns any rows, log on to the respective nodes and kill those sessions using ALTER SYSTEM KILL SESSION.


Query
---------

SELECT gv$access.sid, gv$session.serial#, gv$session.inst_id,
       gv$session.status, gv$session.process
  FROM gv$session, gv$access
 WHERE gv$access.sid = gv$session.sid
   AND gv$access.object = 'FND_CP_FNDSM'
 GROUP BY gv$access.sid, gv$session.serial#, gv$session.inst_id,
          gv$session.status, gv$session.process;
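Each offending session can then be killed on its owning instance; a sketch (the sid, serial#, and inst_id values are assumptions taken from the query output):

```sql
-- RAC-aware kill: '@2' addresses instance 2 directly (11g and later)
ALTER SYSTEM KILL SESSION '123,45,@2' IMMEDIATE;
```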

Article 0

                                          Query to check the block corruption 


Query
--------

This query is used to find the block corruption present in the database.


SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
     , greatest(e.block_id, c.block#) corr_start_block#
     , least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
     , least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
       - greatest(e.block_id, c.block#) + 1 blocks_corrupted
     , null description
  FROM dba_extents e, v$database_block_corruption c
 WHERE e.file_id = c.file#
   AND e.block_id <= c.block# + c.blocks - 1
   AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#
     , header_block corr_start_block#
     , header_block corr_end_block#
     , 1 blocks_corrupted
     , 'Segment Header' description
  FROM dba_segments s, v$database_block_corruption c
 WHERE s.header_file = c.file#
   AND s.header_block between c.block# and c.block# + c.blocks - 1
UNION
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
     , greatest(f.block_id, c.block#) corr_start_block#
     , least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
     , least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
       - greatest(f.block_id, c.block#) + 1 blocks_corrupted
     , 'Free Block' description
  FROM dba_free_space f, v$database_block_corruption c
 WHERE f.file_id = c.file#
   AND f.block_id <= c.block# + c.blocks - 1
   AND f.block_id + f.blocks - 1 >= c.block#
order by file#, corr_start_block#;

Renew the standby control file



Renew the standby control file:
---------------------------------


This action shows how to renew the standby control file in a Data Guard environment
with OMF.
1. In the primary database, create a backup of the standby control file with the
following RMAN statements:
$rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Dec 19 22:18:05 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1319333016)
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT 'standbyctl.bkp';

You'll see that a file named standbyctl.bkp is generated under the $ORACLE_HOME/dbs
directory. This file will be used to restore the standby control file in the
standby database.
2. Copy this backup file from the primary database to the standby site by using the scp
or ftp protocols:
scp $ORACLE_HOME/dbs/standbyctl.bkp standbyhost:/tmp/standbyctl.bkp


3. Query the current online and standby logfile paths in the physical standby database:
SQL> SELECT * FROM V$LOGFILE WHERE TYPE = 'ONLINE';
GROUP# STATUS TYPE MEMBER IS_
------ ------ ------ ----------------------------------------- ---
3 ONLINE /u01/app/oracle2/datafile/ORCL/redo03.log NO
2 ONLINE /u01/app/oracle2/datafile/ORCL/redo02.log NO
1 ONLINE /u01/app/oracle2/datafile/ORCL/redo01.log NO
SQL> SELECT * FROM V$LOGFILE WHERE TYPE = 'STANDBY';
GROUP# STATUS TYPE MEMBER IS_
------ ------- ---- ------------------------------------------ ---
4 STANDBY /u01/app/oracle2/.../o1_mf_4_85frxrh5_.log YES
5 STANDBY /u01/app/oracle2/.../o1_mf_5_85fry0fc_.log YES
6 STANDBY /u01/app/oracle2/.../o1_mf_6_85fry7tn_.log YES
7 STANDBY /u01/app/oracle2/.../o1_mf_7_85fryh0n_.log YES
4. Shut down the standby database and delete all the online and standby logfiles:
$ sqlplus / as sysdba
SQL> SHUTDOWN IMMEDIATE
$ rm /u01/app/oracle2/datafile/ORCL/redo0*.log
$ rm /u01/app/oracle2/fra/INDIA_PS/onlinelog/o1_mf_*.log
Depending on whether you use the filesystem or the ASM to store the database
files, you must run the rm command on the shell or on asmcmd respectively.
5. Start up the physical standby database in the NOMOUNT mode:
$ sqlplus / as sysdba
SQL> STARTUP NOMOUNT
6. On the standby server, connect to RMAN and restore the standby control file from
the backup file:
$rman target /
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/standbyctl.bkp';

7. Mount the standby database as follows:
RMAN> ALTER DATABASE MOUNT;
database mounted
released channel: ORA_DISK_1
8. If OMF is not being used, and the datafile paths and names are the same for both
the primary and standby databases, skip this step and continue with the next step.
At this stage, in an OMF-configured Data Guard environment, the physical standby
database is mounted, but the control file doesn't show the correct datafile names
because it still contains the primary database's datafile names. We need to change
the datafile names in the standby control file. Use the RMAN CATALOG and SWITCH
commands for this purpose:
RMAN> CATALOG START WITH '/oradata/datafile/';
For ASM, use the following commands:
RMAN> CATALOG START WITH '+DATA1/MUM/DATAFILE/';
RMAN> SWITCH DATABASE TO COPY;
9. If the flashback database is ON, turn it off and on again in the standby database:
SQL> ALTER DATABASE FLASHBACK OFF;
Database altered.
SQL> ALTER DATABASE FLASHBACK ON;
Database altered.
10. If standby redo logs exist in the primary database, we only need to execute the clear
logfile statement in the standby database so that they will be created automatically
(the log_file_name_convert parameter must already be set properly):
SQL> SELECT GROUP# FROM V$STANDBY_LOG;
GROUP#
----------
4
5
6
7
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 4;
Database altered.
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 5;
Database altered.
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 6;
Database altered.
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 7;
Database altered.
If standby redo logs don't exist in the primary database, the following query will not
return any rows. In this case, we need to create the standby redo logs manually:
SQL> SELECT GROUP# FROM V$STANDBY_LOG;
no rows selected
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 50M;
Database altered.
11. Start a media-recovery process in the physical standby database. The online logfiles
will be cleared automatically.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT
LOGFILE DISCONNECT FROM SESSION;
Database altered.



Fixing NOLOGGING changes in the standby database with incremental database backups




Fixing NOLOGGING changes in the standby database with incremental database backups:
-----------------------------------------------------------------------------------

1. Determine the SCN that we'll use in the RMAN incremental database backup by
querying the minimum FIRST_NONLOGGED_SCN column of the V$DATAFILE view
in the standby database:
SQL> SELECT MIN(FIRST_NONLOGGED_SCN) FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;
MIN(FIRST_NONLOGGED_SCN)
------------------------
20606544
2. Stop Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
3. Now we'll take an incremental backup of the database using the FROM SCN
keyword. The SCN value will be the output of the query in the
first step. Connect to the primary database as the RMAN target and execute the
following RMAN BACKUP statement:
RMAN> BACKUP INCREMENTAL FROM SCN 20606544 DATABASE FORMAT '/data/DB_Inc_%U' TAG 'FOR STANDBY';
4. Copy the backup files from the primary site to the standby site with FTP or SCP:
scp /data/DB_Inc_* standbyhost:/data/
5. Connect to the physical standby database as the RMAN target and catalog the
copied backup files to the control file with the RMAN CATALOG command:
RMAN> CATALOG START WITH '/data/DB_Inc_';
6. Recover the standby database by connecting it as the RMAN target. RMAN will use
the incremental backup automatically because those files were registered to the
control file previously:
RMAN> RECOVER DATABASE NOREDO;
7. Run the query in the first step again to ensure that there are no more datafiles with
NOLOGGING changes:
SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE
FIRST_NONLOGGED_SCN > 0;

8. Start Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT
LOGFILE DISCONNECT;

note:

If the state of a tablespace that includes the affected datafiles is READ
ONLY, those files will not be backed up with the RMAN BACKUP
command. We need to put these tablespaces in the read-write mode
before the backup operation. Change the state of a tablespace with the
following statements:
SQL> ALTER TABLESPACE <TABLESPACE_NAME> READ WRITE;
SQL> ALTER TABLESPACE <TABLESPACE_NAME> READ ONLY;

9. Put the primary database in the FORCE LOGGING mode:
SQL> ALTER DATABASE FORCE LOGGING;

Article 5

                                                            Output Post Processing 

OPP issue:
After a restart of the application or the server, the OPP process shows Actual = 0 and Target = 3.

Cause: stale OPP processes are still alive on the system.


Solution:
Click on the concurrent manager, then go to the Processes button; you will see the active processes.
Kill those processes at the OS level on the corresponding node.

E.g., if node 1:
ps -ef | grep <process-id>
kill -9 <process-id>      (e.g. kill -9 23242)



Then verify the Internal Concurrent Manager; you will see the OPP processes start again.




Resolving UNNAMED datafile errors





Resolving UNNAMED datafile errors:
-----------------------------------


ORA-01111: name for data file 10 is unknown - rename to correct file
ORA-01110: data file 10: '/u01/app/oracle2/product/11.2.0/dbhome_1/dbs/UNNAMED00010'
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file


Now we'll see how to resolve an UNNAMED datafile issue in a Data Guard configuration:
1. Check for the datafile number that needs to be recovered from the standby
database:
SQL> SELECT * FROM V$RECOVER_FILE WHERE ERROR LIKE '%MISSING%';
     FILE# ONLINE  ONLINE_ ERROR             CHANGE#    TIME
---------- ------- ------- ----------------- ---------- ----------
        10 ONLINE  ONLINE  FILE MISSING               0
2. Identify datafile 10 in the primary database:
SQL> SELECT FILE#,NAME FROM V$DATAFILE WHERE FILE#=10;
FILE# NAME
---------- -----------------------------------------------
        10 /u01/app/oracle2/datafile/ORCL/users03.dbf

3. Identify the dummy filename created in the standby database:
SQL> SELECT FILE#,NAME FROM V$DATAFILE WHERE FILE#=10;
FILE# NAME
---------- -------------------------------------------------------
        10 /u01/app/oracle2/product/11.2.0/dbhome_1/dbs/UNNAMED00010
4. If the reason for the creation of the UNNAMED file is disk capacity or a nonexistent
path, fix the issue by creating the datafile in its original place.
5. Set STANDBY_FILE_MANAGEMENT to MANUAL:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
System altered.
6. Create the datafile in its original place with the ALTER DATABASE CREATE
DATAFILE statement:
SQL> ALTER DATABASE CREATE DATAFILE '/u01/app/oracle2/product/11.2.0/dbhome_1/dbs/UNNAMED00010' AS '/u01/app/oracle2/datafile/ORCL/users03.dbf';
Database altered.
If OMF is being used, we won't be allowed to create the datafile with the preceding
statement. We'll come across the following error:
SQL> ALTER DATABASE CREATE DATAFILE '/u01/app/oracle2/product/11.2.0/dbhome_1/dbs/UNNAMED00010' AS '/u01/app/oracle2/datafile/ORCL/users03.dbf';
*
ERROR at line 1:
ORA-01276: Cannot add file
/u01/app/oracle2/datafile/ORCL/users03.dbf. File has an Oracle
Managed Files file name.
In order to avoid the error, run the following command:
SQL> ALTER DATABASE CREATE DATAFILE '/u01/app/oracle2/product/11.2.0/dbhome_1/dbs/UNNAMED00010' AS NEW;
Database altered.


7. Set STANDBY_FILE_MANAGEMENT to AUTO and start Redo Apply:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH;
System altered.


SQL> SHOW PARAMETER STANDBY_FILE_MANAGEMENT
NAME                                TYPE        VALUE
----------------------------------- ----------- ------------------
standby_file_management             string      AUTO
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT
LOGFILE DISCONNECT FROM SESSION;
Database altered.

8. Check the standby database's processes, or the alert log file, to monitor Redo Apply:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY;

changing the redo transport user





changing the redo transport user:
----------------------------------

If we often need to change the SYS user's password in the primary database, it may be
troublesome to copy the password file to the standby site every time, especially when
there's more than one standby destination. In this case, the REDO_TRANSPORT_USER
parameter comes to our rescue. It's possible to change the default redo transport user from
SYS to another database user by setting this parameter.

Follow these steps to change the redo transport user in the Data Guard configuration:
1. Create a new database user, which will be used for redo transport in the primary
database. Grant the SYSOPER privilege to this user and ensure that the standby
database has applied these changes:
SQL> CREATE USER DGUSER IDENTIFIED BY SOMEPASSWORD;
SQL> GRANT SYSOPER to DGUSER;

Note: Don't forget that if the password expires periodically for this user,
this will pose a problem in Data Guard redo transport. So ensure
that the default profile does not include the PASSWORD_LIFE_TIME
and PASSWORD_GRACE_TIME settings. If it does, choose
another profile for this user.
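To verify this, the profile limits can be inspected and, if your security policy allows it, the expiry can be removed; a sketch:

```sql
-- Inspect password expiry limits per profile
SELECT profile, resource_name, limit
  FROM dba_profiles
 WHERE resource_name IN ('PASSWORD_LIFE_TIME', 'PASSWORD_GRACE_TIME');

-- One option (assumption: policy permits it): disable expiry on the DEFAULT profile
ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME UNLIMITED PASSWORD_GRACE_TIME UNLIMITED;
```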

2. Stop the redo transport from the primary database to the standby databases. We
can execute the DEFER command to defer the log destination with the ALTER
SYSTEM statement:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'DEFER';
3. Change the redo transport user by setting the REDO_TRANSPORT_USER parameter
in the primary and standby databases:
SQL> ALTER SYSTEM SET REDO_TRANSPORT_USER = DGUSER;
4. Copy the primary database's password file to the standby site:
$ cd $ORACLE_HOME/dbs
$ scp orapwTURKEY standbyhost:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwINDIAPS
5. Start redo transport from the primary database to the standby databases:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE';
6. Check whether the redo transport service is running normally by switching redo logs
in the primary database:
SQL> ALTER SYSTEM SWITCH LOGFILE;
Check the standby database processes or the alert log file to see redo transport
service status:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY ;



Oracle Packages Used for Performance Tuning





Oracle Packages Used for Performance Tuning:
--------------------------------------------

DBMS_ADDM
This package provides procedures to manage Oracle Automatic Database Diagnostic Monitor
Procedures
The most relevant procedures are:
ANALYZE_DB: creates an ADDM task to analyze the database and execute it
ANALYZE_INST: creates an ADDM task for analyzing in instance analysis mode
and executes it
GET_REPORT: retrieves the default text report of an executed ADDM task
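As a sketch, an ADDM analysis between two AWR snapshots might look like this (the snapshot IDs 100 and 110 and the task name are assumptions):

```sql
DECLARE
  l_task VARCHAR2(30) := 'addm_check';
BEGIN
  -- Analyze the whole database between snapshots 100 and 110
  DBMS_ADDM.ANALYZE_DB(l_task, 100, 110);
END;
/
-- Retrieve the text report for the executed task
SELECT DBMS_ADDM.GET_REPORT('addm_check') FROM dual;
```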

DBMS_ADVISOR
This package helps in managing the Advisors, a set of expert systems that identify and
help resolve performance problems related to various database server components.
Procedures
The most relevant procedures are:
SET_DEFAULT_TASK_PARAMETER: sets the default values for task parameters
QUICK_TUNE: performs an analysis on a single SQL statement
EXECUTE_TASK: executes the specified task

DBMS_JOB
Schedules and manages jobs in the database job queue.
Oracle recommends using the DBMS_SCHEDULER package.
Procedures
The most relevant procedures are:
SUBMIT: submits a new job to the job queue
RUN: forces a specified job to run
NEXT_DATE: alters the next execution time for a specified job
BROKEN: marks a job as broken or not broken, disabling or re-enabling its execution
REMOVE: removes the specified job from the job queue


DBMS_LOB
This package provides procedures to work with BLOBs, CLOBs, NCLOBs, BFILEs, and
temporary LOBs.
Procedures
The most relevant procedures are:
GET_LENGTH: gets the length of the LOB value
FILEOPEN: opens a file
LOADFROMFILE: loads LOB data from a file
APPEND: appends the contents of a source LOB to a destination LOB
OPEN: opens an LOB
READ: reads data from the LOB starting at the specified offset
WRITE: writes data to the LOB from a specified offset
CLOSE: closes a previously opened LOB


DBMS_MVIEW
This package helps with the management of materialized views: it refreshes them and helps
in understanding the capabilities of materialized views and potential materialized views.
Procedures
The most relevant procedures are:
EXPLAIN_MVIEW: explains what is possible with a materialized view or potential
materialized view
EXPLAIN_REWRITE: explains why a query failed to rewrite or why the optimizer
chose to rewrite a query with a particular materialized view(s)
REFRESH: refreshes one or more materialized views
REFRESH_ALL_MVIEWS: refreshes all the materialized views
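A minimal refresh sketch (the materialized view name is an assumption):

```sql
-- 'F' requests a fast (incremental) refresh; 'C' would force a complete refresh
EXEC DBMS_MVIEW.REFRESH('SCOTT.MV_SALES', method => 'F');
```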


DBMS_OUTLN
This package contains the functional interface to manage stored outlines.
To use this package the EXECUTE_CATALOG_ROLE role is needed. There is also a public
synonym OUTLN_PKG.

Procedures
The most relevant procedures are:
CLEAR_USED: clears the outline "used" flag
DROP_BY_CAT: drops outlines which belong to a specific category
UPDATE_BY_CAT: updates the category of outlines to a new category
DROP_UNUSED: drops outlines never applied in the compilation of a SQL statement

DBMS_OUTLN_EDIT
This package contains the functional interface to manage stored outlines.
The public role has execute privileges on DBMS_OUTLN_EDIT, which is defined with
invoker's rights.
Procedures
The most relevant procedures are:
CREATE_EDIT_TABLES: creates outline editing tables in the calling user's schema;
beginning with Oracle 10g, you no longer need to use this procedure because the
outline editing tables are part, as temporary tables, of the SYSTEM schema
REFRESH_PRIVATE_OUTLINE: refreshes the in-memory copy of the outline,
synchronizing its data with the edits made to the outline hints
DROP_EDIT_TABLES: drops the outline editing tables from the calling user's schema

DBMS_SHARED_POOL
This package provides access to information about the sizes of the objects stored in the shared
pool and lets you mark them to be kept or not kept.
Procedures
The most relevant procedures are:
KEEP: keeps an object in the shared pool, so it isn't subject to aging
UNKEEP: unkeeps an object from the shared pool
PURGE: purges the object
SIZES: shows objects in the shared pool larger than the specified size
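A short sketch of pinning an object and listing large shared pool objects (the package name is an assumption):

```sql
-- Pin a package ('P' = package/procedure/function) so it is not aged out
EXEC DBMS_SHARED_POOL.KEEP('SCOTT.MY_PKG', 'P');

-- Show shared pool objects larger than 100 KB (prints via DBMS_OUTPUT)
SET SERVEROUTPUT ON
EXEC DBMS_SHARED_POOL.SIZES(100);
```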


DBMS_SPACE
This package enables the analysis of segment growth and space requirements.
Procedures
The most relevant procedures are:
CREATE_TABLE_COST: estimates the space required to create a table
CREATE_INDEX_COST: estimates the space required to create an index
FREE_BLOCKS: returns information about free blocks in an object
SPACE_USAGE: returns information about free blocks in a segment
managed by automatic space management



DBMS_SPM
This package provides an interface to manipulate plan history and SQL plan baselines.
Procedures
The most relevant procedures are:
LOAD_PLANS_FROM_CURSOR_CACHE: loads one or more plans from
the cursor cache for a SQL statement
LOAD_PLANS_FROM_SQLSET: loads plans stored in a SQL tuning set into
SQL plan baselines
EVOLVE_SQL_PLAN_BASELINE: evolves SQL plan baselines associated with
one or more SQL statements, marking plans as accepted when they are found to
perform better than the existing SQL plan baseline and the user requests that action
DROP_SQL_PLAN_BASELINE: drops a single plan or all the plans associated
with a SQL statement
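A minimal sketch of loading cached plans into a baseline. LOAD_PLANS_FROM_CURSOR_CACHE is a function, so it is called from PL/SQL; the &sql_id substitution variable is a placeholder you supply at runtime:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  -- '&sql_id' identifies the cached statement whose plans are loaded
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
  DBMS_OUTPUT.PUT_LINE(l_loaded || ' plan(s) loaded into the SQL plan baseline');
END;
/
```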


DBMS_SQL
This package provides an interface to use dynamic SQL to parse both DML and DDL
statements using PL/SQL.

Procedures
The most relevant procedures are:
EXECUTE: executes a cursor
OPEN_CURSOR: returns the cursor ID number of the new cursor
PARSE: parses the given statement
BIND_VARIABLE: binds a given value to a given variable
CLOSE_CURSOR: closes a given cursor and frees associated memory
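The procedures above combine into the classic open/parse/bind/execute/close flow; the EMP table and the UPDATE statement below are illustrative assumptions only:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_cur  INTEGER;
  l_rows INTEGER;
BEGIN
  l_cur := DBMS_SQL.OPEN_CURSOR;
  -- emp is an assumed demo table
  DBMS_SQL.PARSE(l_cur,
                 'UPDATE emp SET sal = sal * 1.1 WHERE deptno = :d',
                 DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(l_cur, ':d', 10);
  l_rows := DBMS_SQL.EXECUTE(l_cur);
  DBMS_SQL.CLOSE_CURSOR(l_cur);
  DBMS_OUTPUT.PUT_LINE(l_rows || ' rows updated');
END;
/
```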


DBMS_SQLTUNE
This package provides an interface to tune SQL statements.
Procedures
The most relevant procedures related to the SQL tuning set are:
CREATE_SQLSET: creates a SQL tuning set object in the database
DROP_SQLSET: drops a SQL tuning set if not active
SELECT_SQLSET: collects SQL statements from an existing SQL tuning set
LOAD_SQLSET: populates the SQL tuning set with a set of selected SQL statements
SELECT_CURSOR_CACHE: collects SQL statements from the cursor cache
The most relevant procedures to manage SQL tuning tasks are:
CREATE_TUNING_TASK: creates a tuning task for a single statement or a SQL tuning set
EXECUTE_TUNING_TASK: executes a previously created tuning task
REPORT_TUNING_TASK: displays the results of a tuning task
INTERRUPT_TUNING_TASK: interrupts the currently executing tuning task
RESUME_TUNING_TASK: resumes a previously interrupted tuning task
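A sketch of a complete tuning-task round trip for one cached statement; the &sql_id substitution value and the task name tune_demo are placeholders:

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id    => '&sql_id',
                                            task_name => 'tune_demo');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Display the recommendations
SET LONG 1000000 LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_demo') FROM dual;
```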



DBMS_STATS
This package allows you to view and modify optimizer statistics.

Procedures
The most relevant procedures are:
GATHER_SCHEMA_STATS: gathers optimizer statistics for a schema
GATHER_DATABASE_STATS: gathers optimizer statistics for a database
GATHER_TABLE_STATS: gathers table statistics
GATHER_INDEX_STATS: gathers index statistics
CREATE_STAT_TABLE: creates the user statistics table
DROP_STAT_TABLE: drops the user statistics table
EXPORT_SCHEMA_STATS: exports schema statistics to a user statistics table
IMPORT_SCHEMA_STATS: imports schema statistics from a user statistics table
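Typical single-table and schema-level calls look like this (SCOTT and EMP are placeholder names):

```sql
-- Gather table statistics, cascading to the table's indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);

-- Gather statistics for the whole schema with an automatic sample size
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
```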


DBMS_UTILITY
This package provides various utility subprograms.
Procedures
The most relevant procedures are:
ANALYZE_SCHEMA: analyzes all the tables, indexes, and clusters in a schema
ANALYZE_DATABASE: analyzes all the tables, indexes, and clusters in a database
GET_TIME: returns the current time in hundredths of a second
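GET_TIME is handy for quick timings: take two readings and divide the difference by 100 to get seconds, as in this sketch:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_start PLS_INTEGER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;
  -- ... the work being timed goes here ...
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' ||
    (DBMS_UTILITY.GET_TIME - l_start) / 100 || ' seconds');
END;
/
```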

DBMS_WORKLOAD_REPOSITORY
This package allows management of the Automatic Workload Repository (AWR).
Procedures
The most relevant procedures are:
CREATE_SNAPSHOT: creates a manual snapshot
MODIFY_SNAPSHOT_SETTINGS: modifies the snapshot settings
CREATE_BASELINE: creates a single baseline
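For example (the interval, retention, and snapshot ID values below are illustrative; both interval and retention are expressed in minutes):

```sql
-- Take an on-demand AWR snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- 30-minute snapshot interval, 15 days (21600 minutes) of retention
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 21600);

-- Preserve a snapshot range as a baseline (snapshot IDs 100-105 are placeholders)
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(start_snap_id => 100, end_snap_id => 105, baseline_name => 'peak_load');
```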








Lost SYSMAN password - 13c


If the current SYSMAN password is unknown, then do the following:

1.    Stop OMS instances:

    cd <OMS_HOME>/bin

    emctl stop oms

2.    Modify the SYSMAN password:

    cd <OMS_HOME>/bin

    emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys user password> -new_pwd <new sysman password>

    The '-use_sys_pwd' option connects to the Repository database as the SYS user to modify the SYSMAN password.

    The current SYSMAN password is not prompted for; only the new password needs to be entered, so the old password is simply reset to the new one.

    The password is modified in the Repository database, along with the monitoring credentials for the 'OMS and Repository' target.

    Along with the SYSMAN password, this command modifies the passwords of the EM users (SYSMAN_MDS, BIP, SYSMAN_OPSS, SYSMAN_APM, SYSMAN_RO) created in the Repository database.
3.    Stop the Admin server and re-start all the OMS:

    cd <OMS_HOME>/bin

    emctl stop oms -all

    emctl start oms


High Level Steps to integrate Oracle EBS R12 with OAM for Single Sign-On


Follow the high-level steps below to integrate Oracle E-Business Suite with Oracle Access Manager:
1. Install Database for IAM (OID/OAM)
2. Install Oracle Internet Directory (OID)
3. Install Oracle Access Manager (OAM)
4. Integrate OAM with OID
5. Integrate EBS with OID
6. Install Oracle HTTP Server (OHS)
7. Install WebGate
8. Integrate EBS with OAM
9. Test OAM-EBS Integration


Missing command fuser


Issue:

opatchauto apply is failing with below errors:

    Prerequisite check "CheckSystemCommandAvailable" failed.
    The details are: Missing command :fuser

Solution:

Install the package below as the root user:

    yum install psmisc

The package contains the following programs:

    fuser - identifies which processes are using files
    killall - kills processes by name, similar to pkill on some other Unix systems
    pstree - shows currently running processes in a tree format
    peekfd - peeks at the file descriptors of running processes

Report to identify the I/O at the object level

To view Oracle wait events at the object level, use the query below.

Query:
=====
col block_type for a18
col obj for a20
col otype for a15
col event for a15
col blockn for 999999
col f_minutes new_value v_minutes
col p1 for 9999
col tablespace_name for a15

select &minutes f_minutes from dual;

select io.cnt cnt,
       io.aas aas,
       io.event event,
       substr(io.obj,1,20) obj,
       io.p1 p1,
       f.tablespace_name tablespace_name
from
(
  select
        count(*) cnt,
        round(count(*)/(&v_minutes*60),2) aas,
        substr(event,0,15) event,
        nvl(o.object_name,decode(CURRENT_OBJ#,-1,0,CURRENT_OBJ#)) obj,
        ash.p1,
        o.object_type otype
   from v$active_session_history ash,
        all_objects o
   where ( event like 'db file s%' or event like 'direct%' )
      and o.object_id (+)= ash.CURRENT_OBJ#
      and sample_time > sysdate - &v_minutes/(60*24)
   group by
       substr(event,0,15) ,
       CURRENT_OBJ#, o.object_name ,
       o.object_type ,
       ash.p1
) io,
  dba_data_files f
where
   f.file_id = io.p1
Order by io.cnt
/


Output


       CNT        AAS EVENT           OBJ                     P1 TABLESPACE_NAME
---------- ---------- --------------- -------------------- ----- ---------------
         1          0 db file sequent 0                        1 SYSTEM

SQL>

Steps to Modify Scan listener port number

For security reasons, some environments require the listener to run on a non-default port. During Grid Infrastructure installation the administrator is not given a choice of port, so the listener port must be changed after the installation has completed successfully. This is typically done during the implementation phase, before go-live.

Below are steps to modify the scan listener port from default.

Default Port : 1521
New Port : 1621

Step 1: Check the listener and SCAN listener configuration (run as the Grid user).
[oracle@node1 ]$ srvctl  status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node2, node1

[oracle@node1 ]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: oracle
Home: <CRS home>
End points: TCP:1521

[oracle@node1 ]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1

[oracle@node1 ]$  srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Check the listener parameters in the database on all nodes:

SQL> show parameter listener

Step 2: Modifying port number using srvctl as grid user.

Change port number of the traditional  listener:
#srvctl modify listener -l LISTENER -p 1621

Change port number of the SCAN listener:
#srvctl modify scan_listener -p TCP:1621

Note : Changes are not effective until the listeners are restarted.

Step 3: Reload Both Listeners:

# Traditional listener
srvctl stop listener
srvctl start listener

# Scan listener
srvctl stop scan_listener
srvctl start scan_listener

Verify the listeners have picked up the new port.
[oracle@node1 admin]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: oracle
Home: <CRS home>
End points: TCP:1621

[oracle@node1 admin]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1621
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1621
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1621

Steps 4: Modify remote_listener parameter:
sql> alter system set remote_listener ='scan:1621' scope=both;

Step 5: Modify TNSNAMES.ORA files used for connectivity to reflect the new port.

Modify tnsnames.ora file in Oracle database home

ORCL =
 (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan)(PORT = 1621))
    (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = ORCL)
    )
 )

Restart database:
#srvctl stop database -d ORCL
#srvctl start database -d ORCL

Verify connection to the database using sqlplus
[oracle@node1 ~]$ sqlplus system/########@ORCL



Migrate Database from Non-ASM to ASM

Assumptions:
Existing database: test_db (file system storage, non-ASM)
Target database: test_db (ASM storage)
SQL> select name from v$datafile;
NAME
------------------------------------------------------------
/data/mount01/test_db/system_01.dbf
/data/mount01/test_db/sysaux_01.dbf
/data/mount01/test_db/undo_t01_01.dbf
/data/mount01/test_db/tools_t01_01.dbf
/data/mount01/test_db/users_t01_01.dbf
/data/mount01/test_db/xdb_01.dbf
/data/mount01/test_db/test_c.dbf
7 rows selected.
SQL> select member from v$logfile;
MEMBER
----------------------------------------------------------------------
/data/mount03/test_db/ora_log_03_01.rdo
/data/mount03/test_db/ora_log_03_02.rdo
/data/mount03/test_db/ora_log_02_01.rdo
/data/mount03/test_db/ora_log_02_02.rdo
/data/mount03/test_db/ora_log_01_01.rdo
/data/mount03/test_db/ora_log_01_02.rdo
6 rows selected.
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING
ERROR at line 1:
ORA-19759: block change tracking is not enabled
SQL> show parameter db_create_file_dest
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string
SQL>  show parameter spfile
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/12.1.0.2/test_db/dbs/spfiletest_db.ora
SQL> !ls -ltr /u01/app/oracle/product/12.1.0.2/test_db/dbs/spfiletest_db.ora
-rw-r----- 1 oracle dba 4608 Dec 27 00:00 /u01/app/oracle/product/12.1.0.2/test_db/dbs/spfiletest_db.ora
/*
  This parameter (db_create_file_dest) defines the default location for data files, control files, etc., if no location is specified when these files are created.
*/
SQL> alter system set db_create_file_dest='+DG1' scope=spfile;
System altered.
/*
If you set db_create_online_log_dest_n, the control file is created at the location it specifies; the database does not create a control file in DB_CREATE_FILE_DEST or in DB_RECOVERY_FILE_DEST.
We skipped this step because moving the redo logs into a disk group can be done later.
SQL> alter system set db_create_online_log_dest_1='XXX' scope=spfile;
System altered.
See "Specifying Control Files at Database Creation".
*/
SQL> SHOW PARAMETER control_files
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /data/mount03/test_db/control1.ctl
/*
   Here we remove the control_files parameter from the spfile. The next time the control file is restored it automatically goes to the +DG1 diskgroup (defined in db_create_file_dest), and the new path is automatically recorded in the spfile.
*/
SQL> alter system reset control_files scope=spfile sid='*';
System altered.
SQL> SHUT IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP NOMOUNT;
ORACLE instance started.
Total System Global Area  835104768 bytes
Fixed Size                  2257840 bytes
Variable Size             671091792 bytes
Database Buffers          159383552 bytes
Redo Buffers                2371584 bytes
SQL> SHOW PARAMETER control_files
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/app/oracle/product/12.1.0.2/test_db/dbs/cntrtest_db.dbf ----Dummy Controlfile
SQL> show parameter db_create_file_dest
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      +DG1
SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-00205: error in identifying control file, check alert log for more info
$ ./rman target /
RMAN> restore controlfile from '/data/mount03/test_db/control1.ctl';
Starting restore at 08-JAN-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=178 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+DG1/test_db/controlfile/current.301.900620801
Finished restore at 08-JAN-16
RMAN> alter database mount;
database mounted
RMAN>run
{
BACKUP AS COPY DATAFILE 1 FORMAT "+DG2";
BACKUP AS COPY DATAFILE 2 FORMAT "+DG2";
BACKUP AS COPY DATAFILE 3 FORMAT "+DG1";
BACKUP AS COPY DATAFILE 4 FORMAT "+DG1";
BACKUP AS COPY DATAFILE 5 FORMAT "+DG1";
BACKUP AS COPY DATAFILE 6 FORMAT "+DG1";
BACKUP AS COPY DATAFILE 7 FORMAT "+DG1";
}
RMAN> report schema;
Report of database schema for database with db_unique_name TEST_DB
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    500      SYSTEM               ***     /data/mount01/test_db/system_01.dbf
2    500      SYSAUX               ***     /data/mount01/test_db/sysaux_01.dbf
3    1000     UNDO_T01             ***     /data/mount01/test_db/undo_t01_01.dbf
4    100      TOOLS_T01            ***     /data/mount01/test_db/tools_t01_01.dbf
5    1024     USERS_T01            ***     /data/mount01/test_db/users_t01_01.dbf
6    200      XDB                  ***     /data/mount01/test_db/xdb_01.dbf
7    100      TEST_C               ***     /data/mount01/test_db/test_c.dbf
RMAN> SWITCH DATABASE TO COPY;
datafile 1 switched to datafile copy "+DG2/test_db/datafile/system.294.900618889"
datafile 2 switched to datafile copy "+DG2/test_db/datafile/sysaux.300.900618895"
datafile 3 switched to datafile copy "+DG1/test_db/datafile/undo_t01.297.900618897"
datafile 4 switched to datafile copy "+DG1/test_db/datafile/tools_t01.301.900618905"
datafile 5 switched to datafile copy "+DG1/test_db/datafile/users_t01.257.900618907"
datafile 6 switched to datafile copy "+DG1/test_db/datafile/xdb.267.900618913"
datafile 7 switched to datafile copy "+DG1/test_db/datafile/test_c.268.900618917"
RMAN> run
 { set newname for tempfile 1 to "+DG1";
   switch tempfile all;
 }
executing command: SET NEWNAME
renamed tempfile 1 to +DG1 in control file
RMAN> alter database open;
database opened
RMAN> report schema;
Report of database schema for database with db_unique_name TEST_DB
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    500      SYSTEM               ***     +DG2/test_db/datafile/system.297.900620831
2    500      SYSAUX               ***     +DG2/test_db/datafile/sysaux.298.900620837
3    1000     UNDO_T01             ***     +DG1/test_db/datafile/undo_t01.299.900620839
4    100      TOOLS_T01            ***     +DG1/test_db/datafile/tools_t01.296.900620847
5    1024     USERS_T01            ***     +DG1/test_db/datafile/users_t01.269.900620849
6    200      XDB                  ***     +DG1/test_db/datafile/xdb.268.900620855
7    100      TEST_C               ***     +DG1/test_db/datafile/test_c.267.900620857
List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    500      TEMP_T01             500         +DG1/test_db/tempfile/temp_t01.257.900620955
Update the redo log file location from non-asm to asm
SQL> SELECT a.group#, b.member, a.status FROM v$log a, v$logfile b WHERE a.group#=b.group#;
    GROUP# MEMBER                                              
---------- -------------------------------------------------------
         3 /data/mount01/test_db/ora_log_03_01.rdo         
         3 /data/mount01/test_db/ora_log_03_02.rdo         
         2 /data/mount01/test_db/ora_log_02_01.rdo         
         2 /data/mount01/test_db/ora_log_02_02.rdo         
         1 /data/mount01/test_db/ora_log_01_01.rdo         
         1 /data/mount01/test_db/ora_log_01_02.rdo         
6 rows selected.
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE group 3 ('+REDO1');
Database altered.
SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 3;
Database altered.
SQL>  ALTER DATABASE DROP LOGFILE GROUP 2;
Database altered.
SQL>  ALTER DATABASE ADD LOGFILE group 2 ('+REDO1');
Database altered.
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 2;
Database altered.
SQL> SELECT a.group#, b.member, a.status FROM v$log a, v$logfile b WHERE a.group#=b.group#;
    GROUP# MEMBER                                              
---------- ----------------------------------------------------------
         3 +REDO1/test_db/onlinelog/group_3.257.898874349         
         3 +REDO2/test_db/onlinelog/group_3.269.898874371        
         2 +REDO1/test_db/onlinelog/group_2.268.898874411        
         2 +REDO2/test_db/onlinelog/group_2.267.898874417        
         1 /data/mount01/test_db/ora_log_01_01.rdo         
         1 /data/mount01/test_db/ora_log_01_02.rdo         
6 rows selected.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> alter system checkpoint;
System altered.
SQL>  ALTER DATABASE DROP LOGFILE GROUP 1;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE group 1 ('+REDO1');
Database altered.
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+REDO2' TO GROUP 1;
Database altered.
SQL> SELECT a.group#, b.member, a.status FROM v$log a, v$logfile b WHERE a.group#=b.group#;
    GROUP# MEMBER                                       
---------- ---------------------------------------------
         3 +REDO1/test_db/onlinelog/group_3.257.898874349
         3 +REDO2/test_db/onlinelog/group_3.269.898874371
         2 +REDO1/test_db/onlinelog/group_2.268.898874411
         2 +REDO2/test_db/onlinelog/group_2.267.898874417
         1 +REDO1/test_db/onlinelog/group_1.266.898874499
         1 +REDO2/test_db/onlinelog/group_1.265.898874509
Multiplex Controlfile
SQL> select name from v$controlfile;
NAME
--------------------------------------------------
+DG1/test_db/controlfile/current.301.900620801
SQL> alter system set control_files='+DG1/test_db/controlfile/current.301.900620801','+REDO1','+DG1' scope=spfile sid='*';
System altered.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>  startup nomount
ORACLE instance started.
Total System Global Area  835104768 bytes
Fixed Size                  2257840 bytes
Variable Size             671091792 bytes
Database Buffers          159383552 bytes
Redo Buffers                2371584 bytes
$ ./rman target /
RMAN> restore controlfile from '+DG1/test_db/controlfile/current.301.900620801';
Starting restore at 08-JAN-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=416 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+DG1/test_db/controlfile/current.301.900620801
output file name=+REDO1/test_db/controlfile/current.272.900623351
output file name=+DG1/test_db/controlfile/current.304.900623351
Finished restore at 08-JAN-16
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> alter database open;
database opened
SQL> select name from v$controlfile;
NAME
----------------------------------------
+DG1/test_db/controlfile/current.301.900620801
+REDO1/test_db/controlfile/current.272.900623351
+DG1/test_db/controlfile/current.304.900623351
Enable Block change tracking
SQL> select status from V$BLOCK_CHANGE_TRACKING;
 STATUS
----------
DISABLED
 SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
 FILENAME
-----------------------------------------------------------------------------  
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
 Database altered.
SQL>  select status from V$BLOCK_CHANGE_TRACKING;
 STATUS
----------
ENABLED
SQL>  SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
 FILENAME
-----------------------------------------------------------------------------
+DG1/test_db/changetracking/ctf.563.900723605
Move spfile in diskgroup
SQL> create pfile='/tmp/inittest_db.ora' from spfile;
SQL> create spfile='+DG1' from pfile='/tmp/inittest_db.ora';
------------------------------------------------End of Document-----------------------------------
Note :- When I used the script below:
RMAN> run {
BACKUP AS COPY DATAFILE 7 FORMAT "+REDO";
BACKUP AS COPY DATABASE FORMAT "+DG1";
}
RMAN> SWITCH DATABASE TO COPY;
all the datafiles, including datafile 7, ended up in the "+DG1" diskgroup only. So map each datafile explicitly to its intended diskgroup, as was done above.