Channel: Doyensys Allappsdba Blog

Find DBID in NOMOUNT Stage

  The Oracle database identifier, or DBID, is an internal, unique identifier for an Oracle database.
  A database administrator should note down the DBID in a safe place, so that if anything
  happens to the database it can be easily identified and recovered.
  When you need to restore the spfile or control file from an autobackup,
  such as in a disaster-recovery scenario, you must set the DBID first. So let's see how to get
  the DBID while the database is in NOMOUNT state.
 Why the DBID is important
=> It is a unique identifier for a database.
=> During backup and recovery, RMAN distinguishes databases by DBID.
=> When the DBID of a database is changed, all previous backups and
    archived logs of the database become unusable.
=> After you change the DBID, you must open the database with the
    RESETLOGS option, which re-creates the online redo logs and resets their log sequence.
First, shut down the database using the SHUTDOWN IMMEDIATE command
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
Now start the database in NOMOUNT state
SQL> startup nomount;                   
ORACLE instance started.
Total System Global Area  939495424 bytes
Fixed Size      2258840 bytes
Variable Size    251660392 bytes
Database Buffers   679477248 bytes
Redo Buffers      6098944 bytes
You can also set a tracefile identifier so the trace file is easy to identify.
SQL> show parameter tracefile_identifier;
NAME                                   TYPE         VALUE
------------------------------------ ----------- ------------------------------
tracefile_identifier                 string
SQL>
SQL>
SQL> alter session set tracefile_identifier=kumar;
Session altered.
Now dump the first ten blocks of a datafile; the file header block contains the DBID.
SQL> alter system dump datafile '/home/oracle/ram/oradata/data/system.dbf'
  2  block min 1 block max 10;
System altered.
Now find the location of the trace file.
SQL> show parameter user_dump_dest;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest                       string      /home/oracle/ram/admin/diag/diag/rdbms/ram/ram/trace
Search for "Db ID" inside the trace file.
[oracle@OEL ~]$ cd /home/oracle/ram/admin/diag/diag/rdbms/ram/ram/trace/
[oracle@OEL trace]$ pwd
/home/oracle/ram/admin/diag/diag/rdbms/ram/ram/trace
[oracle@OEL trace]$ head -50  ram_ora_5172_KUMAR.trc 
 Start dump data block from file /home/oracle/ram/oradata/data/system.dbf minblk 1 maxblk 10
V10 STYLE FILE HEADER:
Compatibility Vsn = 186646784=0xb200100
Db ID=1478419057=0x581ee271, Db Name='RAM'
Activation ID=0=0x0
Control Seq=314=0x13a, File size=51200=0xc800
File Number=1, Blksiz=8192, File Type=3 DATA
Dump all the blocks in range:
buffer tsn: 0 rdba: 0x00400002 (1024/4194306)
scn: 0x0000.00032c0f seq: 0x02 flg: 0x04 tail: 0x2c0f1d02
frmt: 0x02 chkval: 0xda65 type: 0x1d=KTFB Bitmapped File Space Header
Hex dump of block: st=0, typ_found=1
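Rather than paging through the whole trace with head, you can pull the DBID line directly with grep. A small self-contained sketch; the sample file below just reproduces the header lines shown above so the command has something to match (on a real system you would grep the actual trace file in user_dump_dest):

```shell
# Write out a sample excerpt of the dump trace (stand-in for the real
# ram_ora_5172_KUMAR.trc in user_dump_dest).
cat > /tmp/sample_trace.trc <<'EOF'
V10 STYLE FILE HEADER:
Compatibility Vsn = 186646784=0xb200100
Db ID=1478419057=0x581ee271, Db Name='RAM'
EOF

# Pull just the DBID line instead of paging through the whole file.
grep 'Db ID' /tmp/sample_trace.trc
```

On the real system the equivalent one-liner would be: grep 'Db ID' ram_ora_*_KUMAR.trc run in the trace directory.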
Once the database is open, you can also get the DBID from V$DATABASE:
SQL> alter database mount;
Database altered.
SQL> alter database open;
Database altered.
SQL> select name, dbid from v$database;
NAME  DBID
--------- ----------
RAM   1478419057
DBID is also displayed by the RMAN client when it starts up and connects to your database.
[oracle@OEL ~]$ export ORACLE_SID=ram
[oracle@OEL ~]$ rman target/
Recovery Manager: Release 12.2.0.1.0 - Production on Fri Nov 1 13:21:14 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
connected to target database: RAM (DBID=1478419057)
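This is where the DBID matters most in practice: when restoring the spfile or control file from an autobackup, there is no mounted database to query, so you give RMAN the DBID explicitly. A sketch of that disaster-recovery sequence using the DBID found above (standard RMAN commands; the autobackup is assumed to be in the default location):

```sql
-- At the RMAN prompt, connected to the not-yet-mounted target instance
SET DBID 1478419057;
STARTUP NOMOUNT;
RESTORE SPFILE FROM AUTOBACKUP;
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
```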

Oracle Online Redefinition


     When a table's records are fragmented, SELECT statements perform more I/O,
which causes a performance issue.
      Defragmenting the table helps restore performance.
      Online redefinition is one method of defragmenting a table.
Step 1. Create a table.
SQL> create table ram_test_tab(
  2  id number(5),
  3  name varchar2(10) default 'RAM',
  4  edate date default sysdate);
Table created.

SQL> desc ram_test_tab;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                  NUMBER(5)
 NAME                                                VARCHAR2(10)
 EDATE                                               DATE

Step 2. The table must have a primary key:
SQL> alter table ram_test_tab add constraint pk primary key(id);
Table altered.


 SQL> begin
  2  for i in 1..99999 loop
  3  insert into ram_test_tab (id) values(i);
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> commit;
Commit complete.

Step 3. Collect statistics on the table.
SQL>  analyze table ram_test_tab compute statistics;
Table analyzed.
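ANALYZE ... COMPUTE STATISTICS works here, but DBMS_STATS is the recommended interface for gathering optimizer statistics; an equivalent call, assuming the table lives in the SAKTHI schema used later in this article:

```sql
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAKTHI', tabname => 'RAM_TEST_TAB');
```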

SQL> select table_name, round((blocks*8),2) "BLOCK",
  2  round((num_rows*avg_row_len/1024),2) "SIZE",
  3  to_char(last_analyzed,'hh:mm:ss dd/mm/yy')
  4  from user_tables
  5  where table_name='RAM_TEST_TAB';
TABLE_NAME                BLOCK       SIZE TO_CHAR(LAST_ANAL
--------------------                ----------           ---------- -----------------
RAM_TEST_TAB              2960           2050.76 04:10:06 31/10/18

Step 4. Fragment the data:

SQL> delete from ram_test_tab
  2  where id between 200 and 20000;

19801 rows deleted.

SQL>  analyze table ram_test_tab compute statistics;
Table analyzed.
After collecting statistics, the dictionary views are updated with the latest figures.
SQL>select table_name, round((blocks*8),2) "BLOCK",
  2  round((num_rows*avg_row_len/1024),2) "SIZE",
  3  to_char(last_analyzed,'hh:mm:ss dd/mm/yy')
  4  from user_tables
  5  where table_name='RAM_TEST_TAB';
 
TABLE_NAME                BLOCK       SIZE TO_CHAR(LAST_ANAL
--------------------               ----------              ---------- -----------------
RAM_TEST_TAB              2960              1644.69 04:10:07 31/10/18
Step 5. Create a duplicate (interim) table:
If it is confirmed that fragmentation is causing the performance issue, then defragmentation is recommended. Before running the online redefinition package, create an interim table with the same structure as the fragmented table.

SQL> create table ram_dummy
  2  as select * from ram_test_tab
  3  where 1=2;

Table created.

SQL> desc ram_dummy;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                  NUMBER(5)
 NAME                                                VARCHAR2(10)
 EDATE                                               DATE
SQL> select * from ram_dummy;
no rows selected
Step 6. Run the online redefinition package:
SQL> conn sys as sysdba
Enter password:
Connected.
SQL>  exec dbms_redefinition.can_redef_table('SAKTHI','RAM_TEST_TAB');
PL/SQL procedure successfully completed.
SQL> exec dbms_redefinition.start_redef_table('SAKTHI','RAM_TEST_TAB','RAM_DUMMY');
PL/SQL procedure successfully completed.
SQL> exec dbms_redefinition.sync_interim_table('SAKTHI','RAM_TEST_TAB','RAM_DUMMY');
PL/SQL procedure successfully completed.
SQL>  exec dbms_redefinition.finish_redef_table('SAKTHI','RAM_TEST_TAB','RAM_DUMMY');
PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_schema_stats('SAKTHI');
PL/SQL procedure successfully completed.
 So the fragmentation has been switched to the interim table, and RAM_TEST_TAB is now defragmented.

 SQL> select table_name, round((blocks*8),2) "BLOCK",
  2  round((num_rows*avg_row_len/1024),2) "SIZE",
  3  to_char(last_analyzed,'hh:mm:ss dd/mm/yy')
  4  from dba_tables
  5  where table_name='RAM_TEST_TAB';

TABLE_NAME                BLOCK       SIZE TO_CHAR(LAST_ANAL
-------------------- ---------- ---------- -----------------
RAM_TEST_TAB               2160    1409.73 05:10:51 31/10/18

 SQL> select table_name, round((blocks*8),2) "BLOCK",
  2  round((num_rows*avg_row_len/1024),2) "SIZE",
  3  to_char(last_analyzed,'hh:mm:ss dd/mm/yy')
  4  from dba_tables
  5  where table_name='RAM_DUMMY';

TABLE_NAME                BLOCK       SIZE TO_CHAR(LAST_ANAL
-------------------- ---------- ---------- -----------------
RAM_DUMMY                  2960    1409.73 10:10:24 01/11/18
Drop the interim table, which now holds the fragmented data.
SQL> drop table ram_dummy purge;
Table dropped.
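One step worth adding in real use: before FINISH_REDEF_TABLE, dependent objects (indexes, triggers, constraints, grants) should normally be cloned onto the interim table with COPY_TABLE_DEPENDENTS. A sketch against the same schema and tables as above:

```sql
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- Clone indexes, triggers, constraints and grants from the original
  -- table onto the interim table, counting any failures.
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'SAKTHI',
    orig_table       => 'RAM_TEST_TAB',
    int_table        => 'RAM_DUMMY',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    num_errors       => num_errors);
  DBMS_OUTPUT.PUT_LINE('Errors: ' || num_errors);
END;
/
```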


Data Guard Configuration


Primary d/b: prac1 and Standby d/b: standby

Step-1:- On the primary database prac1:
        SQL> alter database force logging;

Step-2:- Create a password file for the primary d/b.
  $ orapwd file=orapwprac1 password=oracle entries=5 force=y

Step-3:- Configure Standby redo-logs.
The number and size of standby redo logs should be equal to or more than the number of online redo logs of the primary d/b i.e. in this case prac1.
     Size of the log file can be obtained from
          SQL> select bytes/1024/1024 from v$log;

Add standby logfile accordingly:-
   >alter database add standby logfile group 4 ('/$ORACLE_HOME/prac1/redo04.log') size 50M;
   >alter database add standby logfile group 5 ('/$ORACLE_HOME/prac1/redo05.log') size 50M;
   >alter database add standby logfile group 6 ('/$ORACLE_HOME/prac1/redo06.log') size 50M;

To check standby redo logs:-
     sql> select group#,status from v$standby_log;

See:  Usage, Benefits and Limitations of Standby Redo Logs (SRL) (Doc ID 219344.1)

Step-4:- Set Primary d/b Initialization parameters.

Edit the pfile of primary d/b prac1 i.e. initprac1.ora in $ORACLE_HOME/dbs
DB_NAME=prac1
DB_UNIQUE_NAME=prac1
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prac1,standby)'
CONTROL_FILES='/home/leo/oracle/product/12.2.0/db_1/prac1/control01.ctl', '/home/leo/oracle/product/12.2.0/db_1/prac1/control02.ctl', '/home/leo/oracle/product/12.2.0/db_1/prac1/control03.ctl'
LOG_ARCHIVE_DEST_1=
 'LOCATION=/home/leo/oracle/product/12.2.0/db_1/prac1/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=prac1'
LOG_ARCHIVE_DEST_2=
 'SERVICE=standby LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30

FAL_SERVER=standby
FAL_CLIENT=prac1
DB_FILE_NAME_CONVERT=
'/home/leo/standby/','/home/leo/oracle/product/12.2.0/db_1/prac1/'
LOG_FILE_NAME_CONVERT=
'/home/leo/standby/','/home/leo/oracle/product/12.2.0/db_1/prac1/'
STANDBY_FILE_MANAGEMENT=AUTO
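If the primary runs on an spfile rather than a pfile, the same parameters can be set online with ALTER SYSTEM; two examples of the form, using the values above:

```sql
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prac1,standby)' SCOPE=BOTH;
ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;
```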



Step – 5:-  In prac1:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

Step – 6:- Create a Backup Copy of the Primary Database Datafiles (if duplicating from the active database, no backup is needed)

Start up the database and take a backup:
$ export ORACLE_SID=prac1
$ rman target / nocatalog
RMAN> backup database;

Then shut down the database.
You can also back up the database physically by copying the datafiles to the standby location.
Step -7:- Create a Control File for the Standby Database
>startup mount;
> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/leo/control01.ctl';
> ALTER DATABASE OPEN;

Step-8:- Prepare an Initialization Parameter File for the Standby Database
 Copy the primary database parameter file to the standby database.
In primary database:-
>startup pfile='/home/leo/oracle/product/12.2.0/db_1/dbs/initprac1.ora';
>create spfile from pfile;
>create pfile='/home/leo/oracle/product/12.2.0/db_1/dbs/initstandby.ora' from spfile;

Step-9:-  Set initialization parameters on the physical standby database

 DB_NAME=prac1
DB_UNIQUE_NAME=standby
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prac1,standby)'
CONTROL_FILES='/home/leo/standby/control1.ctl'
DB_FILE_NAME_CONVERT='/home/leo/oracle/product/12.2.0/db_1/prac1/','/home/leo/standby/'
LOG_FILE_NAME_CONVERT=
 '/home/leo/oracle/product/12.2.0/db_1/prac1/','/home/leo/standby/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
 'LOCATION=/home/leo/standby/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_2=
 'SERVICE=prac1 LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=prac1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=prac1
FAL_CLIENT=standby

Step-10:-  Copy Files from the Primary System to the Standby System

Physically copy datafiles from primary d/b to the location at standby
Prac1> cp -v *.dbf ~/standby

Step-11:- Create the standby database password file (in $ORACLE_HOME/dbs)
$ orapwd file=orastandby password=oracle entries=5 force=y

Set up listeners for "prac1" and "standby", as well as tnsnames.ora entries for "prac1" and "standby".

Step-12:-  Start the physical standby database.
At standby database:-
 SQL>startup mount;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
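To confirm that managed recovery is actually running on the standby, check the MRP and RFS processes in V$MANAGED_STANDBY (MRP0 should show a status such as APPLYING_LOG or WAIT_FOR_LOG):

```sql
-- Run on the standby database
SELECT process, status, thread#, sequence#
FROM v$managed_standby
WHERE process LIKE 'MRP%' OR process LIKE 'RFS%';
```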

Step-13:- Test the data guard connection.
In primary d/b
>alter system switch logfile;
>archive log list;

In standby db
>archive log list

The log sequence number should be the same for both databases
ex:

ON PRIMARY :

SQL>  select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD#  MAX(SEQUENCE#)
---------- --------------
         1            6045

or

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
6045

ON STANDBY:

SQL>  select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD#  MAX(SEQUENCE#)
---------- --------------
         1            6045

or

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
6045

Here, the maximum sequence# generated on the Primary database is 6045 and the maximum sequence# applied on the standby database is also 6045 which means that the standby database is in sync with the primary database.
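If the sequence numbers do not match, the standby may be missing archived logs; the V$ARCHIVE_GAP view on the standby shows any gap (no rows means no gap):

```sql
-- Run on the standby database
SELECT thread#, low_sequence#, high_sequence#
FROM v$archive_gap;
```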


Removing a NODE from RAC


  1. Delete the instance on the node to be removed
  2. Clean up ASM
  3. Remove the listener from the node to be removed
  4. Remove the node from the database
  5. Remove the node from the clusterware
You can delete the instance using the Database Configuration Assistant (DBCA): invoke the
program, choose the RAC database, choose Instance Management, then choose
Delete Instance; enter a SYSDBA user and password, and choose the instance to delete.

To clean up ASM follow the below steps
  1. From node 1, run the below commands to stop and remove ASM on the node to be removed

     srvctl stop asm -n rac3
     srvctl remove asm -n rac3

  2. Now run the following on the node to be removed

     cd $ORACLE_HOME/admin
     rm -rf +ASM
     cd $ORACLE_HOME/dbs
     rm -f *ASM*
  3. Check that the /etc/oratab file has no ASM entries; if it does, remove them
Now remove the listener for the node to be removed
  1. Log in as user oracle, set your DISPLAY environment variable, then start the Network
     Configuration Assistant
     $ORACLE_HOME/bin/netca
  2. Choose cluster management
  3. Choose listener
  4. Choose Remove
  5. Choose the name LISTENER
Next we remove the node from the database
  1. Run the below commands from the node to be removed
      cd $ORACLE_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
      ./runInstaller
  2. Choose to deinstall products and select the dbhome
  3. Run the following from node 1
      cd $ORACLE_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Lastly we remove the clusterware software
  1. Run the following from node 1; obtain the port number from the remoteport section of the ons.config file in $ORA_CRS_HOME/opmn/conf
      $CRS_HOME/bin/racgons remove_config rac3:6200
  2. Run the following from the node to be removed as user root
      cd $CRS_HOME/install
      ./rootdelete.sh
  3. Now run the following from node 1 as user root; obtain the node number first
      $CRS_HOME/bin/olsnodes -n
      cd $CRS_HOME/install
      ./rootdeletenode.sh rac3,3
  4. Now run the below from the node to be removed as user oracle
      cd $CRS_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" CRS=TRUE -local
      ./runInstaller
  5. Choose to deinstall software and remove the CRS_HOME
  6. Run the following from node 1 as user oracle
      cd $CRS_HOME/oui/bin
      ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}" CRS=TRUE
  7. Check that the node has been removed: the first command should report "invalid node", the second should produce no output, and the last should show only nodes rac1 and rac2

      srvctl status nodeapps -n rac3
      crs_stat | grep -i rac3
      olsnodes -n

Tablespace Thresholds and Alerts


How to set tablespace thresholds?

If you have only a few targets and are using DB Control for each database, use the DBMS_SERVER_ALERT package to set thresholds. If you have OEM Cloud Control 12c/13c, you can create a new metric extension and rule and apply it. Here we will discuss how to use the DBMS_SERVER_ALERT package.

The DBMS_SERVER_ALERT package acts as an early warning mechanism for space issues. Thresholds can be set database-wide or for individual tablespaces. When a threshold is crossed, warnings are sent by Enterprise Manager (DB Control, Grid Control or Cloud Control).

Setting the OBJECT_NAME parameter to NULL sets the default threshold for all tablespaces in the database. Setting the OBJECT_NAME parameter to a tablespace name sets the threshold for the specified tablespace and overrides any default setting.

There are two types of tablespace thresholds that can be set.

TABLESPACE_PCT_FULL : Percent full.

When the warning or critical threshold based on percent full is crossed, a notification occurs.

TABLESPACE_BYT_FREE : Free space remaining (KB).

The constant name implies the value is in bytes, but it is specified in KB. When the warning or critical threshold based on remaining free space is crossed, a notification occurs. When you view these thresholds in different tools the units may vary; for example, Cloud Control displays and sets these values in MB.

The thresholds are set using a value and an operator.

OPERATOR_LE : Less than or equal.
OPERATOR_GE : Greater than or equal.


Setting Thresholds:

Note: You should know your existing thresholds before changing them, so you know what to set them back to.

The following examples show how to set the different types of alerts.

Example-1:  Database-wide KB free threshold.

Begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_byt_free,
    warning_operator        => DBMS_SERVER_ALERT.operator_le,
    warning_value           => '1024000',
    critical_operator       => DBMS_SERVER_ALERT.operator_le,
    critical_value          => '102400',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => NULL);
end;
/

Example-2:    Database-wide percent full threshold.

Begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => '90',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => NULL);
end;
/

Example-3:  Tablespace-specific KB free threshold.

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_byt_free,
    warning_operator        => DBMS_SERVER_ALERT.operator_le,
    warning_value           => '1024000',
    critical_operator       => DBMS_SERVER_ALERT.operator_le,
    critical_value          => '102400',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/

Example-4:    Tablespace-specific percent full threshold.

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => '90',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/

Example-5: Tablespace-specific reset to defaults ( Set warning and critical values to NULL)

begin
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => NULL,
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => NULL,
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');
end;
/


>> Setting the warning and critical levels to '0' disables the notification.

Displaying Thresholds
The threshold settings can be displayed using the DBA_THRESHOLDS view.

SET LINESIZE 200

COLUMN tablespace_name FORMAT A30
COLUMN metrics_name FORMAT A30
COLUMN warning_value FORMAT A30
COLUMN critical_value FORMAT A15

SELECT object_name AS tablespace_name,
       metrics_name,
       warning_operator,
       warning_value,
       critical_operator,
       critical_value
FROM   dba_thresholds
WHERE  object_type = 'TABLESPACE'
ORDER BY object_name;
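Before choosing threshold values, it helps to see how full each tablespace currently is. A standard query against DBA_TABLESPACE_USAGE_METRICS (sizes are reported in database blocks):

```sql
SELECT tablespace_name,
       used_space,        -- in database blocks
       tablespace_size,   -- in database blocks
       ROUND(used_percent, 2) AS used_pct
FROM dba_tablespace_usage_metrics
ORDER BY used_percent DESC;
```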

Start / Stop / Relocate SCAN listener in Oracle 11gR2 RAC


1) Check listener status ( login to grid home)

a) Check the cluster resource status

$ crsctl stat res -t

verify the output for listener
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS     
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
ora.FRA.dg
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
ora.asm
               ONLINE  ONLINE       node1                    Started           
               ONLINE  ONLINE       node2                    Started           
ora.gsd
               OFFLINE OFFLINE      node1                                       
               OFFLINE OFFLINE      node2                                       
ora.net1.network
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
ora.ons
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
ora.registry.acfs
               ONLINE  ONLINE       node1                                       
               ONLINE  ONLINE       node2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                                       
ora.cvu
      1        ONLINE  ONLINE       node1                                       
ora.node1.vip
      1        ONLINE  ONLINE       node1                                       
ora.node2.vip
      1        ONLINE  ONLINE       node2                                       
ora.PROD.db
      1        ONLINE  ONLINE       node1                    Open               
      2        ONLINE  ONLINE       node2                    Open               
ora.oc4j
      1        ONLINE  ONLINE       node1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                       
ora.scan2.vip
      1        ONLINE  ONLINE       node1                                       
ora.scan3.vip
      1        ONLINE  ONLINE       node1

b) Check the scan listener status

$ srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1

c) Check the listener home; it should run from the grid home:

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production on 23-NOV-2018 14:02:27
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
Start Date                29-NOV-2017 21:46:52
Uptime                    115 days 16 hr. 15 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /u01/app/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/node1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.20.30.40)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.20.30.41)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "PROD" has 1 instance(s).
  Instance "PROD1", status READY, has 1 handler(s) for this service...
Service "PRODXDB" has 1 instance(s).
  Instance "PROD1", status READY, has 1 handler(s) for this service...
The command completed successfully

2) Start scan listener:

a) Start scan listener

$ srvctl start scan_listener

b) If lsnrctl status shows the RDBMS home, then do the following

$ lsnrctl stop
$ export ORACLE_HOME=/u01/app/grid
$ lsnrctl start


3) Relocate SCAN listener

a) To relocate
When you find all 3 SCAN listeners running on a single node, you may relocate any one of them

$ srvctl relocate scan_listener -i 1 -n node2

b) Check current status after Relocate SCAN_LISTENER:

bash-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1

c) Check the listener processes (note the -inherit flag)

$ ps -ef|grep inherit
  oracle 49741838  9633998   0 14:10:00  pts/0  0:00 grep inherit
  oracle 18547030        1   0 13:26:56      -  0:00 /u01/app/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
  oracle 31588762        1   0 13:20:20      -  0:14 /u01/app/grid/bin/tnslsnr LISTENER -inherit


So what is the difference between relocating the SCAN using srvctl relocate scan and relocating the SCAN listener using srvctl relocate scan_listener?

The difference between a SCAN VIP and a normal RAC VIP is that a RAC VIP has a node it wants to run on, and each node has one (whereas you have only 3 SCANs). If a normal VIP fails over to another node, it still exists but does not accept connections, whereas a SCAN VIP is not fixed to a node and can run on any node in the cluster (and will accept connections at any time).

The SCAN VIP always moves with the SCAN listener (otherwise it would not make sense). Hence there is really no difference between moving the SCAN VIP (which triggers a relocate of the listener) and moving the SCAN listener (which moves the VIP it depends on).

4) Checking SCAN IPs

$ srvctl config scan
SCAN name: scandb.production.com, Network: 1/10.20.30.0/255.255.255.192/en8
SCAN VIP name: scan1, IP: /scandb.production.com/10.20.30.42
SCAN VIP name: scan2, IP: /scandb.production.com/10.20.30.43
SCAN VIP name: scan3, IP: /scandb.production.com/10.20.30.44


DELETE/REMOVE Non Executing Datapump Jobs

Step 1:-
Normally, we can run the below query to find the Data Pump jobs and get their status:

SQL> SET lines 150 pages 999
COL OWNER_NAME         for a18
COL JOB_NAME           for a25
COL OPERATION          for a14
COL JOB_MODE           for a15
COL STATE              for a18
COL DEGREE             for 99999
COL ATTACHED_SESSIONS  for 99999
COL DATAPUMP_SESSIONS  for 99999
SELECT *
  FROM dba_datapump_jobs;

OWNER_NAME         JOB_NAME                  OPERATION      JOB_MODE        STATE              DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
------------------ ------------------------- -------------- --------------- ------------------ ------ ----------------- -----------------
SYS                SYS_EXPORT_SCHEMA_02      EXPORT         SCHEMA          EXECUTING               1                 1                 3
SYS                SYS_EXPORT_SCHEMA_01      EXPORT         SCHEMA          NOT RUNNING             0                 0                 0

Step 2:-
Now, I want to remove "SYS_EXPORT_SCHEMA_01", which is in the "NOT RUNNING" state. To stop the job, we can try the below procedure:

SQL> DECLARE
   h1 NUMBER;
BEGIN
   h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','SYS');
   DBMS_DATAPUMP.STOP_JOB (h1,1,0);
END;
/
DECLARE
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1852
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5319
ORA-06512: at line 4

Step 3:-
Since the job is already in "NOT RUNNING" state, we received the above error. To remove the job, identify the master tables which are created for this job.

SQL> SET lines 150
col "OBJECT_TYPE" for a20
col "OWNER.OBJECT" for a40
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner=j.owner_name AND o.object_name=j.job_name
   AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS                 OBJECT_ID OBJECT_TYPE          OWNER.OBJECT
--------------------- ---------- -------------------- ----------------------------------------
VALID                     235524 TABLE                SYS.SYS_EXPORT_SCHEMA_01
VALID                     236371 TABLE                SYS.SYS_EXPORT_SCHEMA_02

Step 4:-
Now, drop the master tables to cleanup the job

SQL> DROP TABLE SYS.SYS_EXPORT_SCHEMA_01;

Table dropped.

Step 5:-
Verify the job is dropped by re-running the statement from Step 1.

View running processes in Oracle DB

$
0
0

View running processes in Oracle DB

This will show you a list of all running processes:
SET LINESIZE 200
SET PAGESIZE 200
SELECT PROCESS pid, sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER';
Identify database SID based on OS Process ID
use the following SQL query, when prompted enter the OS process PID:
SET LINESIZE 100
col sid format 999999
col username format a20
col osuser format a15
SELECT b.spid,a.sid, a.serial#,a.username, a.osuser
FROM v$session a, v$process b
WHERE a.paddr= b.addr
AND b.spid='&spid'
ORDER BY b.spid;
For making sure you are targeting the correct session, you might want to review the SQL associated with the offensive task, to view the SQL being executed by the session you can use the following SQL statement:
SELECT
b.username, a.sql_text
FROM
v$sqltext_with_newlines a, v$session b, v$process c
WHERE
c.spid = '&spid'
AND
c.addr = b.paddr
AND
b.sql_address = a.address;
Killing the session
The basic syntax for killing a session is shown below.
ALTER SYSTEM KILL SESSION 'sid,serial#';
In a RAC environment, you optionally specify the INST_ID, shown when querying the GV$SESSION view. This allows you to kill a session on different RAC node.
ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';
The KILL SESSION command doesn't actually kill the session. It merely asks the session to kill itself. In some situations, like waiting for a reply from a remote database or rolling back transactions, the session will not kill itself immediately and will wait for the current operation to complete.

 In these cases the session will have a status of "marked for kill". It will then be killed as soon as possible.
In addition to the syntax described above, you can add the IMMEDIATE clause.
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
This does not affect the work performed by the command, but it returns control back to the current session immediately, rather than waiting for confirmation of the kill.

If the marked session persists for some time you may consider killing the process at the operating system level. Before doing this it's worth checking to see if it is performing a rollback. 

If the USED_UREC value is decreasing for the session in question you should leave it to complete the rollback rather than killing the session at the operating system level.

Article 4

$
0
0

Different Results Between QUERY Parameter Used With EXP/EXPDP and SQL*Plus


APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.3 [Release 10.1 to 11.2]

SYMPTOMS:
You try to export parts of data in a table using the parameter QUERY and observed that between the count delivered by original export (or DataPump export) is different from the count obtained when the same query is directly started against the table via SQL*Plus

Steps to reproduce the problem:

--create and populate the table
connect test/test 

drop table tab001; 
purge recyclebin; 

create table tab001 

   id      number, 
   t_stamp timestamp(6) 
); 

insert into tab001 values (1, to_timestamp ('01.12.2008 10:15:20.123000', 'DD.MM.YYYY HH24:MI:SS.FF6')); 
insert into tab001 values (1, to_timestamp ('01.12.2008 11:15:20.123000', 'DD.MM.YYYY HH24:MI:SS.FF6')); 
insert into tab001 values (1, to_timestamp ('01.12.2008 12:15:20.123000', 'DD.MM.YYYY HH24:MI:SS.FF6')); 
insert into tab001 values (1, to_timestamp ('01.12.2008 13:15:20.123000', 'DD.MM.YYYY HH24:MI:SS.FF6')); 
insert into tab001 values (1, to_timestamp ('01.12.2008 14:15:20.123000', 'DD.MM.YYYY HH24:MI:SS.FF6')); 
commit;


Export the table with original export:

#> exp test/test file=tab001.dmp tables=tab001 query=\"where t_stamp > current_timestamp - 21\"


returns:

Export: Release 11.1.0.7.0 - Production on Mon Dec 22 16:21:47 2008 

Copyright (c) 1982, 2007, Oracle. All rights reserved. 

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Produc tion 
With the Partitioning, OLAP, Data Mining and Real Application Testing options 
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set 

About to export specified tables via Conventional Path ... 
. . exporting table TAB001                      0 rows exported 
Export terminated successfully without warnings.


Export the table with DataPump export:

#> expdp test/test directory=dpu dumpfile=tab001.dmp tables=tab001 query=\"where t_stamp > current_timestamp - 21\"


returns:
;;; 
Export: Release 11.1.0.7.0 - Production on Monday, 22 December, 2008 16:37:08 
Copyright (c) 2003, 2007, Oracle. All rights reserved. 
;;; 
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production 
With the Partitioning, OLAP, Data Mining and Real Application Testing options 
Starting "TEST"."SYS_EXPORT_TABLE_01": test/******** directory=dpu dumpfile=x111.dmp tables=tab001 query="where t_stamp > current_timestamp - 21" 
Estimate in progress using BLOCKS method... 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA 
Total estimation using BLOCKS method: 64 KB 
Processing object type TABLE_EXPORT/TABLE/TABLE 
. . exported "TEST"."TAB001"                     5.421 KB     0 rows 
Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded 
****************************************************************************** 
Dump file set for TEST.SYS_EXPORT_TABLE_01 is: 
D:\DATABASES\O111\DPU\TAB001.DMP 
Job "TEST"."SYS_EXPORT_TABLE_01" successfully completed at 16:37:28



Start the same statement in SQL*Plus:

SQL> select count (*) from tab001 where t_stamp > current_timestamp - 25; 

COUNT(*) 
---------- 
         5 


1 row selected.


CAUSE:

Exports extract the data from point of view of database. The statement:

SQL> alter session set time_zone = dbtimezone;

runs at begin of export sessions. If your client session (SQL*Plus) runs in a different time zone than the database time zone, then the results delivered by function CURRENT_TIMESTAMP are different. In my example, there is a difference from 9 hours between client/server:

SQL> select sessiontimezone, dbtimezone from dual; 

SESSIONTIMEZONE 
-------------------------------------------------- 
DBTIME 
------ 
-08:00 
+01:00 

SQL> select current_timestamp - 25 as time1 from dual; 

TIME1 
------------------- 
01.12.2008 08:15:22


1 row selected.


=> All five rows inserted above have the time in column T_STAMP older than the calculated TIME1.


SOLUTION:

Alter the SQL*Plus session time zone to database time zone:

SQL> alter session set time_zone = dbtimezone;

and then restart the same queries:

SQL> select sessiontimezone, dbtimezone from dual; 

SESSIONTIMEZONE 
-------------------------------------------------- 
DBTIME 
------ 
+01:00 
+01:00 

SQL> select current_timestamp -  21 as time2 from dual; 

TIME2 
------------------- 
01.12.2008 17:18:07 

1 row selected. 

SQL> select count (*) from tab001 where t_stamp > current_timestamp - 21; 

COUNT(*) 
---------- 
         0


1 row selected.


This time, the exports and SQL*Plus deliver the same results. The calculated TIME2 is now newer than the five rows inserted.

Article 3

$
0
0

DBMS_NETWORK_ACL_ADMIN.CREATE_ACL Worked But Nothing In Dba_network_acls


APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.2 and later


SYMPTOMS:
Nothing in the dba_network_acl_privileges view after creating ACL.
Created ACL list as below.

BEGIN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(acl         => 'test.xml',
                                  description => 'ACL FOR test',
                                  principal   => 'TEST',
                                  is_grant    => true,
                                  privilege   => 'connect'
);
COMMIT;
END;
/

begin
 DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('TEST.xml','TEST',TRUE,'connect');
 commit;
end;
/

select * from sys.dba_network_acl_privileges;

no row selected


CAUSE:
Until ACL is not assigned to a network Host It will not be listed under ACL views.


SOLUTION:
use the following code to see the results.

BEGIN

DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
  acl => 'test.xml',
  description => 'test ACL',
  principal => 'TEST',
  is_grant => true,
  privilege => 'connect');

DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
  acl => 'test.xml',
  principal => 'TEST',
  is_grant => true,
  privilege => 'resolve');

DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
  acl => 'test.xml',
  host => '*');

COMMIT;

END;
/


Verify from the following SQLs

select * from dba_network_acl_privileges;
select * from net$_acl;
Select * from dba_network_acls;


Article 2

$
0
0

DataPump Import Results In ORA-39001 Invalid Argument Value ORA-1775 Looping Chain Of Synonyms


APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.2.0.2 and later


SYMPTOMS:
When running DataPump import (impdp) the following errors occur:

ORA-39001: invalid argument value
ORA-01775: looping chain of synonyms

CAUSE:
Apparently, the SYS_IMPORT_SCHEMA_01 table is a master table being created during schema impdp. If the impdp is carried out normally then this table would be dropped automatically. In case there is an abnormal termination of the impdp then, the table might still remain.

Diagnose the issue by running the following trace:

SQL> connect / as sysdba
SQL> alter system set events '1775 trace name ERRORSTACK level 3';

Redo the import and reproduce the error. The trace file should be in the USER_DUMP_DEST directory. Then set the event off with:

SQL> alter system set events '1775 trace name errorstack off';

The trace file will show something like the following:

ksedmp: internal or fatal error
ORA-01775: looping chain of synonyms
Current SQL statement for this session:
SELECT COUNT(*) FROM SYS_IMPORT_SCHEMA_01

call stack:
ksedst ksedmp ksddoa ksdpcg ksdpec ksfpec gesev kgesec0 qcuErroer erroer kkmpfcbk qcsprfro qcsprfro_tree qcsprfro_tree qcspafq qcspqb kkmdrv opiSem opiDeferredSem opitca kksFullTypeCheck rpiswu2 kksLoadChild kxsGetRuntimeLock 810 kksfbc kkspsc0 kksParseCursor opiosq0 opipls opiodr rpidrus skgmstack rpidru rpiswu2 rpidrv psddr0 psdnal pevm_EXIM pfrinstr_EXIM pfrrun_no_tool pfrrun plsql_run peicnt kkxexe opiexe kpoal8 opiodr ttcpip opitsk opiino opiodr opidrv sou2o  opimai_real main start


Additionally, running this query shows that a public synonym (not created by DataPump) still exists with the name "SYS_IMPORT_SCHEMA_01".

SQL> select owner, object_name, object_type, status from dba_objects where object_name like '%SYS_IMPORT_SCHEMA_01%';

OWNER           OBJECT_NAME            OBJECT_TYPE      STATUS
--------------- ---------------------- ---------------- -------
PUBLIC          SYS_IMPORT_SCHEMA_01   SYNONYM           VALID


SOLUTION:

Dropping the synonym SYS_IMPORT_SCHEMA_01 should resolve this issue.

connect / as sysdba
drop public synonym sys_import_schema_01;

Article 1

$
0
0

While Accessing Public Synonym Getting Error ORA-1775: looping chain of synonyms


APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.3 and later


SYMPTOMS:
User created a public synonym with the same name as the table name, while accessing the synonym getting the following error.

ORA-01775: looping chain of synonyms


CAUSE:
The syntax used for public synonym creation was wrong. The schema name is not specified while creating the public synonym.

create public synonym <synonym_name> for <tablename>; 

SQL> conn TESTER/TESTER
Connected.
SQL> create table test_synonym ( n number);

Table created.

SQL>
SQL> insert into test_synonym values (100);

1 row created.

SQL> commit;

Commit complete.

SQL> conn / as sysdba
Connected.
SQL> show user
USER is "SYS"
SQL> create public synonym test_synonym for test_synonym;     >>>>>>>>>>>>> Didn't specify the schema in which the object resides

Synonym created.

SQL> select count (*) from test_synonym;      
select count (*) from test_synonym
*
ERROR at line 1:
ORA-01775: looping chain of synonyms


SOLUTION:
Create the synonym using the correct syntax.

SQL> conn / as sysdba
Connected.
SQL> show user
USER is "SYS"

SQL>
SQL> drop public synonym test_synonym;

Synonym dropped.

SQL> create public synonym test_synonym for TESTER.test_synonym;

Synonym created.

SQL> select count (*) from test_synonym;

COUNT(*)
----------
1

SQL>

Article 0

$
0
0

EXCLUDE=STATISTICS Or EXCLUDE=INDEX_STATISTICS During Datapump Import Still Analyzes The Indexes


APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.2.0.1 and later


SYMPTOMS:
You are using Data Pump import (impdp) using the following parameters:

EXCLUDE=STATISTICS
- OR -
EXCLUDE=INDEX_STATISTICS
EXCLUDE=TABLE_STATISTICS

Tables are not being analyzed in both cases, however, it is still analyzing the indexes.
The Datapump import statements:

impdp scott/****** 
Directory=DUMP_DIR 
Dumpfile=Exp.dmp 
Logfile=Exp.log 
EXCLUDE=STATISTICS ---> excluded both table and index stats

-- OR -- 

impdp scott/****** 
Directory=DUMP_DIR 
dumpfile=Exp.dmp 
logfile=Exp.log 
EXCLUDE=INDEX_STATISTICS -->excluded table stats
EXCLUDE=TABLE_STATISTICS -->excluded index stats


CAUSE:
Oracle, by default, collects statistics for an index during index creation. It is done by design.

The internal parameter "_optimizer_compute_index_stats", is set to TRUE by default.


SOLUTION:
This parameter can be set to FALSE to avoid the index statistics during import.

EXAMPLE:

SQL> alter system set "_optimizer_compute_index_stats"=FALSE;
- OR -

Set the parameter in the pfile/spfile

_optimizer_compute_index_stats=FALSE

Datapatch apply on 12c Databases after patch

$
0
0
Datapatch apply steps on 12c Databases after patch-

Run the below steps to apply datapatch for all the databases running on 12C RDBMS home.
Set environment
. oraenv 12c_DBname

Sqlplus / as sysdba
startup

* Check invalid objects-
COLUMN object_name FORMAT A30
SELECT owner,object_type,object_name,status
FROM dba_objects WHERE status = ‘INVALID’ and OWNER=’SYS’;

alter system set cluster_database=false scope=spfile sid=’*’;
Shutdown immediate
sqlplus /nolog
SQL> Connect / as sysdba

SQL> startup upgrade
SQL> exit
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

$ sqlplus / as sysdba and alter system set cluster_database=true scope=spfile sid=’*’;
sql> shut immediate

$ srvctl start database -d 12c_DBname
Run the below query on the DB

SQL>select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION, action_time from DBA_REGISTRY_SQLPATCH order by BUNDLE_SERIES;
Run the utlrp to compile any invalid objects

@?/rdbms/admin/utlrp.sql
 
Run the below query in SQLPLUS

COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status = ‘INVALID’ and OWNER=’SYS’;

Compare this invalid objects with the list which we have taken before datapatch apply and We should not have any extra invalid objects on SYS
Repeat these steps for all 12c databases

Oracle 12c Common User & Local User

$
0
0
Common Users

      Common user must be created in CDB only.
      When we create a common user must give C## as prefix
      The user is present in all containers(CDB$ROOT and all PDB)
Local Users

     Local user can only created at the PDB.
     The same username can be created in multiple PDB and they are unrelated.
     Use Container Clause to set the current container

     SQL> show con_name
     CON_NAME
     ------------------------------
     CDB$ROOT

     When we try to create a normal user in CDB it raise the error
     SQL> create user sree identified by oracle;
     create user sree identified by oracle
                 *
     ERROR at line 1:
     ORA-65096: invalid common user or role name

     SQL> !oerr ora 65096
     65096, 00000, "invalid common user or role name"
     // *Cause:  An attempt was made to create a common user or role with a name
     //          that was not valid for common users or roles. In addition to the
     //          usual rules for user and role names, common user and role names
     //          must consist only of ASCII characters, and must contain the prefix
     //          specified in common_user_prefix parameter.
     // *Action: Specify a valid common user or role name.
     //

     It agains the rule because only common user allowed to create in CDB. If any reason we need to create the local user in CDB we use undocumented parameter _oracle_script=true at system level.
Create common users
     We connected to common user with the create user privilege.
     The current container must be the root container.
     The username for the common user must be prefixed with "C##" or "c##" and contain only ASCII or EBCDIC characters.
     The username must be unique across all containers.
     The Default table space, Temporary Table space, Quota and Profile, must all reference objects that exist in all containers.
     You can either specify the container=all clause, or omit it, as this is the default setting when the current container is the root.

Common user with container clause.
     SQL> create user C##cdbuser identified by oracle container=all;
     User created.
     SQL> grant create session to C##cdbuser container=all;
     Grant Succeeded.
Common User with default setting
     SQL> create user c##cuser identified by oracle;
     User created.

Create Local User

    You must be connected to a user with the create user privilege
    The username for the local user must not be prefixed with "c##".
    The username must be unique within the PDB.
     You can either specify the container=current clause, or omit it, as this is the default setting when the current container is a PDB.

Switching container for session
     SQL> alter session set container=pram;
     Session altered.
     SQL> show con_name;
     CON_NAME
     ------------------------------
     PRAM
     or

Connect as a User
     ]$ export ORACLE_SID=pram
     ]$ sqlplus essvee@pram
     SQL*Plus: Release 12.2.0.1.0 Production on Fri Jun 15 10:46:08 2018
     Copyright (c) 1982, 2016, Oracle.  All rights reserved.
     Enter password:
     Last Successful login time: Tue Jun 12 2018 12:24:14 +05:30
     Connected to:
     Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
     SQL> show user
     USER is "ESSVEE"
     SQL> show con_name;
     CON_NAME
     ------------------------------
     PRAM
Local user with container clause.
     User created by  a user who have  create user privilege.
     SQL>  create user locuser1 identified by oracle container=current;
     User created.   
Local User with default setting
     SQL> create user locuser2 identified by oracle;
     User created.

Article 2

$
0
0
Fixing dblink issue:-

create database link ats_ora connect using <username> identified by <password> using '<dblink name>';

place the target database tns entry in source database.

Accessing the dblink,

we should be able to accessing the dblink using both the names as below.

select * from dual@ats_ora;

select * from dual@ats_ora.world;

Error:

dblink name is expected.

to fix the issue we have to change the global database name.

alter database rename global_name to <dbname.world>;

Then it will work:

select * from dual@ats_ora;

select * from dual@ats_ora.world;




Article 1

$
0
0
Dynamic script to compile all the invalids:-

SELECT CASE WHEN object_type = 'SYNONYM' AND owner = 'PUBLIC' THEN
'alter ' || owner || '' || DECODE(object_type, 'PACKAGE BODY', 'PACKAGE', object_type) || '' || object_name || '' || DECODE(object_type, 'PACKAGE BODY', 'COMPILE BODY', 'COMPILE') || ';'
ELSE
'alter ' || DECODE(object_type, 'PACKAGE BODY', 'PACKAGE', object_type) || '' || owner || '.' || object_name || '' || DECODE(object_type, 'PACKAGE BODY', 'COMPILE BODY', 'COMPILE') || ';'
END "SQL_COMMANDS"
FROM dba_objects
WHERE object_type IN ('PACKAGE', 'PACKAGE BODY', 'VIEW', 'PROCEDURE', 'FUNCTION', 'TRIGGER', 'SYNONYM')
AND status = 'INVALID'
ORDER BY DECODE(object_type, 'TRIGGER', '99', '00');

Article 0

$
0
0
Common issue while doing Re-org in 12.1.0.2 database:-

While importing the database in 12.1.0.2..

ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY

Cause:-

Invalid or corrupt characters are stored in the source database, usually in comments on objects

Solution:-

Apply interim patch 21342624 and run post-install step:

run

datapatch -verbose.

Article 17

$
0
0
                                 Query to pick up the last one days logs in EBS12.2 




Query: 

#### Start of script
#####################################################################################
(
# pick up files which have been modified in the last 1 day only

HowManyDaysOld=1
echo "Picking up files which have been modified in the last ${HowManyDaysOld} days"
set -x
find $LOG_HOME -type f -mtime -${HowManyDaysOld} > m.tmp
find $FMW_HOME/webtier/instances/*/diagnostics/logs -type f -mtime -${HowManyDaysOld} >> m.tmp
find $FMW_HOME/wlserver_10.3/common/nodemanager/nmHome*/*.log -type f -mtime -${HowManyDaysOld} >> m.tmp
## Get log files for only the WLS servers needed. Valid server names are one or more of:
## AdminServer forms-c4ws_server forms_server oacore_server oaea_server oafm_server
for SERVERNAME in AdminServer oacore_server forms_server oafm_server
do
find $EBS_DOMAIN_HOME/servers/${SERVERNAME}*/logs -type f -mtime -${HowManyDaysOld} >> m.tmp
find $EBS_DOMAIN_HOME/servers/${SERVERNAME}*/adr/diag/ofm/EBS_domain_*/${SERVERNAME}*/incident -type f -mtime -${HowManyDaysOld} >> m.tmp
done
zip -r mzAppsLogFiles_`hostname`_`date '+%m%d%y'`.zip -@ < m.tmp
rm m.tmp
) 2>&1 | tee mzLogZip.out
#####################################################################################

#### End of script

Article 16

$
0
0
                                                      Forms personalization Listing



Query: 


Purpose/Description:
To get modified profile options.

Personalization is a feature available in 11.5.10.X.
Parameters
None
*//

SELECT
    ffft.user_function_name “User Form Name”
,   ffcr.SEQUENCE
,   ffcr.description
,   ffcr.rule_type
,   ffcr.enabled
,   ffcr.trigger_event
,   ffcr.trigger_object
,   ffcr.condition
,   ffcr.fire_in_enter_query
,   (SELECT user_name
        FROM fnd_user fu
        WHERE fu.user_id = ffcr.created_by) “Created By”
FROM
    fnd_form_custom_rules ffcr
,   fnd_form_functions_vl ffft
WHERE ffcr.ID = ffft.function_id
ORDER BY 1;


//*

Viewing all 1640 articles
Browse latest View live


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>