Channel: Doyensys Allappsdba Blog

OGG-00529 DDL Replication is enabled but table gguser.GGS_DDL_HIST is not found. Please check DDL installation in the database.


Configure GoldenGate DDL Replication

Prerequisite Setup
1. Navigate to the directory where the Oracle GoldenGate software is installed.
2. Connect to the Oracle database as SYSDBA:
   sqlplus sys/password as sysdba
3. For DDL synchronization setup, run the marker_setup.sql script. Provide the
   OGG_USER schema name when prompted.
4. Here OGG_USER is the name of the database user assigned to support the DDL
   replication feature in Oracle GoldenGate.


SQL> @marker_setup

Marker setup script

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:gguser


Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGUSER

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK

Script complete.
SQL>

SQL> @ddl_setup

Oracle GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:gguser

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...

Check complete.

WARNING: Tablespace GGUSER does not have AUTOEXTEND enabled.

Using GGUSER as a Oracle GoldenGate schema name.

Working, please wait ...

DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGUSER

CLEAR_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CREATE_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

TRACE_PUT_LINE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

INITIAL_SETUP STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL IGNORE TABLE
-----------------------------------
OK

DDL IGNORE LOG TABLE
-----------------------------------
OK

DDLAUX  PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLAUX PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SYS.DDLCTXINFO  PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SYS.DDLCTXINFO  PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
-----------------------------------
ENABLED

STAYMETADATA IN TRIGGER
-----------------------------------
OFF

DDL TRIGGER SQL TRACING
-----------------------------------
0

DDL TRIGGER TRACE LEVEL
-----------------------------------
NONE

LOCATION OF DDL TRACE FILE
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/product/12.1.0.2/db_1/rdbms/log/ggs_ddl_trace.log

Analyzing installation status...


VERSION OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401

STATUS OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.

SQL> grant dba to gguser;

Grant succeeded.

SQL> @ddl_enable

Trigger altered.
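With DDL support installed and the trigger enabled, the OGG-00529 complaint about gguser.GGS_DDL_HIST should no longer apply. A quick sanity check (assuming the schema name GGUSER used above):

```sql
-- Confirm the DDL history and marker tables now exist
SELECT table_name
  FROM dba_tables
 WHERE owner = 'GGUSER'
   AND table_name IN ('GGS_DDL_HIST', 'GGS_MARKER');
```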


Please move GGUSER to its own tablespace


 SQL> @ddl_setup.sql

Oracle GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGUSER

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...

Check complete.



declare
*
ERROR at line 1:
ORA-20783:
ORA-20783:
Oracle GoldenGate DDL Replication setup:
*** Please move GGUSER to its own tablespace
ORA-06512: at line 34


Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

SQL>
I checked the default tablespace of GGUSER:


SQL> select username, default_tablespace from dba_users where username='GGUSER';
USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
GGUSER                         GG_TBS

Then I checked whether any other user was using GG_TBS.

 SQL> select USERNAME from dba_users where default_tablespace='GG_TBS';

USERNAME                     
------------------------------
GGUSER
TESTUSER

So I changed the TESTUSER default tablespace to DATA.


SQL> alter user TESTUSER default tablespace DATA;

User altered.

So now no other user is using GG_TBS:

SQL> select USERNAME from dba_users where default_tablespace='GG_TBS';

USERNAME
------------------------------
GGUSER

SQL>

Now I executed @ddl_setup.sql again, and it worked.


SQL> @ddl_setup.sql

Oracle GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGUSER

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...

Check complete.

Using GGUSER as a Oracle GoldenGate schema name.

Working, please wait ...

DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGUSER

CLEAR_TRACE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

CREATE_TRACE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

TRACE_PUT_LINE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

INITIAL_SETUP STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDL IGNORE TABLE
-----------------------------------
OK

DDL IGNORE LOG TABLE
-----------------------------------
OK

DDLAUX  PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDLAUX PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

SYS.DDLCTXINFO  PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

SYS.DDLCTXINFO  PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
------------------------------------------------------------------------------------------------------------------------
ENABLED

STAYMETADATA IN TRIGGER
------------------------------------------------------------------------------------------------------------------------
OFF

DDL TRIGGER SQL TRACING
------------------------------------------------------------------------------------------------------------------------
0

DDL TRIGGER TRACE LEVEL
------------------------------------------------------------------------------------------------------------------------
0

LOCATION OF DDL TRACE FILE
------------------------------------------------------------------------------------------------------------------------
/optware/oracle/diag/rdbms/cxxxnm2s/CXXXNM2S/trace/ggs_ddl_trace.log

Analyzing installation status...


STATUS OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.
SQL>


CRS-2800: Cannot start resource 'ora.ORA_DATA.dg' as it is already in the INTERMEDIATE state on server 

ISSUE:-
SQL> startup;
ORA-39510: CRS error performing start on instance 
CRS-2800: Cannot start resource 'ora.ORA_DATA.dg' as it is already in the INTERMEDIATE state on server 


]# srvctl start database -d testdb

PRCR-1079 : Failed to start resource ora.testdb.db
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'testdb01'
CRS-2632: There are no more servers to try to place resource 'ora.testdb.db' on that would satisfy its placement policy
[oracle@testdb01]$ crsctl stat res -t
ora.testdb.db
ONLINE INTERMEDIATE testdb01 CHECK TIMED OUT
ONLINE ONLINE testdb02

Root Cause:

This may be caused by an intermittent connectivity disturbance.

Solution:

1) If you have root access, you can re-check the resources:

[oracle@Doyen25]$ crsctl check resource ora.testdb.dg

[oracle@Doyen25]$ crsctl stat res -t

ora.testdb.db
ONLINE ONLINE testdb01
ONLINE ONLINE testdb02

[oracle@Doyen25] # srvctl status database -d testdb
Instance testdb1 is running on node testdb01
Instance testdb2 is running on node testdb02

                                                          (OR)
2) 
[oracle@Doyen25] # srvctl remove database -d TESTDB
[oracle@Doyen25] # srvctl add database -d TESTDB 
[oracle@Doyen25] # srvctl status database -d testdb
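Note that option 2 drops and re-registers the database resource. A fuller, hedged sketch of the same sequence (the Oracle Home and spfile paths here are assumptions; substitute your own):

```shell
# Hypothetical paths; adjust ORACLE_HOME and the spfile location to your environment.
srvctl remove database -d TESTDB
srvctl add database -d TESTDB \
    -o /u01/app/oracle/product/12.1.0.2/db_1 \
    -p +DATA/TESTDB/spfileTESTDB.ora
srvctl start database -d TESTDB
srvctl status database -d TESTDB
```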



RMAN duplicate database with multiple directories to assign to BACKUP LOCATION


Applies To


By default, RMAN accepts only one backup location in the DUPLICATE command's 'BACKUP LOCATION' clause.

Often there is not enough space in a single location, so the backup pieces are distributed across several locations, while the 'BACKUP LOCATION' feature is still wanted for the duplicate.

How do you perform the RMAN duplication using the BACKUP LOCATION clause when the RMAN backup pieces are stored on several mount points or directories?


Solution

An enhancement request is already in place: unpublished Bug 12846424 - RMAN DUPLICATE BACKUP LOCATION TO ACCEPT MULTIPLE DIRECTORY PATH LOCATIONS.


Workaround:

Create soft links to the remaining backup locations inside the single directory provided in the 'BACKUP LOCATION' clause.

Please perform the following steps, as shown here.

1. Create softlinks to all of the individual RMAN backup-piece locations required for the RMAN duplicate, as shown in the example here. You may change '/tmp/rmanbkup' to a different directory where you want to create those softlinks.

mkdir /tmp/rmanbkup/
ln -s /u02/oracle/level0_backup_21042018 /tmp/rmanbkup/bkp3
ln -s /u03/backup /tmp/rmanbkup/bkp1
ln -s /u04/backup /tmp/rmanbkup/bkp2



Now use /tmp/rmanbkup/ as your backup location in your restore script.

RMAN>  catalog start with '/tmp/rmanbkup';
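RMAN can then be pointed at the single consolidated directory for the duplicate itself. A hedged sketch of the command (the auxiliary database name is illustrative):

```sql
RMAN> DUPLICATE DATABASE TO dupdb
        BACKUP LOCATION '/tmp/rmanbkup'
        NOFILENAMECHECK;
```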




Done!

Steps for tuning Redo logs and Checkpoints (Contention, Waits, Number/Duration of Checkpoints)

$
0
0
1). Redolog Buffer Contention
-----------------------------
SELECT SUBSTR(name,1,20) "Name",gets,misses,immediate_gets,immediate_misses
  FROM v$latch
 WHERE name in ('redo allocation', 'redo copy');

Name                       GETS     MISSES IMMEDIATE_GETS IMMEDIATE_MISSES
-------------------- ---------- ---------- -------------- ----------------
redo allocation     277'446'780  2'534'627              0                0
redo copy                33'818     27'694    357'613'861          150'511

MISSES/GETS (must be < 1%)

Redo allocation: (2'534'627 / 277'446'780) * 100 = 0.91 %
Redo Copy:       (27'694 / 33'818) * 100 = 81.8 %

(The redo copy latch is normally acquired in immediate mode, so its willing-to-wait
ratio is less meaningful; judge it by the immediate-miss ratio below.)

IMMEDIATE_MISSES/(IMMEDIATE_GETS+IMMEDIATE_MISSES) (must be < 1%)

Redo Copy: 150'511/(150'511+357'613'861) = 0.04 %
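The two ratios can also be computed directly in SQL rather than by hand. A sketch against v$latch:

```sql
-- Miss ratios for the redo latches; both should stay below ~1%
SELECT name,
       ROUND(misses / NULLIF(gets, 0) * 100, 2)                       AS miss_pct,
       ROUND(immediate_misses /
             NULLIF(immediate_gets + immediate_misses, 0) * 100, 2)   AS imm_miss_pct
  FROM v$latch
 WHERE name IN ('redo allocation', 'redo copy');
```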

2). Waits on Redo Log Buffer
----------------------------
SELECT name,value
 FROM v$sysstat
 WHERE name = 'redo log space requests';

The value of 'redo log space requests' reflects the number of times a user
process had to wait for space in the redo log buffer. Optimally the value is
near 0 (according to the Oracle manual).

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo log space requests                                               22641

4). Number of Checkpoints per hour
----------------------------------
set feed off;
set pagesize 10000;
set wrap off;
set linesize 200;
set heading on;
set tab on;
set scan on;
set verify off;
--
spool show_logswitches.lst

ttitle left 'Redolog File Status from V$LOG' skip 2

select group#, sequence#,
       Members, archived, status, first_time
  from v$log;

ttitle left 'Number of Logswitches per Hour' skip 2

select to_char(first_time,'YYYY.MM.DD') day,
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'00',1,0)),'99') "00",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'01',1,0)),'99') "01",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'02',1,0)),'99') "02",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'03',1,0)),'99') "03",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'04',1,0)),'99') "04",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'05',1,0)),'99') "05",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'06',1,0)),'99') "06",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'07',1,0)),'99') "07",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'08',1,0)),'99') "08",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'09',1,0)),'99') "09",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'10',1,0)),'99') "10",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'11',1,0)),'99') "11",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'12',1,0)),'99') "12",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'13',1,0)),'99') "13",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'14',1,0)),'99') "14",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'15',1,0)),'99') "15",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'16',1,0)),'99') "16",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'17',1,0)),'99') "17",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'18',1,0)),'99') "18",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'19',1,0)),'99') "19",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'20',1,0)),'99') "20",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'21',1,0)),'99') "21",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'22',1,0)),'99') "22",
       to_char(sum(decode(substr(to_char(first_time,'DDMMYYYY:HH24:MI'),10,2),'23',1,0)),'99') "23"
  from v$log_history
 group by to_char(first_time,'YYYY.MM.DD')
/
spool off;


DAY   00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
----- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
07/07   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   2   0   0   0   0   0   0   0   0
07/08   0   0   0   0   0   0   0   0   0   0   0   5   0   4   1   0   1   0   0   0   0   0   0   0
07/12   0   0   0   0   0   0   0   0   0   0   1   1   0   1   1   0   0   0   0   0   0   0   0   0
07/13   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   0
07/14   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   1   1   1   0   0
07/15   1   0   0   0   0   0   0   0   0   0   0   0   2   1   0   0   1   2   2   0   0   0   0   0
07/16   0   0  10  10  15  11   5   0   0   0   0   0   2   5   5   4   5   7   6   6   7   4   4   4
07/17   2   2   1   3   4   6   9  10  11  11  12  12  11  11  12  11  11  12  12   9   9  10  12   9
07/18  12   9  10  10   8   8   9  10   9   8   9  10  10  11  10  11  10  10  11  10  11   9  10  10
07/19   9   3   1   1   0   0   4   6   7   7   4   5  11  10   5   4   5   7   6   8   7   5   5   3
07/20   1   1   8  10   7   5   4   5   4   5   7   7   9   7   9   9   7   9  10  11  12  11  12   9
07/21   9  10  10  10  12  10   7   8   9   8   9  10  11  11  11   8  10  10  12   7   6   7   7   7
07/22   8   7   9  10   8   6   7   8   8   8   9   9   9  10   9   9   9   9   9   9  10   7   6   7
07/23   5   5   7   7   7   2   3   3   4   5   6   5   5   4   3   3   4   4   6   6   5   9   8   5
07/24   4   4   5   4   7   6   5   8   8  11  11  11   

log_checkpoint_interval = 900'000'000  (OK, must be greater than the redo log file size)
log_checkpoint_timeout  = 1200 (set it to 0, so time-based checkpoints are disabled)

5). Time needed to write a checkpoint
-------------------------------------
Beginning database checkpoint by background
Mon Aug  2 16:37:36 1999
Thread 1 advanced to log sequence 2860
  Current log# 4 seq# 2860 mem# 0: /data/ota/db1/OTASICAP/redo/redoOTASICAP04.log
Mon Aug  2 16:43:31 1999
Completed database checkpoint by background

==> 6 Minutes

Mon Aug  2 16:45:15 1999
Beginning database checkpoint by background
Mon Aug  2 16:45:15 1999
Thread 1 advanced to log sequence 2861
  Current log# 5 seq# 2861 mem# 0: /data/ota/db1/OTASICAP/redo/redoOTASICAP05.log
Mon Aug  2 16:50:29 1999
Completed database checkpoint by background

==> 5.5 Minutes

Mon Aug  2 16:51:50 1999
Beginning database checkpoint by background
Mon Aug  2 16:51:51 1999
Thread 1 advanced to log sequence 2862
  Current log# 6 seq# 2862 mem# 0: /data/ota/db1/OTASICAP/redo/redoOTASICAP06.log
Mon Aug  2 16:56:44 1999
Completed database checkpoint by background

==> 5.5 Minutes
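The durations flagged above can be computed exactly from the timestamp pairs instead of eyeballed. A small sketch using GNU date, with the timestamps taken from the first alert-log excerpt:

```shell
# Seconds between the 'Beginning' and 'Completed' checkpoint messages (GNU date assumed)
begin=$(date -u -d '1999-08-02 16:37:36' +%s)
end=$(date -u -d '1999-08-02 16:43:31' +%s)
echo "checkpoint took $(( (end - begin) / 60 )) min $(( (end - begin) % 60 )) sec"
# -> checkpoint took 5 min 55 sec
```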

I have faced ORA-01118: cannot add any more database files: limit exceeded.

$
0
0
When the database is created, the db_files parameter in the initialization file is set to a limit. You can shut down the database and raise it, up to the MAXDATAFILES value specified at database creation. The default for MAXDATAFILES is 30. If MAXDATAFILES is set too low, you will have to rebuild the control file to increase it before proceeding. The simplest way is to recreate the control file, changing the 'hard' MAXDATAFILES value.

I have followed below steps:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Then go to the UDUMP destination, pick up the trace file, and modify the value of MAXDATAFILES.

SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
SQL> @<name of edited file>
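After editing, the relevant header of the trace script looks roughly like this. Every value here is illustrative; the real values must come from your own generated trace file:

```sql
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 200    -- raised from the old limit
    MAXINSTANCES 1
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/oradata/ORCL/redo01.log' SIZE 50M,
  ...
DATAFILE
  '/u01/oradata/ORCL/system01.dbf',
  ...
CHARACTER SET WE8ISO8859P1;
```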



ORA-27300, ORA-27301, ORA-27302, ORA-27303


$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Wed  Apr 30 12:35:25 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected.

SQL> startup;
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1 
ORA-27301: OS failure message: Operation not permitted 
ORA-27302: failure occurred at: skgpwinit6 
ORA-27303: additional information: startup egid = 1000 (oinstall), current egid = 1001 (dba)


SQL> shut immediate;
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1 
ORA-27301: OS failure message: Operation not permitted 
ORA-27302: failure occurred at: skgpwinit6 
ORA-27303: additional information: startup egid = 1000 (oinstall), current egid = 1001 (dba)




Alert log last content below:

ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 1000 (oinstall), current egid = 1001 (dba)
2018-04-30T12:40:21.796988-04:00
Process J000 died, see its trace file
2018-04-30T12:40:21.797204-04:00
kkjcre1p: unable to spawn jobq slave process
2018-04-30T12:40:21.797384-04:00


Solution:-

The group ownership of the oracle executable has been changed (note the differing startup egid and current egid in the error). Check the file:

-rwxr-x--x 1 oracle asmadmin 232473728 Apr 30 12:59 /u01/app/oracle/product/12.1.0.2/db_1/bin/oracle

Change file group ownership back to oinstall and restart the database/ASM instance.

-rwxr-x--x 1 oracle oinstall 232473728 Apr 30 12:59  /u01/app/oracle/product/12.1.0.2/db_1/bin/oracle
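Assuming the binary path shown above, the fix can be sketched as follows (run while the instance is down; the path is taken from this listing, but verify it in your environment):

```shell
# Restore group ownership of the oracle binary to oinstall, then verify
chgrp oinstall /u01/app/oracle/product/12.1.0.2/db_1/bin/oracle
ls -l /u01/app/oracle/product/12.1.0.2/db_1/bin/oracle
```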






Environment :

Red Hat JBoss Operations Network (ON)
2.3
2.4
3.1
3.2
3.3

Issue :

Not able to send mail through JON server.

Solution :

The SMTP properties are configured in $JON_SERVER/bin/rhq-server.properties with the rhq.server.email.* properties:

By default, the rhq-server.properties file has the parameters below; change them to match your environment.

# Email settings used to connect to an SMTP server to send alert emails.
rhq.server.email.smtp-host=<SMTP Server Address>
rhq.server.email.smtp-port=<SMTP Server Port#>     <-- Default Port# 25
rhq.server.email.from-address=rhqadmin@localhost
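For example, a filled-in set of values (the hostname and from-address here are placeholders, not real settings):

```
# Email settings used to connect to an SMTP server to send alert emails.
rhq.server.email.smtp-host=smtp.example.com
rhq.server.email.smtp-port=25
rhq.server.email.from-address=jon-alerts@example.com
```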

After configuring the properties the JBoss ON Server must be restarted.

For JBoss ON versions prior to 3.2, the document "How can I confirm my server's email/SMTP settings are correct?" describes how to test the email settings.

For JBoss ON 3.2 and 3.3, to confirm the server's email/SMTP settings are correct, use the link below:

http://<JON-IP/hostname>:7080/coregui/#Test/ServerAccess/EmailTest

Note: 550 is the SMTP reply code for "mailbox unavailable".

Diagnostic Steps

If it still does not work, enable DEBUG logging for org.jboss.as.mail for more verbose output.


ORA-15260 While Creating an ASM Diskgroup in 11gR2

APPLIES TO :

Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.4 [Release 11.2]

SYMPTOMS :

Running an ASM command and getting ORA-15260

SQL> conn / as sysdba 
Connected.
SQL> create diskgroup dg5 external redundancy disk '/dev/sda10','/dev/sda11';
create diskgroup dg5 external redundancy disk '/dev/sda10','/dev/sda11'
*
ERROR at line 1:
ORA-15260: permission denied on ASM disk group

SQL> alter diskgroup data rebalance power 11; 
alter diskgroup data rebalance power 11
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15260: permission denied on ASM disk group

CAUSE :

We logged in with the SYSDBA privilege, which is not allowed for ASM administration operations.

SOLUTION :

Log in with the SYSASM privilege for ASM operations:

SQL> conn / as sysasm 
Connected.


SQL> create diskgroup dg5 external redundancy disk '/dev/sda10','/dev/sda11'; 
Diskgroup created.



Environment :
Red Hat JBoss Operations Network (ON) 3.3

Step 1 :

Download the plug-in JAR files from the Customer Support Portal.

In the Customer Support Portal, click Software, and then select the JBoss ON for Plug-in drop-down box.

Step 2 :

Download the plug-in packs.

Step 3 :

Unzip the additional plug-in packs. This creates a subdirectory with the name jon-plugin-pack-plugin_name-3.3.0.GA1.

Step 4 :

List the current contents of the JBoss ON server plug-in directory.

For example:
[root@server rhq-agent]# ls -l  serverRoot/jon-server-3.3.0.GA1/jbossas/server/default/deploy/rhq.ear/rhq-downloads/rhq-plugins

Step 5 :

Stop the JBoss ON server.

serverRoot/jon-server-3.3.0.GA1/bin/rhqctl stop

Step 6 :

Copy the new plug-ins from the jon-plugin-pack-plugin_name-3.3.0.GA1/ directory to the JBoss ON server plug-in directory.

[root@server rhq-agent]# cp /opt/jon/jon-server-3.3.0.GA1/jon-plugin-pack-plugin_name-3.3.0.GA1/* serverRoot/jon-server-3.3.0.GA1/jbossas/server/default/deploy/rhq.ear/rhq-downloads/rhq-plugins

Step 7 :

Start the JBoss ON server again.

serverRoot/jon-server-3.3.0.GA1/bin/rhqctl start

Step 8 :

Have the agents reload their plug-ins to load the new plug-ins. This can be done from the command line using the agent's plugins command:

> plugins update

This can also be done in the JBoss ON GUI by scheduling an update plugins operation for an agent or a group of agents.


RMAN-05541: no archived logs found in target database

SYMPTOMS :

Getting RMAN-05541 error when duplicating database from a consistent (cold) RMAN backup

Error Message:

RMAN-00571: ============================================

RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========

RMAN-00571: ============================================

RMAN-03002: failure of Duplicate Db command at 07/08/2014 18:12:40

RMAN-05501: aborting duplication of target database

RMAN-05541: no archived logs found in target database

SOLUTION :

The error should be resolved by using the NOREDO clause in the DUPLICATE command.

run
{
  set until time "to_date('08-OCT-2014 10:15:00','DD-MON-YYYY HH24:MI:SS')";
  DUPLICATE DATABASE TO <target_db_name> BACKUP LOCATION '/backup_Location' NOREDO;
}



The rhq-server.sh script can be managed by the init process so that the server starts automatically when the system boots. This also allows the server process to be managed by tools like service and chkconfig.

Environment :

Red Hat JBoss Operations Network (ON) 3.3

Step 1 :

Copy the rhq-server.sh script into the /etc/init.d/ directory.

cp serverRoot/bin/rhq-server.sh /etc/init.d/

Step 2 :

Edit the /etc/init.d/rhq-server.sh script to set the RHQ_SERVER_HOME variable to the JBoss ON server install directory and the RHQ_SERVER_JAVA_HOME variable to the appropriate directory for the JVM.

For example:

RHQ_SERVER_HOME=serverRoot/jon-server-3.3.0.GA
RHQ_SERVER_JAVA_HOME=/usr/

Step 3 :

Edit the /etc/init.d/rhq-server.sh script, and add the following lines to the top of the file, directly under #!/bin/sh

#!/bin/sh
#chkconfig: 2345 95 20  
#description: JBoss Operations Network Server
#processname: run.sh

The last two numbers in the #chkconfig: 2345 95 20 line specify the start and stop priority, respectively, for the JBoss ON server

Step 4 :

Add the service to the chkconfig service management command, and verify that it was added properly

chkconfig --add rhq-server.sh
chkconfig rhq-server.sh --list

Step 5 :

Set the rhq-server.sh service to run at run level 5

chkconfig --level 5 rhq-server.sh on


Find which SQL is consuming the most CPU between two points in time.


set lines 500;
set pages 500;
set long 1000000;
SELECT X.SQL_ID, X.CPU_TIME, X.EXECUTIONS, T.SQL_TEXT
FROM
   DBA_HIST_SQLTEXT T,
   (
      SELECT
         S.SQL_ID SQL_ID,
         SUM(S.CPU_TIME_DELTA/1000000) CPU_TIME,
         SUM(S.EXECUTIONS_DELTA) EXECUTIONS
      FROM
         DBA_HIST_SQLSTAT S,
         DBA_HIST_SNAPSHOT P
      WHERE
         S.SNAP_ID = P.SNAP_ID AND
         P.BEGIN_INTERVAL_TIME >= TO_DATE('&beginTime', 'MM/DD/YYYY HH24:MI') AND
         P.END_INTERVAL_TIME <= TO_DATE('&endTime', 'MM/DD/YYYY HH24:MI')
      GROUP BY S.SQL_ID
   ) X
WHERE T.SQL_ID = X.SQL_ID
ORDER BY X.CPU_TIME DESC;

Create SQL plan baselines (export from one database and import into another):

connect as user1/@lan


BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'stage1');
END;
/



var x number;
begin
  :x := DBMS_SPM.PACK_STGTAB_BASELINE('STAGE1', 'user1');
end;
/



expdp tables=user1.stage1 directory=DATA_PUMP dumpfile=baseline_lan.dmp logfile=baseline_lan.log



On the lan2 database:

Resize the USERS tablespace (or whatever the default tablespace of the import user is) to at least 8 GB.



impdp tables=user1.stage1 directory=DATA_PUMP dumpfile=baseline_lan.dmp logfile=baseline_lan_imp.log



connect as user1/@lan2



CREATE INDEX user1.wcc1 ON user1.STAGE1 (SQL_HANDLE, PLAN_ID,category,obj_type) tablespace USERS nologging;



var x number;
begin
  :x := DBMS_SPM.UNPACK_STGTAB_BASELINE('STAGE1', 'user1');
end;
/
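After unpacking, it is worth confirming that the baselines actually landed in the target database. A minimal check against the standard DBA_SQL_PLAN_BASELINES dictionary view (column list trimmed for readability):

```sql
-- Verify the imported baselines are present, enabled, and accepted
SELECT sql_handle, plan_name, enabled, accepted, origin
  FROM dba_sql_plan_baselines
 ORDER BY created DESC;
```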

Oracle HTTP Server Fails to Start With "setgid: unable to set group id to Group" Error 


Error:


While configuring Oracle HTTP Server to use port 443, or any port lower than 1024, on UNIX systems, OHS fails to start with the following error:


[2011-09-19T15:31:47.3052-07:00] [OHS] [INCIDENT_ERROR:20] [OHS-9999] [core.c] [host_id: hostname] [host_addr: XXX.XXX.XXX.XXX] [pid: 15306] [tid: 1] [user: root] [VirtualHost: main] (22)Invalid argument: setgid: unable to set group id to Group 4294967295

[2011-09-19T15:31:47.3292-07:00] [OHS] [INCIDENT_ERROR:20] [OHS-9999] [core.c] [host_id: hostname] [host_addr: XXX.XXX.XXX.XXX] [pid: 15297] [tid: 1] [user: root] [VirtualHost: main] Child 15306 returned a Fatal error... Apache is exiting!

Cannot bind to port 443 for SSL on Solaris 10

Solution: 

On some systems (this was seen on Solaris), the Group directive is not set to a real group. This is how the User/Group directives appear after a fresh installation, which is incorrect:


User oracle
#Group GROUP_TEMPLATE


This is how they should be set in order to fix the "setgid: unable to set group id to Group" error:


User oracle
Group oinstall


Important: Ensure the User and Group settings match how the software was installed on your system; the above is only an example. You may optionally set the User directive to another OS user, but that user must be a member of the Oracle install group.


Location $FMW_HOME/webtier/instances/EBS_web_TWO_TASK_OHS1/config/OHS/EBS_web_TWO_TASK/httpd.conf
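Before editing httpd.conf, the correct group can be confirmed from the OS. A small sketch, assuming the install user is oracle as in the example above:

```shell
# List the groups the install user belongs to;
# the Group directive must name one of these (typically oinstall)
id -Gn oracle
```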




R12.2 ADOP Cutover Fails


Error:

The adop patching cycle fails during the cutover phase with the following error:

Oracle HTTP Server is configured to run on a privilege port
Checking permission of /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl
File permission of /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl is NOT OK
Oracle HTTP Server will not start properly

Kindly log in as root and run below commands

        chown root /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl

        chmod 6750 /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl

Solution :

Log in as root and run the following commands:

chown root /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl
chmod 6750 /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl


We might assume that changing the owner and permissions of the file on the run file system is enough, but the tricky part is that the permissions must be at the same level on the PATCH file system as well.

If they are not, adop will fail; we usually do not pay attention to the patch file system. Check the permissions and, if required, apply the same change there too:

chown root /TEST/apps/TEST/fs2/FMW_Home/webtier/ohs/bin/.apachectl
chmod 6750 /TEST/apps/TEST/fs2/FMW_Home/webtier/ohs/bin/.apachectl

Now re-run adop phase=cutover; it completes successfully.
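Before re-running cutover, the expected result can be verified on both file systems (paths as in the example above; the setuid/setgid bits from chmod 6750 show as rws and r-s):

```shell
# Both copies should be owned by root with mode 6750 (-rwsr-s---)
ls -l /TEST/apps/TEST/fs1/FMW_Home/webtier/ohs/bin/.apachectl
ls -l /TEST/apps/TEST/fs2/FMW_Home/webtier/ohs/bin/.apachectl
```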




adopmnctl.sh Fails to Start Oracle Process Manager (OPMN)

This issue is seen when enabling SSL in the E-Business Suite.


Error:

In line 4 of /TEST/apps/TEST/fs1/FMW_Home/webtier/instances/EBS_web_TEST_OHS1/config/OPMN/opmn/opmn.xml:
LSX-00201: contents of "notification-server" should be elements only
  LSX-00213: only 0 occurrences of particle "sequence", minimum is 1
[2018-05-21T15:28:20-04:00] [opmn] [ERROR:1] [110] [internal] XML schema validation failed: error 213.
opmnctl start: failed.


05/21/18-15:28:20 :: adopmnctl.sh: exiting with status 3

Solution:

In opmn.xml, change the ssl-versions value from:

<ssl enabled="true" wallet-file="/TEST/apps/TEST/fs1/FMW_Home/webtier/instances/EBS_web_TEST_OHS1/config/OPMN/opmn/wallet"  ssl-versions="TLSv1.0,TLSv1.2,TLSv2.0,"/>

to


<ssl enabled="true" wallet-file="/TEST/apps/TEST/fs1/FMW_Home/webtier/instances/EBS_web_TEST_OHS1/config/OPMN/opmn/wallet"  ssl-versions="TLSv1.0"/>


Then start the OPMN services; they will now start.
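The restart can be sketched with the standard OPMN control utility (a minimal sequence, assuming the web tier environment is sourced so opmnctl points at this instance):

```shell
# Stop OPMN-managed processes, restart them, and confirm status
opmnctl stopall
opmnctl startall
opmnctl status
```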


Adop Phase Cleanup Issue

This issue is caused when the cleanup phase is performed and the node tables have been populated with the standby hostname, which makes the cleanup operation fail. The steps below re-populate the node tables.

SOLUTION: 
Due to the method required for "cleaning out" / "re-synchronizing" the following tables, it is EXPECTED / REQUIRED that the Applications have been shutdown.

The only thing running should be the Database Tier. 

Note:

A full backup should also be taken before any testing begins.


1. Backup the fnd_oam_context_files, fnd_nodes, and adop_valid_nodes tables in the EBS env:
sqlplus applsys/pwd

create table fnd_oam_context_files_bkp as select * from fnd_oam_context_files;
create table fnd_nodes_bk as select * from fnd_nodes;
create table adop_valid_nodes_bk as select * from adop_valid_nodes;

2. Truncate the following tables:

truncate table fnd_oam_context_files;

truncate table fnd_nodes;

truncate table adop_valid_nodes;


3. Run AutoConfig on the DB tier
Confirm Autoconfig completes successfully

4. Run Autoconfig on the run file system. (APPS Tier)
Confirm Autoconfig completes successfully

5. Run Autoconfig on the patch file system (Apps Tier)



Before running AutoConfig on the patch file system, the ebs_logon trigger MUST be disabled.
After AutoConfig completes successfully, the ebs_logon trigger MUST be re-enabled.

This needs to be done as the SYSTEM schema user.

a. Disable the ebs_logon trigger using the following SQL:

SQL> alter trigger ebs_logon disable;

Now run AutoConfig with the patch environment sourced.
Make sure AutoConfig completes successfully.


b. Re-enable the ebs_logon trigger using the following SQL:

SQL> alter trigger ebs_logon enable;


6. After Autoconfig has been run successfully on all nodes, run the following two (2) queries in order to verify the tables have been correctly populated:

SQL>

set pagesize 5
set linesize 132
col node_name format a15
col server_id format a8
col server_address format a15
col platform_code format a4
col webhost format a12
col domain format a20
col virtual_ip format a12

select node_id, platform_code, support_db D, support_cp C, support_admin A,
support_forms F, support_web W, node_name, server_id,
server_address, domain, webhost, virtual_ip, status
from fnd_nodes
order by node_id;

SQL>

set pagesize 5
set linesize 132

col NAME format A20
col VERSION format A12
col PATH format A110
col STATUS format A10

select NAME,VERSION,PATH, STATUS
from FND_OAM_CONTEXT_FILES;


Once complete, retest adop phase=cleanup.




Query to locate the current Workflow Mailer Service log files.



Query:

set linesize 155;
set pagesize 200;
set verify off;
column MANAGER format a15;
column MEANING format a15;
SELECT concurrent_queue_name manager, fcp.last_update_date, fcp.concurrent_process_id pid, meaning, fcp.logfile_name
FROM fnd_concurrent_queues fcq, fnd_concurrent_processes fcp, fnd_lookups flkup
WHERE concurrent_queue_name in ('WFMLRSVC')
AND fcq.concurrent_queue_id = fcp.concurrent_queue_id
AND fcq.application_id = fcp.queue_application_id
AND flkup.lookup_code=fcp.process_status_code
AND lookup_type ='CP_PROCESS_STATUS_CODE'

AND meaning='Active'; 

SQL to report archived log generation on an hourly basis:


SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
            "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES'
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY') desc;
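A companion query that reports total archive volume per day rather than log counts. BLOCKS and BLOCK_SIZE are standard V$ARCHIVED_LOG columns, so this should run on the same versions as the query above:

```sql
-- Daily archived log volume in GB, most recent day first
SELECT TO_CHAR (completion_time, 'DD/MM/YYYY') day,
       ROUND (SUM (blocks * block_size) / 1024 / 1024 / 1024, 2) gb,
       COUNT (*) total_logs
  FROM v$archived_log
 WHERE archived = 'YES'
 GROUP BY TO_CHAR (completion_time, 'DD/MM/YYYY')
 ORDER BY TO_DATE (TO_CHAR (completion_time, 'DD/MM/YYYY'), 'DD/MM/YYYY') DESC;
```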