Doyensys Allappsdba Blog

Article 3


Log files related to relink, Network and OUI inventory logs for R12.1.3

 1) Database Tier

1.1) Relink Log files :
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/make_$MMDDHHMM.log
1.2) Alert Log Files :
$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log
1.3) Network Logs :
$ORACLE_HOME/network/admin/$SID.log
1.4) OUI Logs :

OUI Inventory Logs :
$ORACLE_HOME/admin/oui/$CONTEXT_NAME/oraInventory/logs

2) Application Tier

$ORACLE_HOME/j2ee/DevSuite/log
$ORACLE_HOME/opmn/logs
$ORACLE_HOME/network/logs

Tech Stack Patch 10.1.3 (Web/HTTP Server)
$IAS_ORACLE_HOME/j2ee/forms/logs
$IAS_ORACLE_HOME/j2ee/oafm/logs
$IAS_ORACLE_HOME/j2ee/oacore/logs
$IAS_ORACLE_HOME/opmn/logs
$IAS_ORACLE_HOME/network/log
$INST_TOP/logs/ora/10.1.2
$INST_TOP/logs/ora/10.1.3
$INST_TOP/logs/appl/conc/log
$INST_TOP/logs/appl/admin/log
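
A quick way to check several of the locations above for recent problems is a small shell pass over them. This is only a sketch and assumes the usual database/application environment files have been sourced so that $ORACLE_HOME, $ORACLE_SID, $CONTEXT_NAME and $INST_TOP are set:

# Sketch: scan a few of the log locations above for recent ORA- errors
tail -200 $ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$ORACLE_SID.log | grep ORA-
for d in $INST_TOP/logs/ora/10.1.2 $INST_TOP/logs/ora/10.1.3 $INST_TOP/logs/appl/admin/log
do
  echo "== Newest files in $d =="
  ls -lrt $d | tail -5
done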

Article 2


Instance startup and configuration log files located under INST_TOP in Oracle Release 12.1.3


$INST_TOP/logs/appl/admin/log

Startup/Shutdown error message related to tech stack (10.1.2, 10.1.3 forms/reports/web)


$INST_TOP/logs/ora/ (10.1.2 & 10.1.3)


$INST_TOP/logs/ora/10.1.3/Apache/error_log[timestamp](Apache log files)


$INST_TOP/logs/ora/10.1.3/opmn/ (OC4J, oa*, opmn.log)


$INST_TOP/logs/ora/10.1.2/network/ (listener log)


$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)

Article 1

Oracle Goldengate GGSCI Commands 

INFO
INFO MANAGER                      - Provides details of the Manager process
INFO MGR                          - Also provides details of the Manager process
STATUS MANAGER                    - Also displays the status of the Manager process
REFRESH
REFRESH MANAGER                   - Reloads the Manager parameter file
REFRESH MGR                       - Reloads the Manager parameter file
SEND
SEND MANAGER CHILDSTATUS          - Displays the status of processes started by Manager
SEND MANAGER CHILDSTATUS DEBUG    - Returns the port numbers allocated by the Manager
SEND MANAGER GETPORTINFO          - Displays the list of ports currently allocated by the Manager process
SEND MANAGER GETPORTINFO DETAIL   - Provides info on ports and the processes assigned to them
SEND MANAGER GETPURGEOLDEXTRACTS  - Retrieves trail purge retention info
START
START MANAGER                     - Starts the Manager process
START MGR                         - Starts the Manager process
STOP
STOP MGR                          - Stops the Manager process
STOP MANAGER !                    - Stops Manager without asking for user confirmation
STOP MGR !                        - Stops Manager without asking for user confirmation
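
These Manager commands can also be run non-interactively by feeding GGSCI a here-document from the OS shell, which is handy in monitoring scripts. A minimal sketch, assuming the GoldenGate home is /oracle/goldengate (adjust to your install):

# Run a batch of GGSCI Manager commands from the shell
cd /oracle/goldengate
./ggsci <<EOF
INFO MANAGER
SEND MANAGER CHILDSTATUS
SEND MANAGER GETPORTINFO DETAIL
EOF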

Article 0

Oracle Goldengate Trail file commands

ADD EXTTRAIL    - Creates the local trail file for an extract process on the local system
ADD EXTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE, EXTRACT e_src, MEGABYTES 100   - Creates an EXTTRAIL with the prefix "SE" and a file size of 100 MB
ADD EXTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE000009                           - Creates the EXTTRAIL with a specific sequence number
ADD RMTTRAIL    - Creates the remote trail files for the extract or pump processes on remote systems
ADD RMTTRAIL /ORACLE/GOLDENGATE/DIRDAT/TE, EXTRACT p_src, MEGABYTES 100   - Creates an RMTTRAIL with the prefix "TE" and a file size of 100 MB
ADD RMTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE000009                           - Creates the RMTTRAIL with a specific sequence number
ALTER EXTTRAIL  - Changes the options of an existing EXTTRAIL file for an extract process on the local system
ALTER EXTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE, EXTRACT e_src, MEGABYTES 50
ALTER RMTTRAIL  - Changes the options of an existing RMTTRAIL file of the extract or pump processes on remote systems
ALTER RMTTRAIL /ORACLE/GOLDENGATE/DIRDAT/TE, EXTRACT p_src, MEGABYTES 50
DELETE EXTTRAIL - Deletes the exttrail assigned to the extract on the local system by removing its references from the checkpoint file
DELETE EXTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE
DELETE RMTTRAIL - Deletes the remote trail for the extract or pump on the remote system by removing its references from the checkpoint file
DELETE RMTTRAIL /ORACLE/GOLDENGATE/DIRDAT/TE
INFO EXTTRAIL   - Displays info about a local trail, such as name, associated extract, RBA and file size
INFO EXTTRAIL /ORACLE/GOLDENGATE/DIRDAT/SE   - Displays info for a specific exttrail
INFO EXTTRAIL *                              - Displays info for all exttrails
INFO RMTTRAIL   - Displays info about a remote trail, such as name, associated extract, RBA and file size
INFO RMTTRAIL /ORACLE/GOLDENGATE/DIRDAT/TE   - Displays info for a specific rmttrail
INFO RMTTRAIL *                              - Displays info for all rmttrails

Article 1

HOW TO RECOVER THE PASSWORD IN WEBLOGIC SERVER

Step 1:

Run setWlstEnv.sh for setting up the environment variables.
Ex:-
. /u01/Middleware/oracle_common/common/bin/setWlstEnv.sh

Step 2:
WebLogic password recovery command:
[oracle@localhost bin]$ /opt/installations/tools/jdk1.7.0_55/bin/java weblogic.WLST decryptpassword.py /opt/ntdomain/domains/NT {AES}68+XWFqzaQdP5DmEgmkJZWnRWtIvjBd7v+y6h49tCd0\=
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
========================================
          Decrypted Password:p0o9i8u7
========================================
Step 3:
WebLogic username recovery command:
[oracle@localhost bin]$ /opt/installations/tools/jdk1.7.0_55/bin/java weblogic.WLST decryptpassword.py /opt/ntdomain/domains/NT {AES}WsnwdqROocsh6D1YOclnc1ySRyzheBNtZD2AGLnjIFM\=
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
========================================
          Decrypted Password:weblogic
========================================

decryptpassword.py:
import os
import sys
import weblogic.security.internal.SerializedSystemIni
import weblogic.security.internal.encryption.ClearOrEncryptedService

def decryptString(domainPath, encryptedString):
    # Obtain the domain's encryption service from SerializedSystemIni.dat and decrypt
    es = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domainPath)
    ces = weblogic.security.internal.encryption.ClearOrEncryptedService(es)
    decryptedString = ces.decrypt(encryptedString)
    print "=" * 70
    print " " * 10 + "Decrypted Password:" + decryptedString
    print "=" * 70

try:
    os.system('clear')
    if len(sys.argv) == 3:
        decryptString(sys.argv[1], sys.argv[2])
    else:
        print "=" * 70
        print "INVALID ARGUMENTS"
        print "Usage: java weblogic.WLST %s <domain_path> <encrypted_string>" % sys.argv[0]
        print "example.:"
        print "   java weblogic.WLST %s /oracle/fmwhome/user_projects/domains/NT/ {AES}68+XWFqzaQdP5DmEgmkJZWnRWtIvjBd7v+y6h49tCd0\=" % sys.argv[0]
        print "=" * 70
except:
    print "Unexpected error: ", sys.exc_info()[0]
    dumpStack()
    raise


If you get an error like "Exception in thread "Main Thread" java.lang.NoClassDefFoundError: weblogic/WLST",
you need to run the command from the domain bin directory after sourcing the environment: . ./setDomainEnv.sh
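
Putting the workaround together, a minimal usage sketch run from the domain bin directory (the script location /home/oracle/decryptpassword.py is illustrative; the domain path and encrypted string are the examples used above):

cd /opt/ntdomain/domains/NT/bin
. ./setDomainEnv.sh     # puts weblogic.jar on the CLASSPATH so weblogic.WLST resolves
java weblogic.WLST /home/oracle/decryptpassword.py /opt/ntdomain/domains/NT '{AES}68+XWFqzaQdP5DmEgmkJZWnRWtIvjBd7v+y6h49tCd0='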

Article 0

STEPS FOR RAC SWITCHOVER - PHYSICAL STANDBY

PRE SWITCHOVER  CHECKS:

1) Ensure the LOG_ARCHIVE_CONFIG and DG_CONFIG parameters are set on the primary database.

2) Verify that the physical standby database is performing properly.

3)On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log

SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
118      27-MAY-13 27-MAY-13
119      27-MAY-13 28-MAY-13
120      28-MAY-13 28-MAY-13

Do a log switch: alter system switch logfile;

System altered.

Then verify that redo is received on the standby and applied:

SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# APPLIED
---------- ---------
 119        YES
 120        NO
 120        YES
 121        NO
 121        YES
 122        NO
 122        YES
 123        NO
 123        NO

Then verify that managed recovery is running on the standby:

 SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

4) Verify that recovery is running with the "REAL TIME APPLY" option:

SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=2;

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

If managed standby recovery is not running, or was not started with real-time apply,
restart managed recovery with real-time apply enabled using the commands below:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

5) Verify there are no large gaps. Identify the current sequence number for each thread on the primary database:
               
SELECT THREAD#, SEQUENCE# FROM V$THREAD;

Verify the target physical standby database has applied up to, but not including, the logs from
the primary query. On the standby, the result of the following query should be within 1 or 2 of the
primary query result.

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE APPLIED = 'YES'
AND RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;

6)Then Verify Primary and Standby tempfiles match and all datafiles are ONLINE
For each temporary tablespace on the standby, verify that temporary files associated
with that tablespace on the primary database also exist on the standby database.
Tempfiles added after initial standby creation are not propagated to the standby.
Run this query on both the primary and target physical standby databases and verify that they match.

SQL> SELECT TMP.NAME FILENAME, BYTES, TS.NAME TABLESPACE
FROM V$TEMPFILE TMP, V$TABLESPACE TS WHERE TMP.TS#=TS.TS#;

If the queries do not match, you can correct the mismatch now or immediately after the
open of the new primary database. Prior to switchover, on the target standby, verify that all datafiles necessary for updates after
the role transition to primary are ONLINE.

On the target standby:

SQL> SELECT NAME FROM V$DATAFILE WHERE STATUS='OFFLINE';

If there are any OFFLINE datafiles, and these are needed after switchover, bring them ONLINE:

SQL> ALTER DATABASE DATAFILE 'datafile-name' ONLINE;

7) Check if any jobs are running on the primary (production) database:

SELECT * FROM DBA_JOBS_RUNNING;

SQL> SELECT OWNER, JOB_NAME, START_DATE, END_DATE, ENABLED FROM
DBA_SCHEDULER_JOBS WHERE ENABLED='TRUE' AND OWNER <> 'SYS';

SQL> SHOW PARAMETER job_queue_processes

Note: Job candidates to be disabled, among others: Oracle Text sync and optimizer jobs, RMAN
backups, application garbage collectors, application background agents.
Block further job submission:

SQL> ALTER SYSTEM SET job_queue_processes=0 SCOPE=BOTH SID='*';

Disable any jobs that may interfere.      

SQL> EXECUTE DBMS_SCHEDULER.DISABLE('<owner>.<job_name>');

-----------------------------------------------------------------------------------------------------------------

 SWITCHOVER STEPS:

1)Verify that the primary database can be switched to the standby role
Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
 -----------------
   TO STANDBY
 
The result of the query should be as shown above.
A value of TO STANDBY or SESSIONS ACTIVE (which requires the WITH SESSION SHUTDOWN clause on the switchover command)
indicates that the primary database can be switched to the standby role. If neither of these values is returned,
a switchover is not possible because redo transport is either misconfigured or not functioning properly.

2) If the primary is a RAC database, shut down all but one of the primary instances.

Shut down all the primary instances and start only one instance:

srvctl stop database -d databasename

Start only one instance, effectively running the RAC database as a stand-alone instance (see the srvctl sketch below).

select * from v$active_instances;    (should show only 1 instance up and running)
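
A minimal srvctl sketch for this step; the database and instance names are placeholders, so substitute your own:

srvctl stop database -d <db_unique_name>
srvctl start instance -d <db_unique_name> -i <instance1_name> -o open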

Switch over the primary to the standby role:

 SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN;

If an ORA-16139 error is encountered, you can proceed as long as V$DATABASE.DATABASE_ROLE='PHYSICAL STANDBY'.
A common case where this can occur is when there are a large number of
data files. Once managed recovery is started on the new standby, the database will recover.

 NOTE: If the role was not changed then you need to cancel the switchover and review the alert logs and trace files further.


3) Verify the standby has received the end-of-redo (EOR) logs.
In the primary alert log you will see messages like these:

Switchover: Primary controlfile converted to standby controlfile succesfully.
       Tue Mar 15 16:12:15 2011
 MRP0 started with pid=17, OS id=2717
MRP0: Background Managed Standby Recovery process started (SFO)
 Serial Media Recovery started
 Managed Standby Recovery not using Real Time Apply
 Online logfile pre-clearing operation disabled by switchover
 Media Recovery Log /u01/app/flash_recovery_area/SFO/archivelog/2011_03_15/o1_mf_1_133_6qzl0yvd_.arc
 Identified End-Of-Redo for thread 1 sequence 133
       Resetting standby activation ID 0 (0x0)
 Media Recovery End-Of-Redo indicator encountered
       Media Recovery Applied until change 4314801
       MRP0: Media Recovery Complete: End-Of-REDO (SFO)
       MRP0: Background Media Recovery process shutdown (SFO)
       Tue Mar 15 16:12:21 2011
 Switchover: Complete - Database shutdown required (SFO)
 Completed: ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN

And correspondingly in the standby alert log file you should see messages like these:
                Tue Mar 15 16:12:15 2011
                RFS[8]: Assigned to RFS process 2715
                RFS[8]: Identified database type as 'physical standby': Client is Foreground pid 2568
                Media Recovery Log /u01/app/flash_recovery_area/NYC/archivelog/2011_03_15/o1_mf_1_133_6qzl0yjp_.arc
                Identified End-Of-Redo for thread 1 sequence 133
                Resetting standby activation ID 2680651518 (0x9fc77efe)
 Media Recovery End-Of-Redo indicator encountered
                Media Recovery Continuing
                Resetting standby activation ID 2680651518 (0x9fc77efe)
Media Recovery Waiting for thread 1 sequence 134

4)Verify that the standby database can be switched to the primary role

SELECT SWITCHOVER_STATUS FROM V$DATABASE;
       SWITCHOVER_STATUS
       -----------------
       TO PRIMARY

5) Switch over the standby database to the primary role:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
 
6)In the standby alert log file you should see messages like these:


Tue Mar 15 16:16:44 2011
                ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN
                ALTER DATABASE SWITCHOVER TO PRIMARY (NYC)
 Maximum wait for role transition is 15 minutes.
 Switchover: Media recovery is still active
                Role Change: Canceling MRP - no more redo to apply
                Tue Mar 15 16:16:45 2011
                MRP0: Background Media Recovery cancelled with status 16037
                Errors in file /u01/app/diag/rdbms/nyc/NYC/trace/NYC_pr00_2467.trc:
                ORA-16037: user requested cancel of managed recovery operation
 Managed Standby Recovery not using Real Time Apply
                Recovery interrupted!
 Waiting for MRP0 pid 2460 to terminate
 Errors in file /u01/app/diag/rdbms/nyc/NYC/trace/NYC_pr00_2467.trc:
 ORA-16037: user requested cancel of managed recovery operation
                Tue Mar 15 16:16:45 2011
 MRP0: Background Media Recovery process shutdown (NYC)
 Role Change: Canceled MRP

Open the new primary database (for a RAC setup, also open the database on the second node):

  ALTER DATABASE OPEN;    (or a full SHUTDOWN and STARTUP)

7) Correct any tempfile mismatch
If there was a tempfile that was not corrected during the pre-switchover check, then correct it now on the new primary.


8) Restart the new standby
If the new standby database (the former primary) was not shut down since switching it to the standby role,
bring it to the mount state and start managed recovery. This can be done in parallel with opening the new primary.

 SQL> STARTUP MOUNT;

 SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

 Note: If you were using a delay for your standby then you would restart the apply without real time apply:

 SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Finally, if the database is a RAC, start all secondary instances on both the new standby and the new primary.

ONCE THE SWITCHOVER IS SUCCESSFUL, RESTORE ANY JOBS TAKEN DOWN AND DISABLE TRACE IF ENABLED.
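
A minimal sketch for restoring job processing on the new primary; the job_queue_processes value and the job name are placeholders, so use the values recorded before the switchover:

sqlplus -s "/ as sysdba" <<EOF
ALTER SYSTEM SET job_queue_processes=10 SCOPE=BOTH SID='*';
EXEC DBMS_SCHEDULER.ENABLE('<owner>.<job_name>');
EOF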
  

Article 8


Recover Datafile from Hot Backup
==========================


[oracle@test ~]$ cd /u01/app/oracle/oradata/proddb/

[oracle@test proddb]$ ls -l
total 1035664
-rw-r----- 1 oracle oinstall   7061504 Jul 27 10:00 control01.ctl
-rw-r----- 1 oracle oinstall   7061504 Jul 27 10:00 control02.ctl
-rw-r----- 1 oracle oinstall   7061504 Jul 27 10:00 control03.ctl
-rw-r----- 1 oracle oinstall 104865792 Jul 27 09:56 example01.dbf
-rw-r----- 1 oracle oinstall  52429312 Jul 27 09:59 redo01.log
-rw-r----- 1 oracle oinstall  52429312 Jul 27 09:49 redo02.log
-rw-r----- 1 oracle oinstall  52429312 Jul 27 09:49 redo03.log
-rw-r----- 1 oracle oinstall 241180672 Jul 27 09:56 sysaux01.dbf
-rw-r----- 1 oracle oinstall 503324672 Jul 27 09:56 system01.dbf
-rw-r----- 1 oracle oinstall  20979712 Jul 25 15:11 temp01.dbf
-rw-r----- 1 oracle oinstall  26222592 Jul 27 09:56 undotbs01.dbf
-rw-r----- 1 oracle oinstall   5251072 Jul 27 09:56 users01.dbf


SQL> startup

SQL> alter database begin backup;

Database altered.

Let's take a copy (backup) of the datafiles.

[oracle@test proddb]$ cp *.dbf /u01/coldbkp/
[oracle@test proddb]$

Now we can check which files are in backup state:

SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 ACTIVE                 485758 06-JAN-16
         2 ACTIVE                 485758 06-JAN-16
         3 ACTIVE                 485758 06-JAN-16
         4 ACTIVE                 485758 06-JAN-16
         5 ACTIVE                 485758 06-JAN-16


Let's close the backup state.

SQL> alter database end backup;

Database altered.

Now I will shut down my database and drop one datafile.

Remember: in this case no log switch happened.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

I will remove USERS01.DBF :

[oracle@test proddb]$ rm users01.dbf

And let's Start.

SQL> startup

Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/proddb/users01.dbf'

As expected, we get an error. In this case the database will be in MOUNT state, so I just need to restore the file manually from the backup.

[oracle@test coldbkp]$ cp users01.dbf /u01/app/oracle/oradata/proddb/

If you try to open the database directly:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/proddb/users01.dbf'

SQL> recover datafile 4;
Media recovery complete.

SQL> alter database open;

Database altered.

Done.

Article 7


FIND Concurrent request details from OS PID or Oracle SID

For the particular Day:
=================
select * from fnd_concurrent_requests
where 1=1
and oracle_process_id=&OS_PID
and trunc(request_date)=trunc(sysdate);

For all Days:
=========
select * from fnd_concurrent_requests
where 1=1
and oracle_process_id=&OS_PID;

Full Details from OS PID
====================
select * FROM v$process p, v$session s, fnd_concurrent_requests f
 WHERE s.paddr = p.addr /*and s.status = 'ACTIVE'*/
   AND s.username NOT LIKE '%SYS%'
   AND p.spid IN (SELECT oracle_process_id
                    FROM fnd_concurrent_requests
                   WHERE 1 = 1 AND oracle_process_id = &os_pid);

Complete Details with filtering for the OS PID
===================================
SELECT trunc(f.ACTUAL_START_DATE) "Actual Start Date", s.LOGON_TIME,p.spid "OS PID", s.SID, s.serial#, s.action, s.username, s.status, s.program,
       p.program, s.module, s.lockwait, s.state, s.sql_hash_value,
       s.schemaname, s.osuser, s.machine, s.last_call_et, p.program,
       p.terminal, logon_time, module, s.osuser , f.request_id,  f.request_date,
        f.completion_text, f.outcome_product,
        f.logfile_node_name, f.outfile_name, argument_text,
       f.outfile_node_name, f.oracle_id, f.concurrent_program_id,
       f.responsibility_application_id, f.responsibility_id,
       f.last_update_login, f.nls_language, f.controlling_manager, f.actual_start_date,f.actual_completion_date
  FROM v$process p, v$session s, fnd_concurrent_requests f
  WHERE s.paddr = p.addr /*and s.status = 'ACTIVE'*/
   AND s.username NOT LIKE '%SYS%'
   AND p.spid=f.oracle_process_id
   order by actual_start_date desc;


Details for a specific OS PID
=======================

SELECT p.spid, s.SID, s.serial#, s.action, s.username, s.status, s.program "Session Program",
       p.program "Process Program, s.module, s.lockwait, s.state, s.sql_hash_value,
       s.schemaname, s.osuser, s.machine, s.last_call_et, p.program,
       p.terminal, logon_time, module, s.osuser , f.request_id,  f.request_date,
        f.completion_text, f.outcome_product,
        f.logfile_node_name, f.outfile_name, argument_text,
       f.outfile_node_name, f.oracle_id, f.concurrent_program_id,
       f.responsibility_application_id, f.responsibility_id,
       f.last_update_login, f.nls_language, f.controlling_manager, f.actual_start_date,f.actual_completion_date
  FROM v$process p, v$session s, fnd_concurrent_requests f
 WHERE s.paddr = p.addr /*and s.status = 'ACTIVE'*/
   AND s.username NOT LIKE '%SYS%'
   AND p.spid IN (SELECT oracle_process_id
                    FROM fnd_concurrent_requests
                   WHERE 1 = 1 AND oracle_process_id = &os_pid);

Article 6



TO CHECK THE CURRENT WORKFLOW MAILER CONFIGURATION AND LOGFILES


SQL>

col MEANING for a10;
col DECODE(FCQ.CONCURRENT_QUEUE_NAME) for a30;
col OS_PROCESS_ID for a10;
col LOGFILE_NAME for a50;
select fl.meaning,fcp.process_status_code, 
decode(fcq.concurrent_queue_name,'WFMLRSVC','mailer container','WFALSNRSVC','listener container',fcq.concurrent_queue_name),
fcp.concurrent_process_id,os_process_id, fcp.logfile_name
from fnd_concurrent_queues fcq, fnd_concurrent_processes fcp , fnd_lookups fl
where fcq.concurrent_queue_id=fcp.concurrent_queue_id and fcp.process_status_code='A'
and fl.lookup_type='CP_PROCESS_STATUS_CODE' and
fl.lookup_code=fcp.process_status_code
and concurrent_queue_name in('WFMLRSVC','WFALSNRSVC')
order by fcp.logfile_name;


Output :
======

MEANING    P DECODE(FCQ.CONCURRENT_QUEUE_NA CONCURRENT_PROCESS_ID OS_PROCESS LOGFILE_NAME
---------- - ------------------------------ --------------------- ---------- --------------------------------------------------
Active     A mailer container                              692801 60228270   /test01/oracle/TEST/inst/apps/TEST_host/logs/appl/conc/log/FNDCPGSC692801.txt

Active     A listener container                            692802 63701744   /test01/oracle/TEST/inst/apps/TEST_host/logs/appl/conc/log/FNDCPGSC692802.txt
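
The LOGFILE_NAME column gives the live log of each service container; for example, to follow the mailer container log returned above:

tail -f /test01/oracle/TEST/inst/apps/TEST_host/logs/appl/conc/log/FNDCPGSC692801.txt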

Article 5


CRSCTL commands in Oracle 11g Release 2


How to shutdown CRS on all nodes and Disable CRS as ROOT user:
-------------------------------------------------------------
#crsctl stop crs
#crsctl disable crs

How to Enable CRS and restart CRS on all nodes as ROOT user:
-----------------------------------------------------------
#crsctl enable crs
#crsctl start crs

How to check VIP status is ONLINE / OFFLINE:
----------------------------------------
$crs_stat or
$crsctl stat res -t ------> 11gr2

How to Check current Version of Clusterware:
-------------------------------------------
$crsctl query crs activeversion

$crsctl query crs softwareversion [node_name]

How to Start & Stop CRS and CSS:
-------------------------------
$crsctl start crs
$crsctl stop crs

#/etc/init.d/init.crs start
#/etc/init.d/init.crs stop

#/etc/init.d/init.cssd stop
#/etc/init.d/init.cssd start

How to Enable & Disable CRS:
---------------------------
$crsctl enable crs
$crsctl disable crs

#/etc/init.d/init.crs enable
#/etc/init.d/init.crs disable

How to Check current status of CRS:
----------------------------------
$crsctl check crs

$crsctl check cluster [-node node_name]

How to Check CSS, CRS and EVMD:
------------------------------
$crsctl check cssd

$crsctl check crsd

$crsctl check evmd

How to List the Voting disks currently used by CSS:
--------------------------------------------------
$crsctl check css votedisk

$crsctl query css votedisk

How to Add and Delete any voting disk:
-------------------------------------
$crsctl add css votedisk <PATH>

$crsctl delete css votedisk <PATH>

How to start/stop all clusterware resources:
----------------------------------
$crsctl start resource -all

$crsctl stop resource -all

Article 4


Difference in location of Log files in 12.1.3 and  12.2.4

Hi All,

Many companies are planning to upgrade from Oracle EBS Release 12.1.3 (Rel 12.1.x) to Oracle EBS Release 12.2.4 (Rel 12.2.x); a few have already upgraded.

The log file locations in Oracle EBS Release 12.1.3 and Oracle EBS Release 12.2.4 are given below:

1. Instance startup and configuration log files under INST_TOP in Oracle Release 12.1.3 are below:

$INST_TOP/logs/appl/admin/log
Startup/Shutdown error message related to tech stack (10.1.2, 10.1.3 forms/reports/web)
$INST_TOP/logs/ora/ (10.1.2 & 10.1.3)
$INST_TOP/logs/ora/10.1.3/Apache/error_log[timestamp](Apache log files)
$INST_TOP/logs/ora/10.1.3/opmn/ (OC4J, oa*, opmn.log)
$INST_TOP/logs/ora/10.1.2/network/ (listener log)
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)

2. Log files related to cloning in R12.1.3 are as below:

 Preclone log files in source instance
Database Tier – $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/(StageDBTier_MMDDHHMM.log)
Application Tier –
$INST_TOP/apps/$CONTEXT_NAME/admin/log/(StageAppsTier_MMDDHHMM.log)

Clone log files in target instance
Database Tier – $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_.log
Apps Tier – $INST_TOP/admin/log/ApplyAppsTier_.log

3. Patching related log files in R12.1.3 are as below:

i) Application Tier adpatch log – $APPL_TOP/admin/$SID/log/
ii) Developer (Developer/Forms & Reports 10.1.2) Patch – $ORACLE_HOME/.patch_storage
iii) Web Server (Apache) patch – $IAS_ORACLE_HOME/.patch_storage
iv) Database Tier opatch log – $ORACLE_HOME/.patch_storage


4. Autoconfig related log files in R12.1.3 are as below:

a) Database Tier Autoconfig log :
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/adconfig.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/NetServiceHandler.log


b) Application Tier Autoconfig log : 
$INST_TOP/apps/$CONTEXT_NAME/admin/log/$MMDDHHMM/adconfig.log

5.Autoconfig context file location in R12.1.3 :
$INST_TOP/apps/$CONTEXT_NAME/appl/admin/$CONTEXT_NAME.xml


6)R12.1.3 Installation Logs in R12.1.3 are as below:

 Database Tier Installation
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTechStack_.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ohclone.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/make_.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/installdbf.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/adcrdb_.log RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDatabase_.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME//adconfig.log
RDBMS_ORACLE_HOME/appsutil/log/$CONTEXT_NAME//NetServiceHandler.log
Application Tier Installation
$INST_TOP/logs/.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppsTechStack.log
$INST_TOP/logs/ora/10.1.2/install/make_.log
$INST_TOP/logs/ora/10.1.3/install/make_.log
$INST_TOP/admin/log/ApplyAppsTechStack.log
$INST_TOP/admin/log/ohclone.log
$APPL_TOP/admin/$CONTEXT_NAME/log/installAppl.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppltop_.log
$APPL_TOP/admin/$CONTEXT_NAME/log//adconfig.log
$APPL_TOP/admin/$CONTEXT_NAME/log//NetServiceHandler.log
Inventory Registration:
$Global Inventory/logs/cloneActions.log
$Global Inventory/logs/oraInstall.log
$Global Inventory/logs/silentInstall.log

7) Log files related to relink, Network and OUI inventory logs for R12.1.3 are as below:
 1) Database Tier
1.1) Relink Log files :
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/make_$MMDDHHMM.log
1.2) Alert Log Files :
$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log
1.3) Network Logs :
$ORACLE_HOME/network/admin/$SID.log
1.4) OUI Logs :
OUI Inventory Logs :
$ORACLE_HOME/admin/oui/$CONTEXT_NAME/oraInventory/logs
2) Application Tier
$ORACLE_HOME/j2ee/DevSuite/log
$ORACLE_HOME/opmn/logs
$ORACLE_HOME/network/logs
Tech Stack Patch 10.1.3 (Web/HTTP Server)
$IAS_ORACLE_HOME/j2ee/forms/logs
$IAS_ORACLE_HOME/j2ee/oafm/logs
$IAS_ORACLE_HOME/j2ee/oacore/logs
$IAS_ORACLE_HOME/opmn/logs
$IAS_ORACLE_HOME/network/log
$INST_TOP/logs/ora/10.1.2
$INST_TOP/logs/ora/10.1.3
$INST_TOP/logs/appl/conc/log
$INST_TOP/logs/appl/admin/log


In EBS R12.2.4 the log files locations are as below:

1) Log files for Online Patching (adop) in EBS R12.2.4 are in the below location:

The adop log files are located on the non-editioned file system (fs_ne), under:

$NE_BASE/EBSapps/log/adop/<adop_session_id>/<phase>_<date>_<time>/<context_name>/log

This log directory will contain the patch logs and patch worker logs.

Log files for the adop (phase=fs_clone) online patching file system cloning process are found under:

$INST_TOP/admin/log
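
A quick way to locate the most recent adop session logs is simply to list the newest entries under the non-editioned log tree. A sketch, assuming the run-edition environment file has been sourced so that $NE_BASE is set:

ls -lrt $NE_BASE/EBSapps/log/adop | tail -3               # newest adop session directories
find $NE_BASE/EBSapps/log/adop -name "*.log" -mtime -1    # log files written in the last day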


2)Log files for Autoconfig process in Oracle EBS R12.2.4 are below:

On Application Tier: $INST_TOP/admin/log/<MMDDhhmm>
On Database Tier: $ORACLE_HOME/appsutil/log/<CONTEXT_NAME>/<MMDDhhmm>

3)Log files for start/stop of services from $ADMIN_SCRIPTS_HOME

In the below directory we will find log files related to the start/stop of oacore, forms, Apache, OPMN, and the
WebLogic Admin Server / Node Manager:

$LOG_HOME/appl/admin/log


4)Log/Out files for Concurrent programs/managers in Oracle R12.2.4 are in below location:

Log/Out files for Oracle Release 12.2 are stored in the non-editioned file system (NE).

Log files: $APPLCSF/$APPLLOG (or $NE_BASE/inst/<CONTEXT_NAME>/logs/appl/conc/log)
Out files: $APPLCSF/$APPLOUT (or $NE_BASE/inst/<CONTEXT_NAME>/logs/appl/conc/out)


5)Log files for OPMN and OHS processes in Oracle R12.2.4 are in below location:

The below directory contains log files related to the OPMN process (opmn.log),
OPMN debug logs (debug.log), HTTP transaction logs (access.log) and security-settings-related logs:

$IAS_ORACLE_HOME/instances/<ohs_instance>/diagnostics/logs


6)Log file for Weblogic Node Manager in Oracle R12.2.4 are in below location:

The log file is generated by Node Manager and contains data for all domains that
are controlled by Node Manager on a given physical machine:

$FMW_HOME/wlserver_10.3/common/nodemanager/nmHome1/nodemanager.log


7) Log file for WebLogic in Oracle R12.2.4 for the Oracle Management Service is below.

Initial settings for the AdminServer and domain-level information are written to this log file:

$EBS_DOMAIN_HOME/sysman/log


8)Log files for server processes initiated through Weblogic in Oracle R12.2.4 are in below location:
Stdout and stderr messages generated by the forms, oafm and oacore services at NOTICE severity
level or higher are written by the WebLogic Node Manager to the below directory:

$EBS_DOMAIN_HOME/servers/<server_name>/logs/<server_name>.out

Article 3


RESTORE DATAFILE WITHOUT BACKUP - ARCHIVELOG MODE ON


Restoring a datafile without a backup is only possible when archivelog mode was enabled at the time the
datafile was dropped. Using the archived logs and the redo logs, we can re-create and recover the datafile.

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
  • I have removed the emp.dbf datafile accidentally

Oracle@test TEST$  rm emp.dbf
  • When I try to insert new data into the employee table stored in the emp datafile, it shows the below error:

SQL> insert into emp select * from emp;
insert into emp select * from emp
                            
ERROR at line 1:
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/d02/app/oracle/oradata/testdb/emp.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
  • To recover the datafile we need to take it offline :

SQL> alter database datafile 6 offline;
  • Create a new datafile with the same name/path as the dropped datafile:

SQL> alter database create datafile '/d02/app/oracle/oradata/testdb/emp.dbf';
  • Now recover the datafile:

SQL> recover datafile 6;
  • Bring the datafile online :

SQL> alter database datafile 6 online;
  • Now insert the values and check

SQL> insert into emp select * from emp;
100 rows created.

Article 2



Planning Ods Load Errors: ORA-00920: invalid relational operator
=========================================


On : 12.1.3 version, ATP based on collected data

When attempting to run the MSCPDCW module (Planning ODS Load Worker), the following error occurs.

ERROR
-----------------------
21-MAR 11:08:54 : Procedure MSC_CL_RPO_ODS_LOAD.LOAD_IRO_DEMAND started.
21-MAR 11:08:54 : ORA-00920: invalid relational operator
21-MAR 11:08:54 : ORA-00920: invalid relational operator
21-MAR 11:08:54 : Sending message :-9999999 to monitor.
21-MAR 11:08:54 : Error_Stack...
21-MAR 11:08:54 : ORA-00920: invalid relational operator

21-MAR 11:08:54 : Error_Backtrace...
21-MAR 11:08:54 : ORA-06512: at "APPS.MSC_CL_COLLECTION", line 3013
ORA-06512: at "APPS.MSC_CL_COLLECTION", line 3232



SOLUTION :
=======

To implement the solution, please execute the following steps:

1. Download and review the readme and pre-requisites for VCP 12.1.3.6 Patch 12695646 or later

2. Ensure that you have taken a backup of your system before applying 
the recommended patch. 

3. Apply the patch in a test environment. 

4. Confirm the following file version (a sketch for checking it follows this list):
msc patch/115/import/US/mscprg.ldt 120.86.12010000.61 or higher.

5. Retest the issue. 

6. Migrate the solution as appropriate to other environments.
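
A sketch for checking the file version mentioned in step 4 with the standard adident utility (run as the applications OS user with the APPS environment sourced; $MSC_TOP is assumed to be set):

adident Header $MSC_TOP/patch/115/import/US/mscprg.ldt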

Article 1


Try to Schedule Sales Order In New Test Instance - Get Message: "Summary Concurrent program is running. Try ATP Later"
====================================================


On : 11.5.10.2 version, Scheduling and Sourcing

When attempting to book the sales order,
the following error occurs.
Error message in the application is:
Summary Concurrent program is running.  Try ATP Later

OM debug log file shows
ERROR
-----------------------
-----------------Loading MRP Result---------------
MRP COUNT IS 1
SCHEDULE ACTION CODE SCHEDULE

X_RETURN_STATUS E
RR: L2
AFTER ACTION SCHEDULE : E
SCH SHIP DATE
SCH ARR DATE
SHIP_FROM 2453
Exiting schedule_line: 11-JUN-12 10:48:57
schedule return status E
scheduling flow error ORA-0000: normal, successful completion
In WF save messages
ENTER Save_API_Messages
L_MSG_DATA=Scheduling failed.


STEPS
-----------------------
The issue can be reproduced at will with the following steps:
1. PS OM Super User
2. Orders/Returns-->
3. Book the sales order
4. error occurs.

BUSINESS IMPACT
-----------------------
The issue has the following business impact:
Due to this issue, users cannot book the sales order.


SOLUTION :
=========

1. Run the following SQL on the APS destination instance.

update MSC_APPS_INSTANCES set SUMMARY_FLAG = NULL;
commit;

Somehow the flag in msc_apps_instances got set to 2. This would be normal if you ran the program
Load ATP Summary Based on Collected Data or Load ATP Summary Based on Planning Output:
while the program is running it sets this flag and raises this message, then upon completion
it removes the flag so you can continue scheduling orders.
This program is launched by a user, or when the profile MSC: Enable ATP Summary Mode is set to Yes.

It appears the instance was cloned with this flag set to 2.

Note: you should not use this program when you have a distributed install, and both instances
should have the profile MSC: Enable ATP Summary Mode set to No.

Cause: The field PARAMETER.CONFIG could not be located or read

Problem Summary
------------------------

In R12.2.4, after a clone, the concurrent output/log file shows the error "Cannot read value from field PARAMETER.CONFIG".


Problem Description
-------------------------

After the clone we get the following error in the log and output file, but the request completes Normal.


Error in the Output/log file :
---------------------------------

Cause: The field PARAMETER.CONFIG could not be located or read.

Action: This error is normally the result of an incorrectly-entered field name string in a trigger, or a field name string that does not uniquely specify a field in your form.

Correct your trigger logic to precisely specify a valid field.


Env Details:
--------------

2 DB node , 2 CM nodes , 2 APP nodes


To find out the issue :
-------------------------

Enable Trace for FNDFS in all the nodes (CM) in listener file :
--------------------------------------------------------------------------

( SID_DESC = ( SID_NAME = FNDFS  )
                 ( ORACLE_HOME = /u01/app/doyuat/fs1/EBSapps/10.1.2 )
                 ( PROGRAM = /u01/app/DOYuat/fs1/EBSapps/appl/fnd/12.0.0/bin/FNDFS )
                 (

envs='EPC_DISABLED=TRUE,NLS_LANG=American_America.UTF8,LD_LIBRARY_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,SHLIB_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,LIBPATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,APPLFSTT=DOYUAT_806_BALANCE;DOYUATB_FO;DOYUATB;DOYUAT;DOYUATA;DOYUAT_FO;DOYUATA_FO,APPLFSWD=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appl/admin;/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appltmp;/u01/app/DOYuat/fs1/FMW_Home/Oracle_EBS-app1/applications/oacore/html/oam/nonUix/launchMode/restricted,FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_DOYuat.log' )))


Line to be added in the listener for enabling trace and path of the log file :
---------------------------------------------------------------------------

FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/doyuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_doyuat.log'

Now bounce the listener and check for the request log/output files.

a) Retest the issue.

b) You will get the FNDFS log output in the location specified in the listener (on the node from which the particular request tries to fetch the log/output file).

c) The error in the FNDFS log shows that the connection to the node using the TWO_TASK errored out.


Solution :
-----------

1. Check the TWO_TASK on the Application nodes (e.g. it can be DOYUAT_BALANCE).
2. Check the listener file entry on the Application tier for the parameter APPLFSTT, in which the TWO_TASK is included.
3. The same TWO_TASK entry should be included in the APPLFSTT value of the listener file on both CM nodes.
4. The TWO_TASK value for the CM nodes may be different, like DOYUAT1 and DOYUAT2 for each node, but the listener file should include the Application tier's TWO_TASK in APPLFSTT.
5. The same value should be added to the TNSNAMES file on both CM nodes, through which the Application node will connect to the CM node.
6. As a permanent fix, change the value of APPLFSTT in the XML (context) file and run AutoConfig on the CM nodes. (A quick verification sketch follows this list.)
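
A quick verification sketch for steps 1-3 (the environment files are assumed to be sourced on each node, so $TNS_ADMIN is set; the service name is the example used in this note):

echo $TWO_TASK                              # on the Application nodes
grep -i APPLFSTT $TNS_ADMIN/listener.ora    # on each CM node
tnsping DOYUAT_BALANCE                      # on each CM node, after the tnsnames entry below is added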


Following tns entry needs to be added in the tnsnames.ora in Both CM Nodes :
-------------------------------------------------------------------------------

DOYUAT_BALANCE=
        (DESCRIPTION=
            (ADDRESS_LIST=
                (LOAD_BALANCE=YES)
                (FAILOVER=YES)
                (ADDRESS=(PROTOCOL=tcp)(HOST=doyensys02.doyen.net)(PORT=1526))
                (ADDRESS=(PROTOCOL=tcp)(HOST=doyensys01.doyen.net)(PORT=1526))
            )
            (CONNECT_DATA=
                (SERVICE_NAME=DOYUAT)
            )
        )


After making the entry in the APPLFSTT parameter in the listener file on both CM nodes:


( SID_DESC = ( SID_NAME = FNDFS  )
                 ( ORACLE_HOME = /u01/app/doyuat/fs1/EBSapps/10.1.2 )
                 ( PROGRAM = /u01/app/DOYuat/fs1/EBSapps/appl/fnd/12.0.0/bin/FNDFS )
                 (envs='EPC_DISABLED=TRUE,NLS_LANG=American_America.UTF8,LD_LIBRARY_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,SHLIB_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,LIBPATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,APPLFSTT=DOYUAT_BALANCE;DOYUAT_806_BALANCE;DOYUATB_FO;DOYUATB;DOYUAT;DOYUATA;DOYUAT_FO;DOYUATA_FO,APPLFSWD=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appl/admin;/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appltmp;/u01/app/DOYuat/fs1/FMW_Home/Oracle_EBS-app1/applications/oacore/html/oam/nonUix/launchMode/restricted,FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_DOYuat.log' )   )  )


RMAN Automation script to clone the database from DR site





#!/bin/ksh
##  Script to auto clone DOYENDB based on RMAN Duplicate
## Created by M.GANGAINATHAN
/bin/mailx -s "RPT:-DOYENDB CLONING PROCESS Started at :-`date` " $NOTIFY_LIST <$LOGFILE
NOTIFY_LIST="gnathan@1800flowers.com"
LOGFILE=/u03/oracle/CLONE/DOYENDB/logs/DOYENDB_CLONE.txt
LOGFILE1=/u03/oracle/CLONE/DOYENDB/logs/rman_clone.txt
echo "Start of DOYENDB CLONE  CREATION  at `date +%D-%T`">$LOGFILE
echo "Beginning Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE
. /orahome/env/oracle_Agent12C_env
cd $ORACLE_HOME/bin
./emctl start blackout DOYENDB_CLONE DOYENDB
echo "Ending Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE

# ENV file
. /orahome/env/oracle_DOYENDB_11g_env

# Stopping Listeners For preparation of Cloning
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/stop_listener.sh
cat /u03/oracle/CLONE/DOYENDB/logs/stop_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE

echo "Beginning Dropping Database DOYENDB  at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba"<<-EOF >>$LOGFILE
shutdown immediate;
startup nomount;
alter database mount exclusive;
alter system enable restricted session;
drop database;
EOF
echo "Finished Dropping Database  DOYENDB at `date +%D-%T`">>$LOGFILE

echo " Starting DB in nomount state ">>$LOGFILE
echo "Beginning Starting Database DOYENDB in MOUNT STATE  at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba"<<-EOF >>$LOGFILE
startup nomount pfile='/orahome/app/oracle/product/11204/DOYENDB/dbs/initDOYENDB.ora';
create spfile from pfile='/orahome/app/oracle/product/11204/DOYENDB/dbs/initDOYENDB.ora';
EOF
echo "End of Starting Database ATLLTRG in MOUNT STATE  at `date +%D-%T`">>$LOGFILE

# Starting CLONE Listeners For preparation of Cloning
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/start_listener_CLONE.sh
cat /u03/oracle/CLONE/DOYENDB/logs/start_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE


echo " Starting RMAN CLONE PROCESS at `date +%D-%T`">>$LOGFILE
$ORACLE_HOME/bin/rman target /@ATLPROD_DD_RESTORE auxiliary / CMDFILE /u03/oracle/CLONE/DOYENDB/SQL/rman_clone.sql LOG $LOGFILE1
echo " Starting RMAN CLONE PROCESS END at `date +%D-%T`">>$LOGFILE
echo " LOGS FOR RMAN CLONE DUMP "
echo "===================================================="
echo "===================================================="
cat /u03/oracle/CLONE/DOYENDB/logs/rman_clone.txt >>$LOGFILE
echo "===================================================="
echo "===================================================="
echo " LOGS FOR RMAN CLONE DUMP "

echo " Starting DB in nomount state at `date +%D-%T` ">>$LOGFILE
echo "Beginning Starting Database DOYENDB in FOR PFILE CREATION at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba"<<-EOF >>$LOGFILE
shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;
EOF
echo "End of Starting Database ATLLTRG in OPEN STATE  at `date +%D-%T`">>$LOGFILE

# Starting Listeners After Cloning
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/start_listener.sh
cat /u03/oracle/CLONE/DOYENDB/logs/start_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Starting LISTENER DOYENDB PROCESS Ended at `date +%D-%T`">>$LOGFILE

echo "Starting of POST CLONE Database ACTIVITY at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba"<<-EOF >>$LOGFILE
PROMPT DROPPING THE PUBLIC SYNONYNMS
@/u03/oracle/CLONE/DOYENDB/SQL/post_clone.sql
@/u03/oracle/CLONE/DOYENDB/SQL/list.sql
EOF

echo " Starting of POST CLONE Database ACTIVITY :- Dropping Private DB Links at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/drop_dblink.sh /u03/oracle/CLONE/DOYENDB/SQL/list.log
cat /u03/oracle/CLONE/DOYENDB/logs/drop_dblink_unixscript.txt >> $LOGFILE
echo " Starting of POST CLONE Database ACTIVITY Finished :- Dropping Private DB Links at `date +%D-%T`">>$LOGFILE

echo "Starting of POST CLONE1 Database ACTIVITY at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba"<<-EOF >>$LOGFILE
PROMPT DROPPING THE PUBLIC SYNONYNMS
@/u03/oracle/CLONE/DOYENDB/SQL/post_clone1.sql
EOF

echo "LOGS for POST CLONE Database ACTIVITY at `date +%D-%T`">>$LOGFILE
cat /u03/oracle/CLONE/DOYENDB/logs/drop_dblink.txt >>$LOGFILE
cat /u03/oracle/CLONE/DOYENDB/logs/post_clone.txt >>$LOGFILE
echo "LOGS for POST CLONE Database ACTIVITY Ended at `date +%D-%T`">>$LOGFILE

# Stopping CLONE Listeners For preparation of Cloning
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/stop_listener_CLONE.sh
cat /u03/oracle/CLONE/DOYENDB/logs/stop_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Stopping LISTENER DOYENDB PROCESS Ended at `date +%D-%T`">>$LOGFILE
echo "End of ATLLTRG CREATION  at `date +%D-%T`">>$LOGFILE

echo "Beginning Stopping Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE
. /orahome/env/oracle_Agent12C_env
cd $ORACLE_HOME/bin
./emctl stop blackout DOYENDB_CLONE
echo "Ending Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE

/bin/mailx -s "RPT:-DOYENDB CLONING PROCESS Ended at :-`date` " $NOTIFY_LIST <$LOGFILE

set `date`
cp $LOGFILE $LOGFILE.$2$3$4





#!/bin/sh
. /orahome/env/oracle_DOYENDB_11g_env
LOGFILE=/u03/oracle/CLONE/DOYENDB/logs/stop_DOYEN_LISTENER.txt
echo "LISTENER STOP PROCESS STARTED AT :-`date`"> $LOGFILE
echo " Stopping LISTENER_DOYENDB_OEM ">> $LOGFILE
lsnrctl << EOF
set current_listener LISTENER_DOYENDB_OEM
stop
exit
EOF
echo " Stopping LISTENER_DOYENDB_JOBS">> $LOGFILE
lsnrctl << EOF
set current_listener LISTENER_DOYENDB_JOBS
stop
exit
EOF
echo "LISTENER STOP PROCESS ENDED AT :-`date`">> $LOGFILE

 cat /home/oracle/CLONE/DOYENDB/SQL/
PROMPT DROPPING PRIVATE DBLINKS
@/export/home/oracle/CLONE/DOYENDB/SQL/drop_existin_dblink.sql
spool /export/home/oracle/CLONE/DOYENDB/logs/drop_dblink.txt
set echo on
@/export/home/oracle/CLONE/DOYENDB/SQL/drop_dblink.sql
spool off

oracle@dbatlbocoprod01:~/CLONE/DOYENDB$ cat /export/home/oracle/oraprocs/DOYENDB/logs/start_DOYENDB_LISTENER.txt
LISTENER STOP PROCESS STARTED AT :-Wednesday, March 30, 2016 03:00:16 PM
 Stopping LISTENER_DOYENDB_OEM
 Stopping LISTENER_DOYENDB_JOBS
LISTENER STOP PROCESS ENDED AT :-Wednesday, March 30, 2016 03:00:17 PM

Script to check disk space on all remote servers in the network

#!/bin/bash
# ssh password-less login is required

# list the remote server IPs below, separated by spaces

remote_srv=(192.168.1.1 192.168.1.2)

do_ssh() {
    ssh $s "$@"
    echo -e
}

header() {
    echo "#==================={$@}==========================#"
}

n=${#remote_srv[@]} # number of ip's in array

for (( i=0;i<$n;i++)); do
            echo -e
            echo "$(tput bold)$(tput setaf 2)* Connected Server: ${remote_srv[$i]}  @@ $(date) @@"
            echo "* Fetching info...$(tput sgr 0)"
            s=${remote_srv[$i]}

            header "Disk Usage"
            do_ssh df -h
done

## EOF
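
Usage sketch (the script name check_remote_disk.sh is illustrative; save the script under whatever name you prefer). Set up password-less ssh once per remote host, then run it:

ssh-copy-id 192.168.1.1     # one-time, repeat for each server listed in remote_srv
ssh-copy-id 192.168.1.2
chmod +x check_remote_disk.sh
./check_remote_disk.sh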

Script to check the number of connections and CPU load at peak time in the database


$ cat metrix_connections.sh

# output is appended to /home/oracle/gangai/metrix_op.log by the spool below
a=`cat $1`; export a

for i in $a

do

sqlplus -S username/passwd@$i << EOF
spool metrix_op.log append

@/home/oracle/gangai/Metrix/metrix_connections.sql


exit;
EOF


done


$ cat metrix_connections.sql

set echo off;
set heading off;
set feedback off;
set verify off;
set verify off;
set trimspool off;
set lines 150;
set colsep '';
set space 0;
set pagesize 0;
select wm_concat(distinct(to_char(pvm.measured_date, 'DD-MON-RRRR:HH24'))) "Peak Hr",round(pvm.total_users) "Total Users",
round((select avg(a.total_users)
from stats$totalusers a
where a.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI') and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))) "Avg Users"
from stats$totalusers pvm
where pvm.total_users =
(select max((pvm1.total_users)) highest_total_users
from stats$totalusers pvm1
where pvm1.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI')
and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))
and pvm.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI')
and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI')
group by (pvm.total_users);




$ cat metrix_cpu.sh
a=`cat $1`; export a

for i in $a

do

sqlplus -S username/passwd@$i << EOF
spool metrix_op.log append

@/home/oracle/gangai/Metrix/metrix_cpu.sql


exit;
EOF


done



$cat metrix_cpu.sql


set echo off;
set heading off;
set feedback off;
set verify off;
set verify off;
set trimspool off;
set lines 150;
set colsep '';
set space 0;
set pagesize 0;
select wm_concat(distinct(to_char(b1.start_date, 'DD-MON-RRRR:HH24'))) "Peak_Hr", round((b1.user_cpu+b1.system_CPU+b1.wait_cpu)) "Max_CPU",
round((select avg(b2.user_cpu+b2.system_CPU+wait_cpu)  from stats$vmstat2 b2
where
b2.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI') and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))) "Avg_CPU"
from stats$vmstat2 b1
where (b1.user_cpu+b1.system_CPU+b1.wait_cpu) =
(select max((b3.user_cpu+system_CPU+wait_cpu)) max_cpu
from stats$vmstat2 b3
where b3.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI')
and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))
and b1.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI') and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI')
group by (b1.user_cpu+b1.system_CPU+b1.wait_cpu);



$cat list1.log

db1
db2
db3
db4


Execution steps:

Modify the date as per your requirement.


$ ./metrix_connections.sh list1.log

It will display the peak-hour connection metrics.

$ ./metrix_cpu.sh list1.log

It will display the peak-hour CPU metrics.


Auto startup / shutdown of the database when the OS starts / restarts



Create (touch) the file below as the root user.

1. Copy the oracle.txt file to /etc/init.d/oracle (a registration sketch follows this list).
2. chmod 750 /etc/init.d/oracle
3. chkconfig --add oracle
4. To start a custom listener, also edit $ORACLE_HOME/bin/dbstart and dbshut as shown further below.
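
A minimal registration sketch for steps 1-3 (run as root; it also assumes the instance entries in /etc/oratab end with :Y so that dbstart/dbshut will pick them up):

cp oracle.txt /etc/init.d/oracle
chmod 750 /etc/init.d/oracle
chkconfig --add oracle
chkconfig oracle on            # enable the service for the default runlevels
grep -v "^#" /etc/oratab       # each SID to auto-start must end with :Y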




cat  /etc/init.d/oracle
#!/bin/sh
#
# /etc/rc.d/init.d/dbase
# Description: Starts and stops the Oracle database and listeners
# See how we were called.
ORAHOME1=/u01/app/oracle/product/11.2.0/db_1
ORAHOME2=/u01/app/oracle/product/11.2.0/PROD

case "$1" in
  start)
        echo -n "Starting All Oracle Databases: "
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Starting Oracle Databases as part of system up.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
                echo -n "Starting Oracle Listeners:PROD"
        su - oracle -c "$ORAHOME2/bin/lsnrctl start PROD">> /var/log/dbase.log
        echo "Done."
        su - oracle -c dbstart >> /var/log/dbase.log
        echo "Done."
                echo -n "Starting Oracle Listeners:ORCL "
        su - oracle -c "$ORAHOME1/bin/lsnrctl start ORCL">> /var/log/dbase.log
        echo "Done."
        echo ""
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Finished.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log

        echo ""
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Finished.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
                touch /var/lock/subsys/oracle
        ;;
  stop)
        echo -n "Shutting Down Oracle Listeners: "
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Shutting Down Oracle Databases because of system down.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
        su - oracle -c "$ORAHOME1/bin/lsnrctl stop ORCL">> /var/log/dbase.log
        echo "Done. ORCL"
                su - oracle -c "$ORAHOME2/bin/lsnrctl stop PROD">> /var/log/dbase.log
        echo "Done. PROD"
        rm -f /var/lock/subsys/oracle
        echo -n "Shutting Down All Oracle Databases: "
        su - oracle -c dbshut >> /var/log/dbase.log
        echo "Done."
        echo ""
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Finished.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
        ;;
  restart)
        echo -n "Restarting Oracle Databases: "
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Restarting Oracle Databases as part of system up.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
        su - oracle -c dbshut >> /var/log/dbase.log
        su - oracle -c dbstart >> /var/log/dbase.log
        echo "Done."
        echo -n "Restarting Oracle Listeners: ORCL"
        su - oracle -c "$ORAHOME1/bin/lsnrctl stop ORCL">> /var/log/dbase.log
        su - oracle -c "$ORAHOME1/bin/lsnrctl start ORCL">> /var/log/dbase.log
                 echo "Done. ORCL"
                echo -n "Restarting Oracle Listeners: PROD"
        su - oracle -c "$ORAHOME2/bin/lsnrctl stop PROD">> /var/log/dbase.log
        su - oracle -c "$ORAHOME2/bin/lsnrctl start PROD">> /var/log/dbase.log
        echo "Done. PROD"
        echo ""
        echo "----------------------------------------------------">> /var/log/dbase.log
        date +"! %T %a %D : Finished.">> /var/log/dbase.log
        echo "----------------------------------------------------">> /var/log/dbase.log
        touch /var/lock/subsys/oracle
        ;;
  *)
        echo "Usage: oracle {start|stop|restart}"
        exit 1
esac












To start the custom listener, go to $ORACLE_HOME/bin/dbstart and edit it like this:


#PROD
ORACLE_HOME_LISTNER=/u01/app/oracle/product/11.2.0/PROD
if [ ! $ORACLE_HOME_LISTNER ] ; then
  echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
  echo "Usage: $0 ORACLE_HOME"
else
  LOG=$ORACLE_HOME_LISTNER/listener.log

  # Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
  # a different ORACLE_HOME for each entry in the oratab.
  export ORACLE_HOME=$ORACLE_HOME_LISTNER

  # Start Oracle Net Listener
  if [ -x $ORACLE_HOME_LISTNER/bin/tnslsnr ] ; then
    echo "$0: Starting Oracle Net Listener">> $LOG 2>&1
    $ORACLE_HOME_LISTNER/bin/lsnrctl start PROD >> $LOG 2>&1 &
    VER10LIST=`$ORACLE_HOME_LISTNER/bin/lsnrctl version | grep "LSNRCTL for " | cut -d' ' -f5 | cut -d'.' -f1`
    export VER10LIST
  else
    echo "Failed to auto-start Oracle Net Listener using $ORACLE_HOME_LISTNER/bin/tnslsnr"
  fi
fi
To stop the custom listener, go to $ORACLE_HOME/bin/dbshut and edit the file in the same way:
#PROD
# The following brings down the Oracle Net Listener
ORACLE_HOME_LISTNER=/u01/app/oracle/product/11.2.0/PROD
if [ ! $ORACLE_HOME_LISTNER ] ; then
  echo "ORACLE_HOME_LISTNER is not SET, unable to auto-stop Oracle Net Listener"
  echo "Usage: $0 ORACLE_HOME"
else
  LOG=$ORACLE_HOME_LISTNER/listener.log

  # Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
  # a different ORACLE_HOME for each entry in the oratab.
  export ORACLE_HOME=$ORACLE_HOME_LISTNER

  # Stop Oracle Net Listener
  if [ -f $ORACLE_HOME_LISTNER/bin/tnslsnr ] ; then
    echo "$0: Stoping Oracle Net Listener">> $LOG 2>&1
    $ORACLE_HOME_LISTNER/bin/lsnrctl stop PROD >> $LOG 2>&1 &
  else
    echo "Failed to auto-stop Oracle Net Listener using $ORACLE_HOME_LISTNER/bin/tnslsnr"
  fi
fi

Query to check log switch between PROD and DR

set linesize 170
set pagesize 100
column  day     format a15              heading 'Day'
column  d_0     format a3               heading '00'
column  d_1     format a3               heading '01'
column  d_2     format a3               heading '02'
column  d_3     format a3               heading '03'
column  d_4     format a3               heading '04'
column  d_5     format a3               heading '05'
column  d_6     format a3               heading '06'
column  d_7     format a3               heading '07'
column  d_8     format a3               heading '08'
column  d_9     format a3               heading '09'
column  d_10    format a3               heading '10'
column  d_11    format a3               heading '11'
column  d_12    format a3               heading '12'
column  d_13    format a3               heading '13'
column  d_14    format a3               heading '14'
column  d_15    format a3               heading '15'
column  d_16    format a3               heading '16'
column  d_17    format a3               heading '17'
column  d_18    format a3               heading '18'
column  d_19    format a3               heading '19'
column  d_20    format a3               heading '20'
column  d_21    format a3               heading '21'
column  d_22    format a3               heading '22'
column  d_23    format a3               heading '23'
select
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25) day,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0))) d_0,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0))) d_1,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0))) d_2,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0))) d_3,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0))) d_4,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0))) d_5,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0))) d_6,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0))) d_7,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0))) d_8,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0))) d_9,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0))) d_10,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0))) d_11,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0))) d_12,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0))) d_13,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0))) d_14,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0))) d_15,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0))) d_16,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0))) d_17,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0))) d_18,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0))) d_19,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0))) d_20,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0))) d_21,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0))) d_22,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0))) d_23
from
        v$log_history
group by
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25)
order by
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25) desc;

set linesize 80
set pagesize 14
clear columns