Channel: Doyensys Allappsdba Blog

Query to find errored concurrent requests in the last 24 hours:


SELECT b.request_id, a.user_concurrent_program_name,
       b.phase_code AS completed, b.status_code AS error,
       u.user_name requestor,
       TO_CHAR (b.actual_start_date, 'MM/DD/YY HH24:MI:SS') starttime,
       ROUND ((b.actual_completion_date - b.actual_start_date) * (60 * 24), 2) runtime,
       b.completion_text
  FROM fnd_concurrent_programs_tl a, fnd_concurrent_requests b, fnd_user u
 WHERE a.concurrent_program_id = b.concurrent_program_id
   AND b.phase_code = 'C'
   AND b.status_code = 'E'
   AND b.actual_start_date > SYSDATE - 1
   AND b.requested_by = u.user_id
   AND a.LANGUAGE = 'US';
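The SYSDATE - 1 predicate covers the last 24 hours; narrower windows just subtract a fraction of a day. As a variation (a sketch, not from the original post), the same filters can summarise errored requests per program over the last 4 hours:

```sql
-- Errored requests per program in the last 4 hours (4/24 of a day)
SELECT a.user_concurrent_program_name,
       COUNT(*) AS errored_count
  FROM fnd_concurrent_programs_tl a, fnd_concurrent_requests b
 WHERE a.concurrent_program_id = b.concurrent_program_id
   AND b.phase_code = 'C'
   AND b.status_code = 'E'
   AND b.actual_start_date > SYSDATE - 4/24
   AND a.LANGUAGE = 'US'
 GROUP BY a.user_concurrent_program_name
 ORDER BY errored_count DESC;
```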

Query to terminate bulk concurrent requests in Oracle Apps


Query:

update fnd_concurrent_requests
   set status_code = 'X', phase_code = 'C'
 where concurrent_program_id = '51385'
   and phase_code = 'P'
   and status_code = 'Q';

Note: change the concurrent_program_id value to the concurrent program whose pending requests you want to terminate.
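Before running the update, it is worth counting the pending requests that will be affected, and the change must be committed afterwards; a small sketch using the same example concurrent_program_id (51385):

```sql
-- How many Pending/Queued requests will the update touch?
select count(*)
from fnd_concurrent_requests
where concurrent_program_id = '51385'
and phase_code = 'P' and status_code = 'Q';

-- After running the update above and verifying the count:
commit;
```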

REP-0004: Warning: Unable to open user preference file.



R12: Executing A Report Fails With REP-0004: Warning: Unable to open user preference file.
On an E-Business Suite Release 12.1.3 instance, attempting to execute a report fails with the following error:

ERROR: 
 
REP-0004: Warning: Unable to open user preference file.

Cause:

In Release 11i the environment variable ORACLE_LOCALPREFERENCE stored this setting in the ad80ux.env file, which was sourced when a concurrent program executed.

In Release 12 the setting can still be found in the context file, but it has been removed from the environment file.

Bug 11063588 (ORACLE_LOCALPREFERENCE REMOVED IN R12) has been logged for this issue and a fix has been created, but no patch has been released yet.

SOLUTION

To implement the solution, please execute the following steps:

Workaround:

1 - Back up the file $AD_TOP/admin/template/iAS_1012_env.tmp:

mv $AD_TOP/admin/template/iAS_1012_env.tmp $AD_TOP/admin/template/iAS_1012_env.tmp.org

2 - Edit the file $AD_TOP/admin/template/iAS_1012_env.tmp and add the following:

...
ORACLE_LOCALPREFERENCE=$ORACLE_HOME/tools/admin
export ORACLE_LOCALPREFERENCE
...

3 - Execute AutoConfig.

4 - Start the services, resubmit the failed concurrent request, and verify that the workaround worked.
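The file-level part of steps 1 and 2 can be sketched as follows; here a scratch directory stands in for the real $AD_TOP, and a cp-based backup is used instead of mv so the template can be edited in place (paths and content are illustrative only):

```shell
# Scratch directory standing in for $AD_TOP (hypothetical)
AD_TOP=$(mktemp -d)
mkdir -p "$AD_TOP/admin/template"
printf '# existing template content\n' > "$AD_TOP/admin/template/iAS_1012_env.tmp"

# Step 1: back up the original template
cp "$AD_TOP/admin/template/iAS_1012_env.tmp" \
   "$AD_TOP/admin/template/iAS_1012_env.tmp.org"

# Step 2: append the ORACLE_LOCALPREFERENCE setting; the heredoc is quoted
# so $ORACLE_HOME stays unexpanded and is resolved at runtime
cat >> "$AD_TOP/admin/template/iAS_1012_env.tmp" <<'EOF'
ORACLE_LOCALPREFERENCE=$ORACLE_HOME/tools/admin
export ORACLE_LOCALPREFERENCE
EOF

# Show the appended lines
grep ORACLE_LOCALPREFERENCE "$AD_TOP/admin/template/iAS_1012_env.tmp"
```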

EXPDP/IMPDP failing with UDE-31623: operation generated ORACLE error 31623

While exporting, we got the following error:

ERROR:

 
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

SOLUTION:

 
The solution for the above error is to increase the streams pool size.
We changed streams_pool_size from 64M to 128M and restarted the export process, which then completed successfully.
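The resize command itself isn't shown below; assuming an spfile is in use, a typical way (a sketch, not the exact command from the original session) is:

```sql
ALTER SYSTEM SET streams_pool_size = 128M SCOPE = BOTH;
```

With automatic SGA management (SGA_TARGET/MEMORY_TARGET), this sets a minimum size for the streams pool rather than a fixed one.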


Before:

SYS> show parameter stream

NAME                TYPE         VALUE
------------------  -----------  ------
streams_pool_size   big integer  64M

After:

SYS> show parameter stream

NAME                TYPE         VALUE
------------------  -----------  ------
streams_pool_size   big integer  128M

adop_session_details

select ADOP_SESSION_ID, BUG_NUMBER, STATUS, APPLIED_FILE_SYSTEM_BASE, PATCH_FILE_SYSTEM_BASE, ADPATCH_OPTIONS, NODE_NAME, END_DATE, CLONE_STATUS
from ad_adop_session_patches
order by end_date desc;


Note: STATUS values
R - Patch application is in progress.
N - Not applied on the current node, but applied on other nodes.
C - Reserved for clone and config_clone; indicates the clone completed normally.
H - Patch failed in the middle of the session.
F - Patch failed in the middle, but the user tried to skip some failures.
S - Patch application succeeded after skipping the failed jobs.
Y - Patch application succeeded.
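Building on the status codes above, a hedged variation of the query that surfaces only problem patches:

```sql
-- Patches that failed mid-session (H) or were force-skipped (F)
select adop_session_id, bug_number, status, node_name, end_date
from ad_adop_session_patches
where status in ('H', 'F')
order by end_date desc;
```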


Locked object details, session-wise

SELECT o.object_name, s.sid, s.serial#, p.spid, s.program, s.username,
       s.machine, s.port, s.logon_time, sq.sql_fulltext
FROM   v$locked_object l, dba_objects o, v$session s,
       v$process p, v$sql sq
WHERE  l.object_id = o.object_id
AND    l.session_id = s.sid
AND    s.paddr = p.addr
AND    s.sql_address = sq.address;
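Once the blocking session is identified, it can be killed with ALTER SYSTEM; a sketch in which the SID and SERIAL# are placeholders to be taken from the query output (on RAC, the instance id can be appended as a third value):

```sql
-- Placeholders: 123 = SID, 456 = SERIAL# from the query above
ALTER SYSTEM KILL SESSION '123,456' IMMEDIATE;
```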

Package running status

-- v$sqltext stores each statement in 64-character pieces, so the join can
-- return the same SID several times; DISTINCT collapses the duplicates
select distinct
   x.sid
from
   v$session x, v$sqltext y
where
   x.sql_address = y.address
and
   y.sql_text like '%<package name>%';

Measuring Oracle I/O Performance Using CALIBRATE_IO


There are many third-party tools to measure I/O performance, but CALIBRATE_IO is an Oracle-provided tool, introduced in Oracle Database 11g Release 1. There are a few restrictions associated with the procedure.

    The procedure must be called by a user with the SYSDBA privilege.
    TIMED_STATISTICS must be set to TRUE, which is the default when STATISTICS_LEVEL is set to TYPICAL.
    Datafiles must be accessed using asynchronous I/O. This is the default when ASM is used.
    Only one calibration can be run at a time. If another calibration is initiated at the same time, it will fail.

We can check the current asynchronous I/O setting for datafiles using the following query.

SELECT d.name, i.asynch_io
FROM   v$datafile d, v$iostat_file i
WHERE  d.file# = i.file_no
AND    i.filetype_name  = 'Data File';

To turn on asynchronous I/O, issue the following command and restart the database.

ALTER SYSTEM SET filesystemio_options=setall SCOPE=SPFILE;

Now we can call the procedure by running the following code.

SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);

  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
END;
/

In addition to appearing on screen, the results of a calibration run can be displayed using the DBA_RSRC_IO_CALIBRATE view.

SET LINESIZE 100
COLUMN start_time FORMAT A20
COLUMN end_time FORMAT A20

SELECT TO_CHAR(start_time, 'DD-MON-YYYY HH24:MI:SS') AS start_time,
       TO_CHAR(end_time, 'DD-MON-YYYY HH24:MI:SS') AS end_time,
       max_iops,
       max_mbps,
       max_pmbps,
       latency,
       num_physical_disks AS disks
FROM   dba_rsrc_io_calibrate;


Calibration runs can be monitored using the V$IO_CALIBRATION_STATUS view.
View for I/O calibration results

SQL> desc V$IO_CALIBRATION_STATUS
  Name                                      Null?    Type
  ----------------------------------------- -------- ----------------------------
  STATUS                                             VARCHAR2(13)
  CALIBRATION_TIME                                   TIMESTAMP(3)

SQL> desc gv$io_calibration_status
  Name                                      Null?    Type
  ----------------------------------------- -------- ----------------------------
  INST_ID                                            NUMBER
  STATUS                                             VARCHAR2(13)
  CALIBRATION_TIME                                   TIMESTAMP(3)

Column explanation:
-------------------
STATUS:
  IN PROGRESS   : Calibration in Progress (Results from previous calibration
                  run displayed, if available)
  READY         : Results ready and available from earlier run
  NOT AVAILABLE : Calibration results not available.

CALIBRATION_TIME: End time of the last calibration run
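For example, the progress of a running calibration can be checked with:

```sql
SELECT status, calibration_time
FROM   v$io_calibration_status;
```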

Query to display a list of all the spfile parameters



Query:

This query displays the parameters present in the spfile.




SET LINESIZE 500

COLUMN name  FORMAT A30
COLUMN value FORMAT A60
COLUMN displayvalue FORMAT A60

SELECT sp.sid,
       sp.name,
       sp.value,
       sp.display_value
FROM   v$spparameter sp
ORDER BY sp.name, sp.sid;
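A small variation: v$spparameter also has an ISSPECIFIED column, so the output can be limited to parameters explicitly set in the spfile:

```sql
-- Only parameters explicitly specified in the spfile
SELECT sp.sid,
       sp.name,
       sp.display_value
FROM   v$spparameter sp
WHERE  sp.isspecified = 'TRUE'
ORDER BY sp.name, sp.sid;
```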

Using a deprecated ASM parameter might prevent your cluster from starting

I used the ASM_PREFERRED_READ_FAILURE_GROUPS parameter to see how I could force ASM to read from a specific failure group. Testing was successful, but I didn't know that this parameter is deprecated in 12.2, and beyond that, I didn't imagine that it might cause downtime and prevent the Clusterware from starting.
Here's the scenario, which you can try in your test environment. First of all, I set this parameter to the failure group and then reset it back:
SQL> alter system set ASM_PREFERRED_READ_FAILURE_GROUPS='';
System altered.
SQL>

Then I made some hardware changes to my nodes and rebooted them. After the nodes rebooted, I checked the status of the Clusterware, and it was down on all nodes.

[oracle@oratest01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager


[oracle@oratest01 ~]$ crsctl check cluster -all
**************************************************************
oratest01:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
**************************************************************
oratest02:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************
oratest03:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************

Next, I checked whether the ohasd and crsd background processes were up:
[root@oratest01 oracle]# ps -ef|grep init.ohasd|grep -v grep
root      1252     1  0 02:49 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
[root@oratest01 oracle]#

[root@oratest01 oracle]# ps -ef|grep crsd|grep -v grep
[root@oratest01 oracle]#

OHAS was up and running, but CRSD was not. The ASM instance must be up in order to bring up CRSD, so I checked whether the ASM instance was up, but it was also down:
[oracle@oratest01 ~]$ ps -ef | grep smon
oracle    5473  3299  0 02:50 pts/0    00:00:00 grep --color=auto smon
[oracle@oratest01 ~]$



Next, I decided to check the log files. I logged in to adrci to find the centralized Clusterware log folder:

[oracle@oratest01 ~]$ adrci
ADRCI: Release 12.2.0.1.0 - Production on Fri Oct 20 02:51:59 2017
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/u01/app/oracle"
adrci> show home
ADR Homes:
diag/rdbms/_mgmtdb/-MGMTDB
diag/rdbms/proddb/proddb1
diag/asm/user_root/host_4288267646_107
diag/asm/user_oracle/host_4288267646_107
diag/asm/+asm/+ASM1
diag/crs/oratest01/crs
diag/clients/user_root/host_4288267646_107
diag/clients/user_oracle/host_4288267646_107
diag/tnslsnr/oratest01/asmnet1lsnr_asm
diag/tnslsnr/oratest01/listener_scan1
diag/tnslsnr/oratest01/listener_scan2
diag/tnslsnr/oratest01/listener_scan3
diag/tnslsnr/oratest01/listener
diag/tnslsnr/oratest01/mgmtlsnr
diag/asmtool/user_root/host_4288267646_107
diag/asmtool/user_oracle/host_4288267646_107
diag/apx/+apx/+APX1
diag/afdboot/user_root/host_4288267646_107
adrci> exit
[oracle@oratest01 ~]$ cd /u01/app/oracle/diag/crs/oratest01/crs
[oracle@oratest01 crs]$ cd trace

[oracle@oratest01 trace]$ tail -f evmd.trc
2017-10-20 02:54:26.533 :  CRSOCR:2840602368:  OCR context init failure.  Error: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]
2017-10-20 02:54:27.552 :  CRSOCR:2840602368:  OCR context init failure.  Error: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]
2017-10-20 02:54:28.574 :  CRSOCR:2840602368:  OCR context init failure.  Error: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]

From the evmd.trc file it can be seen that the OCR was not initialized. Then I checked the alert.log file:

[oracle@oratest01 trace]$ tail -f alert.log
2017-10-20 02:49:49.613 [OCSSD(3825)]CRS-1605: CSSD voting file is online: AFD:DATA1; details in /u01/app/oracle/diag/crs/oratest01/crs/trace/ocssd.trc.
2017-10-20 02:49:49.627 [OCSSD(3825)]CRS-1672: The number of voting files currently available 1 has fallen to the minimum number of voting files required 1.
2017-10-20 02:49:58.812 [OCSSD(3825)]CRS-1601: CSSD Reconfiguration complete. Active nodes are oratest01 .
2017-10-20 02:50:01.154 [OCTSSD(5351)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 5351
2017-10-20 02:50:01.161 [OCSSD(3825)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2017-10-20 02:50:02.099 [OCTSSD(5351)]CRS-2403: The Cluster Time Synchronization Service on host oratest01 is in observer mode.
2017-10-20 02:50:03.233 [OCTSSD(5351)]CRS-2407: The new Cluster Time Synchronization Service reference node is host oratest01.
2017-10-20 02:50:03.235 [OCTSSD(5351)]CRS-2401: The Cluster Time Synchronization Service started on host oratest01.
2017-10-20 02:50:10.454 [ORAAGENT(3362)]CRS-5011: Check of resource "ora.asm" failed: details at "(:CLSN00006:)" in "/u01/app/oracle/diag/crs/oratest01/crs/trace/ohasd_oraagent_oracle.trc"
2017-10-20 02:50:18.692 [ORAROOTAGENT(3198)]CRS-5019: All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in "/u01/app/oracle/diag/crs/oratest01/crs/trace/ohasd_orarootagent_root.trc".

CRS didn't start because ASM was not up and running. Checking why ASM wasn't started upon server boot seemed a good starting point for the investigation, so I logged in and tried to start the ASM instance:

[oracle@oratest01 ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.2.0.1.0 Production on Fri Oct 20 02:55:12 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-01078: failure in processing system parameters
SQL> startup
ORA-01078: failure in processing system parameters
SQL> startup
ORA-01078: failure in processing system parameters
SQL>

I checked ASM alert.log file, but it didn’t provide enough information why ASM didn’t start:
NOTE: ASM client -MGMTDB:_mgmtdb:clouddb disconnected unexpectedly.
NOTE: check client alert log.
NOTE: Trace records dumped in trace file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ufg_20658_-MGMTDB__mgmtdb.trc
NOTE: cleaned up ASM client -MGMTDB:_mgmtdb:clouddb connection state (reg:2993645709)
2017-10-20T02:47:20.588256-04:00
NOTE: client +APX1:+APX:clouddb deregistered
2017-10-20T02:47:21.201319-04:00
NOTE: detected orphaned client id 0x10004.
2017-10-20T02:48:49.613505-04:00
WARNING: Write Failed, will retry. group:2 disk:0 AU:9067 offset:151552 size:4096
path:AFD:DATA1
incarnation:0xf0a9ba5e synchronous result:'I/O error'
subsys:/opt/oracle/extapi/64/asm/orcl/1/libafd12.so krq:0x7f8fced52240 bufp:0x7f8fc9262000 osderr1:0xfffffff8 osderr2:0xc28
IO elapsed time: 0 usec Time waited on I/O: 0 usec
ERROR: unrecoverable error ORA-15311 raised in ASM I/O path; terminating process 20200

The problem seemed to be in the ASM parameter file, so I decided to start the instance with default parameters and then investigate. For this, I searched for the string "parameters" in the ASM alert.log file to get the list of parameters and the parameter file location:
[oracle@oratest01 trace]$ more +ASM1_alert.log
Using parameter settings in server-side spfile +DATA/clouddb/ASMPARAMETERFILE/registry.253.949654249
System parameters with non-default values:
  large_pool_size          = 12M
  remote_login_passwordfile= "EXCLUSIVE"
  asm_diskstring           = "/dev/sd*"
  asm_diskstring           = "AFD:*"
  asm_diskgroups           = "NEW"
  asm_diskgroups           = "TESTDG"
  asm_power_limit          = 1
  _asm_max_connected_clients= 4
NOTE: remote asm mode is remote (mode 0x202; from cluster type)
2017-08-11T10:22:24.834431-04:00
Cluster Communication is configured to use IPs from: GPnP

Then I created a parameter file (/home/oracle/pfile_asm.ora) with default settings and started the instance:
SQL> startup pfile='/home/oracle/pfile_asm.ora';
ASM instance started

Total System Global Area 1140850688 bytes
Fixed Size                                8629704 bytes
Variable Size                      1107055160 bytes
ASM Cache                            25165824 bytes
ASM diskgroups mounted
SQL> exit

Great! ASM is up. Now I can restore my parameter file and try to start ASM with it:

[oracle@oratest01 ~]$ sqlplus / as sysasm
SQL> create pfile='/home/oracle/pfile_orig.ora' from spfile='+DATA/clouddb/ASMPARAMETERFILE/registry.253.957837377';
File created.
SQL> 

And here are the contents of my original ASM parameter file:
[oracle@oratest01 ~]$ more /home/oracle/pfile_orig.ora
+ASM1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from in memory value
+ASM3.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from in memory value
+ASM3._asm_max_connected_clients=5
+ASM2._asm_max_connected_clients=8
+ASM1._asm_max_connected_clients=5
*.asm_diskgroups='DATA','ACFSDG'#Manual Mount
*.asm_diskstring='/dev/sd*','AFD:*'
*.asm_power_limit=1
*.asm_preferred_read_failure_groups=''
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Good. Now let’s start ASM with it:
SQL> shut abort
ASM instance shutdown
SQL> startup pfile='/home/oracle/pfile_orig.ora';
ORA-32006: ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter has been deprecated

ORA-01078: failure in processing system parameters
SQL>

Whoa: ASM failed to start because of a deprecated parameter?! Let's remove it from the pfile and start ASM without the ASM_PREFERRED_READ_FAILURE_GROUPS parameter:
[oracle@oratest01 ~]$ sqlplus / as sysasm
Connected to an idle instance.
SQL> startup pfile='/home/oracle/pfile_orig.ora';
ASM instance started

Total System Global Area 1140850688 bytes
Fixed Size                                8629704 bytes
Variable Size                      1107055160 bytes
ASM Cache                            25165824 bytes
ASM diskgroups mounted
SQL> 

It started! Next I created the ASM server parameter file based on this pfile and restarted the instance:
SQL> create spfile='+DATA' from pfile='/home/oracle/pfile_orig.ora';
File created.

SQL> shut immediate
ASM diskgroups dismounted
ASM instance shutdown

SQL> startup
ASM instance started
Total System Global Area 1140850688 bytes
Fixed Size                                8629704 bytes
Variable Size                      1107055160 bytes
ASM Cache                            25165824 bytes
ASM diskgroups mounted
SQL> 

After getting ASM up and running, I restarted the Clusterware on all nodes and checked the status:
[root@oratest01 ~]$ crsctl stop cluster -all
[root@oratest01 ~]$ crsctl start cluster -all
[oracle@oratest01 ~]$ crsctl check cluster -all
**************************************************************
oratest01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
CRS-4404: The following nodes did not reply within the allotted time:
oratest02, oratest03

The first node was up, but I wasn't able to get the Clusterware status on the other nodes and got a CRS-4404 error. To solve it, I killed the gpnpd process on all nodes and ran the command again:

[oracle@oratest01 ~]$ ps -ef | grep gpn
oracle    3418     1  0 02:49 ?        00:00:15 /u01/app/12.2.0.1/grid/bin/gpnpd.bin
[oracle@oratest01 ~]$ kill -9 3418
[oracle@oratest01 ~]$ ps -ef | grep gpn
oracle   16169     1  3 06:52 ?        00:00:00 /u01/app/12.2.0.1/grid/bin/gpnpd.bin

[oracle@oratest01 ~]$ crsctl check cluster -all
**************************************************************
oratest01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oratest02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oratest03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@oratest01 ~]$
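One lesson from this incident: check for deprecated parameters before (and after) an upgrade. One way, run against the ASM or database instance, is to query v$parameter, which has an ISDEPRECATED column:

```sql
-- Deprecated parameters that have been explicitly set
SELECT name, value
FROM   v$parameter
WHERE  isdeprecated = 'TRUE'
AND    isdefault = 'FALSE';
```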

RAC Node Eviction

Why Node Eviction

Node eviction/reboot is used for I/O fencing: it ensures that writes from I/O-capable clients can be cleared, avoiding potential corruption scenarios in the event of a network split, node hang, or some other fatal event in a clustered environment.

By definition, I/O fencing (a cluster industry technique) is the isolation of a malfunctioning node from a cluster's shared storage to protect the integrity of data.

We want to reboot here as fast as possible. Choosing not to flush local disks or kill off processes gracefully helps us shut down quickly. It is imperative that we do not flush any I/O to the shared disks; otherwise we may write irrelevant information to Clusterware components (OCR or voting disk) or to database files.

Who evicts/reboots the node

The daemons for Oracle Clusterware (CRS) are started by init when the machine boots, viz. CRSD, OCSSD, EVMD, OPROCD (when vendor clusterware is absent), and OCLSOMON.

There are three fatal processes, i.e. processes whose abnormal halt or kill will provoke a node reboot:
1. ocssd.bin (runs as oracle)
2. oclsomon.bin (monitors OCSSD; runs as root)
3. oprocd.bin (I/O fencing in non-vendor clusterware environments; runs as root)

Other non-CRS processes capable of evicting:

◦ OCFS2 (if used)
◦ Vendor clusterware (if used)
◦ Operating system (panic)

When Node Eviction

Read below to understand when OCSSD will trigger a node reboot.

OCSSD's primary job is internode health monitoring (via the NM and GM services); its other roles are not discussed in depth here.

It is a multi-threaded application, i.e. several jobs or threads run simultaneously, each performing specific tasks. The ocssd.log reveals all the thread names: clss%, clsc%, etc.

E.g. threads for performing heartbeats (network and disk) and monitoring, sending/receiving cluster messages, etc.

Evictions occur when CSS detects a heartbeat problem that must be dealt with: for example, lost network communication with another node, or lost disk heartbeat information from another node. CSS-initiated evictions (via poison packet or kill block) should always result in a node reboot.

init spawns init.cssd, which in turn spawns OCSSD as a child. If ocssd dies, is killed, or exits, the node-kill functionality of the init script will kill the node. Killing init.cssd (respawn creates a duplicate cssd) will also result in a reboot.

Read below to understand when OCLSOMON will trigger a node reboot.

By now we know the working of CSS is crucial for cluster functioning. This calls for a process that can keep track of CSS's good health: OCLSOMON.

This process monitors the CSS daemon for hangs or scheduling issues and can reboot a node if there is a perceived hang of CSS threads.
A variety of problems, such as issues with the OS scheduler, resources, hardware, drivers, the network, misconfiguration, or an Oracle code bug, may cause a process or thread to hang or crash.

Some routines in the OS kernel cause a kernel 'lockup' and are non-preemptable; this causes CPU starvation or scheduling issues. Typically, on AIX, memory over-commitment may lead to heavy paging activity, resulting in scheduling issues.

Read below to understand when OPROCD will trigger a node reboot.

Unlike CSS, which is responsible for maintaining and monitoring the good health of all nodes in the cluster, OPROCD monitors the health of the local node where it runs (when vendor clusterware is absent).

The OPROCD process is locked in memory on each local cluster node where it executes, to detect scheduling latencies caused by hardware and driver freezes on a machine, and to provide I/O fencing functionality (only in 10g and 11gR1).

To provide this functionality, OPROCD performs its check, stops running (sleeps for a timeout, -t 1000 ms), and if the wake-up is later than expected by more than the margin (-m 500 ms), OPROCD reboots the local node (think of an alarm-clock snooze).

The default values for OPROCD can be overly sensitive to scheduling latencies and may cause a FALSE reboot, more so in pre-11.2 releases, because its code does not function in tandem with the CSS eviction code (e.g. NM polling threads).

A FALSE reboot is when a reboot takes place while no formal CSS eviction was in progress. CSS expiring misscount/disktimeout and rebooting the node is not considered a false reboot.

Due to the fast nature of the reboot, the CRS logging messages might not actually get flushed to disk. However, with newer CRS releases and on some platforms (except AIX), a kernel crash dump/panic/TOC is now performed on reboot so that OS support can investigate what the system looked like when the node was crashed.

IMPORTANT: OCLSOMON and OPROCD do not exist in 11gR2; the CSSD monitor (ora.cssdmonitor) takes over the functionality of oclsomon and oprocd.
Also, CPU or memory starvation caused by non-Clusterware services may lead to node eviction. Sometimes a hardware freeze will also cause node eviction.

New READ Object Privilege in 12cR1


The "SELECT" object privilege, in addition to allowing the user to query the table, allows the user to:

LOCK TABLE table_name IN EXCLUSIVE MODE;
SELECT ... FROM table_name FOR UPDATE;

The new "READ" object privilege does not allow the user to lock tables in exclusive mode or select from a table for update.
Prior to 12.1.0.2, only the "SELECT" object privilege is available, which allows the locking:
 
GRANT SELECT ON ... TO ...;

From 12.1.0.2 onwards, the new "READ" object privilege is available, which doesn't allow the locking:
 
GRANT READ ON ... TO ...;

This also applies to the "SELECT ANY TABLE" privilege, which prior to 12.1.0.2 allows the locking:
 
GRANT SELECT ANY TABLE TO ...;

From 12.1.0.2 onwards, the new "READ ANY TABLE" privilege is available, which doesn't allow the locking:
 
GRANT READ ANY TABLE TO ...;
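A minimal sketch of the behavioral difference, using hypothetical names throughout (schema app, table t, users u_sel and u_read; none of these are from the original post):

```sql
-- As the owning schema APP
GRANT SELECT ON app.t TO u_sel;   -- pre-12.1.0.2 style grant
GRANT READ   ON app.t TO u_read;  -- 12.1.0.2+ read-only grant

-- Connected as U_SEL: both of these succeed
SELECT * FROM app.t;
SELECT * FROM app.t FOR UPDATE;

-- Connected as U_READ: the plain query succeeds...
SELECT * FROM app.t;
-- ...but locking attempts raise an insufficient-privileges error
SELECT * FROM app.t FOR UPDATE;
LOCK TABLE app.t IN EXCLUSIVE MODE;
```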

MRP process getting terminated with error ORA-10485

If you have a Data Guard environment where you've just applied a Database Bundle Patch and an OJVM patch, it's possible that your physical standby will throw the following error:


Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY
Wed Oct 11 08:11:57 2017
Media Recovery Log +RECOC1/VER1S/ARCHIVELOG/2017_10_11/thread_1_seq_18251.24912.957080425
MRP0: Background Media Recovery terminated with error 10485
Wed Oct 11 08:11:57 2017
Errors in file /u01/app/oracle/diag/rdbms/ver1s/VER1S2/trace/VER1S2_pr00_220336.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

This is because your database is "open", i.e. Active Data Guard (a licensed option), and the Managed Recovery Process is trying to apply the redo generated by datapatch, which it can't do while "open".

See MOS note for more info:

MRP process getting terminated with error ORA-10485 (Doc ID 1618485.1):

“ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

The Real-Time Query feature was enabled when an attempt was made to recover through migration redo generated during primary upgrades or downgrades”

The easiest solution is to restart the database in "mount" mode, allowing the redo shipped via Data Guard to apply the patch, then restart it again in "open" mode:

DGMGRL> show configuration;

Configuration - ver1p

Protection Mode: MaxPerformance
 Members:
 ver1p - Primary database
 ver1s - Physical standby database
 Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover: DISABLED

Configuration Status:
ERROR (status updated 37 seconds ago)

DGMGRL> show database ver1s;

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 2 hours 24 minutes 33 seconds (computed 1 second ago)
 Average Apply Rate: 99.32 MByte/s
 Real Time Query: OFF
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Error(s):
 ORA-16766: Redo Apply is stopped

Database Status:
ERROR

DGMGRL>

Now restart the database in "mount" mode, allowing the redo shipped via Data Guard to apply the patch:

[oracle@v1ex2dbadm01 ~]$ srvctl status database -d VER1S -v
Instance VER1S1 is running on node v1ex2dbadm01 with online services VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4. Instance status: Open,Readonly.
Instance VER1S2 is running on node v1ex2dbadm02. Instance status: Open,Readonly.
[oracle@v1ex2dbadm01 ~]$ srvctl config database -d VER1S
Database unique name: VER1S
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +DATAC1/VER1S/PARAMETERFILE/spfileVER1S.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: VER1S1,VER1S2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$ srvctl stop database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl start database -d VER1S -o mount

Re-check the Data Guard Broker to confirm that the apply lag and status have cleared as expected:

DGMGRL> show database ver1s

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 0 seconds (computed 0 seconds ago)
 Average Apply Rate: 34.62 MByte/s
 Real Time Query: OFF
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Status:
SUCCESS

DGMGRL>

Then restart the database in "open" mode (Active Data Guard):

[oracle@v1ex2dbadm01 ~]$ srvctl stop database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl start database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl status database -d VER1S -v
Instance VER1S1 is running on node v1ex2dbadm01 with online services VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4. Instance status: Open,Readonly.
Instance VER1S2 is running on node v1ex2dbadm02. Instance status: Open,Readonly.

Re-check the Data Guard Broker to confirm that "Real Time Query" is back on as expected:

DGMGRL> show configuration

Configuration - ver1p

Protection Mode: MaxPerformance
 Members:
 ver1p - Primary database
 ver1s - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 2 seconds ago)

DGMGRL> show database ver1s

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 0 seconds (computed 0 seconds ago)
 Average Apply Rate: 84.35 MByte/s
 Real Time Query: ON
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Status:
SUCCESS

DGMGRL>

How To Enable DDL Logging in the Database

If for whatever reason you are required to log DDL (for example, I needed to know why the LAST_DDL_TIME of a table was getting updated), you can do this from Oracle 11g onwards.

To Enable:

SQL> show parameter ENABLE_DDL_LOGGING

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------ 
enable_ddl_logging boolean FALSE

SQL> ALTER SYSTEM SET ENABLE_DDL_LOGGING=TRUE;

System altered.

SQL> show parameter ENABLE_DDL_LOGGING

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------ 
enable_ddl_logging boolean TRUE
 

To disable:

 

SQL> show parameter ENABLE_DDL_LOGGING

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------ 
enable_ddl_logging boolean TRUE

SQL> ALTER SYSTEM SET ENABLE_DDL_LOGGING=FALSE;

System altered. 

SQL> show parameter ENABLE_DDL_LOGGING

NAME TYPE VALUE 
------------------------------------ ----------- ------------------------------ 
enable_ddl_logging boolean FALSE

Create some DDL:

SQL> create view zeddba as select * from dual;

View created.

SQL> select * from zeddba;

D
-
X


SQL> drop view zeddba;

View dropped.

Oracle 12c

Now if you look in the following text file:
 
$ADR_BASE/diag/rdbms/${DBNAME}/${ORACLE_SID}/log/ddl_${ORACLE_SID}.log

You will see:
 
Mon Sep 11 15:52:59 2017
diag_adl:create view zeddba as select * from dual
diag_adl:drop view zeddba

There is also a XML version:
 
$ADR_BASE/diag/rdbms/${DBNAME}/${ORACLE_SID}/log/ddl/log.xml
 
<msg time='2017-09-11T15:41:35.000+01:00' org_id='oracle' comp_id='rdbms'
msg_id='opiexe:4424:2946163730' type='UNKNOWN' group='diag_adl'
level='16' host_id='v1ex1dbadm01.v1.com' host_addr='x.x.x.x'
version='1'>
<txt>create view zeddba as select * from dual
</txt>
</msg>
<msg time='2017-09-11T15:41:45.942+01:00' org_id='oracle' comp_id='rdbms'
msg_id='opiexe:4424:2946163730' type='UNKNOWN' group='diag_adl'
level='16' host_id='v1ex1dbadm01.v1.com' host_addr='x.x.x.x'>
<txt>drop view zeddba
</txt>
</msg>

Oracle 11g

DDL statements are written to the alert log in: $ADR_BASE/diag/rdbms/${DBNAME}/${ORACLE_SID}/trace/alert_${ORACLE_SID}.log

License

Oracle Database Lifecycle Management Pack for Oracle Database
Licensed Parameters
"The init.ora parameter ENABLE_DDL_LOGGING is licensed as part of the Database Lifecycle Management Pack when set to TRUE. When set to TRUE, the database reports schema changes in real time into the database alert log under the message group schema_ddl. The default setting is FALSE."

More info

Database Reference: ENABLE_DDL_LOGGING
See MOS Note:
How To Enable DDL Logging in Database (Doc ID 2207341.1)
“When ENABLE_DDL_LOGGING is set to true, the following DDL statements are written to the alert log:
ALTER/CREATE/DROP/TRUNCATE CLUSTER
ALTER/CREATE/DROP FUNCTION
ALTER/CREATE/DROP INDEX
ALTER/CREATE/DROP OUTLINE
ALTER/CREATE/DROP PACKAGE
ALTER/CREATE/DROP PACKAGE BODY
ALTER/CREATE/DROP PROCEDURE
ALTER/CREATE/DROP PROFILE
ALTER/CREATE/DROP SEQUENCE
CREATE/DROP SYNONYM
ALTER/CREATE/DROP/RENAME/TRUNCATE TABLE
ALTER/CREATE/DROP TRIGGER
ALTER/CREATE/DROP TYPE
ALTER/CREATE/DROP TYPE BODY
DROP USER
ALTER/CREATE/DROP VIEW
Earlier, RENAME was not logged and a bug was reported for that and the same is fixed in 11.2.0.4.
Document 12938609.8 – ENABLE_DDL_LOGGING does not log RENAME table statements, this is fixed in 11.2.0.4
However, the feature does not log DDLs of some DBMS_STATS operations like:
set_column_stats
set_index_stats
create_extended_stats
drop_extended_stats
set_*_prefs (table/schema/global etc)
delete_pending_stats
publish_pending_stats
export_pending_stats
create_stat_table 
There is an enhancement raised with development to add more operations to this mechanism and would get fixed in 12.2.
Unpublished Bug 22368778 : PERF_DIAG: ENABLE_DDL_LOGGING NEEDS TO LOG MORE DDLS”

User Management Of The Exadata Smart Flash Cache


There are two techniques provided to manually use and manage the cache. The first enables the
pinning of objects in the flash cache. The second supports the creation of logical disks out of the
flash for the permanent placement of objects on flash disks.

Pinning Objects In The Flash Cache:

Preferential treatment over which database objects are cached is also provided with the Exadata
Smart Flash Cache. For example, objects can be pinned in the cache and always be cached, or an
object can be identified as one which should never be cached. This control is provided by the
new storage clause attribute, CELL_FLASH_CACHE, which can be assigned to a database table,
index, partition and LOB column.

There are three values the CELL_FLASH_CACHE attribute can be set to. DEFAULT specifies
that caching of the object is managed automatically, as described in the previous
section. NONE specifies that the object will never be cached. KEEP specifies that the object
should be kept in cache.

For example, the following command could be used to pin the table CUSTOMERS in the Exadata
Smart Flash Cache:
ALTER TABLE customers STORAGE (CELL_FLASH_CACHE KEEP);
This storage attribute can also be specified when the table is created.
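To make the storage clause usage concrete, here is a brief sketch; the ORDERS and AUDIT_LOG tables below are made-up examples, not from the original text:

```sql
-- Pin a table in the flash cache at creation time (hypothetical table)
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  details  VARCHAR2(200)
) STORAGE (CELL_FLASH_CACHE KEEP);

-- Ensure an object is never cached (hypothetical table)
ALTER TABLE audit_log STORAGE (CELL_FLASH_CACHE NONE);

-- Revert to automatic cache management
ALTER TABLE orders STORAGE (CELL_FLASH_CACHE DEFAULT);
```

The current setting for each table can be checked in the CELL_FLASH_CACHE column of DBA_TABLES.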

The Sun Oracle Exadata Storage Server will cache data for the CUSTOMERS table more
aggressively and will try keeping this data in Exadata Smart Flash Cache longer than cached data
for other tables. In the normal case where the CUSTOMERS table is spread across many Sun
Oracle Exadata Storage Servers, each Exadata cell will cache its part of the table in its own
Exadata Smart Flash Cache. Generally, there should be more flash cache available than the
total size of the objects for which KEEP is specified; this allows the table to become completely cached over time.
While the default behavior for sequential scans is to bypass the flash cache, this is not the case
when KEEP is specified. If KEEP has been specified for an object, and it is accessed via an
offloaded Smart Scan, the object is kept in and scanned from cache. Another advantage of the
flash cache is that when an object that is kept in the cache is scanned, the Exadata software will
simultaneously read the data from both flash and disk to get a higher aggregate scan rate than is
possible from either source independently.

Creating Flash Disks Out Of The Flash Cache:

When an Exadata cell is installed, by default, all the flash is assigned to be used as flash cache and
user data is automatically cached using the default caching behavior. Optionally, a portion of the
cache can be reserved and used as logical flash disks. These flash disks are treated like any
Exadata cell disk in the Exadata cell except they actually reside and are stored as non-volatile
disks in the cache. For each Exadata cell the space reserved for flash disks is allocated across
sixteen (16) cell disks – 4 cell disks per flash card. Grid disks are created on these flash-based cell
disks and the grid disks are assigned to an Automatic Storage Management (ASM) diskgroup.
The best practice would be to reserve the same amount of flash on each Exadata cell for flash
disks and have the ASM diskgroup spread evenly across the Exadata cells in the configuration
just as you would do for regular Exadata grid disks. This will evenly distribute the flash I/O load
across the Exadata cells and flash.
These high-performance logical flash disks can be used to store frequently accessed data. To use
them requires advance planning to ensure adequate space is reserved for the tablespaces stored
on them. In addition, backup of the data on the flash disks must be done in case media recovery
is required, just as it would be for data stored on conventional disks. This option is
primarily useful for highly write-intensive workloads where the write rate is higher than the
disks can sustain.


OSB CLOUD BACKUP FOR AMAZON S3

Steps to implement OSB cloud module for Amazon S3:
============================================

Before running the OSB Cloud Module for Amazon S3 installer, verify the following prerequisites:
1) The OSB Cloud Module for Amazon S3 installer requires Java 1.5 or higher to run.
2) Download the OSB Cloud Module for Amazon S3 installer (osbws_installer.zip) from OTN to the database server:
http://www.oracle.com/technetwork/products/secure-backup/secure-backup-s3-484709.html
3) Copy and unzip the OSB Cloud Module for Amazon S3 installer to /home/oracle.
4) Create a directory for the secure Oracle wallet. The Oracle wallet will be created by the installer and used to store your AWS S3 credentials.
($ORACLE_HOME/dbs/osbws_wallet)

Install Oracle Secure Backup Cloud Module for Amazon S3:
=================================================
$ mkdir -p $ORACLE_HOME/dbs/osbws_wallet

$ cd /home/oracle
$ unzip osbws_installer.zip

$ java -jar osbws_install.jar -AWSID ************* -AWSKey **************** \
      -otnUser r*********** -otnPass ******* \
      -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib


[-AWSID] and [-AWSKey] ====>(Mandatory)

Supply your AWS Access Key and Secret Key which serve the purpose of ID and Password to access Amazon S3. To obtain your AWS Access Key and Secret Key from the AWS website, navigate to Security Credentials, click on the Access Keys tab under Access Credentials to create or view your Access Key ID and Secret Access Key.










[-otnUser] =====> (Mandatory)

Your OTN username which the installer uses to identify the customer.

[-otnPass] =====> (Mandatory)

Your OTN password.

[-walletDir] ====> (Mandatory)

Directory where you want the installer to create a secure wallet containing your AWS S3 credentials.

[-libDir] =====> (Optional)

Directory where you want the installer to download the OSB Cloud Module for Amazon S3 software library.

[-configFile] ====> (Optional)

The name of the initialization parameter file that will be created by the install tool. This parameter file will be referenced during your RMAN jobs for a particular database. If this parameter is not specified then the initialization parameter file will be created in a system-dependent default location and filename based on the ORACLE_SID. For example: $ORACLE_HOME/dbs/osbws<ORACLE_SID>.ora.
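For illustration, an install run that names the parameter file explicitly might look like the following sketch; the file name osbwsPROD.ora is a made-up example, and the credentials are masked as in the original:

```
java -jar osbws_install.jar -AWSID ********* -AWSKey ********* \
     -otnUser ********* -otnPass ********* \
     -walletDir $ORACLE_HOME/dbs/osbws_wallet \
     -libDir $ORACLE_HOME/lib \
     -configFile $ORACLE_HOME/dbs/osbwsPROD.ora
```

The path given to -configFile is then the file referenced by OSB_WS_PFILE in the RMAN channel configuration.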
Fix Media Management Library Loading Error:
======================================

The installer does not create the default media management library symbolic link for the OSB Cloud Module for Amazon S3 media management library. This results in the following RMAN error when attempting to allocate a channel of type sbt:

ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 2

Manually create the following symbolic link for the default media management library before performing backups using the SBT:

$ ln -s $ORACLE_HOME/lib/libosbws12.so $ORACLE_HOME/lib/libobk.so


Modify Oracle Recovery Manager's Media Management Configuration:
==========================================================

run {
configure channel device type sbt parms="SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db_1/lib/libosbws.so,
SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/11.2.0.3/db_1/dbs/osbwsSANDBNEW.ora)";
}

 


 

Perform an Oracle Database Backup to the Cloud:
=========================================

testing the backup connection:

run
{
allocate channel ch1 type 'sbt_tape';
release channel ch1;
}

 


creating backup to flash recovery area (for testing):

RMAN> backup as compressed backupset datafile 4;


copying backup to amazon cloud(s3):


RMAN> run
{
allocate channel ch1 type 'sbt_tape';
backup recovery area;
release channel ch1;
}


 

verifying the backup information:

RMAN> list backup summary;

 




After moving oracle database backup to cloud:

the database backup has been copied from flash recovery area to cloud(S3).


 






Minimize System Contention


Understanding Response Time

SQL> select metric_name, value
from v$sysmetric
where metric_name in ('Database CPU Time Ratio',
'Database Wait Time Ratio') and
intsize_csec =
(select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                 VALUE
------------------------    ----------
Database Wait Time Ratio    11.371689
Database CPU Time Ratio     87.831890
SQL>





Identifying SQL Statements with the Most Waits

SQL> select ash.user_id,
u.username,
s.sql_text,
sum(ash.wait_time +
ash.time_waited) ttl_wait_time
from v$active_session_history ash,
v$sqlarea s,
dba_users u
where ash.sample_time between sysdate - 60/2880 and sysdate
and ash.sql_id = s.sql_id
and ash.user_id = u.user_id
group by ash.user_id,s.sql_text, u.username
order by ttl_wait_time desc;


Examining Session Waits

You can use the V$SESSION_WAIT view to get a quick idea of what a particular session is waiting for. The following views are useful when examining session waits:

• V$SESSION: This view shows the specific resource currently being waited for, as well as the
event last waited for in each session.
• V$SESSION_WAIT: This view lists either the event currently being waited for or the event last
waited on for each session. It also shows the wait state and the wait time.
• V$SESSION_WAIT_HISTORY: This view shows the last ten wait events for each current session.
• V$SESSION_EVENT: This view shows the cumulative history of events waited on for each
session. The data in this view is available only so long as a session is active.
• V$SYSTEM_EVENT: This view shows each wait event and the time the entire instance has waited
on that event since you started the instance.
• V$SYSTEM_WAIT_CLASS: This view shows wait event statistics by wait classes.


SQL> select event, count(*) from v$session_wait group by event;
EVENT COUNT(*)
--------------------------------------------- --------
SQL*Net message from client 11
Streams AQ: waiting for messages in the queue 1
enq: TX - row lock contention 1
...


SQL> select event, state, seconds_in_wait siw
from v$session_wait
where sid = 81;
EVENT STATE SIW
----------------------------- ----------- ------
enq: TX - row lock contention WAITING 976

The V$SESSION_WAIT view shows the current or last wait for each session. The STATE column in this view tells you
whether a session is currently waiting. Here are the possible values for the STATE column:
• WAITING: The session is currently waiting for a resource.
• WAITED UNKNOWN TIME: The duration of the last wait is unknown. (This value is shown only
if you set the TIMED_STATISTICS parameter to false, so in effect this depends on the value
set for the STATISTICS_LEVEL parameter. If you set STATISTICS_LEVEL to TYPICAL or ALL, the
TIMED_STATISTICS parameter will be TRUE by default. If the STATISTICS_LEVEL parameter is
set to BASIC, TIMED_STATISTICS will be FALSE by default.)
• WAITED SHORT TIME: The most recent wait was less than a 100th of a second long.
• WAITED KNOWN TIME: The WAIT_TIME column shows the duration of the last wait.


SQL> select wait_class, sum(time_waited), sum(time_waited)/sum(total_waits)
2 sum_waits
3 from v$system_wait_class
4 group by wait_class
5* order by 3 desc;
WAIT_CLASS SUM(TIME_WAITED) SUM_WAITS
----------- --------------- ----------
Idle 249659211 347.489249
Commit 1318006 236.795904
Concurrency 16126 4.818046
User I/O 135279 2.228869
Application 912 .0928055
Network 139 .0011209


Do not worry if you see a very high sum of waits for the Idle wait class. You should actually expect to see a high
number of Idle waits in any healthy database



select sea.event, sea.total_waits, sea.time_waited, sea.average_wait
from v$system_event sea, v$event_name enb, v$system_wait_class swc
where sea.event_id=enb.event_id
and enb.wait_class#=swc.wait_class#
and swc.wait_class in ('Application','Concurrency')
order by average_wait desc;



EVENT TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
----------- ------------ ----------- ---------- ----------
enq: TX - index contention 2 36 17.8
library cache load lock 76 800 10.53
buffer busy waits 9 89 9.87
row cache lock 26 100 3.84
cursor: pin S wait on X 484 1211 2.5
SQL*Net break/reset to client 2 2 1.16
library cache: mutex X 12 13 1.10
latch: row cache objects 183 158 .86
latch: cache buffers chains 5 3 .69
enq: RO - fast object reuse 147 70 .47
library cache lock 4 1 .27
cursor: pin S 20 5 .27
latch: shared pool 297

You can see that the enqueue waits caused by the row lock contention are what’s causing the most waits under
these two classes. Now you know exactly what’s slowing down the queries in your database! To get at the session
whose performance is being affected by the contention for the row lock, drill down to the session level using the
following query:


select se.sid, se.event, se.total_waits, se.time_waited, se.average_wait
from v$session_event se, v$session ss
where time_waited > 0
and se.sid=ss.sid
and ss.username is not NULL
and se.event='enq: TX - row lock contention';

SID EVENT TOTAL_WAITS time_waited average_wait
---- --------------------------- ----------- ------------ -----------
68 enq: TX - row lock content 24 8018 298


The output shows that the session with SID 68 had waited (or still might be waiting) for a row lock that’s held by
another transaction.

changing the SYS password in a Data Guard environment


The way to change the SYS password without breaking the redo transport service includes
copying the primary database's password file to the standby server after changing the
password. The following steps show how this can be done:

1. Stop redo transport from the primary database to the standby database by setting
the log destination state to DEFER with the ALTER SYSTEM statement:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'DEFER';
System altered.

If the Data Guard broker is being used, we can use the following statement:
DGMGRL> EDIT DATABASE TURKEY_UN SET STATE = 'LOG-TRANSPORT-OFF';
2. Change the SYS user's password in the primary database:
SQL> ALTER USER SYS IDENTIFIED BY newpassword;
User altered.

3. Copy the primary database's password file to the standby site:
$ cd $ORACLE_HOME/dbs
$ scp orapwTURKEY standbyhost:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwINDIAPS
4. Try logging into the standby database from the standby server using the new SYS
password:
$ sqlplus sys/newpassword as sysdba

5. Start redo transport from the primary database to the standby database:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE';
System altered.
If the Data Guard broker is being used, we can use the following statement:
DGMGRL> EDIT DATABASE TURKEY_UN SET STATE = 'ONLINE';
6. Check whether the redo transport service is running normally by switching the redo
logs in the primary database:
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.
Check the standby database's processes or the alert log file to see redo transport
service status:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY ;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CLOSING 1 3232 1 275
ARCH CLOSING 1 3229 1 47
ARCH CONNECTED 0 0 0 0
ARCH CLOSING 1 3220 2049 1164
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
MRP0 APPLYING_LOG 1 3233 122 102400
RFS IDLE 1 3233 122 1

note:Also, if the password file of the standby database is somehow corrupted,
or has been deleted, the redo transport service will raise an error and we
can copy the primary password file to the standby site to fix this problem.


OEM: Working at the Command Line





Working at the Command Line:

Start an EM CLI Session

emcli login -username="SYSMAN"
emcli sync



emcli help


emcli help sync


The get_targets Verb


emcli set_credential -target_type=oracle_database -target_name="eamz" \
-credential_set=DBCredsNormal \
-columns="username:dbsnmp;password:Super_S3cret;role:Normal"

emcli get_targets -targets="oracle_database" -format="name:csv" | grep -i eamz


Agent Administration

emcli get_targets -targets="oracle_emd" -format="name:csv"
Status ID,Status,Target Type,Target Name
1,Up,oracle_emd,acme_dev:2480
1,Up,oracle_emd,acme_qa:3872
1,Up,oracle_emd,acme_prod:3872

emcli stop_agent -agent=acme_qa:3872 -host_username=oracle
Host User password:
The Shut Down operation is in progress for the Agent: acme_qa:3872
The Agent "acme_qa:3872" has been stopped successfully.
emcli start_agent -agent=acme_qa:3872 -host_username=oracle
Host User password:
The Start Up operation is in progress for the Agent: acme_qa:3872
The Agent "acme_qa:3872" has been started successfully.
emcli restart_agent -agent=acme_qa:3872 -host_username=oracle
Host User password:
The Restart operation is in progress for the Agent: acme_qa:3872
The Agent "acme_qa:3872" has been restarted successfully.


emcli secure_agent -agent=acme_qa:3872 -host_username=oracle
Host User password:
Registration Password:
The Secure operation is in progress for the Agent: acme_qa:3872
The Agent "acme_qa:3872" has been secured successfully.
emcli resecure_agent -agent=acme_qa:3872 -host_username=oracle
Host User password:
Registration Password:
The Resecure operation is in progress for the Agent: acme_qa:3872
The Agent "acme_qa:3872" has been resecured successfully.


emcli resyncAgent -agent=acme_qa:3872
Resync job RESYNC_20140422135854 successfully submitted



emcli get_agent_properties -agent_name=acme_qa:3872
Name Value
agentVersion 12.1.0.3.0
agentTZRegion America/Los_Angeles
emdRoot /opt/oracle/agent12c/core/12.1.0.3.0
agentStateDir /opt/oracle/agent12c/agent_inst
perlBin /opt/oracle/agent12c/core/12.1.0.3.0/perl/bin
scriptsDir /opt/oracle/agent12c/core/12.1.0.3.0/sysman/admin/ scripts
EMD_URL https://acme_qa:3872/emd/main/
REPOSITORY_URL https://myoms.com:4903/empbs/upload
EMAGENT_PERL_TRACE_LEVEL INFO
UploadInterval 15
Total Properties : 10
emcli get_agent_property -agent_name=acme_qa:3872 -name=agentTZRegion
Property Name: agentTZRegion
Property Value: America/Los_Angeles



Deleting EM Targets with EM CLI

Find Exact Target Names


> emcli get_targets | grep -i bertha
1 Up oracle_database bertha
1 Up oracle_dbsys bertha_sys
1 Up oracle_listener LSNRBERTHA_oemdemo.com


Delete the Target

emcli delete_target -type="oracle_dbsys" -name="bertha_sys" -delete_members
Target "bertha_sys:oracle_dbsys" deleted successfully

You can also delete individual members, like this:
> emcli delete_target -type="oracle_listener" -name="LSNRBERTHA"
Target "LSNRBERTHA:oracle_listener" deleted successfully




How to Remove an Enterprise Manager Agent with One Command

emcli delete_target -name="<Agent Name>"
-type="oracle_emd"
-delete_monitored_targets
-async;


emcli get_targets | grep -i demohost01
1 Up host demohost01
1 Up oracle_emd demohost01:3872
> emcli delete_target -name="demohost01:3872" -type="oracle_emd" -delete_monitored_targets -async;
Target "demohost01:3872:oracle_emd" deleted successfully



Transferring Targets to Another EM Agent



To move goldfish database and its listener from the alice server to buster:
emcli relocate_targets
-src_agent="alice:3872"
-dest_agent="buster:3872"
-target_name="goldfish"
-target_type="oracle_database"
-copy_from_src
emcli relocate_targets
-src_agent="alice:3872"
-dest_agent="buster:3872"
-target_name="lsnrgoldfish"
-target_type="oracle_listener"



For the transfer you’d use these values:
• src_agent= alice:3872 (current agent name)
• dest_agent= buster:3872 (destination agent name)
• target_name= goldfish (name of the target to move)
• target_type= EM types for that target
Database targets also include the copy_from_src flag in order to retain their history. You can only relocate one
target per EM CLI command.
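Because only one target can be moved per call, several relocations can be scripted with a simple loop. This is a sketch using the example agent and target names above; EMCLI defaults to echo here for a dry run, so point it at the real emcli binary to actually perform the moves:

```shell
# Dry-run by default: set EMCLI to the real emcli binary to execute.
EMCLI="${EMCLI:-echo}"

# name:type pairs to move from agent alice:3872 to buster:3872
for t in "goldfish:oracle_database" "lsnrgoldfish:oracle_listener"; do
  name="${t%%:*}"
  type="${t##*:}"
  extra=""
  # Only database targets take -copy_from_src (to retain their history).
  [ "$type" = "oracle_database" ] && extra="-copy_from_src"
  "$EMCLI" relocate_targets -src_agent="alice:3872" -dest_agent="buster:3872" \
    -target_name="$name" -target_type="$type" $extra
done
```

With EMCLI left at its echo default, the loop prints the two emcli commands it would run, one per target.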


OMS-Mediated Targets

SELECT entity_type,
       entity_name,
       host_name
FROM sysman.em_manageable_entities
WHERE manage_status = 2     -- Managed
AND promote_status = 3      -- Promoted
AND monitoring_mode = 1     -- OMS mediated
ORDER BY entity_type, entity_name, host_name;
ENTITY_TYPE ENTITY_NAME HOST_NAME
------------------------- ------------------------------------- --------------------
cluster clust01 cluster01b.com
cluster clust01 cluster01b.com
rac_database apple cluster01b.com
rac_database betty cluster01a.com
rac_database jack cluster01b.com
weblogic_domain /EMGC_GCDomain/GCDom myoms.com
weblogic_domain /Farm01_IDMDomain/ID myoms.com
weblogic_domain /Farm02_IDMDomain/ID myoms.com


Managing OEM Administrators


emcli create_user -name="SuzyQueue" -password="oracle"

emcli create_user -name="SuzyQueue" -password="oracle" -expired="true"

Role Management

emcli create_user -name="SuzyQueue" -password="oracle" \
-roles="em_all_administrator"

emcli grant_roles -name="SuzyQueue" -roles="em_all_viewer"
emcli revoke_roles -name="SuzyQueue" -roles="em_all_operator"


Tracking Management Server Login

emcli list_active_sessions -details
OMS Name: myoms.com:4889_Management_Service
Administrator: SYSMAN
Logged in from: Browser@123.45.6.234
Session: F7CA6D7DE88B0917E04312E7510A9E54
Login Time: 2014-04-24 06:46:53.876687


OMS Name: myoms.com:4889_Management_Service
Administrator: BOBBY
Logged in from: Browser@SAMPLEPC.com
Session: F7CD5C7EE0A961C7E04312E7510A8A71
Login Time: 2014-04-24 11:05:24.199258
OMS Name: myoms.com:4889_Management_Service
Administrator: PHIL
Logged in from: Browser@123.45.6.228
Session: F7CECDD6335543E3E04312E7510AA25C
Login Time: 2014-04-24 11:13:20.567234
OMS Name: myoms.com:4889_Management_Service
Administrator: SYSMAN
Logged in from: Browser@123.45.6.234
Session: F7CF52CA3BE1692FE04312E7510A7494
Login Time: 2014-04-24 11:50:31.152683

OMS Name: myoms.com:4889_Management_Service
Administrator: SYSMAN
Logged in from: EMCLI@123.45.6.231
Session: F7CF52CA3BE3692FE04312E7510A7494
Login Time: 2014-04-24 11:55:52.482938
OMS Name: myoms.com:4889_Management_Service
Administrator: SYSMAN
Logged in from: Browser@123.45.6.231
Session: F7CF52CA3BE5692FE04312E7510A7494
Login Time: 2014-04-24 12:07:52.335728




RMAN new features and enhancements



Container and pluggable database backup and restore

RMAN> BACKUP DATABASE; (To back up the CDB plus all PDBs)
RMAN> BACKUP DATABASE root; (To back up only the CDB root)
RMAN> BACKUP PLUGGABLE DATABASE pdb1,pdb2; (To back up the specified PDBs)
RMAN> BACKUP TABLESPACE pdb1:example; (To back up a specific tablespace in a PDB)
Some examples when performing RESTORE operations are:
RMAN> RESTORE DATABASE; (To restore an entire CDB, including all PDBs)
RMAN> RESTORE DATABASE root; (To restore only the root container)
RMAN> RESTORE PLUGGABLE DATABASE pdb1; (To restore a specific PDB)
RMAN> RESTORE TABLESPACE pdb1:example; (To restore a tablespace in a PDB)
Finally, some examples of RECOVER operations are:
RMAN> RECOVER DATABASE; (Root plus all PDBs)
RMAN> RUN {
SET UNTIL SCN 1428;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS; }
RMAN> RUN {
RESTORE PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
RECOVER PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;}

