Channel: Doyensys Allappsdba Blog

Receiving Failed to Login to APEX / HTMLDB Page - You Don't Have Permission to Access /pls/apex

APPLIES TO:

Oracle Application Express (formerly HTML DB) - Version 1.5 and later
Information in this document applies to any platform.

SYMPTOMS:

APEX (formerly HTML DB) has been installed with no errors in the installation log. When trying to access the APEX admin page after the install using the APEX URL
http://machine:port/pls/apex/apex_admin
or:
http://machine:port/pls/apex/apex
the following error is returned:

"Forbidden
You don't have permission to access /pls/apex on this server"

Other URLs within this configuration work successfully. For example:
http://machine:port brings up the Oracle HTTP Server Welcome page, and the links and tabs for Demonstrations / Key Features are also working.

CAUSE:

The error_log shows the following errors:

[Tue Sep 14 09:34:03 2004] [error] [client 111.111.111.10] [ecid: 1095168843:121.121.121.12:2764:0:431,0] mod_plsql: /pls/apex/apex HTTP-403 ORA-1017

The database password that is provided in the Database Access Descriptor definition file (dads.conf or marvel.conf) in $ORACLE_HOME/Apache/modplsql/conf is incorrect.

SOLUTION:

1. Verify the APEX_PUBLIC_USER (or HTMLDB_PUBLIC_USER) password. By default, it will be the same password defined for the APEX ADMIN user.
If you are unable to log in as the APEX_PUBLIC_USER, modify the password as the SYS user using the following command:
alter user APEX_PUBLIC_USER  identified by <password>;

2. Make a backup of the file containing the Database Access Descriptor (DAD) definition for APEX, typically marvel.conf or dads.conf.

3. Modify the PlsqlDatabasePassword value to reflect the password identified in step 1. For this example, let's say the password is manager1:

PlsqlDatabaseUsername APEX_PUBLIC_USER
PlsqlDatabasePassword manager1

The password must be entered in clear text.
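For context, a minimal DAD entry in dads.conf typically looks like the following; the Location name and connect string here are illustrative and must match your environment:

```apache
<Location /pls/apex>
    SetHandler                  pls_handler
    Order                       deny,allow
    Allow from                  all
    PlsqlDatabaseUsername       APEX_PUBLIC_USER
    PlsqlDatabasePassword       manager1
    PlsqlDatabaseConnectString  machine:1521:ORCL SIDFormat
    PlsqlDefaultPage            apex
    PlsqlAuthenticationMode     Basic
</Location>
```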

4. Restart the Oracle HTTP Server.

opmnctl stop
opmnctl start

5. Test to verify that you can login to APEX / HTML DB successfully.


Once you can successfully log in to APEX / HTML DB, you can encrypt the password by using the dadTool.pl Perl script provided with the 10g HTTP Server installation.
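As a sketch of that encryption step (paths assume a default 10g mid-tier home; verify the dadTool.pl options against your own installation):

```shell
cd $ORACLE_HOME/Apache/modplsql/conf
# dadTool.pl needs the mid-tier Perl libraries on PERL5LIB
export PERL5LIB=$ORACLE_HOME/perl/lib:$PERL5LIB
perl dadTool.pl -o   # obfuscates the PlsqlDatabasePassword entries in dads.conf
```

Restart the HTTP Server afterwards so the obfuscated value is picked up.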

ASCP to EBS Mapping:

Background and Requirement:

The TESTA (ASCP) environment is mapped to TEST (EBS) for planning data collection. Because TEST has been taken for another implementation, UAT was suggested as the EBS database for TESTA. The steps below were performed to modify this mapping.


In UAT, a TNS entry for TESTA was created.
-----------------------
TESTA=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.10)(PORT=1521))
(CONNECT_DATA=
(SID=TESTA)
)
)


In TESTA, a TNS entry for UAT was created.
-----------------------

UAT=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.20)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=UAT)
(INSTANCE_NAME=UAT)
)
)
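Before creating the database links, it is worth confirming that each new entry resolves from the respective host (tnsping is the standard Oracle utility for this):

```shell
tnsping TESTA   # run from the UAT host
tnsping UAT     # run from the TESTA host
```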


In UAT:
=======

SQL> select name from v$database;

NAME
---------
UAT

SQL> create public database link UAT_TESTA.COMPANY.COM connect to APPS identified by password using 'TESTA';

Database link created.

SQL> select name from v$database@UAT_TESTA.COMPANY.COM;

NAME
---------
TESTA


In TESTA:
==========
SQL> select name from v$database;

NAME
---------
TESTA

SQL> create public database link TESTA_UAT.COMPANY.COM connect to APPS identified by password using 'UAT';

Database link created.

SQL> select name from v$database@TESTA_UAT.COMPANY.COM;

NAME
---------
UAT

Update in UAT:
==============
update MRP.MRP_AP_APPS_INSTANCES_ALL set M2A_DBLINK='TESTA_UAT',A2M_DBLINK='UAT_TESTA';
commit;


SQL> select instance_code,A2M_DBLINK, M2A_DBLINK from MRP.MRP_AP_APPS_INSTANCES_ALL;

INSTANCE_CODE A2M_DBLINK M2A_DBLINK
--------------- -------------------- --------------------
DM2 UAT_TESTA TESTA_UAT

=================================================================

=========================
In TESTA, before update:
========================
SQL> select instance_code,A2M_DBLINK,M2A_DBLINK FROM MSC.MSC_APPS_INSTANCES;


INSTANCE_CODE A2M_DBLINK M2A_DBLINK
--------------- -------------------- --------------------
TST TEST_TESTA TESTA_TEST

After update:
==============
SQL> update MSC.MSC_APPS_INSTANCES set M2A_DBLINK='TESTA_UAT',A2M_DBLINK='UAT_TESTA';

1 row updated.

SQL> select instance_code,A2M_DBLINK,M2A_DBLINK FROM MSC.MSC_APPS_INSTANCES;

INSTANCE_CODE A2M_DBLINK M2A_DBLINK
--------------- -------------------- --------------------
TST UAT_TESTA TESTA_UAT

SQL> commit;

Commit complete.

=================================================================

Change the package body in TESTA:
==================================
desc APPS.MSC_RELEASE_HOOK

CREATE OR REPLACE PACKAGE BODY APPS.MSC_RELEASE_HOOK AS
PROCEDURE EXTEND_RELEASE( ERRBUF OUT NOCOPY VARCHAR2
, RETCODE OUT NOCOPY NUMBER
, arg_dblink IN VARCHAR2
, arg_plan_id IN NUMBER
, arg_log_org_id IN NUMBER
, arg_org_instance IN NUMBER
, arg_owning_org_id IN NUMBER
, arg_owning_instance IN NUMBER
, arg_compile_desig IN VARCHAR2
, arg_user_id IN NUMBER
, arg_po_group_by IN NUMBER
, arg_po_batch_number IN NUMBER
, arg_wip_group_id IN NUMBER
, arg_loaded_jobs IN NUMBER
, arg_loaded_lot_jobs IN NUMBER
, arg_resched_lot_jobs IN NUMBER
, arg_loaded_reqs IN NUMBER
, arg_loaded_scheds IN NUMBER
, arg_resched_jobs IN NUMBER
, arg_resched_reqs IN NUMBER
, arg_int_repair_orders IN NUMBER
, arg_ext_repair_orders IN NUMBER
, arg_wip_req_id IN NUMBER
, arg_osfm_req_id IN NUMBER
, arg_req_load_id IN NUMBER
, arg_req_resched_id IN NUMBER
, arg_int_repair_Order_id IN NUMBER
, arg_ext_repair_Order_id IN NUMBER
, arg_mode IN VARCHAR2
, arg_transaction_id IN NUMBER
, l_apps_ver in VARCHAR2 )
IS
BEGIN
UPDATE msc_wip_job_schedule_interface mws
SET job_name='OSP'||job_name
WHERE group_id = arg_wip_group_id
AND EXISTS ( SELECT 1 FROM fnd_lookup_values, mtl_parameters@TESTA_UAT mp
WHERE lookup_type='XXFM_ORGANIZATION_CODE'
AND lookup_code=mp.organization_code
AND mp.organization_id=mws.organization_id)
AND EXISTS ( SELECT 1 FROM fnd_lookup_values, mtl_system_items@TESTA_UAT msi
WHERE lookup_type='XXFM_PLANNER_CODE'
AND lookup_code=msi.planner_code
AND msi.organization_id=mws.organization_id
AND msi.inventory_item_id=mws.primary_item_id);

COMMIT;

END EXTEND_RELEASE;

END MSC_RELEASE_HOOK;
/

===============================================================
Drop the below three synonyms and recreate them with the correct DB links for the APPS user in the EBS DB:

APPS.XXFMPA_MSC_SYSTEM_ITEMS
APPS.XXFMPA_MSC_SUPPLIES
APPS.XXFMPA_MSC_ORDERS_V


DROP SYNONYM APPS.XXFMPA_MSC_SYSTEM_ITEMS;
CREATE SYNONYM APPS.XXFMPA_MSC_SYSTEM_ITEMS FOR APPS.MSC_SYSTEM_ITEMS@"UAT_TESTA.COMPANY.COM";

DROP SYNONYM APPS.XXFMPA_MSC_SUPPLIES;
CREATE SYNONYM APPS.XXFMPA_MSC_SUPPLIES FOR APPS.MSC_SUPPLIES@"UAT_TESTA.COMPANY.COM";

DROP SYNONYM APPS.XXFMPA_MSC_ORDERS_V;
CREATE SYNONYM APPS.XXFMPA_MSC_ORDERS_V FOR APPS.MSC_ORDERS_V@"UAT_TESTA.COMPANY.COM";

Conclusion:
Using the above steps, a new EBS database can be mapped to the planning instance.

How to use GoldenGate HANDLECOLLISIONS Parameter correctly?


When is the Oracle Goldengate HANDLECOLLISIONS parameter useful?

The Goldengate HANDLECOLLISIONS parameter is configured on the target database in the Replicat process to enable processing of the data when there are duplicate data integrity issues in the destination database.

There are a number of reasons which could cause this condition:

• The table data was instantiated at a particular CSN in the destination database, but the Replicat process was started at a CSN prior to the table load CSN. This causes an overlap.
• Duplicate data exists in the source table.
• Misconfiguration of the Extract or Replicat processes.

The HANDLECOLLISIONS parameter is utilized when there is a possibility of an overlap in the trail data being applied by the Replicat process to the destination database.

Without the use of this parameter, the Replicat will ABEND when it tries to process the inserts from the trail into the table which already has the rows (PK or unique constraint violation).

It will also ABEND when the Replicat tries updating or deleting rows which are not present in the destination tables. To overcome this, normally the RBA of the trail has to be moved forward one transaction before the Replicat can be restarted and remain running.
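To make the resolution rules concrete, here is a small illustrative sketch (plain Python, not GoldenGate code) of how HANDLECOLLISIONS resolves each case when applying trail records to a target table, modeled as a dict keyed on primary key. With the parameter off, each collision instead abends, which mirrors the Replicat behavior described above:

```python
def apply_record(table, op, key, row=None, handlecollisions=True):
    """Apply one trail record to a target 'table' (a dict keyed on PK),
    mimicking Replicat collision handling."""
    if op == "insert":
        if key in table and not handlecollisions:
            raise KeyError(f"ORA-00001: duplicate key {key}")  # Replicat would ABEND
        table[key] = row                # duplicate INSERT becomes an overwrite
    elif op == "update":
        if key not in table and not handlecollisions:
            raise KeyError(f"ORA-01403: no data found for {key}")
        # missing row: the update is applied as an insert
        # (assumes the trail record carries a full column image)
        table[key] = row
    elif op == "delete":
        if key not in table:
            if not handlecollisions:
                raise KeyError(f"ORA-01403: no data found for {key}")
            # missing row: the delete is silently ignored
        else:
            del table[key]
    return table

target = {1: {"qty": 5}}
apply_record(target, "insert", 1, {"qty": 7})   # collision resolved as overwrite
apply_record(target, "update", 2, {"qty": 9})   # missing row inserted instead
apply_record(target, "delete", 3)               # missing row ignored
print(target)  # {1: {'qty': 7}, 2: {'qty': 9}}
```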

How do you apply rows which would normally fail on target?

To capture rows which are either duplicate INSERTS or do not exist in the destination to be updated or deleted, REPERROR can be used to record these rows into a discard file.

In the example below, the REPERROR (1403, DISCARD) parameter is used to identify the condition where the row the Replicat is looking for is not present in the destination database.

Similarly, REPERROR (0001, DISCARD) handles the case where a duplicate INSERT is attempted but violates a PK or unique key constraint because the row is already present in the table.

Replicat rep
USERID gg_user, PASSWORD XXXX,
ASSUMETARGETDEFS
DISCARDFILE /u01/app/gg/dirrpt/rep.dsc, APPEND, MEGABYTES 1024
DBOPTIONS SUPPRESSTRIGGERS
DDLOPTIONS UPDATEMETADATA, REPORT
REPERROR (0001, DISCARD)
REPERROR (1403, DISCARD)

How can we enable HANDLECOLLISIONS for only one table?

Firstly, as discussed above, the GoldenGate HANDLECOLLISIONS parameter should be used only when and where necessary.

It should be removed from the Oracle GoldenGate replication configuration as soon as possible.

Secondly, if it has to be enabled, it should be enabled ONLY for the tables that require it.

This can be achieved by using HANDLECOLLISIONS while listing the specific tables, and then turning it off using the NOHANDLECOLLISIONS clause for the remaining tables, as shown below.

Enabling HANDLECOLLISIONS

1. Set Globally
Enable global HANDLECOLLISIONS for ALL MAP statements

HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
MAP vst.trans, TARGET vst.trans;
MAP vst.orders, TARGET vst.orders;

2. Set for Group of MAP Statements
Enable HANDLECOLLISIONS for some MAP statements
HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
NOHANDLECOLLISIONS
MAP vst.trans, TARGET vst.trans;
MAP vst.orders, TARGET vst.orders;

3. Set for Specific Tables
Enable global HANDLECOLLISIONS but disable for specific tables

HANDLECOLLISIONS
MAP vst.inventory, TARGET vst.inventory;
MAP vst.trans_hist, TARGET vst.trans_hist;
MAP vst.trans, TARGET vst.trans, NOHANDLECOLLISIONS;
MAP vst.orders, TARGET vst.orders, NOHANDLECOLLISIONS;

Don't forget to remove the HANDLECOLLISIONS parameter after the Replicat has moved past the CSN where it was previously abending.
Also make sure to restart the Replicat after removing this parameter.
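As an alternative to editing the parameter file, recent GoldenGate releases also let you toggle the setting on a running Replicat from GGSCI (check the reference for your version; the group name rep follows the example above):

```
GGSCI> SEND REPLICAT rep, HANDLECOLLISIONS
GGSCI> SEND REPLICAT rep, NOHANDLECOLLISIONS
```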


Goldengate 12c Troubleshooting Using LogDump Utility

Oracle GoldenGate software includes the Logdump utility for viewing data directly from the trail files. Without Logdump it is not possible to read the content of Oracle GoldenGate trail files, as they are in a binary format. With Logdump, we can open up a trail file, read its contents, navigate through the file, view transactions at different RBAs (relative byte address, i.e. file position), and identify the type of commands (DML or DDL) issued on the source, including delete, insert, update, alter and create statements.

Logdump Commands

Open Logdump
Navigate to the directory where the Oracle GoldenGate Software is installed and execute the Logdump.

[GoldenGate]$ $GG_HOME/logdump

Open a Trail File
To open a trail file and read its contents, specify the trail file at the Logdump prompt. Trail files are usually found in the GoldenGate dirdat directory.

ls -lrt $GG_HOME/dirdat
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:38 aa000001
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:42 aa000002
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:55 aa000003

You can also determine the current trail file directory/name by running the “INFO process_name” command at the ggsci prompt.

Open and view the details of local trail file.

Logdump> OPEN ./dirdat/aa000001
Change the file name and location as required.

Set Output Format: 

Enable the following options so that you are able to view the results in a readable format in your Logdump session:

Set trail file header detail on
The FILEHEADER contains the header details of the currently opened trail file.

Logdump> FILEHEADER DETAIL

Record Header
Logdump> GHDR ON

Set Column Details on
It displays the list of columns, their ID, length, Hex values etc.

Logdump> DETAIL ON

User token details
User tokens are user-defined information stored in the trail, associated with the table mapping statements. The CSN (SCN in an Oracle database) associated with the transaction is available in this section.

Logdump> USERTOKEN DETAIL

Set length of the record to be displayed
In this case it is 128 characters.

Logdump> RECLEN 128

Viewing the Records:

To view particular records in the trail files, navigate as below in the local trail file.

First record in the trail file
Here “0” is the beginning of the trail file.

Logdump> POS 0
Move to a specific record, at a particular RBA
The “xxxx” is the RBA number.
Logdump> POS xxxx
Next record in the opened trail file
Logdump> N
Or
Logdump> NEXT

Moving forward or reverse in the trail file
Logdump> POS FORWARD
or
Logdump> POS REVERSE

Skip certain number of records
Here ‘x’ is the number of records you want to skip.

Logdump> SKIP x
Last record in the trail file
Logdump> POS last

Filter Commands:

We can use filter commands to view the specific operations or data records, a record at a specific RBA, the record length, record type, etc. using the commands below.

To start filtering, use the “filter” keyword, followed by INCLUDE or EXCLUDE. These options allow the data to be shown or removed based on the filter criteria. Then apply other conditions like file name, rectype, iotype, etc. Here rectype is the record type and iotype is the input/output (operation) type.

There are a number of operation types we can filter on using Logdump. To view the list of operation types and the numbers assigned to them, run the command below.

Show the Record Types
Logdump> SHOW RECTYPE

Enable or disable filtration

Logdump> FILTER [ ENABLE | DISABLE ]
Filter Records by Table Name

Logdump> FILTER INCLUDE FILENAME CC_APP.IMAGE_DETAIL
Filter Records By Operation Type
Operation types are Insert, Update, and Delete.

Logdump> FILTER INCLUDE IOTYPE INSERT
Filter Records using the operation number
You can specify the IOTYPE by using the equivalent operation number.
Logdump 374> FILTER INCLUDE IOTYPE 160
Logdump 374> N
Sample Output:
2013/02/18 00:36:05.000.000 DDLOP Len 1169 RBA 3049
Name:
After Image: Partition 0 G s
2c43 353d 2733 3135 3435 272c 2c42 373d 2733 3135 | ,C5='31545',,B7='315
3735 272c 2c42 323d 2727 2c2c 4233 3d27 5331 272c | 75',,B2='',,B3='S1',
2c42 343d 2754 4553 545f 3132 272c 2c43 3132 3d27 | ,B4='TEST',,C12='
272c 2c43 3133 3d27 272c 2c42 353d 2754 4142 4c45 | ',,C13='',,B5='TABLE
272c 2c42 363d 2743 5245 4154 4527 2c2c 4238 3d27 | ',,B6='CREATE',,B8='
4747 5553 4552 2e47 4753 5f44 444c 5f48 4953 5427 | GGUSER.GGS_DDL_HIST'
2c2c 4239 3d27 5331 272c 2c43 373d 2731 312e 322e | ,,B9='S1',,C7='11.2.
Filtering suppressed 2 records

Note: Here 160 represents a DDL operation, and in the detail we can see the DDL type (above, “CREATE”). “Filtering suppressed” shows the number of records skipped to reach the next record matching the filter.

View currently applied filters

Logdump> FILTER SHOW
Sample output:
Data filters are ENABLED

Include Match ANY
Rectypes : DDLOP

Exclude Match ANY

Filter on multiple conditions
We can filter the data of trail file using the multiple conditions together.

For that we can string multiple FILTER commands together, separating each one with a semicolon, as shown in the below example:

Logdump>FILTER INCLUDE FILENAME [SCHEMA].[TABLE]; FILTER RECTYPE 5; FILTER INCLUDE IOTYPE INSERT
The above example will display only rectype 5 INSERT records from the specified table.
Note: [SCHEMA] and [TABLE] are the names of the schema and table, and should be in upper case.
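Conceptually, chaining FILTER commands ANDs the criteria together; the sketch below (plain Python with made-up records, not Logdump internals) mirrors how a record must satisfy the filename, rectype and iotype conditions at once to be displayed:

```python
def matches(record, filename=None, rectype=None, iotype=None):
    """Return True only if the record satisfies every supplied criterion (AND)."""
    if filename is not None and record["filename"] != filename:
        return False
    if rectype is not None and record["rectype"] != rectype:
        return False
    if iotype is not None and record["iotype"] != iotype:
        return False
    return True

records = [
    {"filename": "HR.EMP",  "rectype": 5, "iotype": "INSERT"},
    {"filename": "HR.EMP",  "rectype": 5, "iotype": "DELETE"},
    {"filename": "HR.DEPT", "rectype": 5, "iotype": "INSERT"},
]
shown = [r for r in records if matches(r, "HR.EMP", 5, "INSERT")]
print(len(shown))  # 1
```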

Clear the filter in the session
Logdump> FILTER CLEAR


Other Useful Commands

Count of the records in trail file

Logdump> COUNT
Sample Output:
Logtrail /u01/gg/dirdat/bb000010 has 5 records
Total Data Bytes      2161
Avg Bytes/Record 432
Insert                  2
RestartOK               1
DDL                     1
Others                  1
After Images            4

Average of 4 Transactions
Bytes/Trans .....     600
Records/Trans ...   1
Files/Trans .....       1

It will display the count of DDL, DML, DCL (Commit or Rollback) operations, etc.

Display count details
Logdump> COUNT DETAIL

Sample Output of Additional Detail:

Partition 0
Total Data Bytes       1194
Avg Bytes/Record   597
RestartOK                 1
DDL                       1
After Images             2

*FileHeader* Partition 0
Total Data Bytes       931
Avg Bytes/Record 931
Others                    1

Search for large transaction
Logdump>TRANSHIST 200
Logdump>TRANSRECLIMIT 50
Logdump>FILTER INCLUDE FILENAME CC_APP.IMAGE_DETAIL
Logdump>COUNT

Previously used commands in the current Logdump session
Logdump> HISTORY

Scan for next good header of record
Logdump> SFH
or
Logdump> SCANFORHEADER

The above command will show the next good header of the record in the trail file.
Sample Output:

2013/02/18 00:36:52.797.000 FileHeader Len 931 RBA 0
Name: *FileHeader*
3000 01c5 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0...0...GG..TL..1...
0002 3200 0004 2000 0000 3300 0008 02f1 fc23 c46f | ..2... ...3......#.o
2448 3400 0047 0045 7572 693a 6368 642d 706b 6175 | $H4..G.Euri:LOCAL
7368 616c 323a 5345 4153 4941 3a52 4f4f 543a 5345 | MACHINE
4153 4941 434f 4e53 554c 5449 4e47 3a43 4f4d 3a64 | :d
7269 7665 2d44 3a67 6f6c 6465 6e67 6174 6536 0000 | rive-D:GoldenGate6..
1500 1364 3a5c 7465 7374 5c6d 315c 6574 3030 3030 | ... /u01/gg/dirdat/ST0000

Scan for end of the transaction
Logdump> SCANFORENDOFTRANSACTION
or
Logdump> SFET

Some of the other SCAN options are:

SCANFORRBA
SCANFORTIME
SCANFORTYPE

Open the next trail file
Logdump> NEXTTRAIL

Sample Output:
Logtrail /u01/gg/dirdat/bb000010 closed

Current Logtrail is /u01/gg/dirdat/bb000011
For example if we had the trail file ST000010 opened, the NEXTTRAIL command will open the next trailfile, ST000011.


Exiting the Logdump Utility
Logdump> EXIT
Save A Part Of A GoldenGate Trail To A New Trail
We can save the records of a trail file to a new trail file.

Save all contents of the trail file

Logdump> SAVE [file]

Save the subset of data
Set the filter condition for the table data we want to save.

Logdump> FILTER EXCLUDE FILENAME [SCHEMA].[TABLE]
Save a subset of records
Logdump> SAVE [file] [n] RECORDS
Note: Here [file] is the name of the new file, and [SCHEMA] and [TABLE] are the names of the schema and table, which should be in upper case.






Oracle Golden Gate Commands

1. The HISTORY Command

GGSCI (gg) 44> history

GoldenGate GGSCI Command History

35: info dpump detail
36: info all
37: info dpump detail
38: info all
39: info all
40: history
41: view report ext
42: history
43: view report ext detail
44: history

GGSCI (gg) 45> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING dpump 00:00:00 00:00:01
EXTRACT RUNNING ext 00:00:00 00:00:01
REPLICAT RUNNING rep 00:00:00 00:00:03

2. The ! Command

To rerun the previous command use “!”

GGSCI (gg) 46> !
info all
Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING dpump 00:00:00 00:00:06
EXTRACT RUNNING ext 00:00:00 00:00:06
REPLICAT RUNNING rep 00:00:00 00:00:08

To run a specific command from the history use “!” with the command line number.

GGSCI (gg) 47> !42
history
GGSCI Command History

38: info all
39: info all
40: history
41: view report ext
42: history
43: view report ext detail
44: history
45: info all
46: info all
47: history

3. The VERSIONS Command

VERSIONS: displays the OS version, host information and the database version.

GGSCI (proddb02) 3> versions
Operating System:
SunOS
Version Generic_147440-01, Release 5.10
Node: proddb02
Machine: sun4u
Database:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

4. The REPORT Command

REPORTS: View the reports for specific processes.
GGSCI (gg) 49> view report ext

Opened new report file at 2013-03-12 00:01:00.

***********************************************************************
** Run Time Messages **
***********************************************************************

2013-03-12 02:43:12 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p21775_extr: start=SeqNo: 17632, RBA: 21008, SCN: 0.1148
5118 (11485118), Timestamp: 2013-03-12 02:41:35.000000, Thread: 1, end=SeqNo: 17632, RBA: 21504, SCN: 0.11485118 (11485118), Timestamp: 2013-03-1
2 02:41:35.000000, Thread: 1.

2013-03-12 06:43:20 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p21775_extr: start=SeqNo: 17646, RBA: 18448, SCN: 0.1149
0824 (11490824), Timestamp: 2013-03-12 06:42:08.000000, Thread: 1, end=SeqNo: 17646, RBA: 18944, SCN: 0.11490824 (11490824), Timestamp: 2013-03-1
2 06:42:08.000000, Thread: 1.

5. The SHOW ALL Command

Use the “show” command to look at the configuration info on the different processes.

GGSCI (gg) 50> show
Parameter settings:

SET SUBDIRS ON
SET DEBUG OFF

Current directory: /u01/app/ha/ggs

Using subdirectories for all process files

Editor: vi

Reports (.rpt) /u01/app/db01/ggs/dirrpt
Parameters (.prm) /u01/app/db01/ggs/dirprm
Stdout (.out) /u01/app/db01/ggs/dirout
Replicat Checkpoints (.cpr) /u01/app/db01/ggs/dirchk
Extract Checkpoints (.cpe) /u01/app/db01/ggs/dirchk
Process Status (.pcs) /u01/app/db01/ggs/dirpcs
SQL Scripts (.sql) /u01/app/db01/ggs/dirsql
Database Definitions (.def) /u01/app/db01/ggs/dirdef

GGSCI (gg) 52> show all

Parameter settings:

SET SUBDIRS ON
SET DEBUG OFF

Current directory: /u01/app/ha/ggs

Using subdirectories for all process files

Editor: vi

Reports (.rpt) /u01/app/db01/ggs/dirrpt
Parameters (.prm) /u01/app/db01/ggs/dirprm
Stdout (.out) /u01/app/db01/ggs/dirout
Replicat Checkpoints (.cpr) /u01/app/db01/ggs/dirchk
Extract Checkpoints (.cpe) /u01/app/db01/ggs/dirchk
Process Status (.pcs) /u01/app/db01/ggs/dirpcs
SQL Scripts (.sql) /u01/app/db01/ggs/dirsql
Database Definitions (.def) /u01/app/db01/ggs/dirdef

How to Configure Goldengate DDL Replication?


Goldengate supports the replication of DDL commands, operating at a schema level, from one database to another.

By default, DDL replication is disabled on the source database (Extract side) but enabled on the target database (Replicat side).


Configure Goldengate DDL Replication
Prerequisite Setup

Navigate to the directory where the Oracle Goldengate software is installed.

Connect to the Oracle database as sysdba.

sqlplus sys/password as sysdba

For DDL synchronization setup, run the marker_setup.sql script. Provide the OGG_USER schema name when prompted.

Here OGG_USER is the name of the database user assigned to support the DDL replication feature in Oracle GoldenGate.

SQL> @marker_setup.sql

Then run the ddl_setup.sql script. Provide the setup detail information below.

SQL> @ddl_setup.sql

For 10g:

Schema Name : OGG_USER
Installation mode : initialsetup
To proceed with the installation : yes

For 11g:

Start the installation : yes
Schema Name : OGG_USER
Installation mode : initialsetup

For 12c:

In Oracle database 12c, DDL replication does not require any setup of triggers as it is natively supported at the database level.

So none of the marker, ddl_setup or any of the other scripts need to be run. All that is required is including the “DDL INCLUDE MAPPED” parameter in the Extract parameter file as shown in the last step.

Run the role_setup.sql script. Provide the OGG_USER schema name when prompted.

SQL> @role_setup.sql
Then grant the ggs_ggsuser_role to the OGG_USER.

SQL> grant ggs_ggsuser_role to OGG_USER;
Run the ddl_enable.sql script as shown below:

SQL> @ddl_enable;
Run the ddl_pin.sql script as shown below.

SQL> @ddl_pin OGG_USER;

Configure Extract Process with DDL Replication

The following Extract, ESRC01, was configured previously. Adding “DDL INCLUDE MAPPED” enables extracting the DDL which ran in the database. Here MAPPED refers to all tables specified in the TABLE [schema_name].* statements.

On restart of the ESRC01 process, all DDL on the specified tables will be picked up and placed in the trail file for applying to the destination database.

EXTRACT ESRC01
USERID OGG_USER, PASSWORD OGG_USER
EXTTRAIL ./dirdat/st
TRANLOGOPTIONS EXCLUDEUSER OGG_USER
DDL INCLUDE MAPPED
TABLE APPOLTP01.*;

Don’t forget to add DDL INCLUDE MAPPED in the Pump and Replicat processes.
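For example, a matching Replicat parameter file might look like the following; the group name, target schema and MAP clause here are illustrative, following the Extract above:

```
REPLICAT RTRG01
USERID OGG_USER, PASSWORD OGG_USER
ASSUMETARGETDEFS
DDL INCLUDE MAPPED
MAP APPOLTP01.*, TARGET APPOLTP01.*;
```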

GoldenGate Integrated Capture Mode


Integrated Capture mode, also known as the Integrated Extract process in GoldenGate 12c (also backported to 11gR2), is one of the more interesting and useful features released with this version. This capture process is the component responsible for extracting the DML transactional data and DDL from the source database redo log files. This data is then written to local trail files, which are eventually moved to the destination database to be applied there.

• What is the GoldenGate Integrated Capture Mode?

• On-Source Capture

• Downstream Capture

• Prerequisites

• Configuration

• Monitoring/Views


What is the GoldenGate Integrated Capture Mode?

Integrated Capture (IC) mode is a new form of the Extract process, in which the process is moved closer to, in fact inside, the source database. With the traditional Classic capture, the Extract process works on the redo logs outside the domain of the actual database. In integrated capture mode, a LogMiner server process is started inside the database which extracts all the DML data and DDL statements, creating Logical Change Records (LCRs). These are then handed to the GoldenGate memory processes, which write the LCRs to the local trail files. This LogMiner server process is not the LogMiner utility we are used to in the database, but a similar mechanism which has been tuned and enhanced for specific use by the GoldenGate processes.

The purpose of moving this inside the database is to make use of the already existing internal procedures of the database, making it easier to support newer Oracle features faster than was previously possible. Due to this change, Oracle is now able to provide the following:

• Full support of Basic, OLTP and EHCC compressed data.

• No need to fetch LOBs from tables.

• Full SecureFiles support for SecureFile LOBs.

• Full XML support.

• Automatically handles the addition of nodes and threads in a RAC environment.

• Senses node up/down in RAC and handles it in its processes transparently.



Integrated Capture Modes

Integration capture supports two types of deployment configurations. They are:

• On-Source Capture

• Downstream Capture


On-Source Capture

When the integrated capture process is configured in on-source capture mode, the capture process is started on the actual source database server itself. Changes, as they happen on the source database, are captured locally, routed, transformed and applied on the target database in near real time.

This may seem convenient, but consideration needs to be given to the additional workload this process places on the database server. However, if real-time replication is required, this is the best option.

Note: All features are supported in both On-Source or Downstream Deployment

Downstream Capture

In the downstream mode, the capture process is configured to run on a remote database host. All the database redo logs from the source database are shipped to this remote server using Dataguard technology and then mined there by the capture process.

In this mode there is an inherent latency, because the redo log on the source needs to switch before the log can be shipped downstream. So there will be some delay in the replication of data to the target database, as extraction is delayed by the log switch. The main benefit of this setup, however, is offloading the resource usage from the source server.

To overcome the log-switch latency in this mode, Oracle has provided near-real-time capture using standby redo logs for extraction. In this configuration, redo from the source is continuously written into the standby redo logs of the downstream database, and the capture process captures the data directly from there.
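As an illustration, the redo shipping for a downstream configuration uses standard Data Guard redo-transport parameters on the source database; the service name and destination number below are assumptions for the sketch:

```sql
-- On the source database: ship redo to the downstream mining database
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=dwnstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;
```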

When deciding between integrated capture and the classic capture mechanism, it is important to keep in mind that both configurations will remain available in future releases. However, Oracle recommends using the new integrated capture mechanism, as Oracle will not be adding new features to classic capture in the future; it will remain only for legacy support purposes.

Prerequisites

The database where integrated capture runs:

• Must be at least 11.2.0.3

• Database patch 1411356.1 must be installed.

• Works with Oracle 12c.

In Downstream Configuration:

• DBIDs of ALL source databases must be unique.

• Downstream capture must have same OS/platform as source.

• Standby redo logs must be as large as the largest source redo logs.

• Only one database can have real time mining



Configuration:

Add Extract

On the source database the capture is created by first adding the capture parameter file EXT.prm in the dirprm directory.


gg>cat EXT.prm

EXTRACT EXT

USERID gguser, PASSWORD gguser

EXTTRAIL ./dirdat/xi

TRANLOGOPTIONS EXCLUDEUSER gguser

TABLE EQ2.*;

Next add the capture as an integrated capture specifying the “INTEGRATED TRANLOG” option.



gg> ./ggsci

Oracle GoldenGate Command Interpreter for Oracle

Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258

...

GGSCI (xgoldengate01) 2> ADD EXTRACT EXT, INTEGRATED TRANLOG, BEGIN NOW

EXTRACT added.

GGSCI (xgoldengate01) 3> ADD EXTTRAIL ./dirdat/xn EXTRACT EXT, megabytes 100

EXTTRAIL added.



GGSCI (xgoldengate01) 4> start extract EXT

Sending START request to MANAGER ...



Here are the entries in ggserr.log when the extract was created.

2013-04-25 17:18:54 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD EXTRACT EXT INTEGRATED TRANLOG, BEGIN NOW.

2013-04-25 17:19:22 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD EXTTRAIL ./dirdat/xn EXTRACT EXT megabytes 100.

2013-04-25 17:19:39 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): start extract EXT.

2013-04-25 17:19:39 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host gg.vst.com:35534 (START EXTRACT EXT ).

2013-04-25 17:19:40 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, EXT.prm: EXTRACT EXT starting.

2013-04-25 17:19:40 INFO OGG-03035 Oracle GoldenGate Capture for Oracle, EXT.prm: Operating system character set identified as US-ASCII. Locale: en_US_POSIX, LC_ALL: C.

2013-04-25 17:19:40 INFO OGG-03500 Oracle GoldenGate Capture for Oracle, EXT.prm: WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database character set value of AL32UTF8.

2013-04-25 17:19:40 INFO OGG-01635 Oracle GoldenGate Capture for Oracle, EXT.prm: BOUNDED RECOVERY: reset to initial or altered checkpoint.

2013-04-25 17:19:40 INFO OGG-01815 Oracle GoldenGate Capture for Oracle, EXT.prm: Virtual Memory Facilities for: BR

anon alloc: mmap(MAP_ANON) anon free: munmap

file alloc: mmap(MAP_SHARED) file free: munmap

target directories:

/u01/app/ggs/BR/EXT.

2013-04-25 17:19:41 INFO OGG-01815 Oracle GoldenGate Capture for Oracle, EXT.prm: Virtual Memory Facilities for: COM

anon alloc: mmap(MAP_ANON) anon free: munmap

file alloc: mmap(MAP_SHARED) file free: munmap

target directories:

/u01/app/ggs/dirtmp.

2013-04-25 17:19:43 WARNING OGG-01842 Oracle GoldenGate Capture for Oracle, EXT.prm: CACHESIZE PER DYNAMIC DETERMINATION (2G) LESS THAN RECOMMENDED: 64G (64bit system)
vm found: 3.82G
Check swap space. Recommended swap/extract: 128G (64bit system).



Monitoring

There are several components to Integrated Capture mode, all of which need to be monitored for effective tuning and troubleshooting of the replication. Since the extract process largely resides in the database, the GoldenGate capture and LogMiner monitoring views can be used to follow the progress of the extract. The main components to keep an eye on are listed below.



• Capture Processes configured in the database.

• Dynamic stats of the GoldenGate capture process

• Logminer performance

• Outbound progress table



DBA_CAPTURE

col CAPTURE_NAME for a20;

col QUEUE_NAME for a15;

col START_SCN for 9999999999;

col STATUS for a10;

col CAPTURED_SCN for 9999999999;

col APPLIED_SCN for 9999999999;

col SOURCE_DATABASE for a10;

col LOGMINER_ID for 9999999;

col REQUIRED_CHECKPOINT_SCN for 9999999999;

col STATUS_CHANGE_TIME for a15;

col ERROR_NUMBER for a15;

col ERROR_MESSAGE for a10;

col START_TIME for a30;

col CAPTURE_TYPE for a10;

SELECT CAPTURE_NAME, QUEUE_NAME, START_SCN, STATUS,

CAPTURED_SCN, APPLIED_SCN, SOURCE_DATABASE,

LOGMINER_ID, REQUIRED_CHECKPOINT_SCN,

STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE,

CAPTURE_TYPE, START_TIME

FROM DBA_CAPTURE;



GOLDENGATE CAPTURE/Trans

col state for a30;

SELECT sid, serial#, capture#, CAPTURE_NAME, STARTUP_TIME, CAPTURE_TIME,

state, SGA_USED, BYTES_OF_REDO_MINED,

to_char(STATE_CHANGED_TIME, 'mm-dd-yy hh24:mi') STATE_CHANGED_TIME

FROM V$GOLDENGATE_CAPTURE;

col capture_message_create_time for a30;

col enqueue_message_create_time for a27;

col available_message_create_time for a30;

SELECT capture_name,

to_char(capture_time, 'mm-dd-yy hh24:mi') capture_time,

capture_message_number,

to_char(capture_message_create_time ,'mm-dd-yy hh24:mi') capture_message_create_time,

to_char(enqueue_time,'mm-dd-yy hh24:mi') enqueue_time,

enqueue_message_number,

to_char(enqueue_message_create_time, 'mm-dd-yy hh24:mi') enqueue_message_create_time,

available_message_number,

to_char(available_message_create_time,'mm-dd-yy hh24:mi') available_message_create_time

FROM GV$GOLDENGATE_CAPTURE;

SELECT component_name capture_name, count(*) open_transactions,

sum(cumulative_message_count) LCRs

FROM GV$GOLDENGATE_TRANSACTION

WHERE component_type='CAPTURE'

group by component_name;



LOGMINER SESSIONS/STATS

col db_name for a15;

select INST_ID, SESSION_ID,SESSION_NAME,SESSION_STATE, DB_NAME,

NUM_PROCESS,START_SCN,END_SCN,SPILL_SCN, PROCESSED_SCN, PREPARED_SCN,

READ_SCN, MAX_MEMORY_SIZE, USED_MEMORY_SIZE, PINNED_TXN, PINNED_COMMITTED_TXN

from GV$LOGMNR_SESSION;

SELECT SESSION_ID, NAME, VALUE

FROM V$LOGMNR_STATS;


OUTBOUND PROGRESS TABLE

SELECT inst_id, sid, serial#, spid, server_name, startup_time, state,

total_messages_sent, committed_data_only, last_sent_message_number,

send_time, elapsed_send_time, bytes_sent,

to_char(last_sent_message_create_time,'mm-dd-yy hh24:mi')

last_sent_message_create_time

FROM GV$XSTREAM_OUTBOUND_SERVER;



Configuring the Oracle GoldenGate Monitor Agent (JAGENT) for OEM on Linux



The Oracle GoldenGate Monitor Agent, or Jagent, is no longer shipped with Oracle GoldenGate 12.2 or newer.  In fact, it has been a best practice since Oracle GoldenGate 12.1 not to use the Jagent that came with GoldenGate, but to use the Jagent from the GoldenGate Monitor Agent.  Oracle GoldenGate Monitor is available from the Oracle website under Middleware -> GoldenGate.  Download the Oracle GoldenGate Monitor 12.2.1 zip file from the selection under Management Pack for Oracle GoldenGate.  This downloads the file fmw_12.2.1.0.0_ogg_Disk1_1of1.zip.


Prerequisites:

Before you can install the Jagent, you must install Java version 1.7 or newer, which can be downloaded from the Oracle website.  This must be completed before installing the Oracle GoldenGate Monitor Agent.

Installation:

The Monitor Agent must be installed on each of the GoldenGate servers that are to be included in Monitor or OEM monitoring.  This installation and configuration process is shown below.

If OEM monitoring is used with the JAGENT, the OEM agent must be installed as well.  Installing and configuring OEM is not covered here.  These steps are to be followed in order to configure the Jagent.

Install and Configure the Monitor Agent:

1. Prepare the system: validate or install Java JDK 1.7 or later.
a. Download the latest Java JDK from the Oracle website and install it on the server:

yum install jdk-8u91-linux-x64.rpm --nogpgcheck

This installs both the JRE and the JDK (the JRE is accessed by default), placing the JDK in /usr/java as /usr/java/jdk1.8.0_91.

Note:  You might have to uninstall older versions in order for the new JDK to be accessed by default.

2.  Install GoldenGate Monitor Agent only.

a.  Change directory to the location where the GoldenGate Monitor software has been saved.
b. Set the environment variable JAVA_HOME to the JDK directory.

$ export JAVA_HOME=/usr/java/jdk1.8.0_91

Run the installer with java:
$ java -jar fmw_12.2.1.0.0_ogg.jar

Choose to install the Monitor Agent.

         1. Welcome Screen. Click Next.

         2. Auto Updates Screen. Click Next.

         3. Installation Location. This is where the agent software will be installed. I used /u01/app/oracle/product/oggmon but you can choose your own location.

         4. Choose Oracle GoldenGate Monitor Agent.

         5. Prerequisites Checks Screen. Click Next.

         6. Installation Summary Screen. Click Next.

         7. When completed and everything has a green check, click Finish. The installer will exit.

3.  Change directory to the oggmon/ogg_agent directory under where you just installed the Monitor Agent software. In our case, this is

/u01/app/oracle/product/oggmon/oggmon/ogg_agent.

Set the JAVA_HOME environment variable.

4.  $ export JAVA_HOME=/usr/java/jdk1.8.0_91

5.  Run the createMonitorAgentInstance.sh script. You will be prompted for the GoldenGate installation directory and the location where you want to install the Monitor Agent instance.

[oracle@gg21a ogg_agent]$ ./createMonitorAgentInstance.sh

Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.2.0/ogghome_1

Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/oggmon/agent1

Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160523141025.sh) : abc

Sucessfully created OGG Agent instance.

6.  Change directory to the agent installation directory; /u01/app/oracle/product/oggmon/agent1.
7.  Change directory to the agent1/bin directory.
8.  Create the Jagent wallet

./pw_agent_util.sh -jagentonly

Supply a password. This is the agent password that you will use in OEM or GoldenGate Monitor.

9.  Change directory to the agent1/cfg directory.
10.  Edit the Configuration.properties file.

a. Modify jagent.host=localhost to change to the actual host name.
b. Modify jagent.username=oggmajmxusr if desired (can be jagent, root, etc).
c. Make note of jagent.rmi.port=5559.
d. Modify agent.type.enabled=OGGMON to agent.type.enabled=OEM if using OEM for monitoring.
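
After the edits in step 10, the relevant lines of Configuration.properties might end up looking like the fragment below (gg21a is the example host from this walkthrough, and only the properties mentioned above are shown):

```
jagent.host=gg21a
jagent.username=oggmajmxusr
jagent.rmi.port=5559
agent.type.enabled=OEM
```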

11.  Add ENABLEMONITORING to GLOBALS file.
12.  Restart the mgr process from ggsci.
a.  Once the manager has restarted, the jagent will be visible from ggsci.
13.  Start JAGENT from GGSCI.

At this point the jagent should be running properly in GGSCI.

GGSCI (gg21a) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING

Once the Jagent is running, OEM or GoldenGate Monitor should be able to monitor GoldenGate.


Resize Operation Completed For File# 201; FILE# Does Not Exist

The alert log will contain entries like the following:

Resize Operation Completed For File# 201; FILE# Does Not Exist

File# 201 is a tempfile; tempfile numbers are generated dynamically above the value of the db_files parameter (default 200). This occurs in DB version 12c only.

However, these messages are informational only, and they can be disabled using the parameter below.

alter system set "_disable_file_resize_logging"=TRUE ;
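
As a sketch, the parameter can be set persistently and reverted later; since it is an underscore parameter, it is prudent to confirm with Oracle Support before changing it in production:

```sql
-- suppress the informational tempfile resize messages across restarts
alter system set "_disable_file_resize_logging"=TRUE scope=both;

-- revert to the default behaviour
alter system set "_disable_file_resize_logging"=FALSE scope=both;
```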

ORA-14400: inserted partition key does not map to any partition in OMS 12c

This occurs because the job_queue_processes initialization parameter is set to zero, so none of the repository DBMS_SCHEDULER jobs are running.

Execute the following in the repository database:

alter system set JOB_QUEUE_PROCESSES=100 scope=both;

Depending on the workload, the value may need to be increased further.
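
To verify the change, a quick check of the parameter and of the SYSMAN scheduler jobs could look like the following (the column formatting is illustrative):

```sql
show parameter job_queue_processes

col job_name for a40
SELECT job_name, enabled, state
  FROM dba_scheduler_jobs
 WHERE owner = 'SYSMAN';
```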

JBO-29000: Unexpected exception caught: java.sql.SQLException, msg=ORA-20206: Target does not exist: : ORA-06512: at "SYSMAN.MGMT_TARGET"

JBO-29000: Unexpected exception caught: java.sql.SQLException,
msg=ORA-20206: Target does not exist: :
ORA-06512: at "SYSMAN.MGMT_TARGET", line 1228
ORA-06512: at "SYSMAN.EM_MONITORING", line 62

This is due to Daylight Saving Time (DST) transition changes in Brazil.
The EM 12c agent was not running with the correct TZ after these DST changes.

cd $AGENT_INST/bin

-- ./emctl stop agent
-- export TZ=Etc/GMT+3 <<<< This is the correct TZ after the DST changes in Brazil.
-- ./emctl resetTZ agent


Run the PL/SQL statements printed by the resetTZ command against the repository database as SYSMAN, then start the agent.

-- ./emctl start agent

Unable To Get Logical Block Size For Spfile Reported in Database Instance Alert Log File

This occurs when the pfile in the dbs directory references a non-existent spfile, which is then reported in the alert log.

This is due to a bug in Oracle 12c.

Workaround:

1. Remove the PFILE if the database does not reference it.
2. Edit the PFILE to reference the correct SPFILE path.
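
To check which parameter file the running instance actually used, one quick test is the query below; a NULL value means the instance was started from a pfile (the SPFILE path in the comment is purely an example):

```sql
-- non-empty value = instance was started with this spfile
show parameter spfile

-- a corrected pfile would contain a single pointer line such as:
-- SPFILE='/u01/app/oracle/product/12.2.0.1/db_1/dbs/spfileORCL.ora'
```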

OMS 12c : 12.2 ASM / Database /Grid Infrastructure Targets are Shown as Down

A 12.2 database can be added to a 12.1.0.4 OMS, and the test connection in the monitoring configuration page will work fine.

But in the target page it will be shown as down with the error below:

ORA-28040: No matching authentication protocol

The same occurs for ASM and cluster targets too.

This is because the 12.1.0.4 OMS is certified to monitor only DB versions below 12.1.0.2.

Solution:

1. For monitoring a 12.2 DB and related targets, the OMS needs to be at least release 12.1.0.5 with the 12.1.0.8 DB plugin.

2. If this issue occurs in 12.1.0.5 release too, ensure that the latest DB and Agent bundle patches are applied at the Agent.


To Display the SQL Prompt with the Database Name.


Check the SQL Prompt value:


[oracle@TESTSERVER:TESTDB] dba
SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 29 00:17:56 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Note: The SQL prompt does not show the database name, which could lead an admin to run a command against the wrong (e.g. production) database. To avoid that, we can enable this feature in the file "/wwi/wwdb/db/oracle/product/12.2.0.1/db_1/sqlplus/admin/glogin.sql".
By default this file contains no commands.

Add these lines to the file:

SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER> "
SET PAGESIZE 100
SET LINESIZE 200


Modify the file with the value.

[oracle@TESTSERVER:TESTDB] vi /wwi/wwdb/db/oracle/product/12.2.0.1/db_1/sqlplus/admin/glogin.sql

Check the modification:

[oracle@TESTSERVER:TESTDB] cat /wwi/wwdb/db/oracle/product/12.2.0.1/db_1/sqlplus/admin/glogin.sql
--
-- Copyright (c) 1988, 2005, Oracle.  All Rights Reserved.
--
-- NAME
--   glogin.sql
--
-- DESCRIPTION
--   SQL*Plus global login "site profile" file
--
--   Add any SQL*Plus commands here that are to be executed when a
--   user starts SQL*Plus, or uses the SQL*Plus CONNECT command.
--
-- USAGE
--   This script is automatically run
--
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER> "
SET PAGESIZE 100
SET LINESIZE 200


Check the database prompt now:

[oracle@TESTSERVER:TESTDB] dba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 29 00:19:32 2017
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SYS@TESTDB> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@TESTSERVER:TESTDB]

Enable SQL Trace in Oracle Apps R12


SQL trace can be enabled in any E-Business Suite module in several ways. The following sections show the possible ways to enable trace for a form, a self-service page, a concurrent program, a specific application user, instance-wide, and so on.

Applications Form:
1) Set the value for profile option Utilities: Diagnostics to 'Yes' at the user-level
2) Navigate to form where you want to trace
3) Turn on Tracing by using the menu option: Home > Diagnostics > Trace > Trace with waits
4) A pop-up with the trace file name and location gets displayed. Note down the trace filename
5) Proceed with steps that need to be traced. Once done tracing, exit the Application
6) Retrieve the raw trace file using the filename (from Step 4) located on the db server
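
For step 6, the directory on the database server where the raw trace files are written can be found with a query such as this (v$diag_info is available in 11g and later):

```sql
-- directory containing the .trc files
SELECT value
  FROM v$diag_info
 WHERE name = 'Diag Trace';
```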

Self Service Page:

1) Set the value for the profile option FND: DIAGNOSTICS to 'Yes' at user-level.
2) Navigate to the Self-Service page where you want to trace
3) Click the Diagnostics icon at the top-right of the page
4) Select Set Trace Level radio button and click 'Go'
5) Select Trace with  waits (recommended) and click 'Save'  
6) Select 'Home' and proceed with performing your screen processing
7) Disable trace once you are done: click on Diagnostics > Set Trace Level > Disable Trace
8) Write down the 'Trace Ids' provided on the left side of the screen
9) Logout/Exit from the application
10) Retrieve raw trace file using the Trace Ids (from step 8) and/or the tracefile_identifier (set by default to the userid)


Concurrent Program Definition:

1) Choose an appropriate responsibility and select the Concurrent > Program > Define screen
2) Search for the concurrent program you want to trace
3) Check the Enable Trace box to turn on tracing for the concurrent program
4) Submit and run the concurrent program
5) Write down the request_id of your concurrent program job
6) Go back to the Define screen and un-check the Enable Trace box for this concurrent program
7) Retrieve the raw trace file using the request_id (from step 5) and/or the tracefile_identifier (set by default to the userid)


Concurrent Program Submission:

1) Set the value for the profile option Concurrent: Allow Debugging to 'Yes' at user-level
2) Choose the appropriate responsibility and concurrent program to be executed
3) Click on the Debug Options button
4) Enable tracing by selecting the SQL Trace Check box and choose the desired trace level
5) Confirm your selection by clicking the OK button
6) Submit the concurrent program
7) Write down the request_id of your concurrent program job.
8) Retrieve the raw trace file using the request_id (from step 7) and/or the tracefile_identifier (set by default to the userid)
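
For retrieving the trace file by request_id (step 8), one commonly used approach is to join the request's Oracle process id to v$process; a hedged sketch (the APPS schema prefix and the substitution variable are assumptions):

```sql
-- find the server trace file for a given concurrent request
SELECT p.tracefile
  FROM apps.fnd_concurrent_requests r,
       v$process p
 WHERE r.oracle_process_id = p.spid
   AND r.request_id = &request_id;
```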


Profile Option:
1) If you are activating trace for your own account, navigate to Profile > Personal
2) Press F11, type Initialization% in the Profile Name column, then hit CTRL-F11
3) If you are enabling trace for another user, navigate to Profile > System  
4) Check User and Type in the Username to be traced
5) Type Initialization% in the Profile box and Hit 'Find' 
6) In the User box for Initialization SQL Statement – Custom, type the following statement [Quotes in the statement are all 'Single' quotes]:
BEGIN FND_CTL.FND_SESS_CTL('','','TRUE','TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER=''User_Trace'' MAX_DUMP_FILE_SIZE=unlimited EVENTS=''10046 TRACE NAME CONTEXT FOREVER, LEVEL 8''');END;
7) Save. Logout then Login back to applications as the user for whom you turned on tracing, and promptly recreate the problem.
8) Go back to the Profile option in the Form application and delete the Initialization SQL statement, and Hit 'Save', exit the Application
9) Identify and retrieve the trace file using the tracefile_identifier specified in Step 6

Session Level:
You can enable trace on session level using the following commands:
-- For current session only
SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
SQL> ALTER SESSION SET EVENTS '10046 trace name context off';
-- For current session / other session
SQL> CONN sys/password AS SYSDBA;   -- user must have SYSDBA
SQL> ORADEBUG SETMYPID;                  -- debug current session
SQL> ORADEBUG SETOSPID 1234;         -- debug session with OS process ID (SPID)
SQL> ORADEBUG SETORAPID 123456;  -- debug session with Oracle process ID (PID)
SQL> ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 8; 
SQL> ORADEBUG TRACEFILE_NAME;    -- display the current trace file.
SQL> ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF;
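
As an alternative to the 10046 event syntax for tracing another session, the documented DBMS_MONITOR package can be used (the SID and serial# values below are placeholders):

```sql
-- enable trace with waits for session 123, serial# 456
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- disable it when done
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```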

System Level:

You can enable trace on the entire system (Instance wide) using the following commands:
SQL> alter system set events '10046 trace name context forever,level 8'; 
OR set the following event in init.ora file:
event="10046 trace name context forever,level 8"
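
Remember to disable instance-wide tracing once the diagnostic data has been collected:

```sql
SQL> alter system set events '10046 trace name context off';
```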