 UC: Time Drift Detected

Issue:-

The following warning was found in the alert log file:

Warning: VKTM detected a forward time drift.

Time drifts can result in unexpected behavior such as time-outs.

Please see the VKTM trace file for more details:

/......................................./trace/<INSTNAME>_vktm_100191.trc


Trace file information:-

*** 2020-06-09 01:50:35.058

kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (8914177)usec at (606567530696) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Time stalled at 1591668005

*** 2020-06-09 03:00:06.722
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Stall, backward drift ended at 1591668006 drift: 1
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (20827068)usec at (628764932274) whereas (1000000) is allowed

*** 2020-06-09 08:06:50.417
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (3425343)usec at (629135770137) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1752783)usec at (629152967094) whereas (1000000) is allowed

*** 2020-06-09 08:07:15.555
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1927310)usec at (629160907824) whereas (1000000) is allowed

*** 2020-06-09 08:07:22.001
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1045550)usec at (629167353678) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (9883474)usec at (650980849367) whereas (1000000) is allowed

*** 2020-06-09 14:16:55.523
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (5572366)usec at (651334658429) whereas (1000000) is allowed

*** 2020-06-09 14:17:07.755
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1767549)usec at (651346889548) whereas (1000000) is allowed

*** 2020-06-09 14:17:13.597
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1724214)usec at (651352731509) whereas (1000000) is allowed

*** 2020-06-09 14:17:15.627
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1436000)usec at (651354762365) whereas (1000000) is allowed

*** 2020-06-09 14:17:17.022
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1197067)usec at (651356157174) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (8572706)usec at (672871286376) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1576580)usec at (673265980382) whereas (1000000) is allowed

*** 2020-06-09 20:22:35.077
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1535101)usec at (673271145160) whereas (1000000) is allowed

*** 2020-06-09 20:22:39.523
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1282610)usec at (673275591047) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (24717852)usec at (694950560985) whereas (1000000) is allowed

*** 2020-06-10 02:23:59.562
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Time stalled at 1591752239

*** 2020-06-10 02:24:00.562
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Stall, backward drift ended at 1591752240 drift: 1

*** 2020-06-10 02:29:07.886
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1625056)usec at (695259885680) whereas (1000000) is allowed

*** 2020-06-10 02:29:18.496
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1999112)usec at (695270496269) whereas (1000000) is allowed

*** 2020-06-10 02:29:23.351
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1522429)usec at (695275351121) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (21132645)usec at (716880786496) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1502494)usec at (717635721550) whereas (1000000) is allowed




kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1390547)usec at (1469985085944) whereas (1000000) is allowed
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (37827340)usec at (1491730580699) whereas (1000000) is allowed

*** 2020-06-19 07:50:08.768
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Time stalled at 1592549408

*** 2020-06-19 07:50:09.769
kstmchkdrift (kstmhighrestimecntkeeper:lowres): Stall, backward drift ended at 1592549409 drift: 1
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (2646945)usec at (1492176038753) whereas (1000000) is allowed

*** 2020-06-19 07:57:42.776
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1810836)usec at (1492185590299) whereas (1000000) is allowed

*** 2020-06-19 07:57:48.930
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1991129)usec at (1492191744680) whereas (1000000) is allowed

*** 2020-06-19 07:57:50.775
kstmchkdrift (kstmhighrestimecntkeeper:highres): Time jumped forward by (1122195)usec at (1492193590067) whereas (1000000) is allowed

Action plan:- 

The time drift warning is informational and can usually be ignored. To suppress it, set the following event:
SQL> alter system set event="10795 trace name context forever, level 2" scope=spfile;
Then bounce the database instance(s) for the event to take effect.
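
Since VKTM drift warnings usually just mirror adjustments to the host clock, it is worth confirming that OS time synchronization is healthy before suppressing the message. A minimal check on Linux (assuming chrony is the time service; use the equivalent ntpd commands if applicable):

$ timedatectl status
$ chronyc tracking
$ chronyc sources -v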




 ORA-29283: invalid file operation: path traverses a symlink [29433]


DB : Oracle 19.5

OS : RHEL 7

Expdp fails with the below error in a 19c database:


Export: Release 19.0.0.0.0 - Production on Thu Apr 30 06:06:42 2020

Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

ORA-39002: invalid operation

ORA-39070: Unable to open the log file.

ORA-29283: invalid file operation: path traverses a symlink [29433]


Reason:-


Here the export directory object path goes through a symbolic link, and from Oracle 18c onwards symbolic links are not allowed in Data Pump directory paths:

  lrwxrwxrwx  1 oracle oinstall   10 Aug 24  2017 exp_symlk -> /export/gold


Solution :-

Remove the symlink, recreate the directory as a real path, and rerun the export:

$ rm -f expimp

$ mkdir expimp
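
If the database directory object itself still points to the symlinked path, it can also be re-pointed at the physical location. A minimal sketch (the directory object name EXP_DIR and the grantee are hypothetical; /export/gold is the symlink target shown above):

SQL> create or replace directory EXP_DIR as '/export/gold';
SQL> grant read, write on directory EXP_DIR to scott;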

(OR)

If you do not want to remove the symlink, you can restore the old (pre-18c) behavior instead.

To restore the old behavior, the following underscore parameter must be set.

SQL> ALTER SYSTEM SET "_disable_directory_link_check" = TRUE SCOPE=SPFILE; (Recommend NOT to use)


NOTE :

To identify directory objects with symbolic links in the path name, run

      $ORACLE_HOME/rdbms/admin/utldirsymlink.sql AS SYSDBA


Refer :

DataPump Export (EXPDP) Fails Due to ORA-39155 ORA-48128 (Doc ID 2654167.1)

https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-06B2DF71-2A66-498F-B659-1EF5859B1648

EBS R12.2 ADOP phases and parameters



1) Prepare  - Starts a new patching cycle.

    Syntax:  adop phase=prepare

2) Apply - Used to apply a patch to the patch file system (online mode)

    Syntax: adop phase=apply  patches=<patches numbers>

         Optional parameters during apply phase

                  --> input_file : adop accepts parameters in an input file

              adop phase=apply input_file=

         

             The input file can contain the following parameters:

             workers=

              patches=:.drv, :.drv ...

             adop phase=apply input_file=input_file 

             patches

             phase

             patchtop

             merge

             defaultsfile

             abandon

             restart

             workers


Note : Always specify the full path to the input file


    a)restart  --  used to resume a failed patch

           adop phase=apply patches=<> restart=yes


    b)abandon  -- starts the failed patch from scratch

           adop phase=apply patches=<>  abandon=yes


    c)apply_mode 

           adop phase=apply patches=<>  apply_mode=downtime


       NOTE: Use apply_mode=downtime to apply the patch in downtime mode (in this case, the patch is applied directly on the run file system)

  

    d) apply=(yes/no)

        To run the patch in test mode, specify apply=no

   

    e) analytics 

        adop phase=apply analytics=yes

  

           Specifying this option will cause adop to run the following scripts and generate the associated output files (reports):

   ADZDCMPED.sql - This script is used to display the differences between the run and patch editions, including new and changed objects.

   The output file location is: /u01/R122_EBS/fs_ne/EBSapps/log/adop////adzdcmped.out.

      ADZDSHOWED.sql - This script is used to display the editions in the system.

   The output file location is: /u01/R122_EBS/fs_ne/EBSapps/log/adop///adzdshowed.out.

     ADZDSHOWOBJS.sql - This script is used to display the summary of editioned objects per edition.

   The output file location is: /u01/R122_EBS/fs_ne/EBSapps/log/adop///adzdshowobjs.out

      ADZDSHOWSM.sql - This script is used to display the status report for the seed data manager.

   The output file location is: /u01/R122_EBS/fs_ne/EBSapps/log/adop///adzdshowsm.out

  3) Finalize :  Performs any final steps required to make the system ready for cutover; invalid objects are compiled in this phase.

    Usage: adop phase=finalize

   finalize_mode=(full|quick)  

   4) Cutover  : A new run file system is prepared from the existing patch file system.

   adop phase=cutover

      Optional parameters during cutover phase:

     a) mtrestart - With mtrestart=no, cutover will complete without restarting the application tier services

       adop phase=cutover mtrestart=no  

       b)cm_wait -  Can be used  to specify how long to wait for existing concurrent processes to finish running before shutting down the Internal Concurrent Manager.

           By default, adop will wait indefinitely for in-progress concurrent requests to finish. 

   

5) Cleanup - Removes obsolete objects, code, and data left over from the patching cycle.

    Syntax: adop phase=cleanup

    cleanup_mode=(full|standard|quick)  [default: standard]

6) FS_CLONE  : This phase syncs the patch file system with the run file system.

    Note : The prepare phase internally runs fs_clone if it was not run in the previous patching cycle.

    Optional parameters during fs_clone phase:

     a) force - To start a failed fs_clone from scratch

       adop phase=fs_clone force=yes  [default: no]

    b) Patch File System Backup Count ==> s_fs_backup_count  [default: 0 : No backup taken]

 Denotes the number of backups of the patch file system that are to be preserved by adop. The variable is used during the fs_clone phase,

 where the existing patch file system is backed up before it is recreated from the run file system.

7) Abort - Used to abort the current patching cycle.

   Abort can be run only before the cutover phase.

    adop phase=abort
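
For reference, a minimal sketch of a complete online patching cycle (the patch number 123456 is hypothetical):

    adop phase=prepare
    adop phase=apply patches=123456
    adop phase=finalize
    adop phase=cutover
    adop phase=cleanup

The phases can also be chained in a single invocation, e.g. adop phase=prepare,apply,finalize,cutover,cleanup patches=123456.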

To recover Concurrent Manager - Concurrent Manager recovery Wizard - EBS 11i/R12


 Concurrent Manager Recovery Wizard - Oracle Applications Manager Troubleshooting and Diagnostics

Follow the below steps to use the Concurrent Manager Recovery Wizard:

1. To access the Concurrent Manager Recovery Wizard, use the following navigation path:

Navigation:

System Administrator -> Oracle Applications Manager -> Debug Workbench -> Site Map -> Diagnostics and Repair -> Concurrent Manager Recovery (under Troubleshooting Wizards)

2. Click the Run Wizard button to start the recovery process.  (Note: You cannot run this process if the Internal Concurrent Manager is currently running.)

3. Follow the below steps for troubleshooting any Concurrent Manager issues:

Step 1- Active Managers with a Database Session

This screen lists all managers that must be stopped before proceeding with the recovery.

Listed for each manager are:

CP ID - The Concurrent Program ID.

Manager - The manager name.

Node - The node on which the manager is running.

DB Session ID - Drills down to the Database Session Details screen.

Session Status

OS ID

Started At - The time at which the manager was started.

Running Request - Drills down to display the request in the Advanced Search for Requests page.

You may want to wait for any requests that are running to complete before you execute the shutdown. Drill down on the Running Request to view it.

Click Shutdown to shut down all the listed managers, and then click the Refresh icon to verify that they were shut down. If a manager fails to shut down from this page, you can drill down to the Database Session Details page and use the Terminate button to end the session from there. Return to the Concurrent Manager Recovery screen and refresh the page to verify all managers have been shut down before proceeding to the next step.

Step 2 - Managers Deemed Active but Without Database Session

Any processes listed here must be terminated before continuing. Because these processes have lost their database sessions, they must be manually terminated from the command line. Refer to your operating system documentation for instructions on terminating a process from the command line.

After terminating the processes, click Update to mark the processes as no longer active in the database table. Click the Refresh icon to verify that all processes have been terminated.

Listed for each process are:

CP ID

Manager

Node

OS PID

Started At

Step 3 - Reset Conflict Resolution

Click the Reset button to reset the listed requests for conflict resolution. This action changes requests that are in a Pending/Normal phase and status to Pending/Standby. Click the Refresh icon to verify that all requests have been reset.

You can drill down on the Request ID to view the request in the Advanced Search for Requests screen.

Listed for each request are:

Request ID

Program

User

Step 4 - Requests that are Orphaned

This page lists the requests that do not have a manager. If any requests have Active Sessions listed, drill down on the session ID and terminate the session from the Database Session Details screen. Return to the Concurrent Manager Recovery screen and click the Refresh icon to verify that the session is no longer active.

Listed for each request are:

Request ID - Drills down to display the request in the Advanced Search for Requests page.

Parent ID

Program

User

Phase

Status

Active Session

Step 5 - Concurrent Manager Recovery Summary

The summary page lists the information collected from the previous steps. After reaching this page, you should be able to restart your Internal Concurrent Manager.

 

If you cannot, retry starting the Internal Concurrent Manager with DIAG=Y, refresh the summary page, add it to the Support Cart with the log files, and send them to Oracle Support.

Log Files Collected - Click on the log file name to view it. The following log files can be added to the Support Cart:

Report Summary

Active Managers with a Database Session

Managers Deemed Active but Without a Database Session

Reset Conflict Resolution

Requests that are Orphaned


Reference:

Concurrent Manager Recovery Wizard - Oracle Applications Manager Troubleshooting and Diagnostics (Doc ID 2130545.1)

How To Clear BNE Cache in EBS 11i/R12


GOAL:

To clear BNE cache in EBS environment

Solution:

To Clear BNE cache:

A. Releases prior to 12.2.7

Using System Administrator responsibility:

Login to the application as a user with System Administrator Responsibility.

Select the System Administrator Responsibility.

Bring up the AdminServlet by changing the URL in the browser session where you are logged into the applications, keeping the appropriate hostname.domain and port number.

Release 11i:

http://hostname.domain:portnumber/oa_servlets/oracle.apps.bne.framework.BneAdminServlet

On the new web page that loads, scroll down and click the "clear-cache" link

Release 12.0 - 12.6:

http://hostname.domain:portnumber/OA_HTML/BneAdminServlet

On the new web page that loads, scroll down to the Cache Name section.

Clear the cache for the following by clicking the "(clear)" link:

Cache Name

============

Default Cache

Generic SQL Statements

Web ADI Repository Objects

Web ADI Parameter Lists

Web ADI Parameter Definitions


Alternatively, the BNE cache (current JVM only) may be cleared directly through the BNE Admin Servlet with the following argument:

argument: bne:action=clear-cache

http://hostname.domain:portnumber/OA_HTML/BneAdminServlet?bne:action=clear-cache

The BNE cache is now cleared.

B. Release 12.2.7 and higher

Access to the BneAdminServlet is now secured by the function WDF_CREATE_INTEGRATOR (Desktop Integration Manager - Create Integrator).

Note: WDF_CREATE_INTEGRATOR is included in the seeded responsibility Desktop Integration Manager.

Login to the application.

Select the Desktop Integration Manager responsibility or a responsibility that includes the function WDF_CREATE_INTEGRATOR.

Then proceed as described for Release 12.0 - 12.6 above (open the BneAdminServlet URL and clear the caches).

 

Note: This action is similar to clearing the Apache cache and does not need to be performed first on a non-production instance. It does not require a bounce of Apache or downtime.

Reference:

How To Clear BNE Cache (Doc ID 1075840.1)

How to capture DDL statements in an Oracle database


 First, create a table to capture the DDL statements:


CREATE TABLE audit_ddl (
  ddl_date     DATE,
  osuser       VARCHAR2(255),
  current_user VARCHAR2(255),
  host         VARCHAR2(255),
  terminal     VARCHAR2(255),
  ip_address   VARCHAR2(100),
  module       VARCHAR2(100),
  owner        VARCHAR2(30),
  type         VARCHAR2(30),
  name         VARCHAR2(30),
  sysevent     VARCHAR2(30),
  sql_txt      VARCHAR2(4000)
) tablespace tbs_audit;


Next, create a trigger as the SYS user to capture the statements:

create or replace trigger sys.audit_ddl_trg
after ddl on database
declare
  sql_text ora_name_list_t;
  stmt     varchar2(4000) := '';
  n        number;
begin
  -- Reassemble the full SQL text of the DDL statement (capped at 4000 characters)
  n := ora_sql_txt(sql_text);
  for i in 1..n loop
    stmt := substr(stmt || sql_text(i), 1, 4000);
  end loop;

  insert into audit_ddl (
    ddl_date, osuser, current_user, host, terminal,
    ip_address, module, owner, type, name, sysevent, sql_txt)
  values (
    sysdate,
    sys_context('USERENV','OS_USER'),
    sys_context('USERENV','CURRENT_USER'),
    sys_context('USERENV','HOST'),
    sys_context('USERENV','TERMINAL'),
    sys_context('USERENV','IP_ADDRESS'),
    sys_context('USERENV','MODULE'),
    ora_dict_obj_owner,
    ora_dict_obj_type,
    ora_dict_obj_name,
    ora_sysevent,
    stmt);
end;
/
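
Once DDL activity is being captured, the recorded statements can be reviewed with a simple query, for example:

select ddl_date, osuser, owner, type, name, sysevent, sql_txt
from   audit_ddl
order  by ddl_date desc;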




How to find the last modified tables in Oracle


The query below reports DML activity recorded in ALL_TAB_MODIFICATIONS (monitoring data is flushed to this view periodically):

set linesize 500
select table_owner, table_name, inserts, updates, deletes,
       to_char(timestamp,'YYYY-MON-DD HH24:MI:SS') last_modified
from   all_tab_modifications
where  table_owner <> 'SYS'
and    extract(year from timestamp) > 2010
order  by timestamp;
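
To make sure the very latest DML is reflected before querying, the in-memory monitoring information can be flushed first (a quick sketch; requires the appropriate privilege):

exec dbms_stats.flush_database_monitoring_info;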

Run SQL statements in all PDBs


 The Oracle catcon.pl program is used to run a SQL script or SQL statements in all PDBs.



$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS/****AhAhpassword-changed*** -d /export/home/oracle/scripts -l /export/home/oracle/scripts/logs -b give-write_ps give-write_ps.sql


-u username/password

-d directory containing the script to run in all PDBs

-l directory in which to write the log files

-b base name for the log files


How to change Oracle Database NLS DATE FORMAT


 alter system set nls_date_format='dd.mm.yyyy hh24:mi:ss' scope=spfile;

Because the parameter is set with scope=spfile, the instance must be restarted for the new format to take effect.
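
For a quick test without a restart, the format can also be changed at session level (the format mask here is just an example):

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;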

WHEN WE FACE ERROR ORA-12719: OPERATION REQUIRES DATABASE IS IN RESTRICTED MODE


 SQL> startup mount

ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size 6877112 bytes
Variable Size 1476395080 bytes
Database Buffers 7079985152 bytes
Redo Buffers 26677248 bytes
Database mounted.

SQL> select name from v$database;

NAME
---------
ERPDEV

SQL> drop database;
drop database
*
ERROR at line 1:
ORA-12719: operation requires database is in RESTRICTED mode

SQL> shut immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

Solution:-

SQL> STARTUP MOUNT RESTRICTED;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size 6877112 bytes
Variable Size 1476395080 bytes
Database Buffers 7079985152 bytes
Redo Buffers 26677248 bytes
ORA-01504: database name 'RESTRICTED' does not match parameter db_name 'ERPDEV'

SQL> shut immediate
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> STARTUP NOMOUNT RESTRICT;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size 6877112 bytes
Variable Size 1476395080 bytes
Database Buffers 7079985152 bytes
Redo Buffers 26677248 bytes
SQL> alter database mount;

Database altered.

SQL> drop database;

Database dropped.
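
For reference, the sequence can be shortened; the keyword is RESTRICT (not RESTRICTED), and placing it before MOUNT avoids it being parsed as a database name, as happened above:

SQL> startup restrict mount
SQL> drop database;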

Solution for ORA-09925: Unable to create audit trail file


 


PROBLEM:

Users are getting below error, while trying to connect to the database.


ERROR:

ORA-09925: Unable to create audit trail file

Linux-ia64 Error: 28: No space left on device

Additional information: 9925

ORA-01075: you are currently logged on


SOLUTION:

The error occurs because the mount point to which the audit files are written is full.

Check the mount point:


[oracle@ram ~]$ df -h


Filesystem    Size     Used     Avail   Use%    Mounted on

/dev/sda2      20G     9.6G      8.7G    53%      /

tmpfs          3.0G    276K      3.0G    1%       /dev/shm

/dev/sda1      194M    105M      79M     58%      /boot

/dev/sda5      45G     40G       3.4G    93%      /u01

.host:/        293G    203G      91G     70%      /mnt/hgfs


[oracle@ram ~]$

We can see that the /u01 mount point is 93% full, so the database is not able to write audit files to the adump location.


To fix this, clear space on that mount point and make sure enough free space remains available for the audit files.
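
A minimal cleanup sketch (the adump path and the 30-day retention are only examples; adjust them to your audit_file_dest setting and retention policy):

$ find /u01/app/oracle/admin/$ORACLE_SID/adump -name "*.aud" -type f -mtime +30 -delete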

Solution for ORA-28365: wallet is not open


 Problem:-

[oracle@prod101:~ orcl101] expdp tables=scott.tde_test directory=TEST_DIR dumpfile=emp121.dmp logfile=emp121.log

Export: Release 18.0.0.0.0 - Production on Fri Aug 24 00:48:16 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Username: scott
Password:

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** tables=scott.tde_test directory=TEST_DIR dumpfile=emp121.dmp logfile=emp121.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-31693: Table data object "SCOTT"."TDE_TEST" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEPOPULATE callout
ORA-28365: wallet is not open

Solution:-

SQL> alter system set encryption key authenticated by "ORACLE@123";

System altered.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
[oracle@prod101:~ orcl101] expdp tables=scott.tde_test directory=TEST_DIR dumpfile=emp122.dmp logfile=emp122.log

Export: Release 18.0.0.0.0 - Production on Fri Aug 24 01:01:55 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Username: scott
Password:

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** tables=scott.tde_test directory=TEST_DIR dumpfile=emp122.dmp logfile=emp122.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TDE_TEST" 5.546 KB 1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
/u01/app/oracle/datapump/emp122.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Fri Aug 24 01:02:11 2018 elapsed 0 0

Solution for ORA-39035: Data filter SUBQUERY has already been specified


Problem

[oracle@testdb ~]$ expdp HAA/HAA@pdb1 tables=emp directory=TEST_DIR query="\'where salary > 5000\'" dumpfile=query1.dmp logfile=query.log

Export: Release 18.0.0.0.0 - Production on Tue Jul 10 20:23:44 2018

Version 18.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

ORA-39001: invalid argument value

ORA-39035: Data filter SUBQUERY has already been specified.

Solution

[oracle@testdb ~]$ expdp HAA/HAA@pdb1 tables=emp directory=TEST_DIR query="'where salary > 5000'" dumpfile=query1.dmp logfile=query.log

Export: Release 18.0.0.0.0 - Production on Tue Jul 10 20:28:15 2018

Version 18.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Starting "HAA"."SYS_EXPORT_TABLE_01": HAA/********@pdb1 tables=emp directory=TEST_DIR query='where salary > 5000' dumpfile=query1.dmp logfile=query.log

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "HAA"."EMP" 13.71 KB 58 rows

Master table "HAA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for HAA.SYS_EXPORT_TABLE_01 is:

/u01/app/oracle/datapump/query1.dmp

Job "HAA"."SYS_EXPORT_TABLE_01" successfully completed at Tue Jul 10 20:28:26 2018 elapsed 0 00:00:11
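
To avoid shell quoting and escaping issues with the QUERY clause altogether, it can also be placed in a parameter file. A sketch (the file name and filter are illustrative):

$ cat query.par
tables=emp
directory=TEST_DIR
dumpfile=query1.dmp
logfile=query.log
query=emp:"WHERE salary > 5000"

$ expdp HAA/HAA@pdb1 parfile=query.par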

Solution for ORA-39173: Encrypted data has been stored unencrypted in dump file set


 [oracle@testdb ~]$ expdp system/Chennai#123@pdb1 directory=TEST_DIR SCHEMAS=hr dumpfile=hr.dmp logfile=hr.log


Export: Release 18.0.0.0.0 - Production on Sat Jun 30 07:08:01 2018

Version 18.1.0.0.0


Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@pdb1 directory=TEST_DIR SCHEMAS=hr dumpfile=hr.dmp logfile=hr.log

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

. . exported "HR"."EMPLOYEES" 17.08 KB 107 rows

. . exported "HR"."LOCATIONS" 8.437 KB 23 rows

. . exported "HR"."JOB_HISTORY" 7.195 KB 10 rows

. . exported "HR"."JOBS" 7.109 KB 19 rows

. . exported "HR"."DEPARTMENTS" 7.125 KB 27 rows

. . exported "HR"."COUNTRIES" 6.367 KB 25 rows

. . exported "HR"."REGIONS" 5.546 KB 4 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:

/u01/app/oracle/datapump/hr.dmp

Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Jun 30 07:09:10 2018 elapsed 0 00:01:07

Solution:- Add the encryption_password parameter to the expdp command.


[oracle@testdb ~]$ expdp system/Chennai#123@pdb1 directory=TEST_DIR SCHEMAS=hr dumpfile=hr1.dmp logfile=hr1.log encryption_password=hari


Export: Release 18.0.0.0.0 - Production on Sat Jun 30 07:10:44 2018

Version 18.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@pdb1 directory=TEST_DIR SCHEMAS=hr dumpfile=hr1.dmp logfile=hr1.log encryption_password=********

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

. . exported "HR"."EMPLOYEES" 17.09 KB 107 rows

. . exported "HR"."LOCATIONS" 8.445 KB 23 rows

. . exported "HR"."JOB_HISTORY" 7.203 KB 10 rows

. . exported "HR"."JOBS" 7.117 KB 19 rows

. . exported "HR"."DEPARTMENTS" 7.132 KB 27 rows

. . exported "HR"."COUNTRIES" 6.375 KB 25 rows

. . exported "HR"."REGIONS" 5.554 KB 4 rows

Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:

/u01/app/oracle/datapump/hr1.dmp

Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Jun 30 07:11:47 2018 elapsed 

LUN MIGRATION STEPS


 sudo fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-|type|identifier'



bosp13344 ORA_DATA01 = SSD 200GB  LUN ID= 00:01:D3    /dev/sdf1

bosp123     ORA_FRA01  = SSD 100GB  LUN ID= 00:01:C7    /dev/sdg1


sudo oracleasm createdisk ORA_DATA_01 /dev/sdf1



alter diskgroup ORA_DATA add disk '/dev/oracleasm/disks/ORA_DATA_01' NAME ORA_DATA_01 rebalance power 100;


Select * from v$asm_operation;


alter diskgroup ORA_DATA drop disk 'ORA_DATA_0000' rebalance power 100;


Select * from v$asm_operation;
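
Before deleting the old ASM disk at the OS level, make sure the rebalance has finished; a dropped disk is only released once the rebalance completes. A quick check (no rows returned means no rebalance is running):

select group_number, operation, state, power, est_minutes from v$asm_operation;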




    set lines 999;

    col diskgroup for a15

    col diskname for a20

    col path for a35

    select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path, 

    b.header_status

    from v$asm_disk b, v$asm_diskgroup a 

    where a.group_number (+) =b.group_number 

    order by b.group_number,b.name;

sudo oracleasm deletedisk ORA_DATA_001




-------------------------------------------------------------


sudo oracleasm createdisk ORA_FRA_01 /dev/sdg1



alter diskgroup ORA_FRA add disk '/dev/oracleasm/disks/ORA_FRA_01' NAME ORA_FRA_01 rebalance power 100;


Select * from v$asm_operation;


alter diskgroup ORA_FRA drop disk 'ORA_FRA_001_0000' rebalance power 100;


/dev/oracleasm/disks/ORA_FRA_01



    set lines 999;

    col diskgroup for a15

    col diskname for a20

    col path for a35

    select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path, 

    b.header_status

    from v$asm_disk b, v$asm_diskgroup a 

    where a.group_number (+) =b.group_number 

    order by b.group_number,b.name;

sudo oracleasm deletedisk ORA_FRA_001



Fix for ORA-03113: end-of-file on communication channel



Here is how to fix ORA-03113: end-of-file on communication channel on startup of an Oracle Database Server 12c instance.


[oracle@host ~]$ sqlplus / as sysdba

...

...

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.


SQL> startup

ORACLE instance started.


Total System Global Area 2147483648 bytes

Fixed Size 2926472 bytes

Variable Size 1224738936 bytes

Database Buffers 905969664 bytes

Redo Buffers 13848576 bytes

Database mounted.

ORA-03113: end-of-file on communication channel

Process ID: 4903

Session ID: 237 Serial number: 26032

Solution:

SQL> exit

Disconnected from Oracle Database 12c 

Enterprise Edition Release 12.1.0.2.0 - 64bit Production


[oracle@zeus ~]$ sqlplus / as sysdba

...

...

Connected to an idle instance.


SQL> startup nomount

ORACLE instance started.


Total System Global Area 2147483648 bytes

Fixed Size 2926472 bytes

Variable Size 1224738936 bytes

Database Buffers 905969664 bytes

Redo Buffers 13848576 bytes

SQL> alter database mount;


Database altered.


SQL> alter database clear unarchived logfile group 1;

Database altered.


SQL> alter database clear unarchived logfile group 2;

Database altered.


SQL> alter database clear unarchived logfile group 3;

Database altered.


SQL> shutdown immediate

ORA-01109: database not open


Database dismounted.

ORACLE instance shut down.


SQL> startup


ORACLE instance started.


Total System Global Area 2147483648 bytes

Fixed Size 2926472 bytes

Variable Size 1224738936 bytes

Database Buffers 905969664 bytes

Redo Buffers 13848576 bytes

Database mounted.

Database opened.

SQL>


The issue is now fixed.
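
As the log clears above suggest, this particular ORA-03113 at startup appears to have been tied to unarchived online redo logs. Before clearing, the state of each group can be checked with a simple query, and because clearing an unarchived log discards redo, a full backup should be taken immediately afterwards:

SQL> select group#, sequence#, status, archived from v$log;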

To calculate the percentage of the buffer cache used by an individual object


 To calculate the percentage of the buffer cache used by an individual object:


Find the Oracle Database internal object number of the segment by querying the DBA_OBJECTS view:

SELECT data_object_id, object_type

FROM DBA_OBJECTS 

WHERE object_name = UPPER('segment_name'); 


Because two objects can have the same name (if they are different types of objects), use the OBJECT_TYPE column to identify the object of interest.


Find the number of buffers in the buffer cache for SEGMENT_NAME:


SELECT COUNT(*) buffers

FROM V$BH

WHERE objd = data_object_id_value;


For data_object_id_value, use the value of DATA_OBJECT_ID from the previous step.


Find the number of buffers in the database instance:


SELECT name, block_size, SUM(buffers)

FROM V$BUFFER_POOL

GROUP BY name, block_size

HAVING SUM(buffers) > 0;



Calculate the ratio of buffers to total buffers, multiplied by 100, to obtain the percentage of the cache currently used by SEGMENT_NAME:

% cache used by segment_name = [buffers (Step 2) / total buffers (Step 3)] x 100
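
The three steps can also be combined into a single query. A sketch (the owner SCOTT, object name EMP, and object type TABLE are placeholders):

SELECT ROUND(100 * COUNT(*) /
       (SELECT SUM(buffers) FROM v$buffer_pool), 2) pct_of_cache
FROM   v$bh
WHERE  objd = (SELECT data_object_id
               FROM   dba_objects
               WHERE  owner = 'SCOTT'
               AND    object_name = 'EMP'
               AND    object_type = 'TABLE');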

Workflow start and stop in back end


 a) Login to Environment as apps user


sqlplus apps/<apps pass>


b) Check workflow mailer service current status


select running_processes
from   fnd_concurrent_queues
where  concurrent_queue_name = 'WFMLRSVC';

Number of running processes should be greater than 0


c) Find current mailer status


select component_status
from   fnd_svc_components
where  component_id = (select component_id
                       from   fnd_svc_components
                       where  component_name = 'Workflow Notification Mailer');


Possible values:

RUNNING

STARTING

STOPPED_ERROR

DEACTIVATED_USER


Now how to stop Notification Mailer from Backend


a) Login to Environment via sqlplus

sqlplus apps/<apps pass>


b) Stop notification mailer

declare
  p_retcode  number;
  p_errbuf   varchar2(100);
  m_mailerid fnd_svc_components.component_id%TYPE;
begin
  -- Look up the component id of the Workflow Notification Mailer
  select component_id
  into   m_mailerid
  from   fnd_svc_components
  where  component_name = 'Workflow Notification Mailer';

  -- Stop the mailer component
  fnd_svc_component.stop_component(m_mailerid, p_retcode, p_errbuf);
  commit;
end;
/


Now how to start Notification Mailer from Backend


a)Login to Environment via sqlplus

sqlplus apps/<apps pass>

declare
  p_retcode  number;
  p_errbuf   varchar2(100);
  m_mailerid fnd_svc_components.component_id%TYPE;
begin
  -- Look up the component id of the Workflow Notification Mailer
  select component_id
  into   m_mailerid
  from   fnd_svc_components
  where  component_name = 'Workflow Notification Mailer';

  -- Start the mailer component
  fnd_svc_component.start_component(m_mailerid, p_retcode, p_errbuf);
  commit;
end;
/

WEB ADI Performance Issue When Data Entry Rows is High


 

After updating the Data Entry Rows value in the Lines section to 30000, the upload takes a long time (10+ minutes) even if only 2 rows are selected for updating.

Desktop Integration > Define Layout
Query Integrator (such as General Ledger - Journals)
Go-> Select Layout (such as Functional Actuals - Single)
Update->Next->Next-> Under the Line region, see: Data Entry Rows: 10

Updated the Data Entry Rows in the Lines section to 30000

Cause:

The Data Entry Rows value has been increased to 30000. Such a high value slows down the upload, and since more rows can be added once the document is downloaded to Excel, there is no benefit in setting it that high.

Solution:

The Data Entry Rows property determines the initial number of rows created in the integrator template document. Accept the default value of 10.
The user can add more rows once the document is downloaded to Excel.

You can increase the default value; however, 30000 is too high. Considering that you can add more rows once the document is downloaded to Excel, there is really no value in increasing it to such a high number.

Please continue to use the default value of 10, or experiment with other values (20, 50, 100), but do not use a value as high as 30000.

Upgrade Excel To 2013 64-bit Error 'The code in this project must be updated for use on 64 bit systems' Uploading Data To WEBADI Template


 

On Oracle Applications 12.1.3 version,

After upgrading Microsoft Excel to the 2013 64-bit version, when uploading data to the WEBADI template, the template is blank.

However when looking at the Excel Macro Information, the following Error is shown:

 

MS Visual Basic pop up message:

The code in this project must be updated for use on 64 bit systems.

Please review and update Declare statements and then mark them with the PtrSafe attribute.

The issue can be reproduced at will with the following steps:

1. Upgrade MS Office Excel 2013 64 bit.

2. Upload data to the template WEBADI.

Cause:

The issue is caused by the following setup:

The Office 2013 patch 18402256:R12.BNE.B is not in place, and the Parameter Definition Validation Type is incorrectly set.

This issue is described in My Oracle Support's Note Microsoft Office Integration with Oracle E-Business Suite 11i and R12 (Doc ID 1077728.1)

This note outlines the Office 2013 Requirements for EBS Applications and Web ADI.

Also the following Steps helped and or Resolved as well:

Desktop Integration Manager - Define Parameter - Content parameters - Parameter Definition - Validation Type - Validation Value

Solution:

To implement the solution test the following steps:

1. Download and review the readme and pre-requisites for Patches Office 2013 Patch 18402256:R12.BNE.B and Patch 19273341 R12.BNE.B.delta.4.

2. Ensure that you have taken a backup of your system before applying the recommended patch.

3. Apply the patches in a test environment.

4. Confirm the settings for the Desktop Integration Manager - Define Parameter - Content parameters - Parameter Definition - Validation Type - Validation Value for the Integrator failing to upload.

5. Retest the issue.

6. Migrate the solution as appropriate to other environments.
