Channel: Doyensys Allappsdba Blog

Script to automatically compile fmb files

#Script to migrate fmb files
. <your_environment_file>   # source your environment file here

scr_log=file_directory_location/MIGRATION/logs/forms_migr_op.log;export scr_log
find_log=file_directory_location/MIGRATION/logs/forms_migr_find.log;export find_log
pw=`cat file_directory_location/MIGRATION/scripts/.watchwrd`;export pw
find_log2=file_directory_location/MIGRATION/logs/forms_migr_find2.log;export find_log2
usr_nme=`whoami`; export usr_nme
hstnme=`hostname`; export hstnme
dateis=`date +"%d"/"%m"/"%Y"_"%H":"%M":"%S"`; export dateis

#echo $usr_nme
#echo $hstnme
name_conv=`echo $usr_nme|tr 'a-z' 'A-Z'`;export name_conv
#echo "User Name is : $name_conv"
echo "User $name_conv has logged in to $hstnme at $dateis">> $scr_log

echo "Enter the file name : \c"
read fle_name
fmbfile=`echo $fle_name|cut -d "." -f1`;export fmbfile
#echo $fmbfile
if [ "$fle_name" = "" ]; then
echo "You entered NULL Value for filename, please enter a valid file name. Exiting from script !!!!"
echo "User entered NULL Value for filename, exiting from script !!!! - Date: $dateis">> $scr_log
echo "\n">> $scr_log
exit;
else
echo "File name you entered is $fle_name"
echo "User $name_conv has typed the filename $fle_name">> $scr_log
#echo "\n">> $scr_log
cd file_directory_location/MIGRATION/FORMS/
echo "Finding file $fle_name from file_directory_location/MIGRATION/FORMS - Date: $dateis">> $scr_log
find . -name "$fle_name"> $find_log
F15M=`cut -d "/" -f2 $find_log`;export F15M
F16M=`cat $find_log|wc -l`;export F16M
if [ "$F16M" -gt 0 ]; then
echo "File $fle_name, found in file_directory_location/MIGRATION/FORMS - Date: $dateis, file exists !!!!">> $scr_log
find $AU_TOP/forms/US -name "$fle_name"> $find_log2
F17M=`cut -d "/" -f12 $find_log2`;export F17M
#echo "File name ******* $F17M"
F18M=`cat $find_log2|wc -l`;export F18M
if [ $F18M -gt 0 ]; then
echo "Finding file $fle_name from "$"AU_TOP/forms/US - Date: $dateis">> $scr_log
echo "File $fle_name, found in "$"AU_TOP/forms/US - Date: $dateis, file exists !!!! in XBOL_TOP">> $scr_log
cp $AU_TOP/forms/US/$fle_name $AU_TOP/forms/US/"$fle_name"_`date +%d%b%y_%H%M%S`
echo "Backing up $fle_name in "$"AU_TOP/forms/US">> $scr_log
echo "Backing up $fle_name in "$"AU_TOP/forms/US"
mv $fle_name $AU_TOP/forms/US
echo "File $fle_name has been moved to "$"AU_TOP/forms/US Date - $dateis"
echo "File $fle_name has been moved to "$"AU_TOP/forms/US Date - $dateis">> $scr_log
#echo "\n">> $scr_log
cd $AU_TOP/forms/US
echo "Compiling form $fle_name in "$"AU_TOP/forms/US">> $scr_log
echo "Compiling form $fle_name in "$"AU_TOP/forms/US"
watw=`echo $pw|openssl enc -aes-128-cbc -a -d -salt -pass pass:asdffdsa`; export watw
#echo $watw
frm_cmp_log=file_directory_location/MIGRATION/logs/"$fmbfile"_`date +%d%b%y_%H%M%S`.log; export frm_cmp_log
touch $frm_cmp_log
echo "Logfile :- $frm_cmp_log">> $scr_log
echo "Logfile :- $frm_cmp_log"
frmcmp_batch module=$AU_TOP/forms/US/$fle_name userid=apps/"$watw" output_file=$XBOL_TOP/forms/US/$fmbfile.fmx module_type=form compile_all=special > $frm_cmp_log
ct=`cat $frm_cmp_log|grep -i "Form not created"|wc -l`; export ct
if [ $ct -eq 1 ]; then
echo "Form not created. Please check logfile: $frm_cmp_log for errors"
else
echo "Form $fle_name has been compiled">> $scr_log
echo "Form $fle_name has been compiled"
echo "\n">> $scr_log
fi
else
echo "This file is not found in "$"AU_TOP/forms/US, please confirm if this is a new file to be migrated (Y/N) :\c"
echo "This file is not found in "$"AU_TOP/forms/US, might be a new file. Waiting for User confirmation(Y/N)">> $scr_log
read confim
if [ "$confim" = "Y" ]; then
echo "User entered $confim Date - $dateis"
echo "User entered $confim, compiling a new file in "$"AU_TOP Date - $dateis">> $scr_log
mv $fle_name $AU_TOP/forms/US
echo "File $fle_name has been moved to "$"AU_TOP/forms/US Date - $dateis"
echo "File $fle_name has been moved to "$"AU_TOP/forms/US Date - $dateis">> $scr_log
cd $AU_TOP/forms/US
echo "Compiling form $fle_name in "$"AU_TOP/forms/US">> $scr_log
echo "Compiling form $fle_name in "$"AU_TOP/forms/US"
watw=`echo $pw|openssl enc -aes-128-cbc -a -d -salt -pass pass:asdffdsa`; export watw
#echo $watw
frm_cmp_log=file_directory_location/MIGRATION/logs/"$fmbfile"_`date +%d%b%y_%H%M%S`.log; export frm_cmp_log
touch $frm_cmp_log
echo "Logfile :- $frm_cmp_log">> $scr_log
echo "Logfile :- $frm_cmp_log"
frmcmp_batch module=$AU_TOP/forms/US/$fle_name userid=apps/"$watw" output_file=$XBOL_TOP/forms/US/$fmbfile.fmx module_type=form compile_all=special > $frm_cmp_log
ct=`cat $frm_cmp_log|grep -i "Form not created"|wc -l`; export ct
if [ $ct -eq 1 ]; then
echo "Form not created. Please check logfile: $frm_cmp_log for errors"
else
echo "Form $fle_name has been compiled">> $scr_log
echo "Form $fle_name has been compiled"
echo "\n">> $scr_log
fi
else
echo "User entered N (please review the file name; the file you entered does not exist in "$"AU_TOP/forms/US), exiting from script Date - $dateis !!!!"
echo "User entered N (the file $fle_name does not exist in "$"AU_TOP/forms/US, might be a new file), exiting from script Date - $dateis !!!!">> $scr_log
echo "\n">> $scr_log
exit;
fi
fi
else
echo "The file $fle_name does not exist in file_directory_location/MIGRATION/FORMS - Date: $dateis, please verify the file name"
echo "The file $fle_name does not exist in file_directory_location/MIGRATION/FORMS - Date: $dateis">> $scr_log
echo "\n">> $scr_log
fi
fi
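Both migration scripts read the APPS password from an encrypted .watchwrd file and decrypt it with openssl. A matching one-time setup step might look like the sketch below; MIG_HOME, the sample password, and the surrounding paths are assumptions for illustration, while the cipher and the literal passphrase (asdffdsa) mirror the ones hard-coded in the script.

```shell
# Sketch: create the encrypted password file the script expects.
# MIG_HOME stands in for file_directory_location (an assumption).
MIG_HOME=${MIG_HOME:-/tmp/MIGRATION}
mkdir -p "$MIG_HOME/scripts"

# Encrypt with the same cipher/passphrase the script uses to decrypt.
printf '%s\n' "MyAppsPassword" \
  | openssl enc -aes-128-cbc -a -salt -pass pass:asdffdsa \
  > "$MIG_HOME/scripts/.watchwrd"
chmod 600 "$MIG_HOME/scripts/.watchwrd"

# The script then recovers the password exactly as in its watw= line:
pw=$(cat "$MIG_HOME/scripts/.watchwrd")
watw=$(echo "$pw" | openssl enc -aes-128-cbc -a -d -salt -pass pass:asdffdsa)
```

Note that a passphrase embedded in the script only obscures the password; the restrictive file permissions on .watchwrd do the real protection work.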

Script to migrate rdf files to the server

#Script to migrate rdf files
. <your_environment_file>   # source your environment file here

scr_log=file_directory_location/MIGRATION/logs/reports_migr_op.log;export scr_log
find_log=file_directory_location/MIGRATION/logs/report_find.log;export find_log
find_log2=file_directory_location/MIGRATION/logs/report_find2.log;export find_log2
usr_nme=`whoami`; export usr_nme
hstnme=`hostname`; export hstnme
dateis=`date +"%d"/"%m"/"%Y"_"%H":"%M":"%S"`; export dateis

#echo $usr_nme
#echo $hstnme
name_conv=`echo $usr_nme|tr 'a-z' 'A-Z'`;export name_conv
#echo "User Name is : $name_conv"
echo "User $name_conv has logged in to $hstnme at $dateis">> $scr_log

echo "Enter the file name : \c"
read fle_name
if [ "$fle_name" = "" ]; then
echo "You entered NULL Value for filename, please enter a valid file name. Exiting from script !!!!"
echo "User entered NULL Value for filename, exiting from script !!!! - Date: $dateis">> $scr_log
echo "\n">> $scr_log
exit;
else
echo "File name you entered is $fle_name"
echo "User $name_conv has typed the filename $fle_name">> $scr_log
#echo "\n">> $scr_log
cd file_directory_location/MIGRATION/REPORTS/
echo "Finding file $fle_name from file_directory_location/MIGRATION/REPORTS - Date: $dateis">> $scr_log
find . -name "$fle_name"> $find_log
F15M=`cut -d "/" -f2 $find_log`;export F15M
F16M=`cat $find_log|wc -l`;export F16M
if [ "$F16M" -gt 0 ]; then
echo "File $fle_name, found in file_directory_location/MIGRATION/REPORTS - Date: $dateis, file exists !!!!">> $scr_log
echo "Finding file $fle_name from "$"XBOL_TOP/reports/US - Date: $dateis">> $scr_log
find $XBOL_TOP/reports/US -name "$fle_name"> $find_log2
F17M=`cut -d "/" -f12 $find_log2`;export F17M
#echo "File name ******* $F17M"
F18M=`cat $find_log2|wc -l`;export F18M
if [ $F18M -gt 0 ]; then
echo "File $fle_name, found in "$"XBOL_TOP/reports/US - Date: $dateis, file exists !!!! in XBOL_TOP">> $scr_log
cp $XBOL_TOP/reports/US/$fle_name $XBOL_TOP/reports/US/"$fle_name"_`date +%d%b%y_%H%M%S`
echo "Backing up $fle_name in "$"XBOL_TOP/reports/US">> $scr_log
mv $fle_name $XBOL_TOP/reports/US
echo "File $fle_name has been moved to "$"XBOL_TOP/reports/US Date - $dateis"
echo "File $fle_name has been moved to "$"XBOL_TOP/reports/US Date - $dateis">> $scr_log
echo "\n">> $scr_log
else
echo "This file is not found in "$"XBOL_TOP/reports/US, please confirm if this is a new file to be migrated (Y/N) :\c"
echo "This file is not found in "$"XBOL_TOP/reports/US, might be a new file. Waiting for User confirmation(Y/N)">> $scr_log
read confim
if [ "$confim" = "Y" ]; then
echo "User entered $confim Date - $dateis"
echo "User entered $confim, migrating a new file to XBOL_TOP Date - $dateis">> $scr_log
mv $fle_name $XBOL_TOP/reports/US
echo "File $fle_name has been moved to "$"XBOL_TOP/reports/US Date - $dateis">> $scr_log
echo "\n">> $scr_log
echo "File $fle_name has been moved to "$"XBOL_TOP/reports/US"
else
echo "User entered N (please review the file name; the file you entered does not exist in XBOL_TOP), exiting from script Date - $dateis !!!!"
echo "User entered N (the file $fle_name does not exist in XBOL_TOP, might be a new file), exiting from script Date - $dateis !!!!">> $scr_log
echo "\n">> $scr_log
exit;
fi
fi
else
echo "The file $fle_name does not exist in file_directory_location/MIGRATION/REPORTS - Date: $dateis, please verify the file name"
echo "The file $fle_name does not exist in file_directory_location/MIGRATION/REPORTS - Date: $dateis">> $scr_log
echo "\n">> $scr_log
fi
fi

API to assign responsibilities of an application user to another user

declare

res_user_name varchar2(100);
res_app_sn    varchar2(200);
res_respkey   varchar2(200);
res_sgkey     varchar2(200);
res_desc      varchar2(200);
res_name      varchar2(200);

cursor usrname is select user_name from fnd_user where user_name in ('SAMPLE3'); ------- Target User

cursor respname is
SELECT fa.application_short_name,fr.responsibility_key,frg.security_group_key,frt.description,frt.responsibility_name
FROM apps.fnd_responsibility fr,apps.fnd_application fa,apps.fnd_security_groups frg,apps.fnd_responsibility_tl frt
WHERE fr.application_id = fa.application_id
AND    fr.data_group_id = frg.security_group_id
AND    fr.responsibility_id = frt.responsibility_id
AND    frt.LANGUAGE = 'US'
AND    frt.responsibility_name in
(SELECT frtl.responsibility_name
FROM apps.fnd_user_resp_groups_direct furd, apps.fnd_responsibility_tl frtl
WHERE furd.responsibility_id = frtl.responsibility_id
AND furd.user_id IN ( SELECT user_id FROM apps.fnd_user WHERE user_name='SAMPLE2' ) -------- Source User
AND (furd.end_date is null)
and frtl.LANGUAGE = 'US');

begin

open usrname;
loop
fetch usrname into res_user_name;
exit when usrname%notfound;

open respname;
loop
fetch respname into res_app_sn,res_respkey,res_sgkey,res_desc,res_name;
exit when respname%notfound;
fnd_user_pkg.addresp (username       => res_user_name,
                      resp_app       => res_app_sn,
                      resp_key       => res_respkey,
                      security_group => res_sgkey,
                      description    => res_desc,
                      start_date     => SYSDATE,
                      end_date       => NULL
                     );
commit;
end loop;
close respname;

end loop;
close usrname;
end;
/
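After the block runs, the copy can be spot-checked from the shell. This is a sketch: the APPS_PWD environment variable is an assumption, and the query simply re-reads the same tables the cursor above joins, this time for the target user.

```shell
# Sketch: list the responsibilities now assigned directly to SAMPLE3.
SQL=$(cat <<'EOF'
SELECT frt.responsibility_name
FROM   apps.fnd_user_resp_groups_direct furd,
       apps.fnd_responsibility_tl       frt
WHERE  furd.responsibility_id = frt.responsibility_id
AND    frt.language = 'US'
AND    furd.user_id = (SELECT user_id FROM apps.fnd_user
                       WHERE user_name = 'SAMPLE3');
EOF
)

# Run it only where sqlplus is available (APPS_PWD is an assumption).
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQL" | sqlplus -s apps/"$APPS_PWD"
fi
```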

Increase the JVM heap memory in R12.2


To increase the JVM heap memory in R12.2, run the following script:

Change the JVM heap from 1 GB to 4 GB:

 perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl ebs-set-managedsrvproperty \
-contextfile=<CONTEXT_FILE> -managedsrvname=<MANAGED SERVER NAME> \
-serverstartargs="<COMPLETE LIST OF JVM ARGUMENTS>"

managedsrvname = name of the managed server (for example, oacore_server1)

serverstartargs="<COMPLETE LIST OF JVM ARGUMENTS>"

You can take the JVM arguments from the following location:

To change the heap size, log in to the WebLogic console and navigate to:
Home > Summary of Servers > oacore_server1 > Configuration (tab) > Server Start (tab) > Arguments


Argument:
d64 -Xms1024m -Xmx1024m -XX:CompileThreshold=8000 -XX:PermSize=128m -XX:MaxPermSize=256m -Djava.security.policy=/RLFSIT/apps/R12apps/fs1/FMW_Home/wlserver_10.3/server/lib/weblogic.policy -Xverify:none -da -.................................................................................................................................................................../RLFSIT/apps/R12apps/fs1/EBSapps/appl/pon/12.0.0/bin64:/RLFSIT/apps/R12apps/fs1/EBSapps/appl/sht/12.0.0/lib:null


to

d64 -Xms4096m -Xmx4096m -XX:CompileThreshold=8000 -XX:PermSize=128m -XX:MaxPermSize=256m -Djava.security.policy=/RLFSIT/apps/R12apps/fs1/FMW_Home/wlserver_10.3/server/lib/weblogic.policy -Xverify:none -da -.................................................................................................................................................................../RLFSIT/apps/R12apps/fs1/EBSapps/appl/pon/12.0.0/bin64:/RLFSIT/apps/R12apps/fs1/EBSapps/appl/sht/12.0.0/lib:null


Usage:
-------
perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl ebs-set-managedsrvproperty \
-contextfile=<CONTEXT_FILE> -managedsrvname=oacore_server1 \
-serverstartargs="d64 -Xms4096m -Xmx4096m -XX:CompileThreshold=8000 -XX:PermSize=128m -XX:MaxPermSize=256m -Djava.security.policy=/RLFSIT/apps/R12apps/fs1/FMW_Home/wlserver_10.3/server/lib/weblogic.policy -Xverify:none -da -.................................................................................................................................................................../RLFSIT/apps/R12apps/fs1/EBSapps/appl/pon/12.0.0/bin64:/RLFSIT/apps/R12apps/fs1/EBSapps/appl/sht/12.0.0/lib:null"

Adop Finalize phase has failed with Data dictionary corruption - timestamp mismatch.


While running the finalize phase of an adop cycle, we encountered the following error: Data dictionary corruption - timestamp mismatch.

 Error message :
---------------------

[UNEXPECTED]Data dictionary corrupted:
[UNEXPECTED]Data dictionary corruption - timestamp mismatch
APPS GME_PENDING_PRODUCT_LOTS_DBL V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS BMT_SUPPLIER_BANKS_PK V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS EAM_ASSET_MOVE_PUB V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS PSP_ER_WORKFLOW V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS JAI_CMN_ST_FORMS_PKG V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS PAY_DK_PAYMENT_PROCESS_PKG V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS PA_ENDECA_INTEGRATION V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS MSD_DEM_CTO V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
APPS PAY_CA_RL1_MAG V_20170621_1401 APPS APPS V_20170621_1401 P Status: 5
[UNEXPECTED] Follow the instructions in the "Fix data dictionary corruption"


Solution:
------------
Fix data dictionary or time stamp mismatch

Step 1
-------
1. Run the file $AD_TOP/patch/115/sql/adzddtsfix.sql
2. Run the $AD_TOP/sql/ADZDDBCC.sql script to identify whether the data dictionary corruption is still present.
a. If no corruption is found, proceed with the upgrade.
b. If corruption is still present, follow Step 2 below.
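The two runs in Step 1 can be sketched as a short shell snippet. The APPS_PWD variable and the choice of the APPS account are assumptions for illustration; Doc ID 1531121.1 states which account each script must actually be run as.

```shell
# Sketch of Step 1: run the timestamp fix, then re-check for corruption.
FIX_SQL="$AD_TOP/patch/115/sql/adzddtsfix.sql"
CHECK_SQL="$AD_TOP/sql/ADZDDBCC.sql"

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus apps/"$APPS_PWD" @"$FIX_SQL"
  sqlplus apps/"$APPS_PWD" @"$CHECK_SQL"
fi
```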

Step 2
--------

Fix Logical data dictionary corruption (missing-parent)

Follow this step only when logical data-dictionary corruption is present.
1. Connect to database as "SYSDBA".
2. Run the $AD_TOP/patch/115/sql/adzddmpfix.sql
3. Run the $AD_TOP/sql/ADZDDBCC.sql script again to identify whether the logical data dictionary corruption is still present.
a. If no corruption is found, proceed with the upgrade or adop patching-cycle.
b. If corruption is still present, contact Oracle Support and log a bug.

Step 3
------


Follow this step only when data dictionary corruption is still present after following Step 1 above.
1. On database node, go to $ORACLE_HOME/rdbms/admin directory.
2. Run utlirp.sql
3. Run utlrp.sql
4. Run the $AD_TOP/sql/ADZDDBCC.sql script again to identify whether the data dictionary corruption is still present.
a. If no corruption is found, proceed with the upgrade.
b. If corruption is still present, contact Oracle Support and log a bug.

Ref: Doc ID 1531121.1





ADadmin fails when an ADOP patching cycle is in progress.


When we try to use adadmin while an adop patching cycle is in progress, adadmin fails with the error below, as it will not allow the user to edit or modify any application-related objects.

Error:
----------
AD Administration error:

ERROR: Patching cycle in progress - run this utility from patch file system.
You can only run it from run file system when not patching.

Solution:
--------------

To resolve the issue, test the following steps in a development instance first, then apply them accordingly:

1. Complete or abort any open patching cycles.

2. Any remaining database editions (for example 'EDITION1') will need to be addressed by the DBA.

3. Run ADZDSHOWED.sql to confirm the editions are corrected.

4. Run adadmin and confirm it proceeds without error.


Note: While a patching cycle is in progress, adadmin can be run only from the patch file system.
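Step 3 above mentions ADZDSHOWED.sql; a minimal sketch of running it follows. The script's location under $AD_TOP/sql and the APPS_PWD variable are assumptions; adjust both to your instance.

```shell
# Sketch: list database editions to confirm only the expected ones remain.
ED_SQL="$AD_TOP/sql/ADZDSHOWED.sql"

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  sqlplus apps/"$APPS_PWD" @"$ED_SQL"
fi
```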

How to check invalid objects in Run & Patch Edition in R12.2 


To determine the Run/Patch edition in R12.2

set linesize 1024
col OLD for a10
col run for a20
col patch for a30
select apps.ad_zd.get_edition('OLD') "OLD",
apps.ad_zd.get_edition('RUN') "RUN",
apps.ad_zd.get_edition('PATCH') "PATCH"
from dual;

OLD        RUN                  PATCH
---------- -------------------- ------------------------------
           ORA$BASE             V_20180322_1648

Check invalid objects in Run Edition

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       273

Check invalid objects in Patch Edition

ALTER SESSION SET EDITION = <patch_edition_name>;   -- the PATCH value returned by the first query

SQL> ALTER SESSION SET EDITION = V_20180322_1648;

Session altered.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       274

SQL>
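The two checks above can be combined into one scripted session. This is a sketch: the patch edition name is the example value from the query output above and must be replaced with the one your own first query returns, and connecting as SYSDBA is an assumption.

```shell
# Sketch: count invalid objects in the run edition, then the patch edition.
SQL=$(cat <<'EOF'
select 'RUN' ed, count(*) invalid_count from dba_objects where status = 'INVALID';
alter session set edition = V_20180322_1648;
select 'PATCH' ed, count(*) invalid_count from dba_objects where status = 'INVALID';
exit
EOF
)

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQL" | sqlplus -s / as sysdba
fi
```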


Concurrent Processing - Command-Line Utility
   
The command-line utility cpadmin consolidates various existing utilities for concurrent processing into a single menu-based utility that is run using a shell script, cpadmin.sh.

This adadmin-style utility can be used for multiple tasks, including:

Manager Status
This option is used to view the status of concurrent managers and services. 
Use this option to display running managers (with or without process IDs) or display all managers.
The same status information is shown in the Administer Concurrent Managers form and the OAM Concurrent Managers page.

Clean CP Tables
Use this option to clean up the concurrent processing tables. This utility replaces cmclean.sql. Use this option when the Internal Concurrent Manager (ICM) fails to start due to corrupted/conflicted tables.  Note the following actions when choosing this option:

    Managers must be stopped first (the utility will verify this).
    Clean up inconsistencies in manager tables; remove corruption.
    Reset manager states for clean start-up.
    Clean and reset Advanced Queue tables for the Output Post-Processor and FNDSM Service Manager.
    Reset request conflicts for the Conflict Resolution Manager (CRM).
    Identify and clean orphaned requests.

This option is supported by Oracle for use on client systems.

Important: DO NOT USE the cmclean.sql script.

Set Manager Diagnostics
Turn diagnostics on or off for individual managers with this option. You can use this option to turn diagnostics on/off for specific managers without bouncing all services.
This option is available for:

    ICM
    CRM
    Output Post-Processor
    Request Processing Managers
    Transaction Managers

Each option will display the current diagnostic status (ON or OFF) of the running managers/services and allow you to change the status.

Manager Control

Use this option to send a request such as start, stop, or verify to an individual manager.

Use to send a control request to a manager or service.

This option does the following:

    Displays current status of all managers and processes.
    Once a manager or service is chosen, offers valid control options for that specific choice.
    Valid options for managers: activate, deactivate, verify, restart, abort, shut down.
    Valid options for services also include suspend and resume. A service can be programmed to respond to each option; if it is not, the service will not respond.

Rebuild Manager Views

Use when Manager Views must be rebuilt.

This option rebuilds the FND_CONCURRENT_WORKER_REQUESTS and FND_CONCURRENT_CRM_REQUESTS views with the following steps:

    Managers must be stopped first. The utility verifies that these are stopped.
    Rebuilds FND_CONCURRENT_WORKER_REQUESTS.
    Rebuilds FND_CONCURRENT_CRM_REQUESTS.

Running this option is the same action as running FNDCPBWV.

Move Request Files

Change request log and output file locations with this option.

Use to update the concurrent processing tables for changing the following values for request LOG file, OUT file, or BOTH:

    Individual requests: fully qualified file name or node name
    Range of requests: directory name or node name
    Range of requests can be selected by minimum/maximum date or minimum/maximum request_id

Important: The cpadmin utility changes only the concurrent processing database table values to support movement. The files must be manually moved by an administrator.

Analyze Request

Use this option to analyze a concurrent request.

Use when analyzing a request for any reason. This is non-destructive.

Managers need not be shut down for this option.

This option does the following:

    Checks the manager's status.
    Analyzes the request's status.
    Provides a detailed report on concurrent program.
    Gives a detailed report on request's current status.


Running the cpadmin utility:

    Set the environment.
    From any directory, start cpadmin with this command:
    $ cpadmin.sh
    The utility starts and prompts you for the APPS password (required).
    Respond to prompts. Supply the information requested by cpadmin prompts. Prompts unique to an option are described with the option. When you complete the prompts, the Main Menu appears.
    Choose one of the tasks listed above.
    Exit the cpadmin utility.
   
   
Here is the menu when we run cpadmin   

[applmgr@ebsuite:bin:] cpadmin.sh


                     Copyright (c) 2015 Oracle Corporation
                        Redwood Shores, California, USA

                   Oracle E-Business Suite CP Administration
                             $Revision: 120.0.12020000.5 $

Logging to file /u01/applebs/EBSUATR122//fs_ne/inst/EBSUAT_ebsuite/logs/appl/conc/log/cpadmin.040320181433

Enter the password for your 'APPS' ORACLE schema:
Connecting to database...


         CP Administration Main Menu
   --------------------------------------------------

   1.    Administer Concurrent Managers

   2.    Administer Concurrent Requests


   E.    (E)xit CP Administration


Enter your choice: [E] :  1


         Administer Concurrent Managers
   --------------------------------------------------

   1.    Manager Status
          - Show status of all managers

   2.    Clean concurrent processing tables
          - Ensure concurrent processing tables are cleaned and reset for ICM startup

   3.    Set Manager Diagnostics
          - Turn diagnostics on/off for specific managers

   4.    Control a manager or service
          - Send a control request to a manager or service

   5.    Rebuild Concurrent Manager Views
          - Rebuild Views for Fnd_Concurrent_Worker_Requests and Fnd_Concurrent_CRM_Requests


   R.    (R)eturn to previous menu

   E.    (E)xit CP Administration


Enter your choice: [R] :  R



         CP Administration Main Menu
   --------------------------------------------------

   1.    Administer Concurrent Managers

   2.    Administer Concurrent Requests


   E.    (E)xit CP Administration


Enter your choice: [E] :  2


         Administer Concurrent Requests
   --------------------------------------------------

   1.    Analyze Concurrent Requests
          - Analyze Concurrent Requests and print out details

   2.    Move Request Files
          - Commands to change request log and output file locations

   3.    Choose Request Log and Out File Directory Management Option
          - Commands to choose request log and out file directory management option


   R.    (R)eturn to previous menu

   E.    (E)xit CP Administration


Enter your choice: [R] :



SQL*Loader-128: unable to begin a session , ORA-01017: invalid username/password; logon denied

Error:
====

+-----------------------------
Starting concurrent program execution...
-----------------------------
Arguments
------------
/test/TEST/apps/apps_st/appl/ce/12.0.0/CMW_HO_CNB_EXNHO_UACA0000001.csv

SQL*Loader-128: unable to begin a session
ORA-01017: invalid username/password; logon denied

Program exited with status 1
Concurrent Manager encountered an error while running SQL*Loader for your concurrent request 33142095.

Review your concurrent request log file for more detailed information.

Solution:
======

1. Bounce the concurrent managers and, if possible, bounce the application tier and the database as well, as a session may have been left blocked.

If the issue still persists, contact Oracle Support.
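Bouncing the concurrent managers can be sketched with the standard adcmctl.sh control script. ADMIN_SCRIPTS_HOME is the usual R12 location for it, while APPS_PWD and the sleep interval are assumptions; adjust for your instance.

```shell
# Sketch: stop and restart the concurrent managers.
CMCTL="$ADMIN_SCRIPTS_HOME/adcmctl.sh"

# Run only where the control script actually exists.
if [ -x "$CMCTL" ]; then
  "$CMCTL" stop apps/"$APPS_PWD"
  sleep 60                         # give the managers time to exit cleanly
  "$CMCTL" start apps/"$APPS_PWD"
fi
```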

ORA-10873: file 1 needs to be either taken out of backup mode or media recovered

Scenario :

The database went down due to a power fluctuation while a hot backup was running. When we started the database, we faced the following issue.

Error:

ORA-10873: file 1 needs to be either taken out of backup mode or media recovered

Cause:

Database in backup mode while it went down.

Solution:

Step 1 : sqlplus '/as sysdba'
Step 2 : startup mount
Step 3 : Check which files are in backup mode:
            select * from v$backup;
Step 4 : Take all of the data files out of hot backup mode:
            alter database end backup;
Step 5 : alter database open;
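The five steps can be collapsed into one sqlplus session, sketched below; running it on the database node as the oracle OS user is an assumption about your setup.

```shell
# Sketch: end backup mode and open the database in a single session.
SQL=$(cat <<'EOF'
startup mount
select * from v$backup;
alter database end backup;
alter database open;
exit
EOF
)

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQL" | sqlplus -s / as sysdba
fi
```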

ORA-00845: MEMORY_TARGET not supported on this system

SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system

Reason:
/dev/shm is also known as tmpfs, a temporary file system that keeps files in virtual memory to speed up several processes.
Solution:

Increase the size of /dev/shm

To check the size of /dev/shm
# df -h
Filesystem    Size   Used  Avail  Use%  Mounted on
/dev/sda3     7.6G   4.4G  2.9G  61%    /
tmpfs         504M   76K   504M  1%     /dev/shm
/dev/sda1     194M   25M   160M   14%   /boot

To increase the size
# mount -o remount,size=3G /dev/shm
Verify the size

# df -h
Filesystem   Size   Used   Avail  Use%  Mounted on
/dev/sda3   7.6G    4.4G   2.9G   61%   /
tmpfs       3G      1007M  2.1G   33%   /dev/shm
/dev/sda1   194M    25M    160M   14%   /boot
To make the change permanent across reboots, update your fstab:

# vi /etc/fstab
tmpfs  /dev/shm  tmpfs  defaults,size=3G  0 0
Apply the updated fstab:

# mount -a

ORA-06512: at "APPS.FND_CP_OPP_IPC", line 85

ISSUE: XML reports complete in warning

This is a freshly cloned instance, and all managers and OPP services were up and running.
However, users were reporting XML reports completing in error.

A check of the OPP logs revealed the following error:

[OPPServiceThread0] java.sql.SQLException: ORA-24067: exceeded maximum number of subscribers for queue APPLSYS.FND_CP_GSM_OPP_AQ
ORA-06512: at "APPS.FND_CP_OPP_IPC", line 85
ORA-06512: at line 1

    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191)
    at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:950)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1225)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3488)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3857)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
    at oracle.apps.fnd.cp.opp.OPPAQMonitor.initAQ(OPPAQMonitor.java:558)
    at oracle.apps.fnd.cp.opp.OPPAQMonitor.init(OPPAQMonitor.java:534)
    at oracle.apps.fnd.cp.opp.OPPAQMonitor.initialize(OPPAQMonitor.java:89)
    at oracle.apps.fnd.cp.opp.OPPServiceThread.init(OPPServiceThread.java:94)
    at oracle.apps.fnd.cp.gsf.BaseServiceThread.run(BaseServiceThread.java:135)


SOLUTION:
SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

  COUNT(*)
----------
   1039973

Take a back up of the table :

SQL> create table applsys.FND_CP_GSM_OPP_AQTBL_bkp2 as select * from applsys.FND_CP_GSM_OPP_AQTBL;

Table created.

SQL> show user
USER is "APPS"
SQL> conn applsys/devapps
Connected.
SQL> show user
USER is "APPLSYS"


SQL> exec dbms_aqadm.purge_queue_table('FND_CP_GSM_OPP_AQTBL', null, null);


PL/SQL procedure successfully completed.

SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

  COUNT(*)
----------
         0

Connect as APPS

sqlplus apps/testapps

SQL*Plus: Release 11.2.0.4.0 Production on Thu Feb 5 11:14:34 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL>@$FND_TOP/patch/115/sql/afopp002.sql
Enter value for 1: APPLSYS
Enter value for 2: devapps
Connected.

PL/SQL procedure successfully completed.

SQL> exec fnd_cp_opp_ipc.remove_all_subscribers();
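The backup-and-purge portion of the fix above can be scripted as below. This is a sketch: the APPLSYS_PWD variable replaces the interactively typed password from the transcript, and the backup table name is the same example name used above.

```shell
# Sketch: back up the OPP queue table, purge it, and re-count the rows.
SQL=$(cat <<'EOF'
create table applsys.FND_CP_GSM_OPP_AQTBL_bkp2 as
  select * from applsys.FND_CP_GSM_OPP_AQTBL;
exec dbms_aqadm.purge_queue_table('FND_CP_GSM_OPP_AQTBL', null, null)
select count(*) from applsys.FND_CP_GSM_OPP_AQTBL;
exit
EOF
)

# Run only where sqlplus is available.
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQL" | sqlplus -s applsys/"$APPLSYS_PWD"
fi
```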

Oracle RAC : ASM instance startup failing with "terminating the instance due to error 482" in alert log

The ASM instance alert log shows the error below while starting the ASM instance on the second node:

Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x1994] [PC:0x43EFF99, kjbmprlst()+1369] [flags: 0x0, count: 1]
ORA-07445: exception encountered: core dump [kjbmprlst()+1369] [SIGSEGV] [ADDR:0x1994] [PC:0x43EFF99] [Address not mapped to object] []
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20180131051633], requested by (instance=2, osid=39620 (LMD0)), summary=[incident=224081].
PMON (ospid: 39577): terminating the instance due to error 482


Fix :
The cluster_database parameter had been changed to FALSE, which was causing the ASM startup failures.

Run the following from the running cluster node, making the change for the failing ASM instance:

SQL> alter system set cluster_database=TRUE scope=spfile sid='+ASM2';

System altered.

This should start your ASM instance now. I hope this resolves your issue.

Oracle : Spatial is INVALID In DBA_REGISTRY after applying patches



Issue :
COMP_ID        COMP_NAME                            VERSION      STATUS
-------------- ------------------------------------ ------------ --------
AMD            OLAP Catalog                         11.2.0.4.0   VALID
SDO            Spatial                              11.2.0.4.0   INVALID  <--- Invalid
ORDIM          Oracle Multimedia                    11.2.0.4.0   VALID
XDB            Oracle XML Database                  11.2.0.4.0   VALID
CONTEXT        Oracle Text                          11.2.0.4.0   VALID
EXF            Oracle Expression Filter             11.2.0.4.0   VALID
RUL            Oracle Rules Manager                 11.2.0.4.0   VALID
CATALOG        Oracle Database Catalog Views        11.2.0.4.0   VALID
CATPROC        Oracle Database Packages and Types   11.2.0.4.0   VALID
JAVAVM         JServer JAVA Virtual Machine         11.2.0.4.0   VALID
XML            Oracle XDK                           11.2.0.4.0   VALID
CATJAVA        Oracle Database Java Packages        11.2.0.4.0   VALID
APS            OLAP Analytic Workspace              11.2.0.4.0   VALID
XOQ            Oracle OLAP API                      11.2.0.4.0   VALID
RAC            Oracle Real Application Clusters     11.2.0.4.0   VALID

15 rows selected.

Solution :

connect / as sysdba
alter session set current_schema="MDSYS";
@?/md/admin/sdogr.sql
@?/md/admin/prvtgr.plb

alter session set current_schema="SYS";
set serveroutput on
exec utl_recomp.recomp_serial('MDSYS');
exec sys.VALIDATE_SDO();
select comp_name, version, status from dba_registry where comp_id='SDO';
COMP_NAME  VERSION                 STATUS
---------  ----------------------  ------------------
Spatial    11.2.0.4.0              VALID

Hope this solution works for you.

Configure the Streams pool for integrated replication on Goldengate


When using integrated Replicat, the Streams pool must be configured.
With non-integrated Replicat, the Streams pool is not necessary.

The size requirement of the Streams pool for integrated Replicat is governed by a single parameter, MAX_SGA_SIZE. MAX_SGA_SIZE defaults to INFINITE, which allows the Replicat process to use as much of the Streams pool as possible. Oracle does not recommend setting the MAX_SGA_SIZE parameter.


Set the STREAMS_POOL_SIZE initialization parameter for the database to the following value:
(1GB * # of integrated Replicats) + 25% head room
For example, on a system with one integrated Replicat process the calculation would be as follows:
(1GB * 1) * 1.25 = 1.25GB STREAMS_POOL_SIZE = 1280M

For example, on a system with two integrated Replicat processes the calculation would be as follows:
(1GB * 2) * 1.25 = 2.5GB STREAMS_POOL_SIZE = 2560M
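The sizing rule above can be sketched as a small helper; a minimal illustration of the arithmetic (the function name and defaults are my own, the 1 GB-per-Replicat and 25% head-room figures come from the text above):

```python
# Sketch of the Streams pool sizing rule from the text:
# (1 GB per integrated Replicat) + 25% head room, expressed in megabytes.
def streams_pool_size_mb(num_integrated_replicats, per_replicat_gb=1, headroom=0.25):
    return int(num_integrated_replicats * per_replicat_gb * 1024 * (1 + headroom))

print(streams_pool_size_mb(1))  # 1280 -> STREAMS_POOL_SIZE = 1280M
print(streams_pool_size_mb(2))  # 2560 -> STREAMS_POOL_SIZE = 2560M
```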



How to create Database File System DBFS

Creating a File System
Create a tablespace to hold the file system.

CONN / AS SYSDBA

CREATE TABLESPACE dbfs
  DATAFILE '/u01/app/oracle/oradata/DB11G/dbfs01.dbf'
  SIZE 1M AUTOEXTEND ON NEXT 1M;
Create a user, grant DBFS_ROLE to the user and make sure it has a quota on the tablespace. Trying to create a file system from the SYS user fails, so it must be done via another user.

CONN / AS SYSDBA

CREATE USER user1 IDENTIFIED BY user1
  DEFAULT TABLESPACE dbfs QUOTA UNLIMITED ON dbfs;

GRANT CREATE SESSION, RESOURCE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, DBFS_ROLE TO user1;
Create the file system in the tablespace by running the "dbfs_create_filesystem.sql" script as the new user. The script accepts two parameters identifying the tablespace and the file system name.

cd $ORACLE_HOME/rdbms/admin
sqlplus user1/user1

SQL> @dbfs_create_filesystem.sql dbfs staging_area
The script creates a partitioned file system. Although Oracle considers this the best option from a performance and scalability perspective, it has some drawbacks (for example, free space cannot be shared between partitions).

FUSE Installation
In order to mount the DBFS we need to install the "Filesystem in Userspace" (FUSE) software. If you are not planning to mount the DBFS, or you are running on a non-Linux platform, this section is unnecessary. The FUSE software can be installed manually, from the OEL media, or via Oracle's public yum server. If possible, use the yum installation.

Yum FUSE Installation
Configure the server to point to Oracle's public yum repository. The instructions for this are available at "http://public-yum.oracle.com".

Next, install the kernel development package and the FUSE software. The kernel development package may already be present, in which case you will see a "Nothing to do" message.

# yum install kernel-devel fuse fuse-libs

Mounting a File System
The dbfs_client tool is used to mount file systems on Linux servers. The usage is displayed if you call it without any parameters.

[oracle@source.doyensys.com admin]$
First we need to create a mount point with the necessary privileges as the "root" user.

# mkdir /mnt/dbfs
# chown oracle:oinstall /mnt/dbfs
Add a new library path and create symbolic links to the necessary libraries in the directory pointed to by the new library path. Depending on your installation the "libfuse.so.2" library may be in an alternative location.

# 12cR2

# echo "/usr/local/lib">> /etc/ld.so.conf.d/usr_local_lib.conf
# export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/db_1
# ln -s $ORACLE_HOME/lib/libclntsh.so.12.1 /usr/local/lib/libclntsh.so.12.1
# ln -s $ORACLE_HOME/lib/libnnz12.so /usr/local/lib/libnnz12.so
# ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
# ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so
# ldconfig
Edit the "/etc/fuse.conf" file, uncommenting the "user_allow_other" option. The contents should look like this.

# mount_max = 1000
user_allow_other
Loosen the permissions on the fusermount command.

# chmod +x /usr/bin/fusermount
Edit the file "/etc/abrt/abrt-action-save-package-data.conf" setting the following parameter.

ProcessUnpackaged = yes
Reboot the server.

# reboot
Make sure the "/usr/local/lib" directory is referenced in the LD_LIBRARY_PATH environment variable. You may want to add something like this to the profile for the "oracle" user, or any environment setup scripts.

$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib


The file system we've just created is mounted using the following command, run as the "oracle" OS user.

$ # Connection prompts for password and holds session.
$ dbfs_client user1@DB11G /mnt/dbfs

Increasing Performance by Splitting Replication Loads on Goldengate

Steps for increasing performance by splitting the replication load:

On Source

[oracle@ggsource.doyensys.com sqlscripts]$ echo $ORACLE_SID
ggsource

[oracle@ggsource.doyensys.com sqlscripts]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 17 04:25:55 2018
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> connect user1/user1
Connected.
SQL> @range_split.sql
DROP TABLE range_split
*
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.

SQL> @populate_range_split.sql
Procedure created.

GGSCI (ggsource.doyensys.com) 2> dblogin useridalias ggsource
Successfully logged into database.

GGSCI (ggsource.doyensys.com) 3> add trandata user1.range_split
Logging of supplemental redo data enabled for table user1.RANGE_SPLIT.
TRANDATA for scheduling columns has been added on table 'user1.RANGE_SPLIT'.

GGSCI (ggsource.doyensys.com) 4> info trandata user1.r*
Logging of supplemental redo log data is enabled for table user1.RANGE_SPLIT.
Columns supplementally logged for table user1.RANGE_SPLIT: ROW_ID.

GGSCI (ggsource.doyensys.com) 1> edit params defsrc
DefsFile /u01/app/source/dirdef/rangesplit.def, Purge
UserIDAlias ggsource
Table user1.RANGE_SPLIT;

[oracle@ggsource.doyensys.com source]$ ./defgen paramfile /u01/app/source/dirprm/defsrc.prm
***********************************************************************
** Running with the following parameters **
***********************************************************************
DefsFile /u01/app/source/dirdef/rangesplit.def, Purge
UserIDAlias ggsource
Table user1.RANGE_SPLIT;
Retrieving definition for user1.RANGE_SPLIT.
Definitions generated for 1 table in /u01/app/source/dirdef/rangesplit.def.

[oracle@ggsource.doyensys.com dirdef]$ cp rangesplit.def /u01/app/target/dirdef/

On Target

[oracle@ggtarget.doyensys.com sqlscripts]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 17 04:46:48 2018
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> connect user1/user1
Connected.
SQL> @range_split.sql
DROP TABLE range_split
*
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.
SQL>

On Source

GGSCI (ggsource.doyensys.com) 1> edit params pump1
Extract pump1
UserIDAlias ggsource
RmtHost 192.168.1.35, MgrPort 7810
RmtTrail /u01/app/target/dirdat/ea
Table user1.RANGE_SPLIT, Filter (@RANGE (1, 3));

GGSCI (ggsource.doyensys.com) 2> edit params pump2
Extract pump2
UserIDAlias ggsource
RmtHost 192.168.1.35, MgrPort 7810
RmtTrail /u01/app/target/dirdat/eb
Table user1.RANGE_SPLIT, Filter (@RANGE (2, 3));

GGSCI (ggsource.doyensys.com) 2> edit params pump3
Extract pump3
UserIDAlias ggsource
RmtHost 192.168.1.35, MgrPort 7810
RmtTrail /u01/app/target/dirdat/ec
Table user1.RANGE_SPLIT, Filter (@RANGE (3, 3));
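Conceptually, the @RANGE(n, total) filter hash-partitions rows so that each row is processed by exactly one pump. A rough Python sketch of the idea (the hash function here is illustrative only, not GoldenGate's internal algorithm):

```python
import zlib

# Illustrative only: assign each row's key to one of `total_ranges` buckets,
# mimicking what Filter (@RANGE (n, total)) does for the pump handling bucket n.
def range_bucket(key, total_ranges):
    return (zlib.crc32(str(key).encode()) % total_ranges) + 1

# Simulate 10,000 rows split across three pumps.
counts = {1: 0, 2: 0, 3: 0}
for row_id in range(1, 10001):
    counts[range_bucket(row_id, 3)] += 1

# Every row lands in exactly one bucket, and the distribution is roughly even,
# so pump1/pump2/pump3 each replicate about a third of the table.
print(counts)
```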

GGSCI (ggsource.doyensys.com) 4> add extract pump1, tranlog, begin now
EXTRACT added.
GGSCI (ggsource.doyensys.com) 10> add extract pump2, tranlog, begin now
EXTRACT added.
GGSCI (ggsource.doyensys.com) 11> add extract pump3, tranlog, begin now
EXTRACT added.
GGSCI (ggsource.doyensys.com) 7> add rmttrail /u01/app/target/dirdat/ea,extract pump1
RMTTRAIL added.
GGSCI (ggsource.doyensys.com) 12> add rmttrail /u01/app/target/dirdat/eb,extract pump2
RMTTRAIL added.
GGSCI (ggsource.doyensys.com) 13> add rmttrail /u01/app/target/dirdat/ec,extract pump3
RMTTRAIL added.
GGSCI (ggsource.doyensys.com) 17> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT STOPPED pump1 00:00:00 00:06:17
EXTRACT STOPPED pump2 00:00:00 00:03:50
EXTRACT STOPPED pump3 00:00:00 00:03:41

On Target

GGSCI (ggtarget.doyensys.com) 2> edit params rep1
Replicat rep1
UserIDAlias ggtarget
SourceDefs /u01/app/target/dirdef/rangesplit.def
DiscardFile /u01/app/target/dirrpt/rep1.dsc, Append
Map user1.RANGE_SPLIT, Target user1.RANGE_SPLIT;

GGSCI (ggtarget.doyensys.com) 3> edit params rep2
Replicat rep2
UserIDAlias ggtarget
SourceDefs /u01/app/target/dirdef/rangesplit.def
DiscardFile /u01/app/target/dirrpt/rep2.dsc, Append
Map user1.RANGE_SPLIT, Target user1.RANGE_SPLIT;

GGSCI (ggtarget.doyensys.com) 4> edit params rep3
Replicat rep3
UserIDAlias ggtarget
SourceDefs /u01/app/target/dirdef/rangesplit.def
DiscardFile /u01/app/target/dirrpt/rep3.dsc, Append
Map user1.RANGE_SPLIT, Target user1.RANGE_SPLIT;

GGSCI (ggtarget.doyensys.com) 7> add replicat rep1,exttrail /u01/app/target/dirdat/ea, checkpointtable ggate.chkptab
REPLICAT added.
GGSCI (ggtarget.doyensys.com) 9> add replicat rep2,exttrail /u01/app/target/dirdat/eb, checkpointtable ggate.chkptab
REPLICAT added.
GGSCI (ggtarget.doyensys.com) 10> add replicat rep3,exttrail /u01/app/target/dirdat/ec, checkpointtable ggate.chkptab
REPLICAT added.

On Source

GGSCI (ggsource.doyensys.com) 2> start pu*
Sending START request to MANAGER ...
EXTRACT pump1 starting
Sending START request to MANAGER ...
EXTRACT pump2 starting
Sending START request to MANAGER ...
EXTRACT pump3 starting
GGSCI (ggsource.doyensys.com) 3> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT STOPPED EFUNCS 00:00:00 01:44:29
EXTRACT RUNNING pump1 00:00:00 01:01:21
EXTRACT RUNNING pump2 00:00:00 00:58:53
EXTRACT RUNNING pump3 00:00:00 00:58:44
EXTRACT STOPPED ETTOKEN 00:00:00 41:10:43
EXTRACT ABENDED EXT1 00:00:10 645:45:12
EXTRACT ABENDED EXT2 00:00:09 32:42:02
EXTRACT ABENDED OCCEXT 00:00:00 204:43:33
EXTRACT STOPPED PFUNCA 00:00:00 01:44:34
EXTRACT STOPPED PFUNCS 00:00:00 01:44:21

On Target

GGSCI (ggtarget.doyensys.com) 12> start rep*
Sending START request to MANAGER ...
REPLICAT rep1 starting
Sending START request to MANAGER ...
REPLICAT rep2 starting
Sending START request to MANAGER ...
REPLICAT rep3 starting

GGSCI (ggtarget.doyensys.com) 29> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
REPLICAT RUNNING rep1 00:00:00 00:00:05
REPLICAT RUNNING rep2 00:00:00 00:00:04
REPLICAT RUNNING rep3 00:00:00 00:00:02

On Source

Connect to the source database as the user1 schema and generate test data:
SQL> connect user1/user1
Connected.
SQL> exec populate_range_split(500000,1000);

GGSCI (ggsource.doyensys.com) 2> stats extract pump1
Sending STATS request to EXTRACT pump1 ...
Start of Statistics at 2018-04-17 06:15:08.
Output to /u01/app/target/dirdat/ea:
Extracting from user1.RANGE_SPLIT to user1.RANGE_SPLIT:

*** Total statistics since 2018-04-17 06:13:19 ***
Total inserts 36523.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 36523.00

GGSCI (ggsource.doyensys.com) 3> stats extract pump2
Sending STATS request to EXTRACT pump2 ...
Start of Statistics at 2018-04-17 06:15:29.
Output to /u01/app/target/dirdat/eb:
Extracting from user1.RANGE_SPLIT to user1.RANGE_SPLIT:

*** Total statistics since 2018-04-17 06:13:19 ***
Total inserts 43180.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 43180.00

GGSCI (ggsource.doyensys.com) 4> stats extract pump3
Sending STATS request to EXTRACT pump3 ...
Start of Statistics at 2018-04-17 06:15:40.
Output to /u01/app/target/dirdat/ec:
Extracting from user1.RANGE_SPLIT to user1.RANGE_SPLIT:

*** Total statistics since 2018-04-17 06:13:19 ***
Total inserts 46115.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 46115.00

On Target

Check the stats of all Replicats on the target:
GGSCI (ggtarget.doyensys.com) 40> stats replicat rep1
GGSCI (ggtarget.doyensys.com) 41> stats replicat rep2
GGSCI (ggtarget.doyensys.com) 42> stats replicat rep3

Replicat Abending with Mapping Error and Discard File Shows Missing Key Columns


Source table :
create table user1 (empid number not null PRIMARY KEY, empname varchar2(10), dept varchar2(10));

Target table :
create table user1 (empid number not null, empname varchar2(10), dept varchar2(10));

In this case the Extract will use empid as the key column. Since there is no primary or unique key defined on the target table, the Replicat will treat all columns as key columns.

If an update operation occurs on the source, the Extract will log only the changed columns and the primary key column, provided TRANDATA was enabled on the source table.

When processing the update operation, the Replicat will abend with a mapping error, and the discard file will show entries like the following:


key column empname (1) is missing
key column dept (2) is missing


This is because when there is no primary or unique key (or the PK constraint is disabled or not validated), the Replicat considers all columns to be key columns.



The best fix is to define the same primary or unique key on both the source and target tables.

However, as a workaround, we can force the Extract/Replicat to use the same columns as key columns using the KEYCOLS parameter in the TABLE or MAP statement.

MAP <schema>.<table_name>, target <schema>.<table_name>, keycols (empid);

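The failure mode can be sketched in a few lines of Python (purely illustrative of the idea of building a WHERE clause from key columns, not GoldenGate internals; the function and column names are hypothetical):

```python
# Illustrative: with no key defined on the target, the Replicat treats every
# column as a key column, but a source update record carries only the PK plus
# the changed columns, so some "key" values it needs are missing.
def build_where(key_columns, record):
    missing = [c for c in key_columns if c not in record]
    if missing:
        raise ValueError("key columns missing: " + ", ".join(missing))
    return " AND ".join(f"{c} = :{c}" for c in key_columns)

update_record = {"empid": 101, "dept": "SALES"}  # PK + changed column only

print(build_where(["empid"], update_record))  # PK or KEYCOLS defined: succeeds
try:
    build_where(["empid", "empname", "dept"], update_record)  # all columns as key
except ValueError as err:
    print(err)  # key columns missing: empname
```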


Ansible Installation on Oracle Linux 7

Step 1: Set up the EPEL repository
The Ansible package is not available in the default yum repositories, so we first enable the EPEL repository for Oracle Linux 7.
Step 2: Install Ansible using the yum command

Once the installation is complete, check the Ansible version:


Step 3: Set up key-based SSH authentication with the nodes.
Generate keys on the Ansible server and copy the public key to the nodes.

Use the ssh-copy-id command to copy the public key of the Ansible server to its nodes.




Step 4: Define the nodes, or inventory of servers, for Ansible.
The file '/etc/ansible/hosts' maintains the inventory of servers for Ansible.
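For example, a minimal /etc/ansible/hosts might group the managed nodes like this (the group and host names are hypothetical):

```ini
[test-servers]
node1.doyensys.com
node2.doyensys.com
```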

Step 5: Run commands from the Ansible server.
Check the connectivity of the Ansible nodes using the ping module.

Executing shell commands:

Example 1: Check the uptime of the Ansible nodes

Example 2: Check the kernel version of the nodes

Example 3: Add a user to the nodes

Example 4: Redirect the output of a command to a file




Ansible: Install and Configure Ansible Tower On oracle linux 7



Ansible has two components: Ansible Core and Ansible Tower. Core provides the Ansible runtime that executes playbooks (YAML files defining tasks and roles) against inventories (groups of hosts). Ansible Tower adds management, visibility, job scheduling, credential management, RBAC, and auditing/compliance.

Install Ansible Tower


Download the latest Ansible Tower release.


Configure the setup.

Ansible Tower uses an Ansible playbook to deploy itself; as such, configuration parameters (group vars) are stored in an inventory file.


Example Inventory file
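A typical single-machine setup inventory looks roughly like this (the values are placeholders and the exact variables can differ between Tower releases; check the inventory file bundled with your download):

```ini
[tower]
localhost ansible_connection=local

[database]

[all:vars]
admin_password='CHANGEME'

pg_host=''
pg_port=''

pg_database='awx'
pg_username='awx'
pg_password='CHANGEME'
```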


Example Inventory file for an external existing database


Run setup


Configure Ansible Tower

Ansible Tower provides a RESTful API, a CLI, and a UI. To connect to the UI, simply open a browser using http/https and point it to your Ansible Tower IP or hostname.

https://<Ansible Tower IP or Hostname>
