Channel: Doyensys Allappsdba Blog

DMZ URL is not working


Post-clone checks for a DMZ external node — confirm the port configuration:

            i. Verify the HTTP port in the $CONTEXT_FILE of the external node and run autoconfig.
            ii. Check the SSL setup in the $CONTEXT_FILE and run autoconfig.
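The port check above can be scripted. A minimal sketch — the context variable names (s_webport, s_active_webport) and the adautocfg.sh path are assumptions based on a typical EBS R12 setup, and the sample file below is hypothetical; verify both against your own $CONTEXT_FILE:

```shell
# Hypothetical fragment of a DMZ node context file, used here only to
# demonstrate the check (the real file is at $CONTEXT_FILE on the node):
cat > /tmp/demo_context.xml <<'EOF'
<webport oa_var="s_webport">8001</webport>
<activewebport oa_var="s_active_webport">8001</activewebport>
EOF

# Confirm which HTTP ports the context file records
grep -oE 's_(active_)?webport">[0-9]+' /tmp/demo_context.xml

# After correcting the values, run autoconfig on the external node, e.g.:
#   $ADMIN_SCRIPTS_HOME/adautocfg.sh
```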
              

Issue:
The external URL was not up because OHS was listening on the wrong HTTP port (8002).
Processes in Instance: EBS_web_TEST_OHS2
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+-----
EBS_web_TEST                   | OHS                |   18624 | Alive    | 1412243724 |  1870960 |   0:24:01 | https:4445,https:10001,http:8002

Solution:
Manually changed the Listen port to 8001 in httpd.conf, since autoconfig does not update it.
[apTEST@apdbadmz01 EBS_web_TEST]$ pwd
/d02/app/TEST122/fs1/FMW_Home/webtier/instances/EBS_web_TEST_OHS2/config/OHS/EBS_web_TEST
[apTEST@apdbadmz01 EBS_web_TEST]$ grep "8001" *
httpd.conf:Listen 8001
[apTEST@apdbadmz01 EBS_web_TEST]$
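The manual edit can be done with a backup and sed. A minimal sketch on a throwaway copy (on a real system, edit the httpd.conf under the OHS config directory shown by pwd above, and take a backup first; note autoconfig may overwrite manual edits on its next run):

```shell
# Stand-in httpd.conf with the wrong Listen port, for demonstration only
cat > /tmp/httpd.conf <<'EOF'
Listen 8002
EOF

cp /tmp/httpd.conf /tmp/httpd.conf.bak          # always keep a backup
sed -i 's/^Listen 8002$/Listen 8001/' /tmp/httpd.conf
grep '^Listen' /tmp/httpd.conf                  # now shows: Listen 8001
```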


Now start the application services.

How to Troubleshoot an ORA-28030 Error while accessing LDAP.



ORA-28030: Server encountered problems accessing LDAP directory service.
Cause: Unable to access LDAP directory service.
Action: Please contact your system administrator.

This error can occur for many reasons when you log in to the database using Oracle Internet Directory (OID) authentication. A sample of the error is shown below:

    SQL> conn schema@dbtest
    Enter password:
    ERROR:
    ORA-28030: Server encountered problems accessing LDAP directory service

    Warning: You are no longer connected to ORACLE.

Here is how I usually troubleshoot this kind of issue, with two examples.

First, enable a trace to dump the actual errors in the database:

    SQL> alter system set events '28033 trace name context forever, level 9';
Second, reproduce the error:
    SQL> conn schema@dbtest
    Enter password:
    ERROR:
    ORA-28030: Server encountered problems accessing LDAP directory service
Third, disable the trace:
    SQL> alter system set events '28033 trace name context off';
After checking the trace files, I found the errors below, which point to DNS configuration for the OID server lnx-ldap. Check /etc/hosts or DNS to make sure the host lnx-ldap is resolvable and port 3131 is reachable:
    
    KZLD_ERR: failed to open connection to lnx-ldap:3131
    KZLD_ERR: 28030
    KZLD_ERR: failed from kzldob_open_bind.
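A quick way to verify the two things the trace points at — name resolution and port reachability — using only bash built-ins (the hostname and port here are the ones from the trace; substitute your own):

```shell
# Prints "reachable"/"NOT reachable" for a host:port using bash's /dev/tcp
check_ldap() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable: ${host}:${port}"
  else
    echo "NOT reachable: ${host}:${port}"
  fi
}

check_ldap lnx-ldap 3131
```

If the check fails, fix /etc/hosts or DNS first, then confirm the OID listener is actually up on that port.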

Alternatively, you may see errors like the following. They mean the wallet files are corrupted; recreate the wallet and make sure the wallet path is defined properly:

    kzld_discover received ldaptype: OID
    KZLD_ERR: failed to get cred from wallet
    KZLD_ERR: Failed to bind to LDAP server. Err=28032
    KZLD_ERR: 28032
    KZLD is doing LDAP unbind
    KZLD_ERR: found err from kzldini.

Resolving OID/SSO issues during Oracle EBS R12.1.3 Upgrade



During our recent EBS 12.1.3 upgrade, we ran into an intriguing show-stopper issue. To elaborate: immediately after the upgrade we could log into EBS 12.1.3 successfully, but opening forms would take forever. Even enabling trace level 5 for the Java console did not produce useful results. After exhausting the customary troubleshooting steps and working through a process of elimination, we managed to nail the issue down to OID/SSO. (This is not an issue for users with Applications SSO Login Types set to Local.)
We also tried removing the references and the de-register/register options; all completed successfully, yet failed to resolve the issue. Run the following SQL statement to see whether the OID/SSO registration was done correctly:
SELECT * FROM fnd_user_preferences
WHERE user_name='#INTERNAL' AND module_name= 'OID_CONF';

Interestingly, the query returned no rows. In such cases, run the following to populate fnd_user_preferences with the appropriate OID/SSO values:
SQL> execute fnd_oid_plug.setPlugin;

Verify that the custom DIT entries are now present in the FND_USER_PREFERENCES table using the query below:

SELECT * FROM fnd_user_preferences WHERE user_name='#INTERNAL' AND module_name= 'OID_CONF';

Now de-register the E-Business Suite instance from SSO/OID via interactive or non-interactive mode.
Syntax for interactive mode:
$FND_TOP/bin/txkrun.pl \
-script=SetSSOReg \
-deregister=yes
Non-interactive mode:
$FND_TOP/bin/txkrun.pl \
-script=SetSSOReg \
-deregister=yes \
-appspass= \
[-oidadminuser=cn= \]
-oidadminuserpass= \
[-ldaphost= \]
[-ldapport= \]
[-appname= \]
[-svcname= ]

Before registering, make sure to set the following profiles to the values given below.

    Applications SSO Type: SSWA w/SSO
    Applications SSO Auto Link User: Enable
    Applications SSO Login Types: Both
    Applications SSO LDAP Synchronization: Enable
    Applications SSO Enable OID Identity Add Event: Enable
    Link Applications user with OID user with same username: Enable
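These profiles are normally set from the System Administrator responsibility; as a sketch, they can also be set from SQL*Plus with the FND_PROFILE API. The internal profile option name used below (APPS_SSO_LOCAL_LOGIN for 'Applications SSO Login Types') is an assumption — verify internal names in FND_PROFILE_OPTIONS_TL before running anything like this:

```sql
-- Sketch: set 'Applications SSO Login Types' to BOTH at site level.
-- APPS_SSO_LOCAL_LOGIN is the assumed internal name of that profile.
declare
  l_ok boolean;
begin
  l_ok := fnd_profile.save('APPS_SSO_LOCAL_LOGIN', 'BOTH', 'SITE');
  if l_ok then
    commit;
    dbms_output.put_line('Profile updated');
  else
    dbms_output.put_line('Profile update failed');
  end if;
end;
/
```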

Re-register E-Business Suite with OID/SSO and re-test.
Interactive mode:
$FND_TOP/bin/txkrun.pl \
-script=SetSSOReg \
-registerinstance=yes
Non-interactive mode:
$FND_TOP/bin/txkrun.pl \
-script=SetSSOReg \
-registerinstance=yes \
-infradbhost= \
-ldapport= \
-ldapportssl= \
[-ldaphost= \]
[-oidadminuser=cn= \]
-oidadminuserpass= \
-appspass=

How to Enable OID Tracing: Setting OID Debug/Trace Levels



Steps to enable OID tracing.

 1. Create the following files

 debugon.ldif

 --cut here---
 dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
 changetype: modify
 replace: orcldebugop
 orcldebugop: 511

 dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
 changetype: modify
 replace: orcldebugflag
 orcldebugflag: 1
 ----cut here-------

 debugoff.ldif

 --cut here---
 dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
 changetype: modify
 replace: orcldebugop
 orcldebugop: 0

 dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
 changetype: modify
 replace: orcldebugflag
 orcldebugflag: 0
 ----cut here-------

 2. Enable tracing
 ldapmodify -h <host> -p <port> -D "cn=orcladmin" -w <password> -f debugon.ldif

 3. Reproduce the issue.

 4. Disable tracing
 ldapmodify -h <host> -p <port> -D "cn=orcladmin" -w <password> -f debugoff.ldif


To verify that your debug levels have been set, run the following ldapsearch command:

$ORACLE_HOME/bin/ldapsearch -h <OIDHOST> -p <PORT> -D cn=orcladmin -w <PWD> -b "cn=oid1,cn=osdldapd,cn=subconfigsubentry" -s base objectclass=* orcldebugflag orcldebugop

The logs can be found at $MW_HOME/asinst_1/diagnostics/logs/OID/oid1

Other values can be set for orcldebugop and orcldebugflag to control which operations are traced and at what level.

“401 Unauthorized” error when trying to log in to an SSO application



When trying to log in to an SSO-integrated application, a “401 Unauthorized” error was returned.

Environment details: Oracle Application Server Single Sign-On 10.1.4.3 and OAM 10.1.4.3 running on the same node, with OAM and OSSO integrated.

I got the 401 Unauthorized error when I tried to access the oiddas application, and I saw the exception below in ssoServer.log:

        [ERROR] AJPRequestHandler-ApplicationServerThread-9 Could not get attributes for user, orcladmin

        oracle.ldap.util.NoSuchUserException: User does not exist – SIMPLE NAME = orcladmin

        at oracle.ldap.util.Subscriber.getUser_NICKNAME(Subscriber.java:1160)

        at oracle.ldap.util.Subscriber.getUser(Subscriber.java:923)

        at oracle.ldap.util.Subscriber.getUser(Subscriber.java:870)

        at oracle.security.sso.server.ldap.OIDUserRepository.getUserProperties(OIDUserRepository.java:537)

        at oracle.security.sso.server.auth.SSOServerAuth.authenticate(SSOServerAuth.java:508)

        at oracle.security.sso.server.ui.SSOLoginServlet.processSSOPartnerRequest(SSOLoginServlet.java:1076)

        at oracle.security.sso.server.ui.SSOLoginServlet.doPost(SSOLoginServlet.java:547)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)

        at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:826)

        at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:332)

        at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:830)

        at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:224)

        at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:133)

        at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)

        at java.lang.Thread.run(Thread.java:534)

        The workaround is as follows.


        I found a My Oracle Support note (Doc ID 987877.1) that deals with the same issue. It attributes the problem to a custom plugin configured for Oracle SSO (OSSO), and I executed the following action plan:
Recompile the custom plugin with a different name e.g. SSOSMAuth, so that file SSOSMAuth.class is created instead of SSONeteAuth.class
       
        Copy file SSOSMAuth.class to $ORACLE_HOME/sso/plugin
        Edit file $ORACLE_HOME/sso/conf/policy.properties and set the following:
        MediumSecurity_AuthPlugin = oracle.security.sso.server.auth.SSOSMAuth
        Restart OC4J_SECURITY:
        $ORACLE_HOME/opmn/bin/opmnctl stopproc process-type=OC4J_SECURITY
        $ORACLE_HOME/opmn/bin/opmnctl startproc process-type=OC4J_SECURITY

    I got the same error when I tried to log in again.
    After further investigation I found the root cause. The common user search base (the orclcommonusersearchbase attribute) had been modified, or new values had been added. More details follow.
    At least one of the configured search bases (the orclcommonusersearchbase attribute in the cn=Common,cn=Products,cn=OracleContext,<realm DN> entry) does not exist in OID or is wrongly configured. Here, a configured search base did not exist in OID.
    The entries set in the orclcommonusersearchbase attribute are used by SSO as search bases to locate the user entry. If a base does not exist in OID, the LDAP search operation fails with “LDAP error code 32: LDAP_NO_SUCH_OBJECT”.
    LDAP error code 32 means the base specified for the operation does not exist.
    Log in to the oidadmin tool and navigate to cn=Common,cn=Products,cn=OracleContext,<realm DN>.
    Go to the orclcommonusersearchbase attribute and correct or delete the incorrect values. All the entries defined in the orclcommonusersearchbase attribute must exist in OID.
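The same check can be done without the GUI. The ldapsearch invocation below mirrors the one used elsewhere in this blog (placeholders intentionally left unfilled); the parsing sketch then lists each configured base so you can verify every one exists in OID. The sample output file is hypothetical:

```shell
# On a live system (placeholders intentionally left unfilled):
#   $ORACLE_HOME/bin/ldapsearch -h <OIDHOST> -p <PORT> -D cn=orcladmin -w <PWD> \
#     -b "cn=Common,cn=Products,cn=OracleContext,<realm DN>" \
#     -s base "objectclass=*" orclcommonusersearchbase

# Hypothetical saved output, in Oracle ldapsearch's attribute=value format:
cat > /tmp/searchbase.out <<'EOF'
cn=common,cn=products,cn=oraclecontext,dc=example,dc=com
orclcommonusersearchbase=cn=Users,dc=example,dc=com
orclcommonusersearchbase=cn=Staff,dc=example,dc=com
EOF

# List each configured search base, one per line
sed -n 's/^orclcommonusersearchbase=//p' /tmp/searchbase.out
```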

EBS-OAM Integration: OAMSSA-20142 : Authentication Failure for OID user



This post covers an issue that one of our trainees encountered while logging in to an EBS R12.2 environment integrated with OAM/OID for Single Sign-On (SSO), during EBS-OAM integration training.

Issue:


The trainee hit the issue while logging in to the OAM Console with an OID user, after integrating OAM with OID and changing OAM’s System Store to OID. The error below was shown; the log file is under $DOMAIN_HOME/servers/<OAM_Server>/logs/*diagnostic*.log:

<Error> <oracle.oam.user.identity.provider> <OAMSSA-20142> <Authentication Failure for user oamadmin, user not found in idstore UserIdentityStore1 with exception oracle.igf.ids.EntityNotFoundException: Entity not found for the search filter (&(objectclass=person)(uid=oamadmin)).>

Root Cause:


Before we get to the fix, let's understand the issue. If you look at oam-config.xml (under <DOMAIN_HOME>/config/fmwconfig), the identity store UserIdentityStore1 points to the embedded LDAP server:

Note: OAM can have multiple identity stores, but the identity store designated as the System Store (IsSystem=true) is used to log in to the OAM Console.
<Setting Name="UserIdentityStore" Type="htf:map">
<Setting Name="SECURITY_PRINCIPAL" Type="xsd:string">cn=Admin</Setting>
<Setting Name="GROUP_SEARCH_BASE" Type="xsd:string">ou=groups,ou=myrealm,dc=base_domain</Setting>
<Setting Name="USER_NAME_ATTRIBUTE" Type="xsd:string">uid</Setting>
<Setting Name="Type" Type="xsd:string">LDAP</Setting>
<Setting Name="IsSystem" Type="xsd:boolean">false</Setting>
<Setting Name="IsPrimary" Type="xsd:boolean">false</Setting>
<Setting Name="Name" Type="xsd:string">UserIdentityStore1</Setting>
<Setting Name="SECURITY_CREDENTIAL" Type="xsd:string">{AES}F8E3A9FAD9D662F753D842979423ED3D</Setting>
<Setting Name="LDAP_PROVIDER" Type="xsd:string">EMBEDDED_LDAP</Setting>
<Setting Name="USER_SEARCH_BASE" Type="xsd:string">ou=people,ou=myrealm,dc=base_domain</Setting>
<Setting Name="ENABLE_PASSWORD_POLICY" Type="xsd:boolean">false</Setting>
<Setting Name="LDAP_URL" Type="xsd:string">ldap://ldap-host:7001</Setting>
<Setting Name="UserIdentityProviderType" Type="xsd:string">OracleUserRoleAPI</Setting>
</Setting>

Now the question is: why is authentication going to the embedded LDAP server even though, in oam-config.xml, the System Store (IsSystem=true) points to OID?

<Setting Name="LDAP" Type="htf:map">
<Setting Name="3FD25D70107FDEF319" Type="htf:map">
<Setting Name="SECURITY_PRINCIPAL" Type="xsd:string">cn=orcladmin</Setting>
<Setting Name="GROUP_SEARCH_BASE" Type="xsd:string">cn=Groups,dc=hussain,dc=net</Setting>
<Setting Name="ConnectionRetryCount" Type="xsd:integer">3</Setting>
<Setting Name="USER_NAME_ATTRIBUTE" Type="xsd:string">uid</Setting>
<Setting Name="Type" Type="xsd:string">OID</Setting>
<Setting Name="IsSystem" Type="xsd:boolean">true</Setting>
<Setting Name="GroupCacheEnabled" Type="xsd:boolean">false</Setting>
<Setting Name="IsPrimary" Type="xsd:boolean">true</Setting>
<Setting Name="ConnectionWaitTimeout" Type="xsd:integer">120</Setting>
<Setting Name="Name" Type="xsd:string">OID1</Setting>
<Setting Name="SECURITY_CREDENTIAL" Type="xsd:string">{AES}488ED2E6384ACFB3027B13355AEC1A4E</Setting>
<Setting Name="NATIVE" Type="xsd:boolean">false</Setting>
<Setting Name="SearchTimeLimit" Type="xsd:integer">0</Setting>
<Setting Name="MIN_CONNECTIONS" Type="xsd:integer">10</Setting>
<Setting Name="LDAP_PROVIDER" Type="xsd:string">OID</Setting>
<Setting Name="USER_SEARCH_BASE" Type="xsd:string">cn=Users,dc=hussain,dc=net</Setting>
<Setting Name="ENABLE_PASSWORD_POLICY" Type="xsd:boolean">false</Setting>
<Setting Name="LDAP_URL" Type="xsd:string">ldap://oid01.hussain.net:3060</Setting>
<Setting Name="ReferralPolicy" Type="xsd:string">follow</Setting>
<Setting Name="MAX_CONNECTIONS" Type="xsd:integer">50</Setting>
<Setting Name="GroupCacheTTL" Type="xsd:integer">0</Setting>
<Setting Name="UserIdentityProviderType" Type="xsd:string">OracleUserRoleAPI</Setting>
<Setting Name="GroupCacheSize" Type="xsd:integer">10000</Setting>
</Setting>
</Setting>
The answer: in your WebLogic config file ($DOMAIN_HOME/config/config.xml) you still have IAMSuiteAgent in place. You need to remove IAMSuiteAgent from the OAM WebLogic domain's Authentication Providers.
<sec:authentication-provider xmlns:ext="http://xmlns.oracle.com/weblogic/security/extension" xsi:type="ext:oam-servlet-authentication-filter-ia-providerType">
<sec:name>IAMSuiteAgent</sec:name>
</sec:authentication-provider>

Because of this, SSO comes into play for the OAM Console: the console is protected by the LDAP authentication (ATN) scheme, and that LDAP module points to the embedded LDAP server, not to OID.

Fix:

 

You have two options to fix it:

1. Either remove IAMSuiteAgent from the OAM WebLogic domain with the steps below and restart the services.

    Access the WebLogic Administration Console of the OAM domain (http://<hostname>:<admin_port>/console) and click Security Realms.
    Click myrealm and select the Providers tab.
    Click Lock & Edit, select IAMSuiteAgent, and then click Delete.

    Click Yes to delete IAMSuiteAgent. Now restart the services and try again.

or

2. Change the LDAP authentication (ATN) module in OAM to point to the OID server instead of the embedded LDAP server.

Error : ORA-00942 while querying for AV$ALERT_STORE

CAUSE :

AVSYS.AV$ALERT_STORE does not exist in version Audit Vault version 12.2.

SOLUTION :

Use AVSYS.ALERT_STORE instead of AVSYS.AV$ALERT_STORE in Audit Vault version 12.2.

Error : failError!! Could not deliver the output for Delivery channel:null

Error : failError!! Could not deliver the output for Delivery channel:null

On 12.1.3, the XML Publisher Report Bursting Program errors out.

CAUSE :

The bursting control file had output-type "PDF" instead of "pdf". The bursting control file is case sensitive, which is the reason for the failure.

SOLUTION :

Modify the bursting control file, changing
output-type="PDF"
to
output-type="pdf"
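A minimal sketch of the fix, shown on a throwaway sample (the element layout below is illustrative, not the full control file; apply the same substitution to your actual bursting control file):

```shell
# Sample line from a bursting control file with the wrong case
cat > /tmp/burst_ctl.xml <<'EOF'
<xapi:document output-type="PDF" delivery="email">
EOF

sed -i 's/output-type="PDF"/output-type="pdf"/' /tmp/burst_ctl.xml
grep -o 'output-type="pdf"' /tmp/burst_ctl.xml   # prints: output-type="pdf"
```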

12c Database mount fails with error ORA-65093

ERROR :

12c Database mount fails with error ORA-65093

CAUSE :

An attempt was made to open a multitenant container database (CDB) without the correct parameter set for a multitenant container database in the initialization parameter file (enable_pluggable_database = false (Default value)).


SOLUTION :

Set the 'enable_pluggable_database=true' parameter for the multitenant container database in the initialization parameter file and restart the database.



SQL> startup nomount

SQL> alter system set enable_pluggable_database=true scope=spfile;

SQL> shut

SQL> startup

Adop Fails With Error: ssh is not enabled for the following nodes

ERROR :

Adop Fails With Error: ssh is not enabled for the following nodes

CAUSE :

SSH equivalence was not enabled on one of the nodes.

SOLUTION :

Use the txkRunSSHSetup.pl command to enable SSH:

Sample Run [Enable SSH equivalence]:

perl <AD_TOP>/patch/115/bin/txkRunSSHSetup.pl enablessh -contextfile=<CONTEXT_FILE> -hosts=h1,h2,h3
Sample Run [Verify SSH]:

perl <AD_TOP>/patch/115/bin/txkRunSSHSetup.pl verifyssh -contextfile=<CONTEXT_FILE> -hosts=h1,h2,h3 -invalidnodefile=<file to report ssh verification failed nodes list>

ASM Unable to Recognize New Devices if Existing Disk Search String does not include them ORA-15014

ERROR :

ASM Unable to Recognize New Devices if Existing Disk Search String does not include them ORA-15014

CAUSE :

ASM discovery string was set to only '/dev/hdb*'

SOLUTION :

Add the new device path to the ASM disk discovery string:

alter system set asm_diskstring='/dev/hdb*','/dev/hdc*';

Adding the new device will now work.
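After changing the discovery string, a sketch query to confirm ASM can now see the new devices (exact column values vary by release and configuration):

```sql
-- Disks visible through asm_diskstring; brand-new, unused devices
-- typically show HEADER_STATUS = 'CANDIDATE'
select path, header_status, mount_status
from   v$asm_disk
order  by path;
```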

Instance Provisioning For Database Cloud Service Fails With Cloud Backup Module Errors

ERROR :

Instance Provisioning For Database Cloud Service Fails With Cloud Backup Module Errors

CAUSE :

If we specify the backup destination as "Both Cloud Storage and Local Storage" while provisioning the database instance, we need to enter the Cloud Storage container, Cloud Storage user name, and Cloud Storage password in the GUI.
If the Cloud Storage user's password contains special characters (most importantly a $ sign), we hit this issue.

SOLUTION :



1. Change the password for Cloud Storage User such that it does not have any special character.

2. Wait for some time to get the change reflected and try to provision a new service instance on the Oracle Database Cloud Service dashboard

3. Change the password for Cloud Storage User: dksh.dbuser as per your security requirements

4. In the database service instance, reflect the password change by running:

# /var/opt/oracle/bkup_api/bkup_api update_wallet --password=new-password

FDPSTP failed due to ORA-20002: 3150: Process 'OEOL/15486876' is being worked upon.

Issue:

When scheduling sales orders using the SCHORD (Schedule Orders) concurrent request, the request takes hours and sometimes fails with the following error:
Cause: FDPSTP failed due to ORA-20002: 3150: Process 'OEOL/15486876' is being worked upon. Please retry the current request on the process later.
ORA-06512: at "APPS.WF_ENGINE", line 5774
ORA-06512: at "APPS.OE_SCH_CONC_REQU

CAUSE

This issue is caused by the setting of the profile 'MSC: ATP Debug Mode'.
The customer confirmed that debug had been enabled via 'MSC: ATP Debug Mode'. The same issue is outlined in the following note:
Note 473754.1 - Suggestions to Improve Performance when Booking Orders
The trace did not capture any poorly performing SQL because the issue comes from ATP; capturing the performance issue in ATP requires the profile 'MSC: ATP Debug Mode' to be set to 'Trace Only', along with the rest of the steps outlined in the following note:
Note 122372.1 - Getting ATP Debug Files - The ATP Session Files

SOLUTION

1. Set MRP: Calculate Supply Demand to "No".
2. Set MSC: ATP Debug Mode to "None".
3. Ensure following tables are purged frequently based on data volume:
  - MRP_ATP_SCHEDULE_TEMP
  - MRP_ATP_DETAILS_TEMP
4. Ensure the "MSC" schema is analyzed at frequent intervals.

Syscheck error with leaky bucket cache file

Syscheck error with leaky bucket cache file

GOAL:
Found issue during a syscheck on MP on 5.0.0-50.21.0:

ERROR: Syscheck::LBucket::getBucketFilehandle()
ERROR: Failed locking leaky bucket cache file
ERROR: FILE: /var/TKLC/log/syscheck/hardware_temp.cache
ERROR: SYS ERR: Resource temporarily unavailable
One or more module in class "hardware" FAILED
* temp: FAILURE:: MINOR::5000000000040000 -- Platform Health Check Failure
* temp: FAILURE:: The hardware temp cache bucket was locked.
* temp: FAILURE:: MINOR::5000000000040000 -- Platform Health Check Failure
* temp: FAILURE:: ...skipping test.

Solution:

The error message relates to a race condition that can occur when syscheck is run manually and repeatedly, conflicting with the syscheck daemon that runs on the system at all times.

This race condition can occur with the following files.

/var/TKLC/log/syscheck/hardware_fan.cache
/var/TKLC/log/syscheck/hardware_psu.cache
/var/TKLC/log/syscheck/hardware_temp.cache

You can also use "alarmMgr --alarmStatus" to see alarms on the system, as this does not conflict with the existing daemon. This is typically a transient error.

How To Validate ASM Instances And Diskgroups On A RAC Cluster (When CRS Does Not Start).


GOAL

   This document provides the steps to validate the ASM Instances and diskgroups on a RAC Cluster (when CRS does not start).

SOLUTION

       To confirm that the ASM instances and diskgroups are in good shape (the ASM instances start and the diskgroups mount), perform the following steps (bypassing CRS):
1) Shutdown CRS on all the nodes:
# crsctl stop crs
2) Then start the clusterware in exclusive mode on node #1:
# crsctl start crs -excl -nocrs
Note: On release 11.2.0.1, use the following command instead:
# crsctl start crs -excl
3) Connect to the +ASM1 instance and then make sure all the diskgroups are mounted including the OCRVOTE diskgroup:
SQL> select name, state from v$asm_diskgroup;
4) If not, then mount them (example):
SQL> alter diskgroup OCRVOTE mount;
SQL> select name, state from v$asm_diskgroup;
5) Then shutdown the clusterware on node #1:
# crsctl stop crs -f
6) Now start the clusterware in exclusive mode on node #2:
# crsctl start crs -excl -nocrs
Note: On release 11.2.0.1, use the following command instead:
# crsctl start crs -excl
7) Connect to the +ASM2 instance and then make sure all the diskgroups are mounted including the OCRVOTE diskgroup:
SQL> select name, state from v$asm_diskgroup;
8) If not, then mount them:
SQL> alter diskgroup OCRVOTE mount;
SQL> select name, state from v$asm_diskgroup;
9) Then shutdown the clusterware on node #2:
# crsctl stop crs -f
10) Please repeat the same steps on the additional nodes.


Block Change Tracking

Block Change Tracking:


RMAN's change tracking feature improves incremental backup performance by recording the changed blocks of each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify the changed blocks for an incremental backup, avoiding the need to scan every block in the datafile.

One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST. We can also specify the name of the block change tracking file, placing it in any desired location. 

Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking files themselves generally require little maintenance after initial configuration. 

From Oracle 10g, a background process, the Block Change Tracking Writer (CTWR), writes modified block details to the block change tracking file. In a Real Application Clusters (RAC) environment, the change tracking file must be located on shared storage accessible from all nodes in the cluster.

Enabling and Disabling Change Tracking

We can enable or disable change tracking while the database is either open or mounted. To alter the change tracking setting, connect to the target database with administrator privileges using SQL*Plus. To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database.

 Then issue the following SQL statement to enable change tracking: 

SQL> alter database enable block change tracking; 

To create the change tracking file in a specific location, use the following SQL statement:

SQL> alter database enable block change tracking using file
         '/home/oracle/dg_1/oradata/data/bc_track.dbf';

Database altered. 

The REUSE option tells Oracle to overwrite any existing file with the specified name. 

SQL> alter database enable block change tracking using file '/home/oracle/dg_1/oradata/data/bc_track.dbf' reuse; 

To disable change tracking, use this SQL statement:

SQL> alter database disable block change tracking;

Database altered. 

Checking Whether Change Tracking is enabled

SQL> select status from V$BLOCK_CHANGE_TRACKING;

STATUS
----------
DISABLED

 SQL> alter database enable block change tracking using file
'/home/oracle/dg_1/oradata/data/bc_track.dbf';

Database altered. 

SQL> select * from V$BLOCK_CHANGE_TRACKING; 

STATUS     FILENAME                                             BYTES
---------- -----------------------------------------------  ----------
ENABLED    /home/oracle/dg_1/oradata/data/bc_track.dbf         11599872

Moving the Change Tracking File

If you need to move the change tracking file, the ALTER DATABASE RENAME FILE command updates the control file to refer to the new location.

 If necessary, determine the current name of the change tracking file: 

Step 1

SQL> select FILENAME from V$BLOCK_CHANGE_TRACKING;

FILENAME
---------------------------------------------------------------------
/home/oracle/dg_1/oradata/data/bc_track.dbf

Step 2

Shutdown the database.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down. 

Step 3

Using host OS commands, move the change tracking file to its new location:

[oracle@ramkumar ~]$ mv /home/oracle/dg_1/oradata/data/bc_track.dbf \
/home/oracle/dg_1/oradata/bc_new_trac.dbf

Step 4

Mount the database and rename the change tracking file within the database to its new location:

SQL> startup mount
ORACLE instance started.
Total System Global Area  939495424 bytes
Fixed Size                  2258840 bytes
Variable Size             570427496 bytes
Database Buffers          360710144 bytes
Redo Buffers                6098944 bytes
Database mounted.

SQL> alter database rename file '/home/oracle/dg_1/oradata/data/bc_track.dbf'
to '/home/oracle/dg_1/oradata/bc_new_trac.dbf';

Database altered.

Step 5

Open the database.

SQL> alter database open;

SQL> select FILENAME from V$BLOCK_CHANGE_TRACKING;

FILENAME
---------------------------------------------------------------------
/home/oracle/dg_1/oradata/bc_new_trac.dbf



ORA-59303: The attribute compatible.asm (10.1.0.0.0) of the diskgroup being mounted should be 11.2.0.2.0 or higher

Scenario:
I was assigned a 12.2.0.1 upgrade from 12.1.0.1 for a standalone database.
After upgrading the ASM instance to 12.2.0.1, we got the issue below:
SQL> startup
ASM instance started
Total System Global Area 1140850688 bytes
Fixed Size                  8629704 bytes
Variable Size            1107055160 bytes
ASM Cache                  25165824 bytes
ORA-15032: not all alterations performed
ORA-59303: The attribute compatible.asm (10.1.0.0.0) of the diskgroup being
mounted should be 11.2.0.2.0 or higher.
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-59303: The attribute compatible.asm (10.1.0.0.0) of the diskgroup being
mounted should be 11.2.0.2.0 or higher.
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-59303: The attribute compatible.asm (10.1.0.0.0) of the diskgroup being
mounted should be 11.2.0.2.0 or higher.
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher

We checked the compatibility attributes for ASM and the database using the query below.


SQL> select name, DATABASE_COMPATIBILITY, COMPATIBILITY from v$asm_diskgroup;

NAME         DATABASE_COMPATIBILITY   COMPATIBILITY
DATA02       0.0.0.0.0                0.0.0.0.0
REDO01       0.0.0.0.0                0.0.0.0.0
ARCHLOG01    0.0.0.0.0                0.0.0.0.0
DATA01       10.1.0.0.0               12.1.0.0

We planned to change the compatible attribute to 12.2 and changed it for all the diskgroups; for example, for ARCHLOG01:
alter diskgroup ARCHLOG01 mount restricted;
alter diskgroup ARCHLOG01 set attribute 'compatible.asm'='12.2';
alter diskgroup ARCHLOG01 dismount;
alter diskgroup ARCHLOG01 mount;
SQL> select name, DATABASE_COMPATIBILITY, COMPATIBILITY from v$asm_diskgroup;

NAME         DATABASE_COMPATIBILITY   COMPATIBILITY
ARCHLOG01    0.0.0.0.0                0.0.0.0.0
DATA01       10.1.0.0.0               12.1.0.0.0
DATA02       10.1.0.0.0               12.2.0.0.0
REDO01       10.1.0.0.0               12.2.0.0.0

SQL> alter diskgroup ARCHLOG01 mount restricted;
Diskgroup altered.

SQL> alter diskgroup ARCHLOG01 set attribute 'compatible.asm'='12.2';
Diskgroup altered.

SQL> alter diskgroup ARCHLOG01 dismount;
Diskgroup altered.

SQL> alter diskgroup ARCHLOG01 mount;
Diskgroup altered.

VIRTUAL INDEX

VIRTUAL INDEX:

A virtual index is a 'fake' index whose definition exists in the data dictionary but that has no index segment (tree) associated with it. It is used by developers to test whether a specific index would be useful without having to consume the disk space required by a real index. The hidden parameter "_use_nosegment_indexes" makes the optimizer consider virtual indexes; it is also used by some third-party (Quest) tools.


SQL> create table emp as select * from all_objects;

Table created.

SQL> alter table emp add(constraint prim_1 primary key(object_id));

Table altered.


SQL> set autotrace traceonly explain;
SQL> select * from emp where object_id=10;

Execution Plan
----------------------------------------------------------
Plan hash value: 744524945

--------------------------------------------------------------------------------------
| Id | Operation                   | Name   | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |        |    1 |   158 |     2   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | EMP    |    1 |   158 |     2   (0)| 00:00:01 |
|* 2 |   INDEX UNIQUE SCAN         | PRIM_1 |    1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=10)


If we query the table using a non-indexed column, we see a full table scan:

SQL> set autotrace traceonly explain;
SQL> select * from emp where object_name='USER_TABLES';

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 316 | 22 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 2 | 316 | 22 (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OBJECT_NAME"='USER_TABLES')

Note
-----
- dynamic sampling used for this statement (level=2)



To create a virtual index on this column, simply add the NOSEGMENT clause to the
CREATE INDEX statement:


SQL> create index vi_ind on emp (object_name) nosegment;

Index created.

If we repeat the previous query, we can see that the virtual index is not yet
visible to the optimizer.

SQL> set autotrace traceonly explain;
SQL> select * from emp where object_name='USER_TABLES';

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 316 | 22 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 2 | 316 | 22 (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OBJECT_NAME"='USER_TABLES')

To make the virtual index available we must set the _use_nosegment_indexes parameter.

SQL> alter session set "_use_nosegment_indexes"=true;

Session altered.

If we repeat the query we can see that the virtual index is now used.

SQL> select * from emp where object_name='USER_TABLES';

Execution Plan
----------------------------------------------------------
Plan hash value: 3917735323

--------------------------------------------------------------------------------------
| Id | Operation                   | Name   | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |        |    2 |   316 |     5   (0)| 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | EMP    |    2 |   316 |     5   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | VI_IND |   24 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_NAME"='USER_TABLES')

The virtual index does not appear in the USER_INDEXES view, but it is present in
the USER_OBJECTS view.

SQL> SET AUTOTRACE OFF
SQL> select index_name from user_indexes;

INDEX_NAME
------------------------------
PRIM_1

SQL> select object_name from user_objects where object_type = 'INDEX';

OBJECT_NAME
--------------------------------------------------------------------------------
PRIM_1
VI_IND

Statistics can be gathered on virtual indexes in the same way as regular indexes,
but as we have seen previously, there will be no record of this in the USER_INDEXES 
view.

SQL> exec dbms_stats.gather_index_stats(USER,'VI_IND');

PL/SQL procedure successfully completed.
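Once testing is finished, a virtual index is removed like any other index; there
is no segment to clean up. A quick sketch:

```sql
-- Drop the virtual index; it disappears from USER_OBJECTS as well.
DROP INDEX vi_ind;
```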

dbms_scheduler insufficient privileges, ORA-01031


The error below occurs when a user accesses DBMS_SCHEDULER without the required
privileges, so the necessary permissions need to be granted.

Error code:

ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED"
ORA-06512: at "SYS.DBMS_ISCHED"
ORA-06512: at "SYS.DBMS_SCHEDULER"




Solution:

GRANT CREATE JOB TO <user name>;
GRANT CREATE EVALUATION CONTEXT TO <user name>;
GRANT CREATE RULE TO <user name>;
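For example, for a hypothetical user SCOTT, the grants can be applied and then
verified from DBA_SYS_PRIVS (a sketch; the user name is a placeholder):

```sql
GRANT CREATE JOB TO scott;
GRANT CREATE EVALUATION CONTEXT TO scott;
GRANT CREATE RULE TO scott;

-- Confirm the privileges were granted
SELECT privilege
  FROM dba_sys_privs
 WHERE grantee = 'SCOTT'
   AND privilege IN ('CREATE JOB', 'CREATE EVALUATION CONTEXT', 'CREATE RULE');
```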

CRS startup ORA-21561: OID generation failed

Error:

While starting CRS or adding a node, the CRS alert log shows the error
ORA-21561: OID generation failed.

Solution:
This is usually caused by a misconfiguration in the following files:
1. /etc/hosts does not have the proper entries, i.e. the localhost entry and the
   IP address/hostname entries that should resolve both via DNS and locally
2. /etc/resolv.conf should match on all nodes
3. Check that the hostname and IP address are correct on all nodes
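As a reference, a minimal /etc/hosts layout for a two-node cluster might look
like the sketch below; every hostname and IP address here is a hypothetical
placeholder and must be replaced with the real values for the environment:

```
# Loopback entry must remain intact
127.0.0.1      localhost localhost.localdomain

# Public, private (interconnect) and VIP entries for each node
192.168.1.11   racnode1.example.com      racnode1
192.168.1.12   racnode2.example.com      racnode2
10.0.0.11      racnode1-priv
10.0.0.12      racnode2-priv
192.168.1.21   racnode1-vip.example.com  racnode1-vip
192.168.1.22   racnode2-vip.example.com  racnode2-vip
```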


