Channel: Doyensys Allappsdba Blog..

12.1.0.2 Cluster alert log and trace/logfiles


Centralized alert log and trace file location for an Oracle 12.1.0.2 cluster environment :-


Troubleshooting a RAC issue is now easier because all the required log/trace files are in one centralized location; we no longer need to jump between directories to find the Clusterware-related log files.

From 12c, the Clusterware log/trace files are stored in the Automatic Diagnostic Repository (ADR) under the ADR_HOME location $ADR_BASE/diag/crs/`hostname`/crs.

#---// Finding ADR_HOME for Clusterware logs //---
myrac1 {/home/oracle}: adrci

ADRCI: Release 12.1.0.2.0 - Production on Wed Jan 13 04:01:00 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

ADR base = "/app/oracle"
adrci> show homes
ADR Homes:
diag/tnslsnr/rac01/mgmtlsnr
diag/rdbms/_mgmtdb/-MGMTDB
diag/crs/rac1/crs

There is a new centralized Clusterware log location for identifying the related Clusterware log/trace files:
Oracle stores all the Clusterware diagnostic log/trace files under the trace directory of the Clusterware ADR_HOME.
#---// Locating Clusterware centralized log repository //----

rac1 {/home/oracle}: ls -ld $ADR_BASE/diag/crs/`hostname`/crs/trace
drwxrwxr-x 2 oracle oinstall 876544 Jan 13 04:02 /app/oracle/diag/crs/rac1/crs/trace


#---// moving to Clusterware centralized log repository //----

myrac01 {/home/oracle}: cd $ADR_BASE/diag/crs/`hostname`/crs/trace

Clusterware alert log file

#---// Clusterware alert log file //----

myrac01 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt alert.log
-rw-rw---- 1 oracle oinstall 1024107 Jan 13 04:04 alert.log
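
The same alert log can also be viewed from inside adrci after setting the Clusterware ADR home (a sketch, using the home shown by show homes earlier; adjust the path to your own environment):

```
adrci> set home diag/crs/rac1/crs
adrci> show alert -tail 20
```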

Cluster Ready Services (CRS) daemon/agent trace files
#---// crs daemon and agent trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt crs* | tail -6
-rw-rw---- 1 root oinstall 1340371 Jan 13 04:03 crsd.trm
-rw-rw---- 1 root oinstall 7238701 Jan 13 04:03 crsd.trc
-rw-rw---- 1 oracle oinstall 739563 Jan 13 04:04 crsd_oraagent_oracle.trm
-rw-rw---- 1 oracle oinstall 4371314 Jan 13 04:04 crsd_oraagent_oracle.trc
-rw-rw---- 1 root oinstall 650733 Jan 13 04:04 crsd_orarootagent_root.trm
-rw-rw---- 1 root oinstall 3565651 Jan 13 04:04 crsd_orarootagent_root.trc

Cluster Synchronization Services (CSS) daemon trace files

#---// css daemon trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt ocssd* | tail -2
-rw-rw---- 1 oracle oinstall 443750 Jan 13 04:04 ocssd.trm
-rw-rw---- 1 oracle oinstall 2868877 Jan 13 04:04 ocssd.trc

High Availability Service (HAS) daemon/agent trace files

#---// has daemon and agent trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt ohasd* | tail -6
-rw-rw---- 1 root oinstall 783540 Jan 13 04:04 ohasd_orarootagent_root.trm
-rw-rw---- 1 root oinstall 4805435 Jan 13 04:04 ohasd_orarootagent_root.trc
-rw-rw---- 1 root oinstall 211308 Jan 13 04:04 ohasd.trm
-rw-rw---- 1 root oinstall 1225033 Jan 13 04:04 ohasd.trc
-rw-rw---- 1 oracle oinstall 1382492 Jan 13 04:04 ohasd_oraagent_oracle.trm
-rw-rw---- 1 oracle oinstall 6877368 Jan 13 04:04 ohasd_oraagent_oracle.trc

Event Management (EVM) daemon trace files.

#---// evm daemon trace files //----
rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt evm* | tail -2
-rw-rw---- 1 oracle oinstall 1562665 Jan 13 04:04 evmd.trm
-rw-rw---- 1 oracle oinstall 10052660 Jan 13 04:04 evmd.trc

Grid Interprocess Communication (GIPC) daemon trace files
#---// gipc daemon trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt gip* | tail -2
-rw-rw---- 1 oracle oinstall 763055 Jan 13 04:04 gipcd.trm
-rw-rw---- 1 oracle oinstall 5507597 Jan 13 04:04 gipcd.trc

Cluster System Monitor daemon trace files
#---// sysmon daemon trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt osysmon* | tail -2
-rw-rw---- 1 root oinstall 557926 Jan 13 03:46 osysmond.trm
-rw-rw---- 1 root oinstall 2867760 Jan 13 03:46 osysmond.trc


Cluster Logger Service daemon trace files
#---// logger daemon trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt ologgerd* | tail -2
-rw-rw---- 1 root oinstall 410752 Jan 13 03:48 ologgerd.trm
-rw-rw---- 1 root oinstall 2742627 Jan 13 03:48 ologgerd.trc

Apart from the Clusterware service (daemon) trace files, the diagnostic location also contains the client utility trace files, as shown below.
OCRCHECK utility trace files
#---// ocrcheck client trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt ocrcheck* | tail -2
-rw-rw---- 1 oracle oinstall 137 Jan 13 01:30 ocrcheck_31338.trm
-rw-rw---- 1 oracle oinstall 310 Jan 13 01:30 ocrcheck_31338.trc

OCRCONFIG utility trace files.
#---// ocrconfig client trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt ocrconfig* | tail -2
-rw-rw---- 1 oracle oinstall 137 Jan 13 01:30 ocrconfig_30866.trm
-rw-rw---- 1 oracle oinstall 312 Jan 13 01:30 ocrconfig_30866.trc

CRSCTL utility trace files.
#---// crsctl client trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt crsctl* | tail -2
-rw-rw---- 1 oracle oinstall 175 Jan 13 01:31 crsctl_1103.trm
-rw-rw---- 1 oracle oinstall 729 Jan 13 01:31 crsctl_1103.trc

Cluster Monitor utility (oclumon) trace files
#---// oclumon client trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt oclumon* | tail -2
-rw-rw---- 1 oracle oinstall 166 Dec 17 03:54 oclumon_2175.trm
-rw-rw---- 1 oracle oinstall 387 Dec 17 03:54 oclumon_2175.trc

Grid Plug and Play utility (GPNP) trace files
#---// gpnp client trace files //----

rac1 {/app/oracle/diag/crs/rac1/crs/trace}: ls -lrt gpn* | tail -2
-rw-rw---- 1 oracle oinstall 205 Jan 13 01:31 gpnptool_664.trm
-rw-rw---- 1 oracle oinstall 608 Jan 13 01:31 gpnptool_664.trc

Therefore, Clusterware trace/log files are now placed under a single centralized log repository. From 12c Release 1, we can troubleshoot Clusterware-related issues easily from one location.
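
With everything under one directory, a quick scan for recent activity becomes a one-liner. A minimal sketch, assuming the trace directory path shown above; TRACE_DIR is an assumption here, so point it at your own ADR crs trace path:

```shell
# List the five most recently modified Clusterware trace files,
# then pull any recent ORA- errors from the alert log.
# TRACE_DIR is an assumption -- substitute your own ADR crs trace path.
TRACE_DIR=${TRACE_DIR:-/app/oracle/diag/crs/$(hostname)/crs/trace}
ls -t "$TRACE_DIR"/*.trc 2>/dev/null | head -5
grep -h 'ORA-' "$TRACE_DIR"/alert.log 2>/dev/null | tail -5
```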

Steps To modify CSS Misscount, Reboottime and Disktimeout


Set parameters to avoid node eviction in a RAC environment :-

Node eviction happens from time to time in Oracle RAC environments on any platform, and troubleshooting and finding the root cause of a node eviction is important for DBAs so that it can be avoided in the future.
To avoid node eviction, adjust the CSS (Cluster Synchronization Services) parameters below to suit your network connectivity.

MissCount (default 30 seconds): the maximum time, in seconds, that a cluster heartbeat (messages sent between nodes over the network interconnect, or through the voting disk; the prime indicator of connectivity) can be missed before the cluster enters a reconfiguration to evict the node.

RebootTime (default 3 seconds): the amount of time allowed for a node to complete a reboot after the CSS daemon has been evicted (i.e. how long the machine takes to shut down completely when you issue reboot -f -n).
DiskTimeout (default 200 seconds): the maximum amount of time allowed for a voting file I/O to complete; if this time is exceeded, the voting disk is marked offline. Note that this is also the amount of time required for initial cluster formation, i.e. when no nodes have previously been up and in a cluster.
MISSCOUNT DEFINITION AND DEFAULT VALUES
The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat
can be missed before entering into a cluster reconfiguration to evict the node.
When using Oracle Clusterware, the default misscount is 30 seconds.

* By default, misscount is less than disktimeout


From 11gR2, these settings can be changed online without taking any node down.
Execute crsctl as root to get the current values:

[oracle@rac2 ~]$ crsctl get css misscount
CRS-4678: Successful get misscount 60 for Cluster Synchronization Services.
[oracle@rac2 ~]$ crsctl get css reboottime
CRS-4678: Successful get reboottime 3 for Cluster Synchronization Services.
[oracle@rac2 ~]$ crsctl get css disktimeout
CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.


Execute crsctl as root to modify the values, where n is the desired number of seconds:

$CRS_HOME/bin/crsctl set css misscount n
$CRS_HOME/bin/crsctl set css reboottime n
$CRS_HOME/bin/crsctl set css disktimeout n
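
For example, to raise disktimeout and then confirm the change (run as root; 300 here is only an illustrative value, not a recommendation):

```
# $CRS_HOME/bin/crsctl set css disktimeout 300
# $CRS_HOME/bin/crsctl get css disktimeout
```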


Check and fix file permissions on Grid Infrastructure environment


Steps to fix the GRID_HOME permissions :-

1. Validate the <GRID_HOME> using the cluvfy tool.

$ cluvfy comp software -n all -verbose
This, however, validates only the binary files and their permissions.

2. Check the permissions from the following 2 files which are created during Grid Infrastructure installation.

In $GRID_HOME/crs/utl (for 11.2 and 12.1.0.1) and <GRID_HOME>/crs/utl/<hostname> (for 12.1.0.2) directory:

- crsconfig_dirs, which lists all directories under <GRID_HOME> and their permissions

- crsconfig_fileperms, which lists the files in <GRID_HOME>, their permissions, and their locations.
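
These reference files make it possible to generate fix commands mechanically. A hedged sketch that prints chown/chmod commands for review rather than running them; the whitespace-separated field layout (path, owner, group, mode) is an assumption here, so verify it against your own crsconfig_fileperms before acting on the output:

```shell
# Print chown/chmod commands from a permissions reference file for review.
# Assumed field layout per line: <path> <owner> <group> <mode> -- verify first.
PERMS_FILE=${PERMS_FILE:-/tmp/crsconfig_fileperms}
if [ -r "$PERMS_FILE" ]; then
  awk 'NF >= 4 { printf "chown %s:%s %s\nchmod %s %s\n", $2, $3, $1, $4, $1 }' "$PERMS_FILE"
fi
```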

3. The permissions can be reverted to their original values with rootcrs.pl or roothas.pl.

We need to stop the Clusterware-related resources before changing the permissions on the GRID_HOME.

For 11.2.0.4:

For clustered Grid Infrastructure, as root user

# cd <GRID_HOME>/crs/install/
# ./rootcrs.pl -init

For Standalone Grid Infrastructure, as root user

# cd <GRID_HOME>/crs/install/
# ./roothas.pl  -init

For 12.1.0.2:

For clustered Grid Infrastructure, as root user

# cd <GRID_HOME>/crs/install/
# ./rootcrs.sh -init


Now, check that the GRID_HOME binaries have the correct permissions and group ownership.

Start the cluster:

crsctl start crs


Article 1


      ASM configuration error "ASM Instance already exists"


1- When checked, the ASM instance was not running:
   ps -ef | grep smon
--- no asm_smon_+ASM process present

2- Starting the ASM instance manually returned the following error:

export ORACLE_SID=+ASM
sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Mon Apr 21 08:37:49 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
SQL> exit


1- Checked the cluster services:

./crsctl stat res -t

2- CSSD was not running. Started ora.cssd resource using root:

./crsctl start resource ora.cssd

CRS-2672: Attempting to start 'ora.cssd' on 'GMIDB'
CRS-2672: Attempting to start 'ora.diskmon' on 'GMIDB'
CRS-2676: Start of 'ora.diskmon' on 'GMIDB' succeeded
CRS-2676: Start of 'ora.cssd' on 'GMIDB' succeeded

3- Tried starting the ASM manually again:

sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Mon Apr 21 08:56:49 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-00099: warning: no parameter file specified for ASM instance
Total System Global Area 1135747072 bytes
Fixed Size                  2260728 bytes
Variable Size            1108320520 bytes
ASM Cache                  25165824 bytes
ORA-15110: no diskgroups mounted

No PFILE was found for the instance startup. Checking $GRID_HOME/dbs showed no pfile named init+ASM.ora, which needs to be created to resolve this issue.

4- Checked the v$asm_diskgroup view for the ASM disk groups and their status; no disk groups were found:

SQL> select name, state from v$asm_diskgroup;
no rows selected

5- Started the ASMCA GUI to check whether it showed any disk groups there; none were found.
Created the new ASM disk groups DBFILES and ARCH using the asmca utility.
It reports the success of the disk group creation and then mounts the disk groups.

6- Check again from v$asm_diskgroup view:

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DBFILES                         MOUNTED
ARCH                            MOUNTED

7- After restarting the ASM instance, it again gave the same error as in step 3, but this time v$asm_diskgroup showed the disk groups in DISMOUNTED state:

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DBFILES                         DISMOUNTED
ARCH                            DISMOUNTED

8- Mounted them manually:

SQL> alter diskgroup DBFILES mount;
SQL> alter diskgroup ARCH mount;

which mounted them successfully.

9- Now create a pfile from memory; it will create the init+ASM.ora file in $GRID_HOME/dbs:

SQL> create pfile from memory;

10- Edit the init+ASM.ora file and set the following parameters:

[oracle@GMIDB:/u01/app/oracle/product/12.2.0/grid/dbs]vi init+ASM.ora

instance_type=ASM
diagnostic_dest='/u01/app/oracle/'
asm_diskgroups='DBFILES','ARCH'
asm_power_limit=1
large_pool_size=12M
remote_login_passwordfile='EXCLUSIVE'

11- Shut down the ASM instance and start it using the parameter file:

[oracle@GMIDB dbs]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Tue Apr 21 12:45:37 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>  startup pfile='/u01/app/oracle/product/12.2.0/grid/dbs/init+ASM.ora'
ASM instance started

Total System Global Area 1135747072 bytes
Fixed Size                  2260728 bytes
Variable Size            1108320520 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL>

**** Here another issue is resolved, as the disk groups are now mounted automatically.

12- Now tried to create an SPFILE from the pfile, so that the ASM instance automatically selects the SPFILE, but got error ORA-29786: SIHA attribute GET failed:

SQL> create spfile from pfile;
create spfile from pfile
*
ERROR at line 1:
ORA-29786: SIHA attribute GET failed with error [Attribute 'SPFILE' sts[200]
lsts[0]]

SQL> create spfile='/u01/app/oracle/product/12.2.0/grid/dbs/spfile+ASM.ora' from pfile='/u01/app/oracle/product/12.2.0/grid/dbs/init+ASM.ora'
  2  ;
create spfile='/u01/app/oracle/product/12.2.0/grid/dbs/spfile+ASM.ora' from pfile='/u01/app/oracle/product/12.2.0/grid/dbs/init+ASM.ora'
*
ERROR at line 1:
ORA-29786: SIHA attribute GET failed with error [Attribute 'SPFILE' sts[200] lsts[0]]

13- Check the status of the instance and disk groups again:

SQL> select status,instance_name from v$instance;

STATUS       INSTANCE_NAME
------------ ----------------
STARTED      +ASM

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DBFILES                         MOUNTED
ARCH                            MOUNTED

The disk groups are fine.

14- Exit the SQL console and check whether the ASM resource is online in the cluster services:

[oracle@GMIDB dbs]$ srvctl config asm
PRCR-1001 : Resource ora.asm does not exist

This means no ASM resource has been created to communicate with the cluster services.

15- Add the ASM resource using the srvctl command:

[oracle@GMIDB dbs]$ srvctl add asm
[oracle@GMIDB dbs]$ srvctl config asm
ASM home: /u01/app/oracle/product/12.2.0/grid
ASM listener: LISTENER
Spfile:
ASM diskgroup discovery string: ++no-value-at-resource-creation--never-updated-through-ASM++

16- Now check the status of newly created ASM resource and start it using srvctl command:

[oracle@GMIDB dbs]$ srvctl status asm
ASM is not running.
[oracle@GMIDB dbs]$ srvctl start asm
[oracle@GMIDB dbs]$
[oracle@GMIDB dbs]$ srvctl status asm
ASM is running on GMIDB

17- Now connect with sqlplus as sysasm and create spfile:

[oracle@GMIDB dbs]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Tue Apr 21 13:01:37 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string

SQL> create spfile from pfile;

File created.

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown
SQL> startup
ASM instance started

Total System Global Area 1135747072 bytes
Fixed Size                  2260728 bytes
Variable Size            1108320520 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

Article 0


             Renaming ASM DiskGroup in 11gR2


In Oracle 11gR2, we can use the new RENAMEDG utility to rename an ASM disk group.
The tool works in two phases.
In the first phase it generates a configuration file, and in
the second phase it discovers the disks and renames the disk group.

Below are the steps to rename an ASM disk group:

Set the oracle environment to ASM instance


 [oracle@ovm-HPSEB RMAN]$ . oraenv

ORACLE_SID = [HPSEB] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle

Check the available diskgroups

[oracle@ovm-HPSEB RMAN]$ asmcmd
ASMCMD> ls
DATA/
RECO/

List the ASM disks status:

ASMCMD> lsdsk
Path
ORCL:ASM_DATA1
ORCL:ASM_REC1


Check the current disk stats :

[oracle@ovm-HPSEB RMAN]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 28 11:20:07 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option

SQL> select group_number, name, TOTAL_MB, FREE_MB from V$asm_disk_stat;

GROUP_NUMBER NAME        TOTAL_MB  FREE_MB
------------ ---------- --------- ---------
           1 ASM_DATA1      25600     20292
           2 ASM_REC1       25600     25550

Dismount the disk group that needs to be renamed (in a cluster, dismount it from all nodes):

ASMCMD> ls
DATA/
RECO/

ASMCMD> umount RECO

ASMCMD> ls
DATA/

Note: In a cluster, verify that the disk group is dismounted on every node:

$ crsctl status resource ora.RECO.dg

NAME=ora.RECO.dg
TYPE=ora.diskgroup.type
TARGET=OFFLINE, OFFLINE
STATE=OFFLINE, OFFLINE

Use the renamedg command to rename the dismounted ASM disk group:

[oracle@ovm-HPSEB RMAN]$ renamedg phase=both dgname=RECO newdgname=REC verbose=true

Parsing parameters..

Parameters in effect:

         Old DG name       : RECO
         New DG name          : REC
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RECO newdgname=REC verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:ASM_REC1 with disk number:0 and timestamp (33002988 -959425536)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:ASM_REC1 with disk number:0 and timestamp (33002988 -959425536)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for ORCL:ASM_REC1
Modifying the header
Completed phase 2
Terminating kgfd context 0x7f42391990a0
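
Instead of phase=both, the two phases can also be run separately, which allows the generated configuration file to be inspected before any disk headers are modified (a sketch; confirm the exact parameter names with renamedg -help on your release):

```
renamedg phase=one dgname=RECO newdgname=REC config=/tmp/reco_rename.conf verbose=true
renamedg phase=two dgname=RECO newdgname=REC config=/tmp/reco_rename.conf verbose=true
```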

Mount the renamed disk group under its new name:

[oracle@ovm-HPSEB RMAN]$ asmcmd mount REC
[oracle@ovm-HPSEB RMAN]$ asmcmd
ASMCMD> ls
DATA/
REC/

Article 0


DG Broker Configuration for Physical Standby & Switchover


It is assumed that the primary and standby are already configured and in sync, with archive log transport and apply working fine.

1- On both Primary and standby databases, set the DG_BROKER_START=TRUE to start the DG Broker process:

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;

2- Edit the listener.ora file on both nodes and add a static DGMGRL listener entry to avoid ORA-12154, which can be observed on startup of the standby database after performing a switchover.

Ensure that GLOBAL_DBNAME is set to <db_unique_name>_DGMGRL:

SID_LIST_LISTENER =
  (SID_LIST =
   (SID_DESC =
        (GLOBAL_DBNAME = HPFMDM_dgmgrl)
        (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
        (SID_NAME = HPFMDM)
    )
   )
  
3- In tnsnames.ora on both sides, use the previously added to_primary and to_standby TNS entries.
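
For reference, the to_primary and to_standby entries would look like the following; the host names here are placeholders for illustration, not the actual values from this environment:

```
to_primary =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HPFMDM))
  )

to_standby =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = DRHPFDMDDB))
  )
```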

Create Configuration:

4- Start DGMGRL on the primary and connect as SYS:

[oracle@HPFMDMdb agent12c]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL>
DGMGRL> connect sys/********
Connected.

DGMGRL> create configuration 'HPFMDM' as primary database is 'HPFMDM' connect identifier is 'to_primary';
Configuration "HPFMDM" created with primary database "HPFMDM"
DGMGRL>

5- ADD the STANDBY database to the configuration:

DGMGRL> ADD DATABASE 'DRHPFDMDDB' as CONNECT IDENTIFIER IS 'to_standby';
Database "DRHPFDMDDB" added
DGMGRL>
DGMGRL>

6- Check the configuration

DGMGRL> show configuration

Configuration - HPFMDM

  Protection Mode: MaxPerformance
  Databases:
    HPFMDM   - Primary database
    DRHPFDMDDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL>

7- Enable the configuration

DGMGRL> enable configuration
Enabled.

DGMGRL> show configuration

Configuration - HPFMDM

  Protection Mode: MaxPerformance
  Databases:
    HPFMDM   - Primary database
    DRHPFDMDDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

8- View the Primary and standby database properties

For Primary:

DGMGRL> show database 'HPFMDM'

Database - HPFMDM

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HPFMDM

Database Status:
SUCCESS

DGMGRL>

For Standby:

DGMGRL> show database 'DRHPFDMDDB'

Database - DRHPFDMDDB

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      342.00 KByte/s
  Real Time Query: OFF
  Instance(s):
    DRHPFDMDDB

Database Status:
SUCCESS

DGMGRL>
DGMGRL> show database verbose  'HPFMDM'

Database - HPFMDM

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HPFMDM

  Properties:
    DGConnectIdentifier             = 'to_primary'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '+DATA/DRHPFDMDDB, +DATA/HPFMDM'
    LogFileNameConvert              = '+DATA/DRHPFDMDDB, +DATA/HPFMDM'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    SidName                         = 'HPFMDM'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.160)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=HPFMDM_dgmgrl.HBMSU.AC.AE)(INSTANCE_NAME=HPFMDM)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '+REC/HPFMDM/archivelog/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL>

***** LogXptMode='ASYNC' means Data Guard is in MaxPerformance mode. If it needs to be changed from MaxPerformance to MaxAvailability mode, then LogXptMode must be changed to 'SYNC' on the standby, using:

DGMGRL> edit database 'DRHPFDMDDB' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL>


9- Monitoring the DG Broker configuration

DGMGRL> show database 'HPFMDM' statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'DRHPFDMDDB' statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'HPFMDM' logxptstatus
LOG TRANSPORT STATUS
PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME               STATUS
                HPFMDM               DRHPFDMDDB                     

DGMGRL>
DGMGRL> show database 'HPFMDM' InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE


10- Performing a switchover

Note: In this case, currently the Primary Database is HPFMDM and the Standby database is DRHPFDMDDB.

DGMGRL> switchover to 'DRHPFDMDDB'
Performing switchover NOW, please wait...
New primary database "DRHPFDMDDB" is opening...
Operation requires shutdown of instance "HPFMDM" on database "HPFMDM"
Shutting down instance "HPFMDM"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "HPFMDM" on database "HPFMDM"
Starting instance "HPFMDM"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "DRHPFDMDDB"

DGMGRL> SHOW CONFIGURATION;

Configuration
  Name:                HPFMDM
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Databases:
    DRHPFDMDDB - Primary database
    HPFMDM - Physical standby database

Fast-Start Failover: DISABLED

Current status for "HPFMDM":
SUCCESS

Cloning a PDB in the Same CDB in 12c

Hot cloning is the ability to clone an active database and create a copy of it in a multitenant database environment.
      In Oracle 12c, cloning provides the capability and flexibility of rapid database provisioning.
     Here we hot clone an existing PDB within the same CDB using the CREATE PLUGGABLE DATABASE statement.
     Existing PDB      - PSDBT
     New clone PDB     - PCLONE

step 1. Connect to the CDB
    SQL> show con_name

    CON_NAME
    ------------
    CDB$ROOT
step 2. Check the available PDBs
    SQL> show pdbs

    CON_ID CON_NAME   OPEN MODE  RESTRICTED
    ------ ---------- ---------- ----------
         2 PDB$SEED   READ ONLY  NO
         3 PSDBT      READ WRITE NO
         4 APPS       READ WRITE NO
step 3. Check the datafile paths
        A PDB named PSDBT is going to be cloned, so before cloning it we must check the physical paths of its datafiles.

    SQL> select name from v$datafile where con_id=3;
  
    NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/acs/psdbt/system01.dbf
    /u02/app/oracle/oradata/acs/psdbt/sysaux01.dbf
    /u02/app/oracle/oradata/acs/psdbt/undotbs01.dbf
    /u02/app/oracle/oradata/acs/psdbt/user01.dbf
    /u02/app/oracle/oradata/acs/psdbt/user02.dbf
step 4. Create a directory
        Create a physical directory in the operating system to hold
    the cloned PDB's datafiles.
    SQL> !
    [oracle@ramkumar ~]$ mkdir /u02/app/oracle/oradata/acs/pclone
    [oracle@ramkumar pclone]$ exit
    exit
step 5. Clone the PDB
        The PSDBT pluggable database is cloned to the PCLONE pluggable database using the statement below.
  
    SQL> create pluggable database pclone from psdbt
      2 file_name_convert=('psdbt','pclone');
    Pluggable database created.
step 6. Check the PDB status
        A newly cloned pluggable database is always in MOUNTED state.
    SQL> show pdbs

    CON_ID CON_NAME   OPEN MODE  RESTRICTED
    ------ ---------- ---------- ----------
         2 PDB$SEED   READ ONLY  NO
         3 PSDBT      READ WRITE NO
         4 APPS       READ WRITE NO
         5 PCLONE     MOUNTED
step 7. Open the PDB and Check the Status
    SQL> alter pluggable database pclone open;
    Pluggable database altered.

    SQL> show pdbs

    CON_ID CON_NAME   OPEN MODE  RESTRICTED
    ------ ---------- ---------- ----------
         2 PDB$SEED   READ ONLY  NO
         3 PSDBT      READ WRITE NO
         4 APPS       READ WRITE NO
         5 PCLONE     READ WRITE NO
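
A cloned PDB reverts to MOUNTED after a CDB restart unless its open mode is saved. From 12.1.0.2 onward the state can optionally be preserved with SAVE STATE (a sketch):

```sql
-- Preserve PCLONE's current open mode across CDB restarts (12.1.0.2+).
alter pluggable database pclone save state;
```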

Hot Cloning of a Remote PDB in 12c

The remote CDB is in local undo mode and archivelog mode, so we do not need to put the remote database into read-only mode.
CDB1 - the remote container database holding the PDB to be cloned, named PSDBT.
CDB2 - the container database into which we are going to clone the PDB.
Step 1. Connect to the remote CDB.
    cdb1> select name, log_mode, cdb from v$database;

    NAME  LOG_MODE   CDB
    ----- ---------- ---
    CDB1  ARCHIVELOG YES

Step 2. Check the local undo mode of the remote CDB.
    cdb1> select property_name, property_value
      2  from database_properties
      3  where property_name='LOCAL_UNDO_ENABLED';

    PROPERTY_NAME      PROPERTY_VALUE
    ------------------ --------------
    LOCAL_UNDO_ENABLED TRUE

Step 3. Connect to the target CDB.
    cdb2> select name, log_mode, cdb from v$database;

    NAME  LOG_MODE     CDB
    ----- ------------ ---
    CDB2  NOARCHIVELOG YES

Step 4. Check the local undo mode of the target CDB.
    cdb2> select property_name, property_value
      2  from database_properties
      3  where property_name='LOCAL_UNDO_ENABLED';

    PROPERTY_NAME      PROPERTY_VALUE
    ------------------ --------------
    LOCAL_UNDO_ENABLED TRUE

Step 5. Check connectivity with tnsping.

    Check the network configuration between the source and target servers:
     tnsping cdb1 2
     tnsping cdb2 2
     [oracle@ramkumar ~]$ tnsping psdbt 2
    TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 25-JUN-2018 09:56:14
    Copyright (c) 1997, 2016, Oracle.  All rights reserved.
    Used parameter files:
    /u02/app/oracle/product/12.2.0.1/db_1/network/admin/sqlnet.ora

    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST =   
acs.localdomain)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME =   
psdbt.localdomain)))
    OK (0 msec)
    OK (0 msec)

Step 6. Create a database link in the target CDB.
        Create a database link named cdb_ln using
    the username and password of an existing common user in the remote CDB.
    
    cdb2>create database link cdb_ln
      2  connect to c##ram
      3  identified by oracle
      4  using 'psdbt';
    Database link created.
Step 7. Check the link
    cdb2> column DB_LINK format a10
    cdb2> column username format a10
    cdb2> column host format a10
    cdb2> select db_link,username,host from dba_db_links;
    DB_LINK    USERNAME   HOST
    ---------- ---------- ----------
    SYS_HUB               CDB2
    CDB_LN     C##RAM    psdbt

Step 8. Check the PDB datafile path using link.
    cdb2>select name from v$datafile@cdb_ln;
    NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/ram/system01.dbf
    /u02/app/oracle/oradata/ram/sysaux01.dbf
    /u02/app/oracle/oradata/ram/undotbs01.dbf
    /u02/app/oracle/oradata/ram/user01.dbf
    /u02/app/oracle/oradata/ram/user02.dbf
    
Step 9. Create a directory at a suitable location in the OS.
    cdb2>!
    [oracle@ramkumar ~]$ mkdir /u02/app/oracle/oradata/pdbnew
    [oracle@ramkumar ~]$ exit
    exit
Step 10. Clone the Remote PDB using database link.
    cdb2>create pluggable database pdbnew
      2  from psdbt@cdb_ln
      3  file_name_convert=('/u02/app/oracle/oradata/ram/','/u02/app/oracle/oradata/pdbnew/');
    Pluggable database created.
  
Step 11. Check the Pluggable database mode
    cdb2>show pdbs
  
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 LONEPDB                        MOUNTED
             4 LTWOPDB                        MOUNTED
             5 PDBNEW                         MOUNTED
Step 12. Open the Cloned PDB in read only mode
    cdb2>alter pluggable database pdbnew open read only;
  
    Pluggable database altered.
  
    cdb2>show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 LONEPDB                        MOUNTED
             4 LTWOPDB                        MOUNTED
             5 PDBNEW                         READ ONLY  YES
Step 13. Switch the session to the newly cloned PDB.

    cdb2>alter session set container=pdbnew;
    Session altered.

Step 14. Check the Consistency of cloned records.
    cdb2>show con_name;
    CON_NAME
    ------------------------------
    PDBNEW

    cdb2>select count(*) from dba_tables;
      COUNT(*)
    ----------
          1751
    
Step 15. Switch the session back to the root CDB. 
  
    cdb2>alter session set container=cdb$root;
  
    Session altered.
  
  
Step 16. Check the record count over the database link.
    cdb2>select count(*) from dba_tables@cdb_ln;
    
      COUNT(*)
    ----------
          1751


Step 17. Close the read-only mode and open the PDB normally.    
  
    cdb2>alter pluggable database pdbnew close immediate;
  
    Pluggable database altered.

    cdb2>alter pluggable database pdbnew open;

    Pluggable database altered.

Profile option with modification date and user in EBS

+================================================+
| Profile option with modification date and user |
+================================================+

SELECT
t.user_profile_option_name
, profile_option_value
, v.CREATION_DATE
, v.LAST_UPDATE_DATE
, v.LAST_UPDATE_DATE - v.CREATION_DATE "Change Date"
, (SELECT UNIQUE user_name
FROM fnd_user
WHERE user_id = v.created_by) "Created By"
, (SELECT user_name
FROM fnd_user
WHERE user_id = v.last_updated_by) "Last Update By"
FROM
fnd_profile_options o
, fnd_profile_option_values v
, fnd_profile_options_tl t
WHERE
o.profile_option_id = v.profile_option_id
AND o.application_id = v.application_id
AND o.profile_option_name = t.profile_option_name
AND level_id = 10001
AND t.LANGUAGE IN (SELECT language_code
FROM fnd_languages
WHERE installed_flag = 'B'
UNION
SELECT nls_language
FROM fnd_languages
WHERE installed_flag = 'B')
ORDER BY user_profile_option_name;

Find all the profile options recently changed in EBS

+===============================================+
| Find all the profile options recently changed |
+===============================================+

select p.profile_option_name SHORT_NAME, n.user_profile_option_name "PROFILE NAME",
decode(v.level_id, 10001, 'Site', 10002, 'Application',
10003, 'Responsibility', 10004, 'User', 10005, 'Server',
10006, 'Organization', 10007, 'SERVRESP', 'UnDef') LEVEL_SET,
decode(to_char(v.level_id), '10001', '',
'10002', app.application_short_name, '10003', rsp.responsibility_key,
'10005', svr.node_name, '10006', org.name, '10004', usr.user_name,
'10007', 'Serv/resp', 'UnDef') "CONTEXT", v.profile_option_value VALUE, v.LAST_UPDATE_DATE 
from fnd_profile_options p,
fnd_profile_option_values v,
fnd_profile_options_tl n,
fnd_user usr,
fnd_application app,
fnd_responsibility rsp,
fnd_nodes svr,
hr_operating_units org
where p.profile_option_id = v.profile_option_id (+)
and p.profile_option_name = n.profile_option_name
--and upper(n.user_profile_option_name) like upper('BNE%')
--and trunc(v.LAST_UPDATE_DATE) > trunc(sysdate-170)
and usr.user_id (+) = v.level_value
and rsp.application_id (+) = v.level_value_application_id
and rsp.responsibility_id (+) = v.level_value
and app.application_id (+) = v.level_value
and svr.node_id (+) = v.level_value
and org.organization_id (+) = v.level_value
and v.LAST_UPDATE_DATE is not null
order by last_update_date desc, short_name, level_set;

How to remove conflict patches in Weblogic

How to remove conflict patches in weblogic by using bsu.sh

ISSUE:

When trying to apply a new patch to Weblogic, the following error shows up :-

[appl@ebsdev bsu]$ ./bsu.sh -prod_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/wlserver_10.3/ -patch_download_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/utils/bsu/cache_dir/ -patchlist=GFWX -verbose -install
Checking for conflicts...

Conflict(s) detected - resolve conflict condition and execute patch installation again
Conflict condition details follow:
Patch EJUW is mutually exclusive and cannot coexist with patch(es): 7Z9S,DEM4,Z62H,HSZB,5C71,U81V


Solution:

[appl@ebsdev bsu]$ ./bsu.sh -prod_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/wlserver_10.3/ -patchlist=7Z9S,DEM4,Z62H,HSZB,5C71,U81V -verbose -remove

Then retry the patch installation.

Weblogic Smart Update (BSU) Hangs when Starting or Errors with Exception in thread "Main Thread" java.lang.OutOfMemoryError

ISSUE:-

The Weblogic Smart Update utility hangs and throws the error below:

2018-05-23 09:29:30,344 DEBUG [main] com.bea.plateng.patch.gui.PatchMediator - java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded

java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded

REASON:-

The bsu utility uses Java to initialize and it is running out of heap space.

SOLUTION:-

Windows:

1. Edit the file <WL_HOME>\utils\bsu\bsu.cmd

2. Change the setting:

set MEM_ARGS=-Xms256m -Xmx512m

to

set MEM_ARGS=-Xms1024m -Xmx1024m

3. Start bsu.cmd; it should now launch Oracle Smart Update.



Unix / Linux:

1. Edit the file $WL_HOME/utils/bsu/bsu.sh

2. Change the setting:

MEM_ARGS="-Xms256m -Xmx512m"

to

MEM_ARGS="-Xms1024m -Xmx1024m"


3. Start bsu.sh; it should now launch Oracle Smart Update.
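The Unix edit above can also be scripted. The sketch below is a hedged example that applies the same MEM_ARGS change with sed; it works on a scratch stand-in file here, and on a real system you would point BSU at $WL_HOME/utils/bsu/bsu.sh instead.

```shell
# Sketch of the step-2 edit scripted with sed. A scratch file stands in for
# the real $WL_HOME/utils/bsu/bsu.sh; point BSU at that path on a real system.
BSU="$(mktemp)"
printf 'MEM_ARGS="-Xms256m -Xmx512m"\n' > "$BSU"    # stand-in for bsu.sh
cp "$BSU" "$BSU.bak"                                # keep a backup before editing
sed -i 's/-Xms256m -Xmx512m/-Xms1024m -Xmx1024m/' "$BSU"
grep MEM_ARGS "$BSU"                                # shows the raised heap settings
```

Keeping the .bak copy makes it easy to revert if Smart Update misbehaves with the larger heap.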


Reference:

E-WL: Weblogic Smart Update (BSU) Hangs when Starting or Errors with Exception in thread "Main Thread" java.lang.OutOfMemoryError (Doc ID 2002949.1)




adop pre-req admsi.pl not getting invoked

ISSUE:-

When applying an application patch in an R12 environment, the patch readme lists a script called "admsi.pl" that needs to be run as a pre-requisite.

But when running it, we face the below error.

Invoking Oracle Patch Application Assistant…. Please set the DISPLAY variable and re-run this script

SOLUTION:-

Run the script in command-line (non-GUI) mode instead:
perl $AD_TOP/bin/admsi.pl -cli

ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property during Apex install in 12.2

Issue:

Fresh apex install failed with the below error.

ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property during Apex install in 12.2

This is due to a previous version of flow_files installed in the database.

Solution:

Check whether an old version is installed; if so, drop the schema and retry the installation.

select username from dba_users where username like '%FLOW%';

ERROR: ORA-00257: archiver error. Connect internal only, until freed

Cause:

The archiver process received an error while trying to archive a redo log because the archive destination or the Fast Recovery Area (FRA) is full.

Solution:

This can be solved in two ways.

1. Increase FRA
2. Clear old archives.


Increase FRA:

SQL> show parameter db_recovery_file_dest;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      C:\oracle\product\12.2.0.1\flash_recovery_area
db_recovery_file_dest_size           big integer 2G

SQL> ALTER SYSTEM SET db_recovery_file_dest_size='10G' SCOPE=BOTH;

Clear old archives:

Find the space used in the flash recovery area by using the following SQL:

col space_limit heading "Space Allocated (MB)" format 999999
col space_used heading "Space Used (MB)" format 99999
col name format a40
select name, round(space_limit/1048576) As space_limit,round(space_used/1048576) As space_used
from v$RECOVERY_FILE_DEST;
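As a quick sanity check (a hedged sketch, not part of the original procedure), the MB figures returned by that query can be turned into a usage percentage in the shell; the 2048/1900 values below are made-up sample numbers.

```shell
# Hedged helper: turn the space_limit/space_used figures (in MB) into a
# usage percentage. The 2048/1900 sample values below are made up.
fra_pct() {
    awk -v limit="$1" -v used="$2" 'BEGIN { printf "%.1f\n", used * 100 / limit }'
}
fra_pct 2048 1900    # a nearly full 2G FRA -> 92.8
```

Anything above roughly 90% is a sign that ORA-00257 is imminent and the FRA needs to be grown or cleaned.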

If flashback logs are using most of the space, reduce the flashback retention target (DB_FLASHBACK_RETENTION_TARGET), or disable and re-enable flashback database to purge them.

If archived logs are using most of the space, back them up to a different location or use the commands below to remove them.

 RMAN>delete archivelog until time 'SYSDATE-1';
 or,
 RMAN>delete archivelog all;


How to reinstate the old Primary as a Standby after Failover

Cause:

The primary database crashed and a failover to the standby was performed.


Solution:

We can reinstate the old primary as a standby by flashing it back, provided Flashback Database was enabled on it before the failover.

Step 1:

Please execute the below in new primary

SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

Step 2:

Flashback the old primary to the above SCN (taken from Step 1).

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;

Step 3:

Convert to physical standby.

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Step 4:

Set up Data Guard log transport, then start managed recovery on the reinstated standby and verify log sync:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Import fails with ORA-39126,ORA-06512,ORA-01403

Cause:

The issue happens when migrating using export from 11gR2 on Windows and importing into a 12cR2 Linux server.

Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"NOMU"."TIPOS_ACCON"]
ORA-01403: no data found

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6266

----- PL/SQL Call Stack -----
object line object
handle number name
c0000000da9e5770 14916 package body SYS.KUPW$WORKER
c0000000da9e5770 6293 package body SYS.KUPW$WORKER
c0000000da9e5770 12280 package body SYS.KUPW$WORKER
c0000000da9e5770 3278 package body SYS.KUPW$WORKER
c0000000da9e5770 6882 package body SYS.KUPW$WORKER
c0000000da9e5770 1259 package body SYS.KUPW$WORKER
c0000000da9cdf78 2 anonymous block


Solution:

The issue is caused by trying to import all schemas, including SYSMAN and other system schemas. The solution is to exclude the system-related schemas and import only the application schemas.

ORA-02266: unique/primary keys in table referenced by enabled foreign keys

Cause:

When issuing truncate on a table we get the below error.

SQL> truncate table OWNER.TABLE;
truncate table OWNER.TABLE
                         *
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

This is because OWNER.TABLE is referenced by enabled foreign keys from other tables.

Use the below query to find the relationships.

set lines 200
col owner for a20
col status for a10
col deferrable for a15
col validated for a18
col generated for a15
col deferred for a15
select owner,constraint_name,constraint_type,table_name,status,DEFERRABLE,DEFERRED,VALIDATED,GENERATED from dba_constraints where table_name='TAB1' and constraint_type in ('R','P');


Solution:

Disable the constraints identified by the above query, then re-run the truncate.

alter table OWNER_TABLE.TAB1 disable constraint PK_TAB1 cascade;
alter table OWNER_TABLE.TAB1 disable constraint FK_TAB1_30 cascade;
alter table OWNER_TABLE.TAB1 disable constraint FK_ERROR_3A cascade;

After that, run the truncate. Remember to re-enable the constraints afterwards with ALTER TABLE ... ENABLE CONSTRAINT.

SQL> truncate table OWNER.TABLE;




Submenu And Function Listing

+==============================+
| Submenu And Function Listing |
+==============================+

Purpose/Description:
This query lists the functions and submenus attached to a specific menu.

SELECT
a.user_menu_name,
c.prompt
, c.description
FROM
apps.fnd_menus_tl a
, fnd_menu_entries_tl c
WHERE
a.menu_id = c.menu_id
AND a.user_menu_name like '%Navigator Menu%System Administrator GUI%';

ORA-12154: TNS: could not resolve the connect identifier specified


ORA-12154 occurs when the transparent network substrate (TNS) cannot resolve the service name. The service name is specified in the TNSNames.ora file, which is located in your %ORACLE_HOME%\network\admin\ folder. Most often, this error occurs when information in the TNSNames.ora file is incorrect. For example:
  • The .world extension is not included on the database name.
  • The SERVICE_NAME or HOST variable is incorrect.
To resolve this issue, try one of the three following troubleshooting options, in the order listed below.
Option 1: Edit TNSNames.ora
 Provide the full database name, including the .world extension in both of the following locations:
  • The TNSNames.ora file.
And
  • The Server text box of the Connect page.
Option 2: Ensure that Tableau Server Run As User account has permissions to TNSNames.ora (Tableau Server only)
If you have Tableau Server installed, complete the procedure below to ensure that the Tableau Server Run As user account has permissions to the location of the TNSNames.ora file. If the Run As user account does not have permissions, Tableau Server is unable to access the Oracle Data source details.
Step 1 
Verify the location of the TNSNames.ora file, or the equivalent SQLNET.ora and LDAP.ora files on the machine.
Note: By default, the TNSNames.ora file is located in the <oracle-directory>\network\admin directory. For example, C:\Oracle_Client\network\admin.
Step 2 
Confirm that the TNS_ADMIN variable points to the location of the file or files described in step 1.
Note: To check the TNS_ADMIN variable, click the Start button, and select Control Panel > System. Click Advanced system settings, click the Advanced tab, and click the Environment Variables button.
Step 3 
On the Windows Start button, select Tableau Server <version> > Configure Tableau Server.
Step 4 
Under Server Run As User, copy the information in the User text box.
Step 5 
Go to the folder where the TNSNames.ora file is located.
Step 6 
Right-click the folder and select Properties. Click the Security tab and click the Edit button.
Step 7 
Under Group or user names, click the Add button.
Step 8 
In the Enter the object names to select text box, paste the details of the Run As User account you copied in Step 4.
Step 9 
When finished, click OK.
Step 10 
In the Permissions area, ensure that the Full control and Modify check boxes are selected.
Step 11 
Click OK to close the dialog boxes.

Option 3: Verify that all information in TNSNames.ora is correct
If the above troubleshooting steps do not resolve the issue, complete the procedure below to verify that the other information in the TNSNames.ora file is correct.
An example of a TNSNames.ora file is shown here:
QAORCL10.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MY_HOST_NAME)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = MY_SERVICE_NAME)
    )
  )
The three variables of interest in the file are HOST, PORT, and SERVICE_NAME. Copy these variables from the TNSNames.ora file and keep them available; they are case sensitive. The following steps describe how to provide these variables for your connection.
Step 1 
From the Tableau Desktop start page, select Connect to Data.
Step 2 
On the Connect page, click Oracle.
Step 3 
Provide the following information from the TNSNames.ora file:
  • In the Server name text box, type the HOST name.
  • In the Service text box, type the SERVICE_NAME.
  • In the Port text box, type the PORT number.
  • Specify whether to use Windows Authentication or a specific user name and password, and then click Connect.
Note: Variables are case sensitive.
Step 4 
Select a schema from the Schema drop-down list, drag a table to the join area, and then click Go to Worksheet.
Step 5 
Important: 
  • Make sure that you save the TNSNames.ora file you use in ASCII encoding. Any other encoding besides ASCII, for example UTF-8 or Unicode, causes the ORA-12154 error message.
  • These steps are usually required even if the Oracle software is already installed on the machine.
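On Linux, a quick way to confirm the first point is to count the bytes outside the ASCII range. The sketch below is a hedged example: it seeds a scratch file with a UTF-8 byte-order mark as a stand-in for a mis-saved tnsnames.ora, then counts the offending bytes.

```shell
# Hedged sketch: count bytes outside the ASCII range in a tnsnames.ora.
# A scratch file seeded with a UTF-8 BOM stands in for the real file.
f="$(mktemp)"
printf '\357\273\277QAORCL10.world = (DESCRIPTION = ...)\n' > "$f"   # BOM on purpose
nonascii=$(LC_ALL=C tr -d '\000-\177' < "$f" | wc -c)
if [ "$nonascii" -gt 0 ]; then
    echo "non-ASCII bytes found: $nonascii - re-save the file as ASCII"
else
    echo "file is plain ASCII"
fi
```

A count of zero means the file is safe; any positive count (a 3-byte UTF-8 BOM is the usual culprit) means re-saving the file in ASCII encoding is needed.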
Step 6
Download and install the appropriate Oracle drivers from the Tableau Drivers page. Even if an Oracle driver is installed on your computer, it could be incompatible with Tableau and will require the version specified on the Drivers page.
