Channel: Doyensys Allappsdba Blog
SQL SERVER – Missing Index Script



Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance, and a bad index can hamper it. In this post we will discuss missing indexes.

Please note that you should not create every index this script suggests; it is only guidance. As a rule of thumb, avoid creating more than 5-10 indexes per table.


-- Missing Index Script

SELECT TOP 25
dm_mid.database_id AS DatabaseID,
dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact,
dm_migs.last_user_seek AS Last_User_Seek,
OBJECT_NAME(dm_mid.OBJECT_ID, dm_mid.database_id) AS [TableName],
'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID, dm_mid.database_id) + '_'
+ REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','')
+ CASE
WHEN dm_mid.equality_columns IS NOT NULL
AND dm_mid.inequality_columns IS NOT NULL THEN '_'
ELSE ''
END
+ REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','')
+ ']'
+ ' ON ' + dm_mid.statement
+ ' (' + ISNULL(dm_mid.equality_columns,'')
+ CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns
IS NOT NULL THEN ',' ELSE '' END
+ ISNULL(dm_mid.inequality_columns, '')
+ ')'
+ ISNULL(' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement
FROM sys.dm_db_missing_index_groups dm_mig
INNER JOIN sys.dm_db_missing_index_group_stats dm_migs
ON dm_migs.group_handle = dm_mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details dm_mid
ON dm_mig.index_handle = dm_mid.index_handle
WHERE dm_mid.database_id = DB_ID()
ORDER BY Avg_Estimated_Impact DESC
GO
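As a side note, the Avg_Estimated_Impact column above is simply avg_user_impact (an estimated percentage improvement) weighted by how often SQL Server wanted the index. A minimal sketch of the arithmetic, with made-up sample values rather than real DMV output:

```python
# Sketch of the Avg_Estimated_Impact expression used in the query above.
# The input values below are hypothetical, not taken from a real DMV.
def estimated_impact(avg_user_impact, user_seeks, user_scans):
    # avg_user_impact is the estimated % improvement per affected query;
    # weighting by seeks + scans ranks frequently-wanted indexes higher.
    return avg_user_impact * (user_seeks + user_scans)

print(estimated_impact(80.0, 120, 5))  # 10000.0
```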

SQL SERVER – Unable to Start SQL Service Error: 17172 – SNIInitialize() Failed with Error 0x2



I was trying to fail over a cluster and hit a situation where I was unable to start the SQL Service.
I looked into SQL Server ERRORLOG to see the cause of SQL Startup failure. Here were the last messages logged in the ERRORLOG.

2018-01-28 15:29:54.27 Server SNIInitialize() failed with error 0x2.
2018-01-28 15:29:54.27 Server SQL Server shutdown has been initiated

The error code mentioned is 0x2, which is 2 in decimal. Error number 2 in Windows means "The system cannot find the file specified." Interestingly, the same error appears when I try to start the SQL Service using services.msc.
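For reference, the hex-to-decimal conversion can be checked quickly (a trivial sketch; the error-message text comes from the Windows error-code documentation, not from this code):

```python
# SNIInitialize() reports the error in hex; Windows system error codes are decimal.
sni_error = int("0x2", 16)
print(sni_error)  # 2, i.e. "The system cannot find the file specified."
```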



Now, to find what is missing, I captured registry access during SQL Service startup using the Process Monitor tool.



As we can see above, SQL Server was looking for a registry key called "ClusterName", and it is missing. Once we see "NAME NOT FOUND", we see two "WriteFile" operations for the SQL Server ERRORLOG followed by a "CloseFile" of the ERRORLOG. This confirms that this is the right reason for the SQL startup failure. In my case there was no SQL cluster; I was just trying to fool SQL Server into believing it was clustered, but it looks like more keys are needed.


WORKAROUND/SOLUTION

Below is the registry key which should not be there if SQL Server is standalone, so we should take a backup and then delete the "Cluster" key. Here is the complete path for my instance in case the image is not clear.
HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL14.MSSQLSERVER\Cluster




The path changes based on the SQL Server version and instance name. Here is the list of SQL Server versions (relevant as of today) and their respective registry keys:

SQL Server 2008    - MSSQL10
SQL Server 2008 R2 - MSSQL10_50
SQL Server 2012    - MSSQL11
SQL Server 2014    - MSSQL12
SQL Server 2016    - MSSQL13
SQL Server 2017    - MSSQL14


We need to append the instance name (MSSQLSERVER for the default instance) to build the complete registry key. Going by the above logic, for my server (SQL Server 2017, default instance) it is MSSQL14.MSSQLSERVER.
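Putting the version table and the instance-name rule together, the registry path can be assembled mechanically. Here is a small sketch (the helper function and dictionary are mine for illustration, not part of SQL Server):

```python
# Version-to-key mapping from the table above.
VERSION_KEYS = {
    "SQL Server 2008": "MSSQL10",
    "SQL Server 2008 R2": "MSSQL10_50",
    "SQL Server 2012": "MSSQL11",
    "SQL Server 2014": "MSSQL12",
    "SQL Server 2016": "MSSQL13",
    "SQL Server 2017": "MSSQL14",
}

def cluster_key_path(version, instance="MSSQLSERVER"):
    # The default instance is MSSQLSERVER; a named instance uses its own name.
    key = "{}.{}".format(VERSION_KEYS[version], instance)
    return r"HKLM\SOFTWARE\Microsoft\Microsoft SQL Server" + "\\" + key + r"\Cluster"

print(cluster_key_path("SQL Server 2017"))
# HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL14.MSSQLSERVER\Cluster
```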

Hope this blog helps someone who faces a similar issue.


SQL SERVER – 2017 – Script to Clear Procedure Cache at Database Level



Let us learn about Script to Clear Procedure Cache at Database Level.

In earlier versions, when we had to clear the cache for any database, we only had the DBCC command, which clears the plan cache for a particular plan or for the whole server.

Here is the script which you can run on SQL Server 2016 and earlier versions:

DBCC FREEPROCCACHE

SQL Server 2017 introduces a new command which we can run at the database level to clear the cache for that particular database. This is indeed a blessing when you want to clear the cache for just one database and not for the entire server:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

OSB CLOUD BACKUP FOR AMAZON S3


Steps to implement the OSB Cloud Module for Amazon S3:
============================================

Before running the OSB Cloud Module for Amazon S3 installer, verify the following prerequisites:
1) The OSB Cloud Module for Amazon S3 installer requires Java 1.5 or higher to run.
2) Download the OSB Cloud Module for Amazon S3 installer (osbws_installer.zip) from OTN to the database server.
http://www.oracle.com/technetwork/products/secure-backup/secure-backup-s3-484709.html
3) Copy and unzip the OSB Cloud Module for Amazon S3 installer to /home/oracle.
4) Create a directory for the secure Oracle wallet ($ORACLE_HOME/dbs/osbws_wallet). The wallet will be created by the installer and used to store your AWS S3 credentials.

Install Oracle Secure Backup Cloud Module for Amazon S3:
=================================================
$ mkdir -p $ORACLE_HOME/dbs/osbws_wallet

$ cd /home/oracle

$ unzip osbws_installer.zip

$ java -jar osbws_install.jar -AWSID ************* -AWSKey **************** -otnUser r*********** -otnPass ******* -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib


 


[-AWSID] and [-AWSKey] ====>(Mandatory)

Supply your AWS Access Key and Secret Key, which serve as the ID and password for accessing Amazon S3. To obtain them from the AWS website, navigate to Security Credentials and click the Access Keys tab under Access Credentials to create or view your Access Key ID and Secret Access Key.










 


 


[-otnUser] =====> (Mandatory)

Your OTN username which the installer uses to identify the customer.

[-otnPass] =====> (Mandatory)

Your OTN password.

[-walletDir] ====> (Mandatory)

Directory where you want the installer to create a secure wallet containing your AWS S3 credentials.

[-libDir] =====> (Optional)

Directory where you want the installer to download the OSB Cloud Module for Amazon S3 software library.

[-configFile]

The name of the initialization parameter file that will be created by the install tool. This parameter file will be referenced during your RMAN jobs for a particular database. If this parameter is not specified then the initialization parameter file will be created in a system-dependent default location and filename based on the ORACLE_SID. For example: $ORACLE_HOME/dbs/osbws<ORACLE_SID>.ora.

Fix Media Management Library Loading Error:
======================================

The installer does not create the default media management library symbolic link for the OSB Cloud Module for Amazon S3 media management library. This results in the following RMAN error when attempting to allocate a channel of type sbt:

ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 2

Manually create the following symbolic link for the default media management library before performing backups using the SBT:

$ ln -s $ORACLE_HOME/lib/libosbws12.so $ORACLE_HOME/lib/libobk.so


Modify Oracle Recovery Manager's Media Management Configuration:
==========================================================

run {
configure channel device type sbt parms="SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db_1/lib/libosbws.so,
SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/11.2.0.3/db_1/dbs/osbwsSANDBNEW.ora)";
}

 


 

Perform an Oracle Database Backup to the Cloud:
=========================================

testing the backup connection:

run
{
allocate channel ch1 type 'sbt_tape';
release channel ch1;
}

 


creating backup to flash recovery area (for testing):

RMAN> backup as compressed backupset datafile 4;


copying backup to amazon cloud(s3):


RMAN> run
{
allocate channel ch1 type 'sbt_tape';
backup recovery area;
release channel ch1;
}


 

verifying the backup information:

RMAN> list backup summary;

 




After moving oracle database backup to cloud:

the database backup has been copied from flash recovery area to cloud(S3).


 






Minimize System Contention






Understanding Response Time

SQL> select metric_name, value
from v$sysmetric
where metric_name in ('Database CPU Time Ratio',
'Database Wait Time Ratio') and
intsize_csec =
(select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                 VALUE
--------------------------- ----------
Database Wait Time Ratio     11.371689
Database CPU Time Ratio      87.831890
SQL>
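The two ratios are percentages of total database time, so together they should account for roughly 100%. A quick sanity check on the values above:

```python
# Values taken from the v$sysmetric output above.
cpu_ratio = 87.831890   # Database CPU Time Ratio
wait_ratio = 11.371689  # Database Wait Time Ratio

# Together they account for (nearly) all database time.
print(round(cpu_ratio + wait_ratio, 2))  # 99.2
```

A CPU ratio this high relative to the wait ratio suggests the instance is CPU-bound rather than wait-bound.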





Identifying SQL Statements with the Most Waits

SQL> select ash.user_id,
u.username,
s.sql_text,
sum(ash.wait_time +
ash.time_waited) ttl_wait_time
from v$active_session_history ash,
v$sqlarea s,
dba_users u
where ash.sample_time between sysdate - 60/2880 and sysdate
and ash.sql_id = s.sql_id
and ash.user_id = u.user_id
group by ash.user_id,s.sql_text, u.username
order by ttl_wait_time desc;


Examining Session Waits

Use the V$SESSION_WAIT view to get a quick idea of what a particular session is waiting for. These related views are also useful:

• V$SESSION: This view shows the specific resource currently being waited for, as well as the event last waited for in each session.
• V$SESSION_WAIT: This view lists either the event currently being waited for or the event last waited on for each session. It also shows the wait state and the wait time.
• V$SESSION_WAIT_HISTORY: This view shows the last ten wait events for each current session.
• V$SESSION_EVENT: This view shows the cumulative history of events waited on for each session. The data in this view is available only as long as a session is active.
• V$SYSTEM_EVENT: This view shows each wait event and the time the entire instance has waited on that event since you started the instance.
• V$SYSTEM_WAIT_CLASS: This view shows wait event statistics by wait classes.


SQL> select event, count(*) from v$session_wait
     group by event;

EVENT                                         COUNT(*)
--------------------------------------------- --------
SQL*Net message from client                         11
Streams AQ: waiting for messages in the queue        1
enq: TX - row lock contention                        1
...


SQL> select event, state, seconds_in_wait siw
     from v$session_wait
     where sid = 81;

EVENT                         STATE       SIW
----------------------------- ----------- ------
enq: TX - row lock contention WAITING     976

The V$SESSION_WAIT view shows the current or last wait for each session. The STATE column in this view tells you
whether a session is currently waiting. Here are the possible values for the STATE column:
• WAITING: The session is currently waiting for a resource.
• WAITED UNKNOWN TIME: The duration of the last wait is unknown. (This value is shown only
if you set the TIMED_STATISTICS parameter to false, so in effect this depends on the value
set for the STATISTICS_LEVEL parameter. If you set STATISTICS_LEVEL to TYPICAL or ALL, the
TIMED_STATISTICS parameter will be TRUE by default. If the STATISTICS_LEVEL parameter is
set to BASIC, TIMED_STATISTICS will be FALSE by default.)
• WAITED SHORT TIME: The most recent wait was less than a 100th of a second long.
• WAITED KNOWN TIME: The WAIT_TIME column shows the duration of the last wait.


SQL> select wait_class, sum(time_waited), sum(time_waited)/sum(total_waits) sum_waits
     from v$system_wait_class
     group by wait_class
     order by 3 desc;

WAIT_CLASS  SUM(TIME_WAITED)  SUM_WAITS
----------- ----------------- ----------
Idle                249659211 347.489249
Commit                1318006 236.795904
Concurrency             16126   4.818046
User I/O               135279   2.228869
Application               912   .0928055
Network                   139   .0011209


Do not worry if you see a very high sum of waits for the Idle wait class. You should actually expect to see a high
number of Idle waits in any healthy database



select sea.event, sea.total_waits, sea.time_waited, sea.average_wait
from v$system_event sea, v$event_name enb, v$system_wait_class swc
where sea.event_id=enb.event_id
and enb.wait_class#=swc.wait_class#
and swc.wait_class in ('Application','Concurrency')
order by average_wait desc;



EVENT                         TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
----------------------------- ----------- ----------- ------------
enq: TX - index contention              2          36         17.8
library cache load lock                76         800        10.53
buffer busy waits                       9          89         9.87
row cache lock                         26         100         3.84
cursor: pin S wait on X               484        1211          2.5
SQL*Net break/reset to client           2           2         1.16
library cache: mutex X                 12          13         1.10
latch: row cache objects              183         158          .86
latch: cache buffers chains             5           3          .69
enq: RO - fast object reuse           147          70          .47
library cache lock                      4           1          .27
cursor: pin S                          20           5          .27
latch: shared pool                    297

You can see that the enqueue waits caused by the row lock contention are what’s causing the most waits under
these two classes. Now you know exactly what’s slowing down the queries in your database! To get at the session
whose performance is being affected by the contention for the row lock, drill down to the session level using the
following query:


select se.sid, se.event, se.total_waits, se.time_waited, se.average_wait
from v$session_event se, v$session ss
where time_waited > 0
and se.sid=ss.sid
and ss.username is not NULL
and se.event='enq: TX - row lock contention';

SID  EVENT                          TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
---- ------------------------------ ----------- ----------- ------------
  68 enq: TX - row lock contention           24        8018          298


The output shows that the session with SID 68 had waited (or still might be waiting) for a row lock that’s held by
another transaction.

changing the SYS password in a Data Guard environment


The way to change the SYS password without breaking the redo transport service includes
copying the primary database's password file to the standby server after changing the
password. The following steps show how this can be done:

1. Stop redo transport from the primary database to the standby database by setting the log destination state to DEFER with an ALTER SYSTEM statement:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'DEFER';
System altered.

If the Data Guard broker is being used, we can use the following statement:
DGMGRL> EDIT DATABASE TURKEY_UN SET STATE = 'LOG-TRANSPORT-OFF';
2. Change the SYS user's password in the primary database:
SQL> ALTER USER SYS IDENTIFIED BY newpassword;
User altered.

3. Copy the primary database's password file to the standby site:
$ cd $ORACLE_HOME/dbs
$ scp orapwTURKEY standbyhost:/u01/app/oracle/product/11.2.0/
dbhome_1/dbs/orapwINDIAPS
4. Try logging into the standby database from the standby server using the new SYS
password:
$ sqlplus sys/newpassword as sysdba

5. Start redo transport from the primary database to the standby database:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE';
System altered.
If the Data Guard broker is being used, we can use the following statement:
DGMGRL> EDIT DATABASE TURKEY_UN SET STATE = 'ONLINE';
6. Check whether the redo transport service is running normally by switching the redo
logs in the primary database:
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.
Check the standby database's processes or the alert log file to see redo transport
service status:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY ;

PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH CLOSING 1 3232 1 275
ARCH CLOSING 1 3229 1 47
ARCH CONNECTED 0 0 0 0
ARCH CLOSING 1 3220 2049 1164
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
MRP0 APPLYING_LOG 1 3233 122 102400
RFS IDLE 1 3233 122 1

Note: if the password file of the standby database is somehow corrupted or has been deleted, the redo transport service will raise an error; copying the primary password file to the standby site also fixes that problem.


ORA-27300: OS system dependent operation:semget failed with status: 28

Getting the below error at startup after increasing the PROCESSES parameter to 20000:
 
SQL> startup;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates

Solution:

SEMMNI should be increased to accommodate more semaphore sets.

1. Query the current semaphore values in the kernel
     # /sbin/sysctl -a | grep sem

2. Modify the SEMMNI value (the fourth field of kernel.sem) in /etc/sysctl.conf.

From
kernel.sem = 250 32000 100 128

To
kernel.sem = 250 32000 100 200

3. Reload the kernel settings:
     # /sbin/sysctl -p
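As a rough sketch of why semget returned status 28 (ENOSPC, "No space left on device"): Oracle needs roughly one semaphore per process, allocated in sets of at most SEMMSL semaphores, so a large PROCESSES value consumes many of the SEMMNI semaphore sets. The numbers below come from this post; the per-set arithmetic is an approximation, not Oracle's exact allocation algorithm:

```python
import math

# kernel.sem fields: SEMMSL SEMMNS SEMOPM SEMMNI (values from this post)
semmsl = 250       # max semaphores per set
semmni = 128       # old limit on the number of semaphore sets
processes = 20000  # PROCESSES parameter that triggered the error

# Approximate number of semaphore sets this one instance asks for:
sets_needed = math.ceil(processes / semmsl)
print(sets_needed)  # 80 sets, a large share of the 128-set limit
```

With other instances and OS users already holding sets, the 128-set limit is easily exhausted, which is why raising SEMMNI to 200 resolves the startup failure.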

Reference:

Database Startup Fails with ORA-27300: OS system dependent operation:semget failed with status: 28 (Doc ID 949468.1)

Adding disks online to existing diskgroups

STEPS:

1) Check the current ASM diskgroup size:
select name,total_mb,free_mb from v$asm_diskgroup;

2) Check that the HEADER_STATUS shows CANDIDATE or PROVISIONED for the new ASM disks:
select group_number,disk_number,header_status,STATE,OS_MB,TOTAL_MB,FREE_MB,NAME,LABEL,PATH,VOTING_FILE from v$asm_disk order by GROUP_NUMBER,DISK_NUMBER;

3) Check the raw disk path, diskgroup name, and status:
col PATH format a10;
col name format a10;
SELECT MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,TOTAL_MB,
FREE_MB,NAME,PATH,LABEL FROM V$ASM_DISK;

4) Add the disks to the existing diskgroup online:
ALTER DISKGROUP DG08 ADD DISK '/dev/oracleasm/disks/DG09' NAME DG09_0000;

5) Post-check the diskgroup status; the HEADER_STATUS should now show 'MEMBER':
col PATH format a10;
col name format a10;
SELECT MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,TOTAL_MB,
FREE_MB,NAME,PATH,LABEL FROM V$ASM_DISK;

6) Check the diskgroup status and size on both nodes:
asmcmd

ASMCMD> lsdg

ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE

While doing a database upgrade from 11.2.0.3 to 11.2.0.4 on a 2-node RAC instance: after installing the 11.2.0.4 software and connecting with sqlplus in the new environment, I got the below error when trying to start the database in upgrade mode.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 1.3262E+11 bytes
Fixed Size                  2304584 bytes
Variable Size            2.2481E+10 bytes
Database Buffers         1.1006E+11 bytes
Redo Buffers               74080256 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE
Process ID: 29206
Session ID: 2002 Serial number: 3


 SOLUTION
==========

I observed from the initialization parameter file that the below values were set for the cluster parameters:

cluster_database         = TRUE
cluster_database_instances= 2

We need to set the value of cluster_database to FALSE using the below command

SQL> alter system set cluster_database=FALSE scope=spfile sid='*' ;

Reboot the database for the change to take effect.
You will then see that the value of the cluster_database_instances parameter is automatically changed to 1 after setting cluster_database to FALSE.

Now the database can be started in upgrade mode without any errors.
After the upgrade is completed, revert the cluster_database parameter to TRUE.
The following command can be used to set cluster_database to TRUE:

SQL> alter system set cluster_database=TRUE scope=spfile sid='*' ;

Restart the database.

FORFILES command-line usage in Windows

SYNTAX:

FORFILES [/P pathname] [/M searchmask] [/S] [/C command] [/D [+ | -] {dd-MM-yyyy | dd}]

Help Command: FORFILES /?

/P pathname
/M searchmask
/S to list sub-directories
/C Command
 The default command is "cmd /c echo @file".

 The following variables can be used in the
 command string:
 @file    - returns the name of the file.
 @fname   - returns the file name without
            extension.
 @ext     - returns only the extension of the
            file.
 @path    - returns the full path of the file.
 @relpath - returns the relative path of the
            file.
 @isdir   - returns "TRUE" if a file type is
            a directory, and "FALSE" for files.
 @fsize   - returns the size of the file in
            bytes.
 @fdate   - returns the last modified date of the
            file.
 @ftime   - returns the last modified time of the
            file.
/D    date  
/?    Displays this help message.

Example:

To delete files older than 10 days:

forfiles /P "E:\" /S /M *arch* /D -10 /C "cmd /c del @file"

To print lists of files:

To find every text file on the C: drive:
FORFILES /P C:\ /S /M *.TXT /C "CMD /C Echo @file is a text file"

To show the path of every HTML file on the C: drive:
FORFILES /P C:\ /S /M *.HTML /C "CMD /C Echo @RELPATH is the location of @FILE"

To list every folder on the C: drive:
FORFILES /P C:\ /S /M *. /C "CMD /C if @ISDIR==TRUE echo @FILE is a folder"

For every file on the C: drive, list the file extension in double quotes:
FORFILES /P C:\ /S /M *.* /C "CMD /C echo extension of @FILE is 0x22@EXT0x22"

To list every file on the C: drive last modified over 365 days ago:
FORFILES /P C:\ /S /M *.* /D -365 /C "CMD /C Echo @FILE is over 365 days old"

To find files last modified before 01-Mar-2018:
FORFILES /P C:\ /S /M *.* /D -01-03-2018 /C "CMD /C Echo @FILE is quite old!"

XCOPY Windows command to copy files to a remote server

The xcopy command is used to copy one or more files and/or folders from one location to another location.
The xcopy command, with its many options and ability to copy entire directories, is similar to, but much more powerful than, the traditional copy command.

SYNTAX:
XCOPY source [destination] [/A | /M] [/D[:date]] [/P] [/S [/E]] [/V] [/W] [/C] [/I] [/Q] [/F] [/L] [/H] [/R] [/T] [/U] [/K] [/N] [/O] [/X] [/Y] [/-Y] [/Z][/EXCLUDE:file1[+file2][+file3]...]

EXAMPLE:

xcopy E:\oracle\BACKUP\archbkps\PROD* /I /D /Y \\192.168.1.100\e\oracle\backup\archbkps\* >E:\oracle\BACKUP\SCRIPTS\remotedr_archbkps.log

/D:m-d-y     Copies files changed on or after the specified date.
             If no date is given, copies only those files whose
             source time is newer than the destination time.

/I           If the destination does not exist and more than one file
             is being copied, assumes that the destination is a directory.

/-Y          Prompts to confirm before overwriting an existing
             destination file.

The /Y switch may be preset in the COPYCMD environment variable; this may be overridden with /-Y on the command line.





Relocation of RAC One Node database fails

ERROR:

On a 2-node RAC cluster, the relocation of a RAC One Node database fails with the following errors:

$ srvctl relocate database -d RACONDB -n racnode2
PRCD-1222 : Online relocation of database "RACONDB" failed but database was restored to its original state
PRCD-1129 : Failed to start instance RACONDB_2 for database RACONDB
PRCR-1064 : Failed to start resource ora.racondb.db on node racnode2
CRS-5017: The resource action "ora.racondb.db start" encountered the following error:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
For details refer to "(:CLSN00107:)" in "/u01/app/oragrid/base/diag/crs/racnode1/crs/trace/crsd_oraagent_oracle.trc".
CRS-2674: Start of 'ora.racondb.db' on 'racnode2' failed

The alert_RACONDB_2.log on node racnode2 shows:


Thu Feb 20 09:30:11 2018

Starting ORACLE instance (normal) (OS id: 13044)
...
Thu Feb 20 09:30:29 2018
USER (ospid: 13044): terminating the instance due to error 304
Thu Feb 20 09:30:30 2015

Instance terminated by USER, pid = 13044

CAUSE:

The issue was caused by the parameter 'instance_number', which was set to 1 for all instances (sid='*') in the shared spfile, e.g.:

 *.instance_number = 1

During the relocation of a RAC One Node database there will, for a short time, be 2 active instances. Due to this parameter setting, both instances try to use the same instance number, which causes the 2nd instance to fail to start and ultimately causes the relocation to fail.


SOLUTION:

1. Unset parameter 'instance_number' for all instances:
 alter system reset instance_number scope=spfile sid='*';

2. Stop & restart the database:

srvctl stop database -d RACONDB

srvctl start database -d RACONDB

3. Retry the relocation

srvctl relocate database -d RACONDB -n racnode2

4. Check SHOW PARAMETER instance_name on the 2nd node (racnode2):

NAME                             TYPE                     VALUE
--------                         -------                 -------
instance_name                    string                  RACONDB_2

Creating and Altering ASM diskgroups


To create the partition:

# fdisk -l
# fdisk /dev/sdb

'n' for a new partition
enter the default values
'w' to write the partition

Changing the ownership of the partition:

# chown -R oracle:dba /dev/sdb*
# chmod -R 775 /dev/sdb*

 Starting the instance and creating the diskgroup 

$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm

SQL> startup nomount
SQL> select name,path from v$asm_disk;

Creating the diskgroup (the CREATE statements below are alternatives: external redundancy, normal redundancy by default, and high redundancy):

SQL> create diskgroup DG01 external redundancy disk '/dev/sdb1','/dev/sdb2';
SQL> create diskgroup DG01 disk '/dev/sdb1','/dev/sdb2';
SQL> create diskgroup DG01 high redundancy disk '/dev/sdb1','/dev/sdb2','/dev/sdb3';
SQL> select name,state from v$asm_diskgroup;
SQL> create diskgroup DG01 normal redundancy
     failgroup fg1 disk '/dev/sdb1','/dev/sdb2'
     failgroup fg2 disk '/dev/sdb3','/dev/sdb4';


Altering the diskgroup

SQL> alter diskgroup DG01 mount;
SQL> alter diskgroup DG01 dismount;
SQL> alter diskgroup DG01 add disk '/dev/sdb3';
SQL> select name,path from v$asm_disk;
SQL> alter diskgroup DG01 drop disk DG01_0003;

Utilities to manage the diskgroup

$ asmcmd
$ asmca


Relocating VIP status from INTERMEDIATE state back to ONLINE state

2 Node output:


[oracle@DEVDBRAC2 log]$ crsctl stat res -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG01.dg
               ONLINE  ONLINE       devdbrac1             STABLE
               ONLINE  OFFLINE      devdbrac2             STABLE
ora.DG02.dg
               OFFLINE OFFLINE      devdbrac1             STABLE
               ONLINE  ONLINE       devdbrac2             STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      devdbrac1             STABLE
               ONLINE  ONLINE       devdbrac2             STABLE
ora.asm
               ONLINE  ONLINE       devdbrac1             Started,STABLE
               ONLINE  ONLINE       devdbrac2             Started,STABLE
ora.net1.network
               ONLINE  ONLINE       devdbrac1             STABLE
               ONLINE  ONLINE       devdbrac2             STABLE
ora.ons
               ONLINE  ONLINE       devdbrac1             STABLE
               ONLINE  ONLINE       devdbrac2             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       devdbrac2             STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       devdbrac1             STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       devdbrac1             STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       devdbrac1            192.168.1.1
ora.mgmtdb
      1        ONLINE  ONLINE       devdbrac1             Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       devdbrac1             STABLE
ora.devdbrac1.vip
      1        ONLINE  INTERMEDIATE devdbrac2             FAILED OVER,STABLE
ora.devdbrac2.vip
      1        ONLINE  ONLINE       devdbrac2             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       devdbrac2             STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       devdbrac1             STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       devdbrac1             STABLE


The above output shows that the VIP has failed over to the DEVDBRAC2 node.

Check the VIP name and status,

[oracle@DEVDBRAC2 log]$ srvctl config vip -n devdbrac1

VIP exists: network number 1, hosting node devdbrac2
VIP Name: devdbrac2b
VIP IPv4 Address: 10.111.222.333
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 

VIP is individually disabled on nodes:

From this output, only the devdbrac2b VIP is working.

Relocate the VIP back to the 1st node:

[oracle@DEVDBRAC2 log]$ srvctl relocate vip -vip devdbrac1b -node devdbrac1 

Afterwards, check the status on both nodes using:

crsctl stat res -t

It should show the VIP as ONLINE and in a STABLE state.

Import fails with ORA-39126,ORA-06512,ORA-01403

During a migration from 11.2.0.4 on Windows to 12.2.0.1 on Linux we faced the below issue:

ORA-01403: no data found
13-MAR-18 01:37:20.088: ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 12098
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 5202
ORA-06512: at "SYS.KUPW$WORKER", line 5045

This is expected behavior in a 12c DB. Exclude all Oracle-maintained users during the export and retry the import. The following query lists them:

select username from dba_users where oracle_maintained='Y' order by username; 
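One common way to exclude those schemas is a Data Pump EXCLUDE clause built from the query's output. A sketch of assembling that parameter (the schema list here is hypothetical, and parameter-file quoting varies by shell):

```python
# Hypothetical subset of schemas returned by the dba_users query above.
maintained = ["SYS", "SYSTEM", "XDB"]

# Build an expdp/impdp parameter of the form:
#   EXCLUDE=SCHEMA:"IN ('SYS','SYSTEM','XDB')"
in_list = ",".join("'{}'".format(u) for u in maintained)
exclude = 'EXCLUDE=SCHEMA:"IN ({})"'.format(in_list)
print(exclude)
```

Putting the finished clause in a parameter file avoids most shell-escaping problems.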

ERROR: ORA-00257: archiver error. Connect internal only, until freed.

This is due to the FRA (Fast Recovery Area) getting full.

Find the usage by using below query

col space_limit heading "Space Allocated (MB)" format 999999
col space_used heading "Space Used (MB)" format 99999
col name format a40
select name, round(space_limit/1048576) as space_limit, round(space_used/1048576) as space_used
from v$recovery_file_dest;

After that, remove archived logs or flashback logs to free up space:

delete archivelog until time 'SYSDATE-1';

Alternatively, decrease and then increase the FRA size to force flashback logs to be removed, or simply increase the FRA size to free the archiver from errors.

ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property during Apex install in 12.2

create or replace trigger FLOWS_FILES.wwv_biu_flow_file_objects
                                      *
ERROR at line 1:
ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property
of an existing object.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

The above error occurred when we tried to install APEX after migrating and upgrading to 12.2.0.1 using export and import.

The solution is to remove all the previous APEX-related DB objects by dropping the users:

drop user APEX_3xx cascade;

drop user flows_files cascade;

And then retry the APEX installation.

How to reinstate the old Primary as a Standby after Failover

Step 1:

Please execute the below in new primary

SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

Step 2:

Flashback the old primary to the above scn(taken from step 1).

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;

Step 3:

Convert to physical standby.

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Step 4:

Set up DG and verify log sync.
Start managed recovery.

Database Upgrade Failed with Error "ORA-19809: limit exceeded for recovery files"

Upgrade failed with the error "ORA-19809: limit exceeded for recovery files" during the post-upgrade steps.

Below is the message from alert log.

ORA-19815: WARNING: db_recovery_file_dest_size of 4385144832 bytes is 100.00% used, and has 0 remaining bytes available.
Wed Aug 29 22:48:43 2017
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************

Use the query below to find FRA usage:

col name format a32
col size_mb format 999,999,999
col used_mb format 999,999,999
col pct_used format 999

select
name,
ceil( space_limit / 1024 / 1024) size_mb,
ceil( space_used / 1024 / 1024) used_mb,
decode( nvl( space_used, 0),0, 0,
ceil ( ( space_used / space_limit) * 100) ) pct_used
from
v$recovery_file_dest
order by
name desc;

Free up space in the recovery area (db_recovery_file_dest), or increase db_recovery_file_dest_size, and then retry the post-upgrade steps manually.
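
Option 4 from the alert-log message can be scripted in RMAN roughly as follows; the crosscheck is only needed if files were deleted at the OS level, so the RMAN catalog matches what is actually on disk:

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> delete noprompt obsolete;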

Oracle GoldenGate 11gr2 Upgrade from 11gr1



Pre-Upgrade Tasks

1. Download the latest OGG software from support.oracle.com.

2. Copy the OGG software to the ggsource.doyensys.com (SOURCE) and ggtarget.doyensys.com (TARGET) servers under "/u01/app/goldengate".

3. Unzip the OGG 11.2 software into a new directory "/u01/app/goldengate/11.2_software".

4. Perform this step on both source and target server.

$ cd /u01/app/goldengate
$ mkdir 11.2_software
$ unzip p18322848_1121020_Linux-x86-64.zip -d 11.2_software

Create a test table on the source database; it will be used to verify that replication works after the OGG upgrade.

SQL> create table user1.table1(NAME varchar2(20), ID number (5) primary key);
Table created.

GGSCI (ggsource.doyensys.com) 1> dblogin userid gguser@ggsource.doyensys.com, password oracle
Successfully logged into database.

GGSCI (ggsource.doyensys.com) 2> add trandata user1.table1
Logging of supplemental redo data enabled for table user1.table1.

5. Update the Extract/Pump parameter files on ggsource.doyensys.com to include the user1.table1 table as part of replication.

6. Update the Replicat parameter file on the ggtarget.doyensys.com to include user1.table1.
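
The changes in steps 5 and 6 amount to one line per parameter file. Assuming the process names used later in this post, the additions would look like:

-- In the Ext1 and dpump1 parameter files on ggsource.doyensys.com
TABLE user1.table1;

-- In the Rep1 parameter file on ggtarget.doyensys.com
MAP user1.table1, TARGET user1.table1;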

Upgrade Steps


1. On the ggsource.doyensys.com, stop the OGG 11.1 extract process, use the LOGEND command and make a note of the stop timestamp. This timestamp will be used to re-position the extract to pickup the transactions from the archive or the redo logs, generated during the OGG upgrade.

GGSCI (ggsource.doyensys.com) 4> send Ext1, LOGEND
Sending LOGEND request to EXTRACT Ext1 ...
YES.

GGSCI (ggsource.doyensys.com) 5> send Ext1, LOGEND
Sending LOGEND request to EXTRACT Ext1 ...
YES.

GGSCI (ggsource.doyensys.com) 6> info Ext1
EXTRACT Ext1 Last Started 2018-01-10 09:22 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint Oracle Redo Logs
2018-01-10 02:36:19 Thread 1, Seqno 35671, RBA 535552
SCN 540.2935258147 (2322217597987)
Watch for the RBA number to stop changing, which means the Extract has completed the last transaction in the log and there is nothing left to extract.
Now stop the Extract process. Always stop the Extract process first.

GGSCI (ggsource.doyensys.com) 7> stop Ext1
Sending STOP request to EXTRACT Ext1 ...
Request processed.

GGSCI (ggsource.doyensys.com) 6> info Ext1
EXTRACT Ext1 Last Started 2018-01-10 09:22 Status STOPPED
Checkpoint Lag 00:00:00 (updated 00:00:05 ago)
Log read Checkpoint Oracle Redo Logs
2018-01-10 02:36:19 Thread 1, Seqno 35671, RBA 535552
SCN 540.2935258147 (2322217597987)

2. Stop the OGG Pump and Replicat processes on the ggsource.doyensys.com and ggtarget.doyensys.com environments respectively. Wait for some time and make sure that there is NO LAG on the PUMP and REPLICAT processes. Then stop the PUMP and REPLICAT processes.

ggsource.doyensys.com
GGSCI (ggsource.doyensys.com) 4> info dpump1

EXTRACT dpump1 Last Started 2018-01-10 02:20 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint File /u01/app/gghome/11.1/dirdat/Ext1/et000037
2018-01-10 02:35:27.000604 RBA 53062713

GGSCI (ggsource.doyensys.com) 8> stop dpump1

Sending STOP request to EXTRACT dpump1 ...
Request processed.

ggtarget.doyensys.com
GGSCI (ggtarget.doyensys.com) 4> info Rep1

REPLICAT Rep1 Last Started 2018-01-10 02:20 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint File
/u01/app/gghome/11.1/dirdat/dpump1/ra000027
2018-01-10 02:35:27.000604 RBA 53062713

GGSCI (ggtarget.doyensys.com) 8> stop Rep1
Sending STOP request to REPLICAT Rep1 ...
request processed.

3. Stop the OGG Manager process on the ggsource.doyensys.com and ggtarget.doyensys.com.

ggsource.doyensys.com
GGSCI (ggsource.doyensys.com) 3> stop mgr
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y
Sending STOP request to MANAGER ...
request processed.
Manager stopped.

ggtarget.doyensys.com:
GGSCI (ggtarget.doyensys.com) 3> stop mgr
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y
Sending STOP request to MANAGER ...
request processed.
Manager stopped.

4. Backup the current OGG 11.1 directory. Perform this step on both ggsource.doyensys.com and ggtarget.doyensys.com.

$ cd /u01/app/gghome
$ df -h
$ cp -pR 11.1 11.1_bkp
$ ls  11.1_bkp

5. Rename the 11.1 directory to 11.2 and copy the contents of /u01/app/goldengate/11.2_software into the new 11.2 directory. Perform this step on both ggsource.doyensys.com and ggtarget.doyensys.com.

$ cd /u01/app/gghome
$ mv 11.1 11.2
$ cd 11.2
$ chmod -R u+rw *
$ cp -R /u01/app/goldengate/11.2_software/* /u01/app/gghome/11.2/

6. Update the .bash_profile file for the GGS user with the new OGG location "/u01/app/gghome/11.2". Perform this step on both ggsource.doyensys.com and ggtarget.doyensys.com.

export GGS_HOME=/u01/app/gghome/11.2
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:/u01/app/gghome/11.2

7. Start the Oracle GoldenGate Manager process on both the ggsource.doyensys.com and ggtarget.doyensys.com.

$ cd $GGS_HOME
$ ./ggsci
$ GGSCI> start mgr
Repeat the above steps for ggtarget.doyensys.com as well.

8. Create or re-create the CHECKPOINT TABLE in the ggtarget.doyensys.com database.
Copy the script "chkpt_ora_create.sql" from the 11.2 directory to /tmp or some other location.

9. Execute the script by connecting as SYSTEM or any other DBA privileged user account.

$ cp /u01/app/gghome/11.2/chkpt_ora_create.sql /tmp/
SQL> @/tmp/chkpt_ora_create.sql

10. Upgrade the CHECKPOINT TABLE by logging into the ggtarget.doyensys.com database from the GGSCI prompt.
$ cd $GGS_HOME
$ ./ggsci
$ GGSCI> dblogin userid gguser@ggtarget.doyensys.com, password oracle
$ GGSCI> upgrade checkpointtable ggs_checkpoint
$ GGSCI> upgrade checkpointtable ggs_checkpoint_lox


11. Re-create the OGG processes and trails. Create the Extract process to BEGIN at the timestamp captured in the previous steps. The trail files need to be re-created because the trail file location has changed from "/u01/app/gghome/11.1/dirdat" to "/u01/app/gghome/11.2/dirdat".

ggsource.doyensys.com:

GGSCI> dblogin userid gguser@ggsource.doyensys.com, password oracle
GGSCI> delete exttrail /u01/app/gghome/11.1/dirdat/et
GGSCI> delete extract Ext1
GGSCI> add extract Ext1, tranlog, begin 2018-01-10 09:22
GGSCI> add exttrail /u01/app/gghome/11.2/dirdat/et, extract Ext1, megabytes 100
GGSCI> delete extract dpump1
GGSCI> delete rmttrail /u01/app/gghome/11.1/dirdat/rt
GGSCI> add extract dpump1, exttrailsource /u01/app/gghome/11.2/dirdat/et
GGSCI> add rmttrail /u01/app/gghome/11.2/dirdat/rt, extract dpump1, megabytes 100

ggtarget.doyensys.com

GGSCI> delete replicat Rep1
GGSCI> add replicat Rep1, exttrail /u01/app/gghome/11.2/dirdat/rt, checkpointtable GGS.GGS_CHECKPOINT

12. Rename or move the old trail files available under the /u01/app/gghome/11.2/dirdat directory (the former 11.1 dirdat, renamed in step 5) so that the Extract/Pump starts from the new trail sequence 000000.

ggsource.doyensys.com

$ cd /u01/app/gghome/11.2/dirdat/
$ mkdir bkup
$ mv et* bkup/

ggtarget.doyensys.com

$ cd /u01/app/gghome/11.2/dirdat/
$ mkdir bkup
$ mv rt* bkup/

13. Start the OGG Extract/Pump processes on ggsource.doyensys.com and the Replicat process on ggtarget.doyensys.com.

ggsource.doyensys.com

If you have NOT already created the Extract process to BEGIN at the stop timestamp captured in the previous steps, you can now alter the Extract to BEGIN at the timestamp captured earlier. Finally, start the Extract process.

$ cd $GGS_HOME
$ ./ggsci
$ GGSCI> alter extract Ext1, begin 2018-01-10 09:22
$ GGSCI> start extract Ext1
$ GGSCI> info Ext1
Start the PUMP process.
$ GGSCI> start extract dpump1
$ GGSCI> info dpump1

ggtarget.doyensys.com

$ cd $GGS_HOME
$ ./ggsci
$ GGSCI> start replicat Rep1
$ GGSCI> info Rep1
$ GGSCI> info all

Post Upgrade Steps

After the successful upgrade of OGG to 11.2, we can now test to see if the replication is working fine.

Connect to the ggsource.doyensys.com database and insert a few records into the user1.table1 table.

SQL> insert into user1.table1 values ('test1',1);
SQL> insert into user1.table1 values ('test2',2);
SQL> commit;
SQL> select count(*) from user1.table1;
Connect to the ggtarget.doyensys.com database and see if the records are replicated.
SQL> select count(*) from user1.table1;
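
Besides the row counts, GGSCI can confirm that the Extract captured and the Replicat applied the inserts; the TOTALSONLY option summarizes the operation counts per table (process names as used throughout this post):

GGSCI (ggsource.doyensys.com)> stats extract Ext1, totalsonly user1.table1
GGSCI (ggtarget.doyensys.com)> stats replicat Rep1, totalsonly user1.table1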
