ORA-39142: incompatible version number 5.1 in dump file




Recently I came across an issue while importing a schema dump into a 12c database.

My scenario:
Schema export taken from database version 12.2.0.1.0
Schema import needs to be done on database version 12.1.0.1.0

While running the import against 12.1, I received the following error and the import terminated.

ORA-39000: bad dump file specification
ORA-39142: incompatible version number 5.1 in dump file "/opt/dump/db_dump.dmp"

Further analysis:
As the error above shows, there is an incompatibility between the database version and the dump file version.

Here are some facts, as per the Oracle documentation, about database versions and dump file compatibility.

Data Pump dump file compatibility
(Rows: COMPATIBLE setting of the source database the export is taken from. Columns: COMPATIBLE setting of the target database the dump will be imported into. Each cell shows the VERSION parameter needed on export; "-" means no VERSION parameter is required.)

Export from source        Import into target database with COMPATIBLE
with COMPATIBLE       10.1.0.x.y    10.2.0.x.y    11.1.0.x.y    11.2.0.x.y    12.1.0.x.y    12.2.0.x.y
10.1.0.x.y            -             -             -             -             -             -
10.2.0.x.y            VERSION=10.1  -             -             -             -             -
11.1.0.x.y            VERSION=10.1  VERSION=10.2  -             -             -             -
11.2.0.x.y            VERSION=10.1  VERSION=10.2  VERSION=11.1  -             -             -
12.1.0.x.y            VERSION=10.1  VERSION=10.2  VERSION=11.1  VERSION=11.2  -             -
12.2.0.x.y            VERSION=10.1  VERSION=10.2  VERSION=11.1  VERSION=11.2  VERSION=12.1  -


Data Pump client/server compatibility
("supported" means that version of the expdp/impdp client can connect to that database version.)

expdp/impdp        Connecting to database version
client version     10gR1        10gR2        11gR1        11gR2        12cR1        12cR2
                   10.1.0.x     10.2.0.x     11.1.0.x     11.2.0.x     12.1.0.x     12.2.0.x
10.1.0.x           supported    supported    supported    supported    supported    supported
10.2.0.x           no           supported    supported    supported    supported    supported
11.1.0.x           no           no           supported    supported    supported    supported
11.2.0.x           no           no           no           supported    supported    supported
12.1.0.x           no           no           no           no           supported    supported
12.2.0.x           no           no           no           no           no           supported


Solution to the above problem:

While taking the export from the higher-version database (12.2.0.1.0 in this case), use the VERSION parameter in the expdp command.

Example:
expdp sys/oracledba@crm  directory=EXPIMP schemas=scott version=12.1 dumpfile=Exp_Scott.dmp logfile=Exp_Scott.log

Now you can import the schema without any error:

impdp sys/oracledba@MDM  directory=EXPIMP schemas=scott dumpfile=Exp_Scott.dmp logfile=Imp_Scott.log
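
If you are not sure which version a dump file was produced with, you can read its header with DBMS_DATAPUMP.GET_DUMPFILE_INFO. A minimal sketch (not from the original note; it assumes the EXPIMP directory object and the Exp_Scott.dmp file from the example above):

SET SERVEROUTPUT ON
DECLARE
  v_info     ku$_dumpfile_info;  -- nested table of (item_code, value) pairs
  v_filetype NUMBER;
BEGIN
  DBMS_DATAPUMP.GET_DUMPFILE_INFO(
    filename   => 'Exp_Scott.dmp',
    directory  => 'EXPIMP',
    info_table => v_info,
    filetype   => v_filetype);
  -- print everything the header exposes, including the internal dump file version
  FOR i IN 1 .. v_info.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(v_info(i).item_code || ' : ' || v_info(i).value);
  END LOOP;
END;
/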

How to Plug a non-CDB Oracle 11g Database into a CDB Database?

This article is a continuation of my previous article, Upgrade Oracle Database 11g to 12c.
You can use the listing below without referring to the previous notes. Just note that my non-CDB database name is ORCL11G, because it was migrated from 11g to 12c without changing the SID of the database.

NOTE: The ORCL11G instance is now our newly upgraded 12c database. We did not change the SID while upgrading the database.

There are several methods to migrate a non-CDB database to a PDB:

1) Clone a Remote Non-CDB
2) Using DBMS_PDB
3) Using Data Pump (expdp, impdp)
4) Using Replication

Here I will use DBMS_PDB to migrate the non-CDB to a PDB.

Step 1) Generate the .xml file.

In this step, we will generate the Plugabledbl11g.xml file, which we will use later to create the PDB.
- Connect to your non-CDB database. [In our case it is ORCL11G.]
- Shut down the database.
- Start it up in read-only mode.
- Run DBMS_PDB.DESCRIBE to generate Plugabledbl11g.xml.

Below is the listing for the same.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup open read only;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             520097408 bytes
Database Buffers         1073741824 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.
SQL>
SQL> BEGIN
  2  DBMS_PDB.DESCRIBE(pdb_descr_file => '/var/Plugabledbl11g.xml');
  3  END;
  4  /
PL/SQL procedure successfully completed.
SQL>


Step 2) Connect to an existing CDB and create a new PDB

Connect to an existing CDB and create a new PDB using the file describing the non-CDB database. You must use the FILE_NAME_CONVERT parameter to convert the existing file paths to the new location.

As you can see in the screenshot below, we are connected to the “orcl” CDB container. The “orcl” CDB container already has PDBORCL as one PDB. We will add our recently upgraded database “orcl11g” to this CDB container as “Plugabledbl11g”.

You can refer to my previous article on how to upgrade 11g to 12c. Click Here



Step 3) Create the pluggable database Plugabledbl11g

Create a directory Plugabledbl11g under the “/u01/aps/Oracle/oradata/orcl/” path, where all of the PDB's data files will be located.

Now, run the command below to create the new PDB named Plugabledbl11g:

CREATE PLUGGABLE DATABASE Plugabledbl11g USING '/var/Plugabledbl11g.xml' COPY FILE_NAME_CONVERT = ('/u01/aps/Oracle/oradata/orcl11g/ORCL11G/','/u01/aps/Oracle/oradata/orcl/Plugabledbl11g/');


As you can see in the screenshot above, the newly created PDB Plugabledbl11g is in MOUNTED state.

Step 4) Run the noncdb_to_pdb.sql script

Now, switch to the PDB container and run the "$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql" script to clean up the new PDB, removing any items that should not be present in a PDB.

1) ALTER SESSION SET CONTAINER=Plugabledbl11g;
2) @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql



Above script output is attached here.


Step 5) Start the PDB and check the open mode.

Now, start the PDB and check the open mode (an optional SAVE STATE sketch follows the commands below):

1) ALTER SESSION SET CONTAINER=Plugabledbl11g;
2) ALTER PLUGGABLE DATABASE OPEN;
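
Optionally (this is not part of the original steps), on 12.1.0.2 and later you can save the PDB's open state so that it is reopened automatically whenever the CDB restarts. A minimal sketch:

-- run from the root container; persists the current open mode of the PDB
ALTER SESSION SET CONTAINER=cdb$root;
ALTER PLUGGABLE DATABASE Plugabledbl11g SAVE STATE;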


With this, we are done with the migration from non-CDB to PDB in Oracle Database 12c.


Step 6) Check all PDBs and the PMON process.
Now check the status of all PDBs from the root container:
SQL> alter session set container=cdb$root;
Session altered.
SQL>
SQL> select con_id,dbid, name, open_mode from v$containers;

    CON_ID       DBID NAME             OPEN_MODE
---------- ---------- ---------------- ----------
         1 1441782042 CDB$ROOT         READ WRITE
         2 2980200325 PDB$SEED         READ ONLY
         3   85573530 PDBORCL          READ WRITE
         4 1021306567 Plugabledbl11g   READ WRITE

4 rows selected.
SQL>



Check the Database processes
-bash-4.1$ ps -ef | grep pmon
db1212   10033  8449  0 12:46 pts/3    00:00:00 grep pmon
db1212   12145     1  0 Jun08 ?        00:00:37 ora_pmon_orcl
-bash-4.1$

Our new Plugabledbl11g is now in READ WRITE mode. You can now connect to your new PDB for regular operations (a hedged connection sketch follows below).
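
A minimal connection sketch (the host name and user credentials below are illustrative only; it assumes the listener runs on the default port 1521 and that the PDB's default service, which carries the PDB name, is registered):

sqlplus scott/tiger@//dbhost:1521/Plugabledbl11g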

Extended Data Type in Oracle 12c


Oracle 12c introduced extended data types, with which the VARCHAR2, NVARCHAR2, and RAW
data types can store more data. Before 12c, the limit was 4000 bytes for the
VARCHAR2 and NVARCHAR2 data types and 2000 bytes for the RAW data type.

Now this size limit is increased to 32767 bytes for the VARCHAR2, NVARCHAR2, and
RAW data types.

Steps to enable Extended Data Type

Step 1: Close PDB
Step 2: Open PDB in Upgrade mode
Step 3: Change init parameter max_string_size to “extended”
Step 4: Run utl32k.sql script to make data dictionary changes at system level
Step 5: Close PDB
Step 6: Open PDB in read write mode



Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> alter session set container=DB12C;

Session altered.

SQL> show parameter max_string_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      STANDARD
SQL>
SQL>
SQL>
SQL> create table char_test(c1 varchar2(32767));
create table char_test(c1 varchar2(32767))
                                   *
ERROR at line 1:
ORA-00910: specified length too long for its datatype
SQL> 

The max_string_size default value is STANDARD, hence one cannot create a table with VARCHAR2(32767).

Let's change max_string_size to EXTENDED by following the steps above.

Step 1: Close PDB

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
Pluggable database altered.

Step 2: Open PDB in Upgrade mode

SQL> ALTER PLUGGABLE DATABASE OPEN UPGRADE;
Pluggable database altered.

Step 3: Change init parameter max_string_size to “extended”

SQL> ALTER SYSTEM SET max_string_size=extended;
System altered.

Step 4: Run utl32k.sql script to make data dictionary changes at system level

SQL> @?/rdbms/admin/utl32k
SP2-0042: unknown command "aRem" - rest of line ignored.

Session altered.

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT"  and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database does not have compatible >= 12.0.0
DOC>
DOC>   Set compatible >= 12.0.0 and retry.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.
Session altered.
0 rows updated.
Commit complete.
System altered.
PL/SQL procedure successfully completed.
Commit complete.
System altered.
Session altered.
Session altered.
Table created.
Table created.
Table created.
Table truncated.
0 rows created.
PL/SQL procedure successfully completed.

STARTTIME
--------------------------------------------------------------------------------
09/25/2018 16:01:44.423000000

PL/SQL procedure successfully completed.
No errors.

PL/SQL procedure successfully completed.
Session altered.
Session altered.
0 rows created.
no rows selected
no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if we encountered an error while modifying a column to
DOC>   account for data type length change as a result of enabling or
DOC>   disabling 32k types.
DOC>
DOC>   Contact Oracle support for assistance.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Commit complete.
Package altered.
Package altered.

Step 5: Close PDB

SQL> ALTER PLUGGABLE DATABASE CLOSE;
Pluggable database altered.

Step 6: Open PDB in read write mode

SQL> ALTER PLUGGABLE DATABASE OPEN;
Pluggable database altered.


Let's check the parameter and create a table with VARCHAR2(32767):

SQL> show parameter max_string_size
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED
SQL>
SQL> create table char_test(c1 varchar2(32767));
Table created.

SQL>
SQL> desc char_test;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 C1                                                 VARCHAR2(32767)
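
As a quick optional check (not part of the original listing), you can store a value longer than the old 4000-byte limit and confirm its length; the 10000-character value is illustrative:

-- insert a value well beyond the old 4000-byte VARCHAR2 limit and verify it
INSERT INTO char_test VALUES (RPAD('x', 10000, 'x'));
COMMIT;
SELECT LENGTH(c1) FROM char_test;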

ORA-02266: unique/primary keys in table referenced by enabled foreign key


Cause: An attempt was made to truncate a table with unique or primary keys referenced by foreign keys enabled in another table. Other operations not allowed are dropping/truncating a partition of a partitioned table or an ALTER TABLE EXCHANGE PARTITION.


Action: Before performing the above operations on the table, disable the foreign key constraints in other tables. You can see what constraints are referencing a table by issuing the following command: SELECT * FROM USER_CONSTRAINTS WHERE TABLE_NAME = "tabnam";
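
A hedged sketch of the action described above; the EMP/DEPT table names and the FK_EMP_DEPT constraint name are illustrative only:

-- find the enabled foreign keys that reference the table you want to truncate
SELECT owner, table_name, constraint_name
  FROM dba_constraints
 WHERE constraint_type = 'R'
   AND status = 'ENABLED'
   AND r_constraint_name IN (SELECT constraint_name
                               FROM dba_constraints
                              WHERE table_name = 'DEPT'
                                AND constraint_type IN ('P', 'U'));

-- disable them, truncate, then re-enable
ALTER TABLE emp DISABLE CONSTRAINT fk_emp_dept;
TRUNCATE TABLE dept;
ALTER TABLE emp ENABLE CONSTRAINT fk_emp_dept;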

ORA-02304: invalid object identifier literal




Cause: An attempt was made to enter an object identifier literal for CREATE TYPE that is either:
- not a string of 32 hexadecimal characters
- an object identifier that already identifies an existing object
- an object identifier different from the original object identifier already assigned to the type


Action: Do not specify the object identifier clause, or specify a 32-hexadecimal-character object identifier literal that is unique or identical to the originally assigned object identifier. Then retry the operation.
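
A hedged sketch of a valid object identifier clause; the type name and the 32-character hexadecimal OID are illustrative only:

-- the OID must be exactly 32 hexadecimal characters and not already in use
CREATE TYPE demo_type OID '19A57209ECB73F91E03400400B40BBE3' AS OBJECT (
  id   NUMBER,
  name VARCHAR2(30)
);
/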

ORA-00609: could not attach to incoming connection / ORA-609: opiodr aborting process unknown ospid - Cause and Solution


ORA-00609: could not attach to incoming connection  


This error is usually a secondary error, meaning that the actual cause of the issue is another error in the stack trace or list of errors.
If you get this error, look at the other errors reported alongside it.



Cause: Could not attach to incoming connection.
This error is usually due to timeout issues. It indicates that a client connection failed, or that a connection was aborted before it completed.



Solution: To resolve this error, either find and fix whatever is causing the timeout, or increase the timeout limit.
If you want to increase the timeout limit, change the INBOUND_CONNECT_TIMEOUT value on both the listener and the server side. This can be done in the sqlnet.ora and listener.ora files, as sketched below.
If you can't resolve this error using any of these methods, contact your database administrator or Oracle Support. There could be a range of things specific to your environment that cause this error.
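
A minimal sketch of the timeout change described above; the 120-second value is illustrative, and the suffix of the listener parameter must match your listener name (LISTENER here):

# sqlnet.ora on the database server - value in seconds
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

# listener.ora - suffix is the listener name
INBOUND_CONNECT_TIMEOUT_LISTENER = 120

Reload the listener afterwards (for example with "lsnrctl reload") so the new value takes effect.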

RMAN failed to connect target database with ORA-00020: maximum number of processes (150) exceeded



Solution:

Here we can see in the error that the maximum number of processes has been exceeded.
So we need to increase the number of processes, as the SYS user with SYSDBA privilege, as shown below:

SQL> show parameter processes;

NAME        TYPE     VALUE
----------- -------- ------
processes   integer  150

SQL> alter system set processes=300 scope=spfile;
System altered.

SQL>
Bounce the database so that the change is reflected in the instance, and then you can initiate the RMAN backup (a quick check of process usage is sketched below).
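
A hedged sketch for checking how close the instance actually is to its limits before and after the change:

-- current and high-water-mark usage against the configured limits
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions');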

ORA-00245: control file backup failed; target is likely on a local file system


$ cat rman_ORA-DATA_full-02Aug2017.log
Starting backup at 02-AUG-17
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
released channel: c1
released channel: c2
released channel: c3
released channel: c4
released channel: c5
RMAN-00571: ===========================================================
RMAN-00569: ======= ERROR MESSAGE STACK FOLLOWS ========
RMAN-00571: =========================================================
RMAN-03009: failure of backup command on c1 channel at 08/02/2017 21:22:25
ORA-00245: control file backup failed; target is likely on a local file system
Recovery Manager complete.


Solution:
As we can see, the control file backup failed. So we checked the RMAN configuration as shown below:
Before:
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORA-DATA are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '$ORACLE_HOME/dbs/snapcf_ORA-DATA.f'; # default
RMAN>
In the above configuration, we can see that CONTROLFILE AUTOBACKUP is OFF.
So we enabled it with the command shown below.
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

ORA-07445 - exception encountered: core dump



Cause: An operating system exception occurred, which should result in the creation of a core file. This is an internal error.


Action: Contact Oracle Customer Support.

Common precipitators of ORA-07445 include:
-- High-volume user transactions
-- Software bugs (e.g. Bug 4098853; see note 342443.1 on MOSC)
-- Too-small RAM regions (shared_pool_size, java_pool_size, large_pool_size) and a too-small application memory stack (e.g. a PL/SQL array that is too small)
-- Too-small undo and temp segments
-- Program errors (addressing outside of a RAM region, e.g. S0C4) and improper NLS parameter settings
-- Hardware errors
-- Oracle block corruption and a host of other related issues
-- An Oracle internal job failing with a specific exception

Note: There are many reasons for ORA-07445. Based on the arguments and the Oracle documentation you can often pinpoint the fix; many ORA-07445 bugs are described in the Oracle documentation.

ORA-00257: archiver error. Connect internal only, until freed.



Cause: The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is that the destination device is out of space to store the redo log file.


Action: Check the archiver trace file for a detailed description of the problem. Also verify that the device specified in the initialization parameter ARCHIVE_LOG_DEST is set up properly for archiving. (A quick space-usage check is sketched below.)
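
A hedged sketch of the space check, useful when archiving goes to the fast recovery area (if you archive to a plain file system destination, check free space there instead):

-- overall recovery area size versus usage
SELECT name, space_limit/1024/1024 AS limit_mb, space_used/1024/1024 AS used_mb
  FROM v$recovery_file_dest;

-- breakdown by file type, including how much is reclaimable
SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$recovery_area_usage;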

ORA-01031: insufficient privileges

Cause: An attempt was made to change the current username or password without the appropriate privilege. This error also occurs when attempting to install a database without the necessary operating system privileges. This error may occur if the user was granted the necessary privilege at a higher label than the current login.


Action: Ask the database administrator to perform the operation or grant the required privileges. For Trusted Oracle users getting this error although granted the appropriate privilege at a higher label, ask the database administrator to regrant the privilege at the appropriate label.

For the DBA, ORA-01031 can happen if the target OS executables do not have read and execute permissions (e.g. 770 in UNIX/Linux); also ensure that the oracle user is a member of the dba group (e.g. in /etc/group). There are similar permissions in the Windows registry.
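
A quick, hedged way to see which privileges and roles the current session actually has before raising the issue with the DBA:

SELECT * FROM session_privs;
SELECT * FROM session_roles;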

RMAN-06820: WARNING: failed to archive current log at primary database

CAUSE:

From 11.2.0.4 onward, as per 'unpublished' Bug 8740124, the current standby redo log is included as part of an RMAN archivelog backup at the standby site.
This is achieved by forcing a log switch at the primary site, but the connection to the primary failed when attempting to do so.
This is due to the bug below:
Bug 17580082

Solution:

Don't use operating system authentication to log in with RMAN. Instead, use a username and password.
Don't use the below:
$ rman target /
Instead, put in the username and password of a SYSDBA user:
$ rman target sys/password@stby
i.e. connect as 'rman target sysdba_user/password@stby'.
Note: The password, within the password file, for the primary and standby should be identical.

ORA-39165: Schema SYS was not found ORA-39166: Object AUD$ was not found.

When trying to back up the SYS.AUD$ table using Data Pump, we get the error below:

expdp directory=EXPDR dumpfile=SYS_AUD_table.dmp logfile=exp_SYS_AUD_table.log tables=AUD$ exclude=statistics

Export: Release 11.2.0.4.0 - Production on Fri Jan 6 15:31:15 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA directory=EXPDR dumpfile=SYS_AUD_table.dmp logfile=exp_SYS_AUD_table.log tables=AUD$ exclude=statistics
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
ORA-39166: Object SYS.AUD$ was not found.
ORA-31655: no data or metadata objects selected for job
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 2 error(s) at 15:31:18

Cause:

There is a restriction on Data Pump export.
It cannot be used to export schemas like SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP and WMSYS in any mode.


Solution:

Export the table SYS.AUD$ using the traditional export utility (exp):

exp file=SYS_AUD_table.dmp log=exp_SYS_AUD_table.log tables=AUD$ statistics=none

Export: Release 11.2.0.4.0 - Production on Fri Jan 6 16:24:40 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P15 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                           AUD$  504728389 rows exported
Export terminated successfully without warnings.

ORA-28017: The password file is in the legacy format.

SQL> alter user sys identified by iossKfGHsdsdUVu93xxswQ;
alter user sys identified by iossKfGHsdsdUVu93xxswQ
*
ERROR at line 1:
ORA-28017: The password file is in the legacy format.

Cause:

There are multiple possibilities for the cause of the error:

 * An attempt was made to grant SYSBACKUP, SYSDG or SYSKM.
 * These administrative privileges could not be granted unless the password file used a newer format ("12" or higher).
 * An attempt was made to grant a privilege to a user who has a large password hash which cannot be stored in the password file unless the password file uses a newer format ("12" or higher).
 * An attempt was made to grant or revoke a common administrative privilege in a CDB

Solution: 

Regenerate the password file in the newer format ("12" or higher).
Use the newer password file format ("12" or higher) if you need to grant a user the SYSBACKUP, SYSDG or SYSKM administrative privileges.

orapwd file=$ORACLE_HOME/dbs/orapwprod01 entries=5 force=y
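
If you want to keep the entries of the existing password file while regenerating it, orapwd on 12c also accepts input_file and format arguments. A hedged sketch (file names are illustrative; back up the old file first):

orapwd file=$ORACLE_HOME/dbs/orapwprod01 input_file=$ORACLE_HOME/dbs/orapwprod01.old format=12 force=y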

Query to find audit information about dropped users

Find audit information about dropped users


"DROP USER" is audited by default from Oracle 12c onwards.
Use the query below to retrieve this information, provided you have not turned off standard auditing:

select os_username, username, userhost, timestamp, obj_name, action_name, priv_used
from dba_audit_trail
where action_name = 'DROP USER'
and timestamp > to_date('08.02.2019', 'dd.mm.yyyy')
and obj_name in ('JIM', 'DWIGHT')
order by timestamp desc;


OPATCHAUTO-72046: No Wallet Option Provided

Problem:

While applying the latest PSU patch on an Oracle 12c RAC database, execution of the opatchauto apply command returned the error below.

$ORACLE_HOME/OPatch/opatchauto apply /patch/Jan2018/26635880 -oh /oracle/app/oracle/product/12.2.0.1/grid

OPatchauto session is initiated at Wed Nov 8 11:48:21 2018
OPATCHAUTO-72046: No wallet option provided.
OPATCHAUTO-72046: Wallet option is not provided which is required during patching.
OPATCHAUTO-72046: Please provide a wallet option.

Solution:

The opatchauto command always needs to be executed as the root user.
If you need to use any other user, grant that user sudo privileges for the opatchauto command.

Log in as root and rerun the command.

$ORACLE_HOME/OPatch/opatchauto apply /patch/Jan2018/26635880 -oh /oracle/app/oracle/product/12.2.0.1/grid

ORA-39358: Export dump file version 12.1.0.2.0 not compatible with target version 11.2.0.4.0

Problem:

While doing the import, I got the error below.

ORA-39358: Export dump file version 12.1.0.2.0 not compatible with target version 11.2.0.4.0

Solution:

1. Check the compatible parameter of both source and target database.

Source:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
compatible                           string      12.1.0.2.0
noncdb_compatible                    boolean     FALSE

Target:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------
compatible                           string      11.2.0.4.0

Here the source compatible parameter is the higher version (12.1.0.2) and the target is lower (11.2.0.4).
An export dump file generated from a database with a higher compatible setting cannot be imported into a database with a lower compatible value.
Either both should be the same, or the target database's compatible setting can be higher than the source's.

To solve this, use the VERSION=11.2 parameter while taking the export:

expdp dumpfile=test.dmp logfile=test.log directory=EXPDIR full=y version=11.2

Now we can import without any issues.

RMAN-20512: source database already registered in recovery catalog

Cause:

Source database was already registered in the recovery catalog.

Action:

If the source database is already registered, there is no need to register it again. Note that the recovery catalog enforces that all databases have a unique DBID. If the new database was created by copying files from an existing database, it will have the same DBID as the original database and cannot be registered in the same recovery catalog. (A hedged sketch for checking registered DBIDs follows below.)
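
A hedged sketch for checking what is already registered: connect to the recovery catalog as the catalog owner and list the registered databases and their DBIDs:

SELECT db_key, dbid, name FROM rc_database;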

RMAN-20511: database name is ambiguous in source recovery catalog database



Cause:

Two or more databases in the source recovery catalog database match this name.


Action:


Use the DBID option of the IMPORT CATALOG command to specify the source database, as sketched below.
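
A minimal sketch of the command (the catalog connect string and the DBID value are illustrative only):

RMAN> IMPORT CATALOG rcat_owner/password@srcdb DBID = 1618106402;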

 

RMAN-20507: some targets are remote – aborting restore



Cause:

During the restore process, one or more backup files were unavailable locally for the restore operation.


Action:

This message should be accompanied by a list of the remote backup files.
Recall these backups from the remote location and retry the RESTORE command.