Channel: Doyensys Allappsdba Blog

Find user commits per minute in oracle database




Find user commits per minute in an Oracle database:
===================================
The script below reports "user commits" statistics from the AWR snapshots in an Oracle database.
"user commits" counts the number of commits performed in the database, so it is useful for tracking transaction volume over time.
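The STAT_PER_MIN column is simply the delta of the statistic between two consecutive snapshots divided by the snapshot interval in minutes. As a quick sanity check of that arithmetic outside the database, here is a small shell sketch using values reconstructed from the first row of the sample output below:

```shell
# Recompute STAT_PER_MIN for one AWR interval by hand:
# delta of "user commits" divided by the interval length in minutes.
prev=348854508          # VALUE at the previous snapshot (implied by the 1,147,017 delta)
curr=350001525          # VALUE at snapshot 6626
interval_min=60.3       # interval length in minutes, rounded to one decimal as in the query
awk -v p="$prev" -v c="$curr" -v m="$interval_min" \
    'BEGIN { printf "%d commits, %.0f per minute\n", c - p, (c - p) / m }'
# 1147017 commits, 19022 per minute
```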

col STAT_NAME for a20
col VALUE_DIFF for 9999,999,999
col STAT_PER_MIN for 9999,999,999
set lines 200 pages 1500 long 99999999
col BEGIN_INTERVAL_TIME for a30
col END_INTERVAL_TIME for a30
set pagesize 40
set pause on


select hsys.SNAP_ID,
       hsnap.BEGIN_INTERVAL_TIME,
       hsnap.END_INTERVAL_TIME,
       hsys.STAT_NAME,
       hsys.VALUE,
       hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID) AS "VALUE_DIFF",
       round((hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID)) /
             round(abs(extract(hour from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))*60 +
                       extract(minute from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME)) +
                       extract(second from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))/60),1)) "STAT_PER_MIN"
from dba_hist_sysstat hsys, dba_hist_snapshot hsnap
 where hsys.snap_id = hsnap.snap_id
 and hsnap.instance_number in (select instance_number from v$instance)
 and hsnap.instance_number = hsys.instance_number
 and hsys.STAT_NAME='user commits'
 order by 1;



   SNAP_ID BEGIN_INTERVAL_TIME            END_INTERVAL_TIME              STAT_NAME                 VALUE    VALUE_DIFF  STAT_PER_MIN
---------- ------------------------------ ------------------------------ -------------------- ---------- ------------- -------------
      6626 11-NOV-18 05.00.13.272 PM      11-NOV-18 06.00.29.527 PM      user commits          350001525     1,147,017        19,022
      6627 11-NOV-18 06.00.29.527 PM      11-NOV-18 07.00.14.759 PM      user commits          351130223     1,128,698        18,875
      6628 11-NOV-18 07.00.14.759 PM      11-NOV-18 08.00.02.845 PM      user commits          351987886       857,663        14,342
      6629 11-NOV-18 08.00.02.845 PM      11-NOV-18 09.00.22.109 PM      user commits          352829839       841,953        13,963
      6630 11-NOV-18 09.00.22.109 PM      11-NOV-18 10.00.07.076 PM      user commits          353478483       648,644        10,865
      6631 11-NOV-18 10.00.07.076 PM      11-NOV-18 11.00.24.303 PM      user commits          353939928       461,445         7,652
      6632 11-NOV-18 11.00.24.303 PM      12-NOV-18 12.00.11.904 AM      user commits          354335275       395,347         6,611
      6633 12-NOV-18 12.00.11.904 AM      12-NOV-18 01.00.29.406 AM      user commits          354604745       269,470         4,469
      6634 12-NOV-18 01.00.29.406 AM      12-NOV-18 02.00.17.332 AM      user commits          354955934       351,189         5,873
      6635 12-NOV-18 02.00.17.332 AM      12-NOV-18 03.00.03.228 AM      user commits          356918293     1,962,359        32,815
      6636 12-NOV-18 03.00.03.228 AM      12-NOV-18 04.00.20.577 AM      user commits          357821672       903,379        14,981
      6637 12-NOV-18 04.00.20.577 AM      12-NOV-18 05.00.09.204 AM      user commits          358154880       333,208         5,572
      6638 12-NOV-18 05.00.09.204 AM      12-NOV-18 06.00.25.507 AM      user commits          358296694       141,814         2,352
      6639 12-NOV-18 06.00.25.507 AM      12-NOV-18 07.00.09.734 AM      user commits          358692156       395,462         6,624
      6640 12-NOV-18 07.00.09.734 AM      12-NOV-18 08.00.01.047 AM      user commits          359373748       681,592        11,379
      6641 12-NOV-18 08.00.01.047 AM      12-NOV-18 09.00.17.981 AM      user commits          360418586     1,044,838        17,327
      6642 12-NOV-18 09.00.17.981 AM      12-NOV-18 10.00.04.542 AM      user commits          362476024     2,057,438        34,405
      6643 12-NOV-18 10.00.04.542 AM      12-NOV-18 11.00.22.732 AM      user commits          364469092     1,993,068        33,053
      6644 12-NOV-18 11.00.22.732 AM      12-NOV-18 12.00.09.693 PM      user commits          365611444     1,142,352        19,103
      6645 12-NOV-18 12.00.09.693 PM      12-NOV-18 01.00.27.672 PM      user commits          366866479     1,255,035        20,813
      6646 12-NOV-18 01.00.27.672 PM      12-NOV-18 02.00.14.537 PM      user commits          368466462     1,599,983        26,756



REUSE_DUMPFILES Parameter In EXPDP


REUSE_DUMPFILES Parameter In EXPDP:
===================================

If we try to export to a dump file whose name already exists in the target directory,
the export fails with ORA-27038: created file already exists:

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/export/home/oracle/dbaadmin_estim.dmp"
ORA-27038: created file already exists
Additional information: 1

So if the requirement is to overwrite the existing dump file, the REUSE_DUMPFILES parameter can be used with EXPDP.


PARFILE WITH REUSE_DUMPFILES=Y

cat exp_reusedmp.par

dumpfile=dbaadmin_estim.dmp
logfile=dbaadmin.log
directory=EXPDIR
tables=dbaadmin.test_list
REUSE_DUMPFILES=Y
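The parfile above can be generated from the shell with a heredoc; the directory object, dump file, and table names are the ones used in this article:

```shell
# Build the export parfile; REUSE_DUMPFILES=Y lets expdp overwrite an
# existing dump file instead of failing with ORA-27038.
cat > exp_reusedmp.par <<'EOF'
dumpfile=dbaadmin_estim.dmp
logfile=dbaadmin.log
directory=EXPDIR
tables=dbaadmin.test_list
REUSE_DUMPFILES=Y
EOF
grep 'REUSE_DUMPFILES' exp_reusedmp.par
# REUSE_DUMPFILES=Y
```

Then run the export as usual with expdp parfile=exp_reusedmp.par.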



At this point the dump file dbaadmin_estim.dmp already exists, so the EXPDP job should overwrite it.


 expdp parfile=exp_reusedmp.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:53:54 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_reusedmp.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "dbaadmin"."test_list"                    24.69 MB  219456 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaadmin_estim.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 13 12:54:01 2017 elapsed 0 00:00:03

Query Clause In Expdp(DATAPUMP)



Query Clause In Expdp(DATAPUMP):
================================

The QUERY clause can be used in expdp or impdp to export/import a subset of the data, i.e. only the rows that match a specific condition.

Here we take an export dump of the table EMP WHERE created > sysdate -40. The filter can be placed on any column, depending on the requirement.

SQL> select count(*) from "DBAADMIN"."EMP" WHERE created > sysdate -40;

  COUNT(*)
----------
      1600

Create a parfile with query clause:


cat expdp_query.par

dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=dbaadmin.EMP
QUERY=dbaadmin.EMP:"WHERE created > sysdate -40"
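A practical note on quoting: on the expdp command line the double quotes in the QUERY value would have to be escaped for the shell, whereas inside a parfile the value can be written as-is. The parfile above can be built like this:

```shell
# Build the parfile; inside a parfile the QUERY value needs no shell escaping.
cat > expdp_query.par <<'EOF'
dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=dbaadmin.EMP
QUERY=dbaadmin.EMP:"WHERE created > sysdate -40"
EOF
grep 'QUERY' expdp_query.par
# QUERY=dbaadmin.EMP:"WHERE created > sysdate -40"
```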



Now run the expdp command with the parfile. As expected, 1600 rows are exported.



expdp parfile=expdp_query.par

Export: Release 12.1.0.2.0 - Production on Mon Jan 23 14:52:07 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=expdp_query.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "dbaadmin"."EMP"                        199.4 KB    1600 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/test.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 27 14:53:02 2018 elapsed 0 00:00:23

Log in to Oracle Database Cloud Service as the root user

Cause :


Oracle doesn't allow direct root access on cloud machines; sudo is the only way users can obtain root privileges.



Solution for the above issue:



1. Edit the file /etc/ssh/sshd_config and make two changes:

Change

PermitRootLogin no
to
PermitRootLogin yes


and change

AllowUsers opc oracle
to
AllowUsers opc oracle root


2. Copy the authorized keys from /home/opc/.ssh/ to /root/.ssh/:

cp /home/opc/.ssh/authorized_keys /root/.ssh/

OR, if an authorized_keys file is already present in the /root/.ssh/ directory, append the keys from /home/opc/.ssh/authorized_keys to it.



3. service sshd restart


Now try connecting to the machine as the root user from PuTTY, using the same private key.
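The two sshd_config edits in step 1 can be scripted with sed. The sketch below runs against a scratch copy so it is safe to try anywhere; on the real VM you would point it at /etc/ssh/sshd_config after taking a backup:

```shell
# Demonstrate the PermitRootLogin / AllowUsers edits on a scratch file.
cfg=$(mktemp)
printf 'PermitRootLogin no\nAllowUsers opc oracle\n' > "$cfg"

sed -i -e 's/^PermitRootLogin no$/PermitRootLogin yes/' \
       -e 's/^AllowUsers opc oracle$/AllowUsers opc oracle root/' "$cfg"

cat "$cfg"
# PermitRootLogin yes
# AllowUsers opc oracle root
```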

Form did not come up after cloning. FRM-40010: Cannot read form FND_TOP/forms/US/FNDSCSGN

Issue : 

Form did not come up after cloning.
FRM-40010: Cannot read form FND_TOP/forms/US/FNDSCSGN

Solution :

1. Stop the application tier services

2. Take a backup of context file and make the below changes :


Before:
<formsfndtop oa_var="s_formsfndtop">FND_TOP</formsfndtop>

After:
<formsfndtop oa_var="s_formsfndtop">/FIND01/apps_st/appl/fnd/12.0.0</formsfndtop>

3. Run Autoconfig.

4. Start the application tier services

Error : RMAN Active Database Duplicate For Standby Failing With ORA-15001 Diskgroup FRA Does Not Exist

Error : RMAN Active Database Duplicate For Standby Failing With ORA-15001 Diskgroup FRA Does Not Exist

Issue : On the primary, the SNAPSHOT CONTROLFILE NAME was wrongly configured to point to a standby diskgroup location.
Solution : 

Set the snapshot controlfile name to a valid location on the primary:

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '<DIR>/DB_UNIQUE_NAME/snapcf_<DBNAME>.f';

Error Running RMAN duplicate of Offline (Cold) Database Backup

Issue : Error Running RMAN duplicate of Offline (Cold) Database Backup 

Cause :

When the target database is running in ARCHIVELOG mode, RMAN will want to perform some recovery even when the backup being restored is an offline (cold) backup. The same applies to duplicate: even if the offline backup was taken after the latest archived redo log, RMAN takes the SCN of the latest archived redo, looks for a backup earlier than that SCN, restores those files, and then recovers the database.

This is expected RMAN behavior.


Solution :

1. Wait until the RMAN duplicate fails.
2. Log in to the AUX instance.
3. select open_mode from v$database;
4. If not mounted, mount the AUX instance.
5. select file#, status, checkpoint_change#, fuzzy from v$datafile_header;
-- check that checkpoint_change# is identical for all files
-- check that fuzzy=NO

6. Assuming all is OK in the checks of step 5:
7. recover database using backup controlfile until cancel;
-- type cancel;
/* You should get the message: media recovery cancelled */

If you receive ORA-1547 when recovery is cancelled, something is inconsistent about your files.

8. alter database open resetlogs;
9. shutdown immediate
10. startup mount
11. At the operating system level, run the DBNEWID utility to change the DBID:
nid TARGET=SYS/oracle

12. shutdown immediate
13. startup mount
14. alter database open resetlogs;
15. select dbid from v$database;
/* confirm this number is different from the target's DBID */

Error : FRM-40010 Cannot Read From /path/form.fmx Error in 12c

Error : FRM-40010 Cannot Read From /path/form.fmx Error in 12c 

Cause :
New Forms 12c installation.
Expected behavior since Forms 12c version.

Solution :

Starting in Forms 12c there is a new environment variable called FORMS_MODULE_PATH that restricts the directories from which Forms applications may be launched.

If the directory where the form is being launched is not defined in FORMS_MODULE_PATH the FRM-40010 error is expected.

The same approach to running forms works fine in the 11gR2 version.
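A minimal illustration of the fix, assuming the form lives in /u01/custom/forms (a hypothetical path): append that directory to FORMS_MODULE_PATH. In a real Forms 12c installation this variable is normally set in the environment file used by the Forms server (e.g. default.env), but a shell export shows the idea:

```shell
# Append an extra directory to FORMS_MODULE_PATH (colon-separated on Linux).
# /u01/custom/forms is a hypothetical location of the .fmx file.
export FORMS_MODULE_PATH="${FORMS_MODULE_PATH:+${FORMS_MODULE_PATH}:}/u01/custom/forms"
echo "$FORMS_MODULE_PATH"
```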

Error : ORA-00600:[KFDARELOCSET10] DURING ASM REBALANCE

Error : ORA-00600:[KFDARELOCSET10] DURING ASM REBALANCE 

Cause :

A rebalance operation can be initiated by a disk being added or by a manual rebalance.

The cause of the crash was observed to be UNPROTECTED files created in the diskgroup.

Issue is verified in UNPUBLISHED Bug 12319359 - SOL-X64-SC:HIT ASM ORA-00600:[KFDARELOCSET10], [3], [256], [3], [1], [1], [65535]



Solution :



The use of UNPROTECTED files is not recommended in production environments.


If the file that is causing the issue is a tempfile, we can drop it first and then start the rebalance to avoid the error.

SQL> select group_number,file_number,blocks,bytes,space,redundancy,striped,REDUNDANCY_LOWERED,MODIFICATION_DATE
from v$asm_file where group_number=1 and file_number=290;

SQL> select * from v$asm_alias where group_number=1 and file_number=290;
-- change the group_number and file_number based on the arguments of the ORA-600 error.

SQL> alter tablespace TEMP drop tempfile <file_name>;

Error : ORA-10459 "cannot start media recovery on standby database; conflicting state detected"

Error : ORA-10459 "cannot start media recovery on standby database; conflicting state detected"



Cause :

An attempt to start the MRP process on a physical standby fails with the error ORA-10459 "cannot start media recovery on standby database; conflicting state detected".

Solution :


Managed recovery can run on only one instance of a RAC standby.

To check whether managed recovery is already running on a specific instance, you can run the query below:

select * from gv$managed_standby;

Or

Check the alert log.

If managed recovery is already running on one instance and you want it to run on a specific instance:

Stop managed recovery on the instance where it is running, then start it on the desired instance.

Error while activating 'Oracle ERP Cloud' Endpoint Integration error 'Integration "LLU_TEST_JOURNALIMP | 1.0" cannot be activated.

Error : 

Cannot activate integration "LLU_TEST_JOURNALIMP | 1.0".
It fails with the error:

ERROR
------

error 'Integration "LLU_TEST_JOURNALIMP | 1.0" cannot be activated. Incident has been created with ID 7. [Cause: ICS-20566]'



Cause :

The input file did not conform to the associated schema (the file was not in UTF-8 format).

The following confirms how the issue relates to this specific customer:
The log file showed an error while loading the input file.
An input-file schema or format error was the cause.


Solution :


Resolve the schema issue within the input file.
Retest with the new input file.

Error: Your Oracle E-Business Suite account has not been linked with the Single Sign-On account that you just entered.

Error: Your Oracle E-Business Suite account has not been linked with the Single Sign-On account that you just entered.


Issue :

Getting the error below while logging in to ERP:

Error:
---------
Your Oracle E-Business Suite account has not been linked with the Single Sign-On account that you just entered. Please enter your Oracle E-Business Suite information


Solution :

The Applications SSO Auto Link profile is not enabled.

Make sure the profile option "Applications SSO Auto Link User" is set to "Y" at the "Site" level.

ICM Log Filled With: Could not contact Service Manager FNDSM_ The TNS alias could not be located on RAC with Multi Apps Tiers

Error :

Could not contact Service Manager FNDSM_SRVRAP2_PROD. The TNS alias could not be located, the listener process on SRVERPAP2 could not be contacted, or the listener failed to spawn the Service Manager process.


Cause :


The issue is caused by the existence of the service name SYS$APPLSYS.WF_CONTROL.<SID>, which should not exist.

See the following select:
1. SQL>select value from v$parameter where name='service_names';



VALUE
-------------------------------------------------------------------------------
<SID>, SYS$APPLSYS.WF_CONTROL.<SID>

The WF_CONTROL service name should not exist; only the <SID> service name should.


Solution :

1. Ensure backup has been taken before making the changes.

2. Shutdown the Applications Tier including Concurrent managers.

3. Run the following script:

$ sqlplus / @$FND_TOP/patch/115/sql/wfctqrec.sql


4. Bounce the 2 RAC nodes.

5. Edit the database XML (context) files on all RAC nodes and validate that s_dbService includes only the <SID> service name and not SYS$APPLSYS.WF_CONTROL.<SID>; correct it if needed.

6. Run AutoConfig on the RAC nodes (only if the XML file includes the wrong s_dbService name).

7. Run autoconfig on the Applications nodes.

8. Start the Applications Services.

Concurrent Manager : Standard Manager Going Down and Actual and Target Processes are Different

$
0
0
Issue :

The concurrent managers (including the Standard Manager) are not stable; the actual and target process counts differ.

Navigation :- Log in as SYSADMIN > Concurrent > Manager > Administer screen > verify the Actual & Target columns


Solution :

EBS 12.0.6 : Apply Patch 10113913.
EBS 12.1.X : Apply Patch 16602978.

Confirm Patch version :

afpgmg.o 120.3.12010000.10

You can use a command like the following:

strings -a $FND_TOP/bin/FNDLIBR | grep Header | grep afpgmg

Internal Concurrent Manager logfile Error : Routine &ROUTINE has attempted to start the internal concurrent manager.

Internal Concurrent Manager logfile Error : Routine &ROUTINE has attempted to start the internal concurrent manager. 

Issue :

Users encounter the following error in the Internal Concurrent Manager logfile:

Routine &ROUTINE has attempted to start the internal concurrent manager.
The ICM is already running. Contact your system administrator for further assistance.

afpdlrq received an unsuccessful result from PL/SQL procedure or function FND_DCP.Request_Session_Lock.
Routine FND_DCP.REQUEST_SESSION_LOCK received a result code of 1 from the call
to DBMS_LOCK.Request.
Possible DBMS_LOCK.Request result.
Call to establish_icm failed.
The Internal Concurrent Manager has encountered an error.


Cause :


The concurrent manager startup script is being executed on both nodes.
This causes two instances of the ICM (Internal Concurrent Manager) to run within one application instance, which produces the error messages in the manager logfile.
Moreover, FNDSM is then unable to complete its job of starting the respective processes on the defined nodes.


Solution :



To resolve the issue, test the following steps in a development instance and then migrate accordingly:

1. Ensure that APPLDCP is set to ON in your $APPL_TOP/.env file.
2. Echo the environment variable on the command line prior to starting the concurrent managers.
3. Execute adcmctl.sh only on the primary node of the Internal Concurrent Manager.



The server is asking for your user name and password. The server reports that it is from XDB.



Issue:

The browser prompts: "The server is asking for your user name and password. The server reports that it is from XDB."
This error appeared while accessing the APEX application page, and no password is accepted.



Environment:

DB Version: 12.1.0.2
Apex version: 5.1 and configured using PL/SQL gateway.



Solution
  • Verify whether the local_listener parameter is set; if not, set it and register the services:

     SQL> alter system register;
  • Make sure the database user ANONYMOUS is unlocked.
  • Enable anonymous access to the XML DB repository:



CONN sys/password AS SYSDBA

SET SERVEROUTPUT ON
DECLARE
  l_configxml XMLTYPE;
  l_value     VARCHAR2(5) := 'true'; -- (true/false)
BEGIN
  l_configxml := DBMS_XDB.cfg_get();

  IF l_configxml.existsNode('/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access') = 0 THEN
    -- Add missing element.
    SELECT insertChildXML
           (
             l_configxml,
             '/xdbconfig/sysconfig/protocolconfig/httpconfig',
             'allow-repository-anonymous-access',
             XMLType('<allow-repository-anonymous-access xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd">' ||
                     l_value ||
                     '</allow-repository-anonymous-access>'),
             'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"'
           )
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element inserted.');
  ELSE
    -- Update existing element.
    SELECT updateXML
           (
             DBMS_XDB.cfg_get(),
             '/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access/text()',
             l_value,
             'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"'
           )
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element updated.');
  END IF;

  DBMS_XDB.cfg_update(l_configxml);
  DBMS_XDB.cfg_refresh;
END;
/

REST API for Oracle Database Cloud Service:


Start a Backup Operation

POST /paas/service/dbcs/api/v1.1/instances/{identityDomainId}/{serviceId}/backups

Starts an on-demand backup for a Database Cloud Service instance.

Request
Supported media types:
  • application/json
The request takes path parameters, header parameters, and a JSON object as the request body.

Response
202 Response: Accepted. See Status Codes for information about other possible HTTP status codes.
Response headers:
  • Location: string
Examples
The following example shows how to start a backup operation by submitting a POST request on the REST endpoint using cURL.
This example uses a traditional cloud account, so the {identityDomainId} path parameter and the X-ID-TENANT-NAME header parameter are both set to the account's domain name, which is usexample. The service instance is db12c-xp-si, and the Oracle Cloud user name of the user making the call is dbcsadmin.
Note that the required (but empty) request body is provided in the cURL command's --data option.
cURL Command
$ curl --include --request POST \
--user dbcsadmin:password \
--header "X-ID-TENANT-NAME:usexample" \
--header "Content-Type: application/json" \
--data '{}' \
https://dbaas.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/usexample/db12c-xp-si/backups
HTTP Status Code and Response Headers
HTTP/1.1 202 Accepted
Date: date-and-time-stamp
Server: Oracle-Application-Server-11g
Location: https://dbaas.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/usexample/status/backup/job/5744472
Content-Length: 0
X-ORACLE-DMS-ECID: id-string
X-ORACLE-DMS-ECID: id-string
X-Frame-Options: DENY
Service-URI: https://dbaas.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/usexample/db12c-xp-si
Vary: Accept-Encoding,User-Agent
Retry-After: 60
Content-Language: en
Content-Type: application/json

The best upcoming features in Oracle Database 19c


As DBAs we are always excited about upcoming features, so below I share some of the main things I spotted at OOW. Please note that this can change; we don't even have a beta release yet.

1 - Stability
First of all, it was very clear that Oracle's main focus for the 19c database will be stability. This will be the final release of the "12cR2" family, so it was repeated multiple times: "don't expect to see many new features in this release".

Since 12.1.0.1, Oracle has been implementing a lot of core changes in the database (multi-tenancy, unified audit, etc.), and it is still very hard to find a stable 12c release to recommend. 12.1.0.2 is my favorite one, but many bugs remain unfixed and it lacks a secure PDB layout (PDB escape techniques are pretty easy to exploit). 18c will probably be ignored by most, as it was a "transition" release, so I hope 19c becomes the truly stable one, as 11.2.0.4 was for the 11g family. Let's see...

Now for the real features...
2 - Automatic Indexing
This is indeed the most important, and one of the coolest, features I've ever seen in the Oracle database. Once this kind of automation is implemented and released, it will open doors to many other product automations (automatic table reorganization, automatic table encryption, or anything you can imagine).

The automatic indexing methodology is based on a common approach to manual SQL tuning. Oracle captures the SQL statements, identifies candidate indexes, and evaluates which ones will benefit those statements. The whole process is not simple.

Basically, Oracle will first create those indexes as unusable and invisible (metadata only). Then, outside the application workflow, Oracle asks the optimizer to test whether those candidate indexes improve SQL performance. If performance is better for all statements when the index is used, it becomes visible. If performance is worse, it remains invisible. And if it performs better only for some statements, the index is marked visible only for those SQLs (via SQL Patch, maybe).

The automation will also drop indexes that are made obsolete by newly created indexes (logical merge), and remove automatically created indexes that have not been used in a long time. Everything is customizable. For more details, we need to wait for the beta release!

3 - Real-time Stats + Stats Only Queries
With these features, you will be able to turn on real-time statistics for some of your database objects, and it will be possible to run SQL statements that query only the object statistics, without doing a single logical read! Cool, isn't it?

4 - Data-guard DML Redirect
When you have a physical standby opened in read-only mode and plug into it some reporting tool that needs to create an underlying table or insert some log rows to operate, you have a problem. With this feature, you can define some tables (or maybe a schema; it's not clear yet) on which you will be able to run DML. Oracle will redirect that DML to your primary and reflect the changes back on your standby, so those tools keep working. This can be dangerous if not configured properly, but it will also allow us to do many new things.

5 - Partial JSON Update support
Currently, when you update a JSON data column, Oracle needs to upload and validate the whole new column value. With this feature, we will be able to update just a part (such as a single tag) of the JSON data.

6 - Schema-only Oracle accounts
Oracle 18c introduced passwordless accounts, meaning you could connect to a schema only through some sort of external authentication such as Active Directory. Now Oracle has gone further and created a true schema-only account, with no way to authenticate at all.

7 - DB REST API
Oracle is trying to make the whole Oracle Database "REST aware", meaning that very soon you will be able to perform all kinds of DB operations through a REST API (such as creating a database, creating a user, granting a privilege, or adding a new listener port).

8 - Partitioned Hybrid Tables
Remember the very old days when we didn't have partitioned tables and had to implement partitioning manually using views + UNION ALL over many tables? Thankfully, since Oracle 8 (released in 1997) we haven't needed that. Now Oracle has finally gone one step further: you can have a hybrid partitioned table, meaning each partition can have a different type or source (for example, one partition is an external table and another is a traditional data table).

With Oracle 18 XE limited to 12GB data, this feature will be cool as we will probably be able to offload some of the data externally.

9 - EZConnect Improvements
EZConnect is very useful for making quick connections without requiring a TNS alias. The problem is that, until now, if you wanted to use name-value pairs like SDU, RETRY_COUNT, or CONNECT_TIMEOUT, this wasn't possible and you would end up using TNS. In 19c you will be able to write something like:

sqlplus soe/soe@//salesserver1:1521/sales.us.example.com?connect_timeout=60&transport_connect_timeout=30&retry_count=3
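One gotcha when typing this from a Unix shell: ? and & are shell metacharacters, so the EZConnect string must be quoted or the shell will cut it off at the first &. Host, service, and credentials below are the example values from the connect string above:

```shell
# Quote the EZConnect string so the shell does not interpret '?' and '&'.
conn='//salesserver1:1521/sales.us.example.com?connect_timeout=60&transport_connect_timeout=30&retry_count=3'
echo sqlplus "soe/soe@${conn}"
```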

It will also be possible to specify multiple hosts/ports in the connection string (typically used to load-balance client connections).

10 - Some other cool features
There are many other features that we have to wait for the Beta release to understand better. Below are some of them:
Improvements for count distinct and group by queries
Sharding now supports multiple PDB shards in a CDB
SQL JSON Enhancements
RAT and ADDM at PDB level
Data Dictionary Encryption
Database Vault Operations Control
Web SQLDeveloper

"Verify Access Signing Certificate in Settings" in OCI when running curl REST calls


Recently I was trying to retrieve some OCI JSON data using REST calls via curl. However, even though I had set up an application and the Client ID and Secret accordingly, I was getting the error below:
{
  "httpStatusCode" : 401,
  "httpMessage" : "Unauthorized",
  "executionContextId" : "005WAjCLgER1FgyN06YBUF0003so0000FI,0:1:1",
  "errorCode" : "urn:oracle:cloud:errorcode:tas:unauthorized",
  "errorMessage" : "Invalid Bearer Token: java.lang.Exception: Cannot obtain Certificate. Verify Access Signing Certificate in Settings"
}
"Cannot obtain Certificate. Verify Access Signing Certificate in Settings"

Then I realized that there is an option under "Identity Cloud Service -> Default Settings" called "Access Signing Certificate". Since IDCS can have multi-factor authentication, Active Directory links, etc., you must enable this option to allow an application service to bypass those IDCS authentication steps and use the Client ID and Client Secret directly.

After enabling this option, everything worked.

How to Add LDAP Users and Groups in OpenLDAP on Linux


To add something to the LDAP directory, you first need to create an LDIF file.
The LDIF file should contain definitions of all the attributes required for the entries you want to create.
With this LDIF file, you can use the ldapadd command to import the entries into the directory, as explained in this tutorial.

If you are new to OpenLDAP, you should first install OpenLDAP on your system.
Create an LDIF File for a New User
The following is a sample LDIF file that will be used to create a new user.
# cat adam.ldif
dn: uid=adam,ou=users,dc=tgs,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: adam
uid: adam
uidNumber: 16859
gidNumber: 100
homeDirectory: /home/adam
loginShell: /bin/bash
gecos: adam
userPassword: {crypt}x
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0
Add an LDAP User Using ldapadd
Now, use the ldapadd command with the above LDIF file to create a new user called adam in our OpenLDAP directory, as shown below:
# ldapadd -x -W -D "cn=ramesh,dc=tgs,dc=com" -f adam.ldif
Enter LDAP Password:
adding new entry "uid=adam,ou=users,dc=tgs,dc=com"
Assign a Password to the LDAP User
To set the password for the LDAP user we just created, use the ldappasswd command as shown in the example below:
# ldappasswd -s welcome123 -W -D "cn=ramesh,dc=tgs,dc=com" -x "uid=adam,ou=users,dc=tgs,dc=com"
Enter LDAP Password:
In the above command:
§  -s specifies the new password for the user entry
§  -x uses simple authentication; the final argument is the DN of the entry whose password is changed
§  -D specifies your bind DN, i.e. the distinguished name used to authenticate against the server
Create an LDIF File for a New Group
Similar to adding a user, you'll also need an LDIF file to add a group.
To add a new group to the LDAP groups OU, create an LDIF file with the group information, as shown in the example below.
# cat group1.ldif
dn: cn=dbagrp,ou=groups,dc=tgs,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 678
Add an LDAP Group Using ldapadd
Just like adding a user, use the ldapadd command to add the group from the group1.ldif file that we created above.
# ldapadd -x -W -D "cn=ramesh,dc=tgs,dc=com" -f group1.ldif
Enter LDAP Password:
adding new entry "cn=dbagrp,ou=groups,dc=tgs,dc=com"
Create an LDIF File to Modify an Existing Group
To add an existing user to a group, we still create an LDIF file.
In this example, we add the user adam to the dbagrp group (group id: 678).
# cat file1.ldif
dn: cn=dbagrp,ou=groups,dc=tgs,dc=com
changetype: modify
add: memberuid
memberuid: adam
Add a User to an Existing Group Using ldapmodify
To add a user to an existing group, we use ldapmodify. This example uses the above LDIF file to add the user adam to dbagrp.
# ldapmodify -x -W -D "cn=ramesh,dc=tgs,dc=com" -f file1.ldif
Enter LDAP Password:
modifying entry "cn=dbagrp,ou=groups,dc=tgs,dc=com"
Verify LDAP Entries
Once you've added a user or group, you can use ldapsearch to verify it.
Here is a simple example that verifies the user exists in the LDAP database:
# ldapsearch -x -W -D "cn=ramesh,dc=tgs,dc=com" -b "uid=adam,ou=users,dc=tgs,dc=com" "(objectclass=*)"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <uid=adam,ou=users,dc=tgs,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# adam, users, tgs.com
dn: uid=adam,ou=users,dc=tgs,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: adam
uid: adam
uidNumber: 16859
gidNumber: 100
homeDirectory: /home/adam
loginShell: /bin/bash
gecos: adam
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0
userPassword:: e1NTSEF9b0lPd3AzYTBmT2xQcHBPNDcrK0VHRndEUjdMV2hSZ2U=

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
Delete an Entry from LDAP Using ldapdelete
If you've made a mistake while adding a user or group, you can remove the entry using ldapdelete.
To delete an entry, you don't need an LDIF file. The following deletes the user "adam" that we created earlier:
# ldapdelete -x -W -D "cn=ramesh,dc=tgs,dc=com" "uid=adam,ou=users,dc=tgs,dc=com"

