Posts

Showing posts from March, 2021

Query Regression - Lateral View Oracle 19c (Bug for a Bug)

While upgrading a database from version 12.1 to 19.10, we came across a set of queries using lateral views (visible as VW_LAT** views in the execution plan). These queries were running slower than they had on the 12.1 database. Looking further into this, we found the cause was Bug 30786641, which was introduced by the fix for another bug, 23273859. To understand the problem we first need to cover Bug 23273859. That bug caused wrong results when a join predicate was pushed into a correlated subquery using ANSI syntax. It affected all versions below 12.2, and the fix was introduced in the 12.2.0.1 base release. That fix in turn introduced a performance degradation for lateral views, which Oracle tracked as a new bug, 30786641. Bug 30786641, for slow performance of lateral views, was fixed in the 19.10 (Jan 2021) RU and in the 21.1 base release. In a case where the fix for this bug is already available in your binary, it needs to be ac…
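A minimal sketch of how to verify the state of that fix, assuming the standard fix-control interface (the bug number comes from the text above; whether the fix defaults to on or off in your RU should be confirmed with Oracle Support before changing anything):

```sql
-- Check whether the fix for bug 30786641 is present in the binary
-- and whether it is enabled (VALUE = 1) or disabled (VALUE = 0)
select bugno, value, description
  from v$system_fix_control
 where bugno = 30786641;

-- Illustrative only: toggle the fix at session level for testing
alter session set "_fix_control" = '30786641:1';
```

As always with underscore parameters, change them only under Oracle Support guidance.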

Query Regression - "OR" Transformation Oracle 19c

Recently, while upgrading an Oracle Database from version 12.1 to 19.10, we found a couple of queries that were running slow post upgrade. This type of behavior is common when upgrading to a newer version, and Oracle provides various ways to fix it. The difference between the good and bad plans was the introduction of a "MERGE JOIN CARTESIAN" (aka MJC), and that is where the regressed query on 19c was spending most of its time. MJC sometimes works well for queries, but in this case it was clearly not helping. We had the following options:
1. Copy the SQL profile from the 12c database to 19c. This worked, but the problem was not limited to one specific query; many similar queries were affected, so the profile option was not very attractive in this case.
2. Run the query with OFE set to 12.1, so the optimizer uses the same features as the previous version; the query then behaved well. This cannot be the permanent fix, though, since we need to use the 19c OFE; it is suitable for testing purposes only: alte…
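The OFE test from option 2 can be sketched as below; '12.1.0.2' is an assumed value here, chosen to match the pre-upgrade version, and this is for plan comparison only, not a permanent fix:

```sql
-- Reproduce the good 12.1 plan by rolling back optimizer features
alter session set optimizer_features_enable = '12.1.0.2';
-- ...then run the regressed query and compare its plan with the 19c one
```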

Resynchronizing an out-of-sync table - Oracle Golden Gate

In this procedure, there are two Replicat groups:
● The original group, known as group A
● A new group that you will create, known as group B

To resynchronize a table:
1. Comment out the out-of-sync table in the original Replicat parameter file.
2. Stop the original Replicat and start it again so it continues processing the tables that are not out-of-sync.
STOP REPLICAT <groupA>
START REPLICAT <groupA>
3. Write down the timestamp on the source system.
4. Try to resolve any long-running transactions on the source out-of-sync table. You will be copying the source data to the target, and if transactions are open while the copy is being made, they will not be captured.
5. Start a copy of the source data.
6. Import the copy to the target table.
7. Create a new Replicat group for the out-of-sync table, using BEGIN to start at the source timestamp that you recorded in step 3, and using the existing trail for EXTTRAIL.
ADD REPLICAT <groupB>, EXTTRAIL <existing trail>, …
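The GGSCI side of steps 2 and 7 might look like the sketch below; the group names (repa, repb), trail name (./dirdat/aa), and timestamp are hypothetical stand-ins for your own values:

```
STOP REPLICAT repa
START REPLICAT repa

-- after the copy and import are complete (steps 5-6):
ADD REPLICAT repb, EXTTRAIL ./dirdat/aa, BEGIN 2021-03-01 10:30:00
START REPLICAT repb
```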

Add tables to an Existing GoldenGate Configuration - Oracle Golden Gate

1. Log into the source database via GGSCI:
[oracle@blue ggate]$ ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.13 17435036 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131002.1206_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Oct 3 2013 02:39:46
Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

GGSCI (blue.color) 2> dblogin userid ggate, password ggate
Successfully logged into database.

2. Add trandata for the new table(s):
GGSCI (blue.color) 3> add trandata song.stone
2013-12-05 15:17:07  WARNING OGG-00869  No unique key is defined for table 'STONE'. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
Logging of supplemental redo data enabled for table SONG.STONE.

GGSCI (blue.color) 4> add trandata song.greystreet
2013-12-05 15:17:15  WARNING OGG-00869  No unique key is defined for table 'GREYSTREET'. All viable columns will be used …
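Once trandata is enabled, the new tables also need entries in the Extract and Replicat parameter files before they replicate; a minimal sketch, with hypothetical group names:

```
-- Extract parameter file (hypothetical group EXT1): add the new tables
TABLE song.stone;
TABLE song.greystreet;

-- Replicat parameter file (hypothetical group REP1) on the target:
MAP song.stone, TARGET song.stone;
MAP song.greystreet, TARGET song.greystreet;
```

The groups must be stopped and restarted for the parameter changes to take effect.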

Initial Load - Instantiation - Oracle Golden Gate using Datapump

Full database export using expdp:
1. Enable minimal supplemental logging in Oracle on the source:
alter database add supplemental log data;
2. Enable supplemental logging at table level on the source:
GGSCI> dblogin userid xxxxx password xxxxxx
GGSCI> add trandata <schema>.<tablename>
3. Add Extract, add Exttrail, add Pump, add Rmttrail on the source.
4. Start Extract, start Pump on the source.
5. Create a database directory:
SQLPLUS> create directory dumpdir as '<some directory>';
6. Get the current SCN on the source database:
SQLPLUS> select current_scn from v$database;
28318029
7. Run the Data Pump export using the flashback SCN you obtained in the previous step. The following example runs the expdp utility at a degree of parallelism (DOP) of 4. If you have sufficient system resources (CPU, memory and I/O), running at a higher DOP will decrease the time the export takes (up to 4x for a DOP of 4). Note that expdp uses Oracle Database parallel executi…
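Step 7 can be sketched as follows; the credentials, dumpfile pattern, and log name are hypothetical, while the SCN and DOP come from the steps above:

```shell
# Full export consistent as of SCN 28318029, degree of parallelism 4
expdp system/password full=y directory=dumpdir \
      dumpfile=full_%U.dmp logfile=full_exp.log \
      flashback_scn=28318029 parallel=4
```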

Connect to PDB Directly

With the inception of multitenancy in Oracle Database, a whole new world of opportunities and features has opened up. Those who have worked with earlier versions of the database are used to logging in with the "sqlplus / as sysdba" command. With multitenancy, sqlplus with a sysdba login lands you in the container database, from which you then need to navigate to your pluggable database using:
alter session set container=<pluggable database name>;
But did you know there is a way to log into the pluggable database directly? Let's see how it works. Assume you are working in a Linux environment, your pluggable database name is pdb1, your database name is orcl1, and the instance name is the same. With the usual SID settings, set the following:
export ORACLE_SID=orcl1
export ORACLE_HOME=/u01/oracle/product/db_1
export PATH=$ORACLE_HOME/bin:$PATH
Once the shell environment variables are set, we can log in u…
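Since the excerpt is cut off, here is one common way to connect straight to the PDB, sketched under the assumption that the listener registers a service named pdb1 (the host and port are placeholders):

```shell
# Connect directly to the pluggable database service via Easy Connect,
# bypassing the "alter session set container" step in the CDB root
sqlplus sys/password@//localhost:1521/pdb1 as sysdba
```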