
Showing posts from April, 2022

How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues

Session Tracing

alter session set tracefile_identifier='10046';
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever,level 12';
-- Execute the queries or operations to be traced here
select * from dual;
exit;

If the session is not exited, the trace can instead be disabled with:

alter session set events '10046 trace name context off';

Tracing a process after it has started

The first step is to identify the session to be traced by some means:

select p.PID, p.SPID, s.SID
  from v$process p, v$session s
 where s.paddr = p.addr
   and s.sid = &SESSION_ID
/

SPID is the operating system process identifier (OS PID); PID is the Oracle process identifier (ora PID). Once the OS process id for the process has been determined, the trace can be initialised as follows:

connect / as sysdba
oradebug setospid 9834
oradebug unlimit
oradebug event 10046 trace name context forever,level 12

Once the operation to be traced has completed, disable tracing:

oradebug event 10046 trace name context off
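
Once tracing is enabled, the resulting trace file can be located, for example on 11g and later where V$DIAG_INFO is available (an illustration, not part of the post):

SQL> select value from v$diag_info where name = 'Default Trace File';

The tracefile_identifier set above ('10046') is appended to the trace file name, which makes the file easier to spot in the trace directory.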

ORA-03001 WHEN RUNNING MERGE ON AN XMLTYPE COLUMN OVER A DBLINK FROM 19C TO 12C DATABASE

Symptoms

When accessing an XMLType column or XMLType table in a remote database via database link, an ORA-03001 is generated:

ERROR at line 9:
ORA-03001: unimplemented feature
ORA-02063: preceding line from SOMEDBLINK

XMLType over DBLinks is not supported. This is properly documented in Appendix B of the XDB Developer's Guide.

Solution

XMLType over DBLinks is not supported; however, in earlier releases the restriction was not completely enforced, and certain cases ran without raising an error like ORA-03001. For those specific cases that should have been blocked in the old releases, patch 31385623 was created. Therefore, if available, download and apply patch 31385623 and see if it helps in your situation. No other patch will be created to circumvent the restriction on accessing XMLType over dblinks.

OEM Agent: The agentTZRegion value not in Agreement

Due to DST changes a few days back, we saw a couple of OEM Agents fail to start with the error below:

oracle@machine[TEST01]> ./emctl start agent
Starting agent ......... failed.
The agentTZRegion value in /orcl/app/oracle/agent11g/sysman/config/emd.properties is not in agreement with what agent thinks it should be. Please verify your environment to make sure that TZ setting has not changed since the last start of the agent.
If you modified the timezone setting in the environment, please stop the agent and exectute 'emctl resetTZ agent' and also execute the script mgmt_target.set_agent_tzrgn(<agent_name>, <new_tz_rgn>) to get the value propagated to repository.
Consult the log files in: /orcl/app/oracle/agent11g/sysman/log

The OS-level timezone had changed, but the change was not picked up by the agent configuration. We fixed it manually using the method below:

1. Set the correct value in the TZ environment variable:

oracle@machine[TEST01]> echo $TZ
Europe/Moscow
export TZ=Etc/GMT-4
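
Following the instructions in the error message above, the remaining steps would look roughly like this (the agent target name and timezone value are placeholders):

oracle@machine[TEST01]> ./emctl stop agent
oracle@machine[TEST01]> ./emctl resetTZ agent

Then, connected to the repository database as SYSMAN:

SQL> exec mgmt_target.set_agent_tzrgn('machine:3872','Etc/GMT-4');
SQL> commit;

oracle@machine[TEST01]> ./emctl start agent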

ORA-06502: "Numeric Or Value Error: Character String Buffer Too Small" from SQL Using DBMS_LOB.SUBSTR

On one occasion support teams reached out to us with a strange error, ORA-06502: "Numeric Or Value Error: Character String Buffer Too Small", from SQL using DBMS_LOB.SUBSTR. They mentioned that they get this error only when the data is more than 4000 bytes; with smaller values it works fine. MOS already has this problem listed under Doc ID 567574.1.

Reason for this error: DBMS_LOB.SUBSTR returns a VARCHAR2, and a VARCHAR2 cannot be larger than 4000 bytes. The data being returned is from a UTF8 database, and the column BIGCOL contains multibyte characters, which result in the converted size exceeding 4000 bytes.

The following can be used as a workaround:

create or replace function my_clob_substr( c in clob ) return varchar2
as
  v_characters number := 4001;
  v_length     number := 4001;
begin
  while v_length > 4000 loop
    v_characters := v_characters - 1;
    v_length := lengthb(dbms_lob.substr(c, v_characters, 1));
  end loop;
  return dbms_lob.substr(c, v_characters, 1);
end;
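
A minimal usage sketch for the workaround function, assuming a table T with the CLOB column BIGCOL mentioned above:

SQL> select my_clob_substr(bigcol) from t;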

Building an RPM for a startup script on Linux

Lately, there came a need to configure a systemctl startup script for stopping and starting services on Linux servers. Oracle services usually come with high-availability, cluster-based tooling that provides this capability, but in cases where we don't have it, we as administrators need to know how to perform such operations ourselves. A systemctl service can be created easily, but if one wants to package it as an RPM and deploy it on various servers, the following steps can be used.

RPM stands for Red Hat Package Manager. It was developed by Red Hat and is primarily used on Red Hat-based Linux operating systems. These packages have a .rpm extension and use the following naming format:

<name>-<version>-<release>.<arch>.rpm

Each of these components is critical and will be tied together as we create the package in the coming section of this blog. To build an RPM we first need to create the follo
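
As a rough, hypothetical illustration of such a package, a minimal spec file and build command might look like this (the package name, unit file and paths are placeholders, and the myservice.service file is assumed to sit in ~/rpmbuild/SOURCES):

$ cat ~/rpmbuild/SPECS/myservice.spec
Name:           myservice
Version:        1.0
Release:        1
Summary:        Packages a systemd unit for the myservice daemon
License:        GPL
BuildArch:      noarch
Source0:        myservice.service

%description
Installs the systemd unit file used to start and stop the myservice daemon.

%install
mkdir -p %{buildroot}/usr/lib/systemd/system
install -m 644 %{SOURCE0} %{buildroot}/usr/lib/systemd/system/myservice.service

%files
/usr/lib/systemd/system/myservice.service

$ rpmbuild -bb ~/rpmbuild/SPECS/myservice.spec

The resulting package follows the <name>-<version>-<release>.<arch>.rpm format described above, e.g. myservice-1.0-1.noarch.rpm.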

SQL Profile - Playing with the Execution Plan

Although I don't have much experience working with outlines, these days I am working significantly with SQL Profiles. My favorite blogger, Kerry Osborne, has written a wonderful blog on this topic which gave me a very clear idea of SQL Profiles. Outlines basically lock the execution plan, but a SQL Profile works with empirical values for the SQL plan, normally known as fudge factors. It gives the optimizer the flexibility to look for an alternate plan, but we can say the advantages of a SQL Profile will fade with time. The hints used in a SQL Profile are only lightly documented, so I wanted to give as much information as possible:

1. Setting the optimizer mode: ALL_ROWS
2. Disabling hints present in the SQL: IGNORE_OPTIM_EMBEDDED_HINTS
3. Setting OPTIMIZER_FEATURES_ENABLE to its default value, i.e. activating all available features: OPTIMIZER_FEATURES_ENABLE(default)
4. Adjusting the number of rows returned from a table: OPT_ESTIMATE(@"SEL$1", TABLE, "L"@"SEL$1", SCALE_RO
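
For illustration, here is a minimal sketch of how such hints can be attached to a statement as a manually created SQL Profile via DBMS_SQLTUNE.IMPORT_SQL_PROFILE, the technique popularized by Kerry Osborne's scripts (the statement text, profile name and hint list below are placeholders, not something from this post):

declare
  v_hints sys.sqlprof_attr;
begin
  v_hints := sys.sqlprof_attr(
               'ALL_ROWS',
               'IGNORE_OPTIM_EMBEDDED_HINTS',
               'OPTIMIZER_FEATURES_ENABLE(default)');
  dbms_sqltune.import_sql_profile(
      sql_text    => 'select * from t1 where n1 = 1',
      profile     => v_hints,
      name        => 'MY_TEST_PROFILE',
      force_match => true,
      replace     => true);
end;
/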

Table Fragmentation - Oracle

Table fragmentation: If a table is only subject to inserts, there will not be any fragmentation. Fragmentation comes in when we update or delete data in the table. The space that gets freed up during non-insert DML operations is not immediately re-used (and sometimes may never get reused at all). This leaves holes behind in the table, which results in table fragmentation.

To understand it more clearly, we need to be clear on how Oracle manages space for tables. The “high water mark” (HWM) of a table defines the border line between (ever) used and (never) unused space. While performing a full table scan, Oracle will always read the data up to the HWM. If there is a lot of free space within the HWM, that is read too, degrading the performance of the FTS.

Now let's see how to identify the HWM, unused (never used) space and free space (used but deleted/updated), and then decide whether the table in question is a candidate for reorganization or not.

SQL> create table test as select * from dba_tables;
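
As a rough first check (an illustration, assuming an 8K block size), the allocated size can be compared against the size implied by the statistics:

SQL> exec dbms_stats.gather_table_stats(user,'TEST');

SQL> select table_name,
            round(blocks*8192/1024/1024,2)            "Allocated (MB)",
            round(num_rows*avg_row_len/1024/1024,2)   "Actual data (MB)"
       from user_tables
      where table_name = 'TEST';

A large gap between the two figures after heavy deletes suggests free space below the HWM and a possible candidate for reorganization.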

Values populated in DBA_TAB_MODIFICATIONS

Goal: The goal is to explain why the view DBA_TAB_MODIFICATIONS sometimes has no values even when the parameter STATISTICS_LEVEL is set to TYPICAL and the specific schema has been analyzed successfully using the package DBMS_STATS.GATHER_SCHEMA_STATS. In addition, all the tables in that schema show MONITORING=YES in the view DBA_TABLES.

Fix: Updates to *_TAB_MODIFICATIONS are related to the volume of changes on a table. Roughly 10% of the table's data volume needs to change; a single update of one row, for example, might not be enough to populate *_TAB_MODIFICATIONS.

Example:

SQL> create table test ( num varchar2(32));

Table created.

Let's insert 100 rows into table test:

SQL> begin
  2    for i in 1..100 loop
  3      insert into sys.test values (i);
  4    end loop;
  5    commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select count(*) from test;

  COUNT(*)
----------
       100

Gather stats for this table:

SQL> EXEC DBMS_STATS.
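
For illustration (not part of the excerpt), the pending change counts can be made visible immediately by flushing the monitoring information before querying the view:

SQL> exec dbms_stats.flush_database_monitoring_info;

SQL> select table_name, inserts, updates, deletes
       from dba_tab_modifications
      where table_name = 'TEST';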

Invisible Indexes

Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level. Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can be toggled using the ALTER INDEX command.

1. Create a table t1 with 2 columns n1 and n2. Hint: Create table t1(n1 number, n2 number);

SQL> show parameter visible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_use_invisible_indexes      boolean     FALSE

SQL> Create table t1(n1 number, n2 number);

Table created.

2. Populate records:

Begin
  For i in 1..1000 loop
    Insert into t1 values(i,i);
  end loop;
end;
/

SQL> Begin
       For i in 1..1000 loop
         Insert into t1 values(i,i);
       end loop;
     end;
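
Continuing the example, an invisible index on t1 might be created and toggled like this (the index name is a placeholder of my own, not from the post):

SQL> create index t1_n1_idx on t1(n1) invisible;

SQL> alter session set optimizer_use_invisible_indexes = true;
SQL> select * from t1 where n1 = 100;

SQL> alter index t1_n1_idx visible;
SQL> alter index t1_n1_idx invisible;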

Row Chaining and Row Migration - Performance Tuning

Proof of Concept: There are two circumstances in which the data for a row in a table may be too large to fit into a single data block: row chaining and row migration.

Row Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

Row Migration: Occurs when a row that originally fitted into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing th
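
As a quick way to check for chained or migrated rows (an illustration; the table name T is a placeholder), the CHAINED_ROWS table created by utlchain.sql can be populated with ANALYZE, or the instance-wide statistic consulted:

SQL> @?/rdbms/admin/utlchain.sql
SQL> analyze table t list chained rows into chained_rows;
SQL> select count(*) from chained_rows;

SQL> select name, value from v$sysstat where name = 'table fetch continued row';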

Recreate Targets.xml File for Grid Control agent version

Sometimes, when your targets.xml is corrupted or accidentally dropped, you need to recreate it manually. The steps are as below:

1. Create a blank targets.xml file:

cd $ORACLE_HOME/sysman/emd
touch targets.xml

2. After creating the targets.xml file, copy values from the $ORACLE_HOME/sysman/config/emd.properties file in the format below:

<Targets AGENT_SEED="agentseed from emd.properties">
        <Target TYPE="oracle_emd" NAME="hostname and port from emd.properties EMD_URL"/>
        <Target TYPE="host" NAME="hostname as it appears in the emd_url"/>
</Targets>

Example:

<Targets AGENT_SEED="256896864">
        <Target TYPE="oracle_emd" NAME="hostname:3872"/>
        <Target TYPE="host" NAME="hostname"/>
</Targets>

The first time you start the agent with the above file, it will itself convert the AGENT_SEED parameter to AGENT_TOKEN, and the value is stored in encrypted form.

3. To f
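
After recreating the file, restarting the agent and confirming the targets are back could be done along these lines (a sketch; output not shown):

cd $ORACLE_HOME/bin
./emctl start agent
./emctl status agent
./emctl config agent listtargets
./emctl upload agent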

Maintenance Of Archive logs On Standby Databases

Maintenance of archivelogs which have already been applied on a standby database, in Oracle 10g and above.

From Oracle 10g onwards, RMAN has a specific configuration for the archivelog deletion policy, i.e.:

ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY

This parameter can be set on primary and standby databases. To enable automatic maintenance of the archivelogs on the standby, perform the steps below on the standby database. In a Data Guard configuration, the archivelogs in the FRA will be automatically purged when the following conditions are met by the database. If you need them purged at the primary site as well, set the policy on the primary database.

Step 1: Prior to 11g, if not using mandatory archivelog destinations, the database (primary and standby) must be restarted with the following parameter:

SQL> alter system set "_log_deletion_policy"='ALL' scope=spfile;

Step 2: Enable the flash recovery area on the standby database so that all the
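
For reference, the flash recovery area and the RMAN deletion policy the steps refer to would be set roughly as follows (the size and path are placeholders):

SQL> alter system set db_recovery_file_dest_size=100G scope=both;
SQL> alter system set db_recovery_file_dest='/orcl/fra' scope=both;

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;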

Oracle - Flush Execution Plan for Specific SQL

Sometimes you need to invalidate a statement, for example because the current execution plan is not the right one (as a consequence of bind variable peeking) and you want it to be re-optimized (hard parsed).

alter system flush shared_pool;

flushes everything and can kill the performance of a production system for tens of minutes, because most new statements will have to be hard parsed. Gathering statistics can invalidate the cursor, but you may have unexpected side effects. Since 10.2.0.4 there is the possibility to flush a single statement with dbms_shared_pool.purge, providing the address and the hash value of the statement, which you get from v$sql. The syntax is:

dbms_shared_pool.purge('<address>,<hash value>','C');

where <address> and <hash value> come from V$SQLAREA and 'C' is for cursor. See ORACLE_HOME/rdbms/admin/dbmspool.sql for more information. The function will return when it is done (if the cursor is currently used - see USE
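
Putting it together, a small sketch (the sql_id is a placeholder) that looks up the address and hash value and purges the cursor:

declare
  v_address    v$sqlarea.address%type;
  v_hash_value v$sqlarea.hash_value%type;
begin
  select address, hash_value
    into v_address, v_hash_value
    from v$sqlarea
   where sql_id = 'ff15115dvgukr';   -- placeholder sql_id
  dbms_shared_pool.purge(rawtohex(v_address) || ',' || v_hash_value, 'C');
end;
/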

Oracle TimesTen Overview and Datastore Refresh

Overview of TimesTen:

Oracle TimesTen In-Memory Database (TimesTen) is a full-featured, memory-optimized, relational database with persistence and recoverability. It provides applications with the instant responsiveness and very high throughput required by database-intensive applications. Deployed in the application tier, TimesTen operates on databases that fit entirely in physical memory (RAM). Applications access the TimesTen database using standard SQL interfaces. For customers with existing application data residing in the Oracle Database, TimesTen is deployed as an in-memory cache database with automatic data synchronization between TimesTen and the Oracle Database.

TimesTen Datastore Refresh:

Step 1: Take a backup (source):

/export/TimesTen/7.0.5_64/bin/ttbackup -dir /var/tmp -fname PROD_DATASTORE_081010_Bkp DSNNAME

Step 2: Scp the dumps to the target.

Step 3: Destroy the DSN (target):

hostname% ./ttdestroy DSNNAME
hostname% ./ttstatus
TimesTen status report as of Mon Oct 11 08:20:16 2010
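
Restoring on the target is typically done with the ttRestore utility, the counterpart of the ttbackup in Step 1; roughly along these lines (directory, file prefix and DSN mirror the backup step above and are placeholders):

hostname% ./ttrestore -dir /var/tmp -fname PROD_DATASTORE_081010_Bkp DSNNAME
hostname% ./ttstatus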

Hidden Parameters for Datapump expdp/impdp

Oracle Undocumented Parameters

1. METRICS

You can use the parameter METRICS=Y to include additional logging information in the log file about the number of objects that were processed and the time it took to process them. Note that the METRICS parameter does not create any additional trace files; instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job. Here is an example:

expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y

[oracle@zurich Desktop]$ expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y

Export: Release 11.2.0.3.0 - Production on Tue Dec 20 10:46:08 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterpr

Colored SQL

This new feature doesn't paint the SQL statement in color; rather, it sort of marks it as "important." Suppose you are troubleshooting a performance issue and you suspect a specific SQL statement is contributing to the problem. You want the SQL statement to be captured in every Automatic Workload Repository (AWR) snapshot. But AWR snapshots do not capture all SQL statements, just the top ones. How can you force a specific SQL to be captured, regardless of its inclusion in the top SQLs?

The procedure add_colored_sql() in the package dbms_workload_repository marks the SQL as "colored", or important enough to be captured in every AWR snapshot regardless of whether the SQL is in the top SQLs. First, identify the SQL statement and then get its SQL_ID. To color it, use:

begin
  dbms_workload_repository.add_colored_sql(
     sql_id => 'ff15115dvgukr'
  );
end;
/

To find out which SQLs have been colored, you can query the AWR table WRM$_COLORED_SQL:

SQL> SELECT *
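
The reverse operation exists as well; to stop capturing the statement in every snapshot, it can be "uncolored" again (same placeholder sql_id as above):

begin
  dbms_workload_repository.remove_colored_sql(
     sql_id => 'ff15115dvgukr'
  );
end;
/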

Datapump - Space Reduction

With the introduction of the expdp COMPRESSION parameter in 11g, we are all using it, and it greatly helps in reducing dump size. But do we know that it is even better than gzip, producing about half the size gzip does? Here are the details.

The parameter has four options:

METADATA_ONLY - only the metadata is compressed; the data is left as it is (available in Oracle Database 10.2 as well).
DATA_ONLY - only the data is compressed; the metadata is left alone.
ALL - both the metadata and data are compressed.
NONE - this is the default; no compression is performed.

Here is how you compress the export of the table UNITS_FACT:

$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_comp.dmp compression=all

For comparison purposes, export without compression:

$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_uncomp.dmp

Now if you check the files created:

$ ls -l
-rw-r----- 1 oracle dba 2576384 Jul 6 22:39 units_fact_comp.dmp
-rw-r----- 1 oracl
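
To check the gzip comparison on your own system, one simple way (reusing the file names from the example above) is to gzip the uncompressed dump and compare sizes:

$ gzip -c units_fact_uncomp.dmp > units_fact_uncomp.dmp.gz
$ ls -l units_fact_comp.dmp units_fact_uncomp.dmp.gz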

LEAST amount of recovery that has to be done before a restored database can be opened

Q: What is the LEAST amount of recovery that has to be done before a database restored from a hot RMAN backup can be opened?

A: ALL archivelogs from backup start to backup end must be applied before the database can be opened - this is the MINIMUM amount of recovery needed. To determine which log was current at the time the backup completed, note the COMPLETION time of the RMAN database backup - take this from the backup log. If the backup log is no longer available, query the RMAN metadata, but make sure you set NLS_DATE_FORMAT before invoking RMAN so that timestamps as well as dates are returned:

- for unix:     % export NLS_DATE_FORMAT='dd-mon-rr hh24:mi:ss'
                % rman target /

- for windows:  > set nls_date_format=dd-mon-rr:hh24:mi:ss
                > rman target /

To find your backup:

RMAN> LIST BACKUP OF DATABASE COMPLETED AFTER '<date>';
or
RMAN> LIST BACKUP OF DATABASE COMPLETED AFTER 'sysdate -n';

Set <date> to limit the out
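
Once the backup completion time is known, one illustrative way (values in angle brackets are placeholders) to find the archived log that was current at that moment and make sure recovery runs at least through it:

SQL> alter session set nls_date_format='dd-mon-rr hh24:mi:ss';
SQL> select thread#, sequence#, first_time, next_time
       from v$archived_log
      where to_date('<backup completion time>','dd-mon-rr hh24:mi:ss')
            between first_time and next_time;

RMAN> run {
        set until sequence <sequence#+1> thread <thread#>;
        recover database;
      }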

How to quickly check that the Database is consistent after incomplete recovery (Point in Time Recovery) before OPEN RESETLOGS

First, it is helpful to enable the session to display the timestamp for DATE type columns:

SQL> alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS' ;

Check 1:

Objective: Verify that the datafiles are recovered to the intended point in time (PIT) and that they are consistent (FUZZY=NO).

Query the current status and PIT (the point in time up to which the datafiles have been recovered) of the datafiles by reading the datafile headers directly from the physical datafiles:

SQL> select fuzzy, status, error, recover, checkpoint_change#, checkpoint_time, count(*)
       from v$datafile_header
      group by fuzzy, status, error, recover, checkpoint_change#, checkpoint_time ;

FUZ STATUS  ERROR           REC CHECKPOINT_CHANGE# CHECKPOINT_TIME        COUNT(*)
--- ------- --------------- --- ------------------ -------------------- ----------
NO  ONLINE                                 5311260 31-AUG-2011 23:10:14          6
YES ONLINE                                 5311260 31-AUG-2011 23:10:14
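
If any row shows FUZZY=YES, as in the second line of the output above, further recovery is needed; the affected datafiles can be listed with a query like:

SQL> select file#, name, checkpoint_change#
       from v$datafile_header
      where fuzzy = 'YES';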