Oakies Blog Aggregator

What is the purpose of SYS_OP_C2C internal function

Recently I was involved in a problem with the CBO's selectivity estimates on an NVARCHAR2 column. What I spotted in the predicate information was the use of the internal Oracle function SYS_OP_C2C.

Here is an example of the run-time execution plan using bind variables:

SQL_ID 1fj17ram77n5w, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ *
from x1 where cn >= to_nchar(:a1) and cn <= to_nchar(:a2)

Plan hash value: 2189453339

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
|* 1 | FILTER | | 1 | | 9999 |00:00:00.08 | 102 |
|* 2 | TABLE ACCESS FULL| X1 | 1 | 1 | 9999 |00:00:00.03 | 102 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(SYS_OP_C2C(:A1)<=SYS_OP_C2C(:A2))
2 - filter(("CN"<=SYS_OP_C2C(:A2) AND "CN">=SYS_OP_C2C(:A1)))

As you can see, the bind variables are converted to the national character set using the TO_NCHAR function. Column X1.CN is of type NVARCHAR2. It is easy to spot the difference between the SQL statement and the predicate information in the execution plan: to_nchar(:a2) from the SQL statement becomes SYS_OP_C2C(:A2) in the predicate information. The internal Oracle function SYS_OP_C2C converts a value from one character set to another - C(haracterSet)2C(haracterSet). Sometimes you will see this conversion happening without any explicit command, as in this case. That is a sign that the data types do not match and an implicit conversion is taking place, which can also become a performance problem because it may disable index usage.
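To see why such an implicit conversion can hurt, here is a minimal sketch (the table and index names are hypothetical, not from the test case above): when a VARCHAR2 column is compared to an NVARCHAR2 value, the national character set takes precedence, so Oracle wraps the column rather than the literal in SYS_OP_C2C, and an ordinary index on that column can no longer be used for the predicate.

CREATE TABLE t_c2c (v VARCHAR2(30));
CREATE INDEX t_c2c_i ON t_c2c (v);

-- The predicate becomes filter(SYS_OP_C2C("V")=U'X'),
-- so the index T_C2C_I cannot be used for this filter.
SELECT * FROM t_c2c WHERE v = N'X';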

Jonathan Lewis and Charles Hooper, my fellow OakTable members, discussed this internal function back in 2007.

A fairly unique opportunity...

CJ Date will be giving a seminar in Dallas, Texas next month. I wish I could get to it myself - but I'm already fully committed that week.

Maybe we can get the answer to using NULLs. Or at least some more input :)

OakTable Book "Oracle Expert Practices"

I haven't had much time recently to write interesting blog posts, and the main reason is that I was very busy during the last couple of months - in particular contributing to the latest OakTable book "Oracle Expert Practices: Oracle Database Administration from the OakTable" by APress. It has been a very interesting experience - I've been co-authoring two chapters about Performance Optimization Methods together with another OakTable member, Charles Hooper.

This was a real collaborative work - a joint effort, if you will. We exchanged the chapter contents rather frequently via the Internet, and I think this approach worked out quite well. I also have to thank Charles for spending a lot of time rewording most of my "german" English into something consistent with his style.

It actually worked so well that what was originally planned as a single chapter grew so fast that it was finally decided to split it into two chapters - so we ended up with two chapters each co-authored by two authors.

Obviously something as complex as Performance Optimization Methods can't be covered to its full extent in a chapter (or even two), so sometimes we could only scratch the surface and countless omissions were necessary. Still, I hope the two chapters give a good overview of the available Performance Monitoring and Optimization methods.

I guess that these two chapters are not an easy read - we have packed a lot of details into them, but they should really be worth spending some time to dig through. We have also prepared numerous scripts that can be downloaded at the APress website to reproduce the described methods.

For full coverage of the Performance Optimization area, Christian Antognini's "Troubleshooting Oracle Performance" is, to me personally, still the benchmark - a very remarkable book.

I really hope that the same will be true for the "Oracle Expert Practices" book - it is supposed to be shipping real soon now.

By the way - it is still a bit early, I know, but Charles and I plan to give a joint presentation about our book chapters at the "Michigan OakTable Symposium (MOTS)", which will take place right before OOW 2010, on the 16th and 17th of September 2010. So if you're looking for a "technical" conference rather than the more marketing-oriented stuff at OOW - this might be interesting for you.

We have some very good ideas about this presentation - it will probably be more or less "zero-slide" and built around lots of demonstrations, but obviously it's too early to reveal too much.

notes from UKOUG

I am back from my third visit to UKOUG and as always, the trip was wonderful, the conference was excellent and if I could have stayed, I would have. Of course, after dealing with trains all morning and then being stuck in an airplane seat watching movies for hours on end, sleeping in my own bed last night was absolute heaven. But the UK must be a wonderful place to live: the people are lovely,

V$SQL_MONITOR and V$SQL_PLAN_MONITOR

In my recent presentation at UKOUG 2009 in Birmingham I also mentioned a new feature of Oracle 11gR1 which is a neat solution for monitoring long-running SQL statements: it captures statistics about SQL execution every second.

For parallel execution, every process involved gets its own entries in V$SQL_MONITOR and V$SQL_PLAN_MONITOR.
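A quick way to see those per-process entries is a query along these lines (a sketch; the PROCESS_NAME column distinguishes the query coordinator, 'ora', from the PX servers such as 'p000', and the SQL_ID is a placeholder to be filled in):

SELECT sql_id, process_name, sid, elapsed_time, cpu_time, buffer_gets
FROM   v$sql_monitor
WHERE  sql_id = '&sql_id'
ORDER BY process_name;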

It is enabled by default for long-running statements if the parameter CONTROL_MANAGEMENT_PACK_ACCESS is set to "DIAGNOSTIC+TUNING" and STATISTICS_LEVEL is set to TYPICAL or ALL.

It can also be enabled at statement level with the /*+ MONITOR */ hint, or disabled with the /*+ NO_MONITOR */ hint.
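For example, to force monitoring of a statement that would otherwise finish too quickly to be captured (a minimal sketch):

SELECT /*+ MONITOR */ COUNT(*) FROM dba_objects;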

There are some defaults defined which can be altered by setting hidden parameters:

  • _sqlmon_max_plan - maximum number of plan entries that can be monitored; defaults to 20 per CPU
  • _sqlmon_max_planlines - number of plan lines beyond which a plan cannot be monitored (default 300)
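If you want to inspect the current values of these hidden parameters, the usual approach is a query against the undocumented X$ fixed tables, run as SYS; this is a sketch of that commonly used pattern, not a supported interface, and the column names can vary between versions:

SELECT i.ksppinm parameter, v.ksppstvl value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm IN ('_sqlmon_max_plan', '_sqlmon_max_planlines');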

Just for demonstration purposes I used a simple performance-killing statement which runs for quite a while and can be used to test this new feature.

SQL_ID b0zm3w4h1hbff, child number 0
-------------------------------------
select count(*) from obj$,obj$,obj$ --performance killer

Plan hash value: 3679021907

----------------------------------------------------
| Id | Operation | Name | E-Rows |
----------------------------------------------------
| 1 | SORT AGGREGATE | | 1 |
| 2 | MERGE JOIN CARTESIAN | | 341T|
| 3 | MERGE JOIN CARTESIAN | | 4886M|
| 4 | INDEX FAST FULL SCAN | I_OBJ1 | 69901 |
| 5 | BUFFER SORT | | 69901 |
| 6 | INDEX FAST FULL SCAN| I_OBJ1 | 69901 |
| 7 | BUFFER SORT | | 69901 |
| 8 | INDEX FAST FULL SCAN | I_OBJ1 | 69901 |
----------------------------------------------------

SQL> SELECT status, KEY, SID, sql_id, elapsed_time, cpu_time, fetches, buffer_gets,
2 disk_reads
3 FROM v$sql_monitor where sql_id='b0zm3w4h1hbff';

STATUS KEY SID SQL_ID ELAPSED_TIME CPU_TIME FETCHES BUFFER_GETS DISK_READS
--------- ----------- --- ------------- ------------ ---------- ------- ----------- ----------
EXECUTING 21474836481 170 b0zm3w4h1hbff 674281628 624578125 0 0 0

SQL> SELECT plan_line_id, plan_operation || ' ' || plan_options operation,
2 starts, output_rows
3 FROM v$sql_plan_monitor where key=21474836481
4 ORDER BY plan_line_id;

PLAN_LINE_ID OPERATION STARTS OUTPUT_ROWS
------------ --------------------- ---------- -----------
0 SELECT STATEMENT 1 0
1 SORT AGGREGATE 1 0
2 MERGE JOIN CARTESIAN 1 4283731363
3 MERGE JOIN CARTESIAN 1 156731
4 INDEX FAST FULL SCAN 1 3
5 BUFFER SORT 3 156731
6 INDEX FAST FULL SCAN 1 70088
7 BUFFER SORT 156731 4283731363
8 INDEX FAST FULL SCAN 1 70088

I got the above result after running the statement for about 10 minutes. There are of course more statistics available, which I have omitted here as they were useless for my demo case, where all data was already in the buffer cache.

One can use the DBMS_SQLTUNE.REPORT_SQL_MONITOR function to produce a formatted report of the monitoring results. Unfortunately the report below is too wide to be clearly seen. I got it for a statement run by the DBMS_STATS package.

SQL> set long 10000000
SQL> set longchunksize 10000000
SQL> set linesize 200
SQL> select dbms_sqltune.report_sql_monitor from dual;

SQL Monitoring Report

SQL Text
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
select /*+ no_parallel_index(t, "I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST") dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_expand
index(t,"I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST") */ count(*) as nrw,count(distinct sys_op_lbid(461,'L',t.rowid)) as nlb,count(distinct
hextoraw(sys_op_descend("OBJ#")||sys_op_descend("INTCOL#")||sys_op_descend(SYS_EXTRACT_UTC("SAVTIME"))||sys_op_descend("COLNAME"))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1)
as clf from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" t where "OBJ#" is not null or "INTCOL#" is not null or SYS_EXTRACT_UTC("SAVTIME") is not null or "COLNAME" is not null
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Global Information

Status : DONE (ALL ROWS)
Instance ID : 1
Session ID : 127
SQL ID : 1d7qn8b9wam8h
SQL Execution ID : 16777216
Plan Hash Value : 3428983716
Execution Started : 12/04/2009 01:09:20
First Refresh Time : 12/04/2009 01:09:28
Last Refresh Time : 12/04/2009 01:09:30

--------------------------------------------------------------------
| Elapsed | Cpu | IO | Other | Fetch | Buffer | Reads |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Calls | Gets | |
--------------------------------------------------------------------
| 8.65 | 1.94 | 6.58 | 0.13 | 1 | 1327 | 694 |
--------------------------------------------------------------------

SQL Plan Monitoring Details
=========================================================================================================================================================================
| Id | Operation | Name | Rows | Cost | Time | Start | Starts | Rows | Memory | Activity | Activity Detail |
| | | | (Estim) | | Active(s) | Active | | (Actual) | (Max) | (percent) | (sample #) |
=========================================================================================================================================================================
| 0 | SELECT STATEMENT | | | 1026 | 1 | +10 | 1 | 1 | 274K | | |
| 1 | SORT GROUP BY | | 1 | | 6 | +5 | 1 | 1 | 274K | 37.50 | Cpu (3) |
| 2 | INDEX FULL SCAN | I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST | 179K | 1026 | 8 | +3 | 1 | 179K | | 62.50 | db file sequential read (5) |
=========================================================================================================================================================================

UKOUG 2009

The UKOUG 2009 conference will take place next week in Birmingham. I will speak on Wednesday, December 2nd, about "Execution Plan Stability in Oracle11g". As the audience at the UKOUG conference is very technical, I will be adding more technical material to my presentation this week.

Here is the link for the conference agenda.

On Sunday evening I will attend the Oracle ACE dinner for the first time. I'm looking forward to meeting some new people there, although I already know most of them from the OakTable or from other conference events.

Global Temporary Tables Share Statistics Across Sessions

In another blog posting, I asserted that statistics collected by one session on a Global Temporary table (GTT) would be used by other sessions that reference that table, even though each session has its own physical instance of the table. I thought I should demonstrate that behaviour, so here is a simple test.

We will need two database sessions. I will create a test Global Temporary table with a unique index.

DROP TABLE t PURGE;
TRUNCATE TABLE t;

CREATE GLOBAL TEMPORARY TABLE t
(a NUMBER,b VARCHAR2(1000))
ON COMMIT PRESERVE ROWS;

CREATE UNIQUE INDEX t ON t(a);

In my first session, I’ll populate the table with 100 rows. The values in column A have the range 1 to 100.

INSERT INTO t SELECT rownum, RPAD(TO_CHAR(TO_DATE(rownum,'J'),'Jsp') ,500,'.')
FROM dba_objects
WHERE rownum <= 100;

COMMIT;

And I’ll collect statistics on it.

begin sys.dbms_stats.gather_table_stats(ownname=>user,tabname=>'T'); end;
/

The following settings are just to make the queries easy to read, and so I can use dbms_xplan to generate a plan.

alter session set nls_date_format = 'hh24:mi:ss dd.mm.yyyy';
alter session set statistics_level = ALL;
set autotrace off pages 40 lines 100
column table_name format a1
column column_name format a1
column low_value format a32
column high_value format a32
column a format 999
column b format a30

So now let’s check the contents of the table. There are 100 rows in the range 1 to 100.

select count(*), min(a), max(a) from t;

COUNT(*) MIN(A) MAX(A)
---------- ---------- ----------
100 1 100

And the statistics agree with this.

select table_name, num_rows, last_analyzed
from user_tables
where table_name = 'T';

T NUM_ROWS LAST_ANALYZED
- ---------- -------------------
T 100 18:13:17 13.10.2009

select table_name, column_name, low_value, high_value
from user_tab_columns
where table_name = 'T';

T C LOW_VALUE HIGH_VALUE
- - -------------------------------- --------------------------------
T A C102 C202
T B 45696768742E2E2E2E2E2E2E2E2E2E2E 54776F2E2E2E2E2E2E2E2E2E2E2E2E2E
2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E 2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E

So now let’s try a test query. The database returns 42 rows using a full scan. The statistics in the execution plan (see footnote 1) also predict that there will be 42 rows. Perfectly reasonable.

select * from t where a<=42;

A B
---- ------------------------------
1 One

42 Forty-Two

42 rows selected.

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID 6g743z6c2m2ap, child number 0
-------------------------------------
select * from t where a<=42

Plan hash value: 1601196873
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------
|* 1 | TABLE ACCESS FULL| T | 1 | 42 | 4 (0)| 42 |00:00:00.01 | 15 |
-------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("A"<=42)

Now, let’s start a second session, and insert some slightly different data into the same GTT. There are still 100 rows, but this time column A is in the range 43 to 142.

INSERT INTO t SELECT rownum+42
, RPAD(TO_CHAR(TO_DATE(rownum+42,'J'),'Jsp') ,500,'.')
FROM dba_objects
WHERE rownum <= 100;

COMMIT;

I’ll collect statistics in the same way. First we will check that the data in the table is correct.

select count(*), min(a), max(a) from t;

COUNT(*) MIN(A) MAX(A)
---------- ---------- ----------
100 43 142

And I can also see that the statistics have changed.

select table_name, num_rows, last_analyzed
from user_tables
where table_name = 'T';

T NUM_ROWS LAST_ANALYZED
- ---------- -------------------
T 100 18:18:22 13.10.2009

select table_name, column_name, low_value, high_value
from user_tab_columns
where table_name = 'T';

T C LOW_VALUE HIGH_VALUE
- - -------------------------------- --------------------------------
T A C12C C2022B
T B 4569676874792D45696768742E2E2E2E 53697874792E2E2E2E2E2E2E2E2E2E2E
2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E 2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E

If I run the same query, it correctly returns no rows, and uses an index scan to do so. The statistics predict one row, but Oracle actually doesn’t find any. Again perfectly reasonable.

select * from t where a<=42;

no rows selected

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 6g743z6c2m2ap, child number 0
-------------------------------------
select * from t where a<=42

Plan hash value: 2795797496

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 1 | 2 (0)| 0 |00:00:00.01 | 1 |
|* 2 | INDEX RANGE SCAN | T | 1 | 1 | 1 (0)| 0 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("A"<=42)

But now let’s go back to the first session and run the query again (having flushed the shared pool). Last time we ran it we got 42 rows with a full scan. We still get 42 rows, but now it is using the index range scan, the same execution plan as the other session. In fact, the costs in the execution plan are the same as in the other session. Oracle expected 1 row from the index, but this time it actually got 42. So changing the statistics in the other session has changed the plan in this session, and possibly not for the better. The two sessions are using the same execution plan for different sets of data.

A B
---- ------------------------------

42 Forty-Two

42 rows selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 6g743z6c2m2ap, child number 0
-------------------------------------
select * from t where a<=42

Plan hash value: 2795797496
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 1 | 2 (0)| 42 |00:00:00.01 | 10 |
|* 2 | INDEX RANGE SCAN | T | 1 | 1 | 1 (0)| 42 |00:00:00.01 | 4 |
-----------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("A"<=42)

If I now delete the statistics, Oracle will use optimizer dynamic sampling instead.

begin sys.dbms_stats.delete_table_stats(ownname=>user,tabname=>'T'); end;
/

And the plan changes back to a full scan.

(correct version reposted 1.11.2009)

A B
---- ------------------------------

42 Forty-Two

42 rows selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 6g743z6c2m2ap, child number 0
-------------------------------------
select * from t where a<=42

Plan hash value: 1601196873

-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------
|* 1 | TABLE ACCESS FULL| T | 1 | 42 | 4 (0)| 42 |00:00:00.01 | 15 |
-------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"<=42)

Note
-----
- dynamic sampling used for this statement

Conclusion 

Different instances of the same GTT do share statistics because there is only one location in the data dictionary to store statistics for each table. Therefore, collecting statistics on a GTT in one session will affect other sessions using the same table name, possibly adversely!

(Added 1.11.2009) Optimizer Dynamic Sampling may be a better option for GTTs; it is enabled by default from Oracle 9i, but the behaviour changes slightly in 10g. However, as Cokan points out in his comments below, if a query on a GTT from one session is still in the shared pool when it is used in a different session, then Oracle will not re-parse the statement, and will not choose a different execution plan.
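If you need dynamic sampling for a particular GTT query regardless of the session defaults, the DYNAMIC_SAMPLING hint can force it; a minimal sketch against the demo table (level 2 samples all unanalysed tables):

select /*+ dynamic_sampling(t 2) */ * from t where a<=42;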

Footnote 1: The execution plans in this posting have been obtained using dbms_xplan thus:

select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS +COST'));

I have also flushed the shared pool between each statement; otherwise the numbers are aggregated across multiple executions of the same statement.
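For reference, the flush itself is just the following command (it needs the ALTER SYSTEM privilege, and flushing the shared pool is intrusive on a shared system, so treat this as a test-environment step only):

ALTER SYSTEM FLUSH SHARED_POOL;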

Partition Maintenance with Global Indexes

(updated 3.12.2009 to deal with UPDATE GLOBAL INDEXES option)
When I decide to partition a table, I also need to consider whether to partition the indexes, and if so, how. The easiest option is to partition the indexes locally. I try to avoid globally partitioned indexes because they can become invalid when you do partition maintenance. However, where an index leads on a column other than the partitioning key, you might have to scan all partitions of a locally partitioned index if you do not query by the partitioning column. Sometimes it is necessary to partition an index differently to the table, or not at all. However, I have found that there are some partition management operations that do not invalidate global indexes.

In the case of PeopleSoft Global Payroll, I range-partition tables to match the payroll 'stream' definitions. Thus each concurrent payroll process updates only one partition and, more importantly, each partition is updated by one, and only one, payroll process, thus eliminating consistent-read overhead on these objects. The number of payroll processes is usually a function of the hardware configuration. This doesn't change often, so I don't expect to do partition maintenance. If I need to change the payroll stream definitions, then I will completely rebuild the partitioned tables.
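A hypothetical sketch of that layout (the table name, column, and partition bounds are illustrative, not the real PeopleSoft definitions): each payroll stream covers a contiguous EMPLID range, and each range maps to exactly one partition.

CREATE TABLE gp_stream_demo
(emplid VARCHAR2(11) NOT NULL
,value  NUMBER
)
PARTITION BY RANGE (emplid)
(PARTITION strm1 VALUES LESS THAN ('3000')
,PARTITION strm2 VALUES LESS THAN ('6000')
,PARTITION strm3 VALUES LESS THAN (MAXVALUE)
);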

In the case of Time & Labor, I create periodic partitions (weekly, fortnightly, or monthly) for the reported time table, partitioning on the 'date under reporting' column. This column is used by many of the queries in T&L processing, resulting in partition elimination. However, it is also necessary to periodically add new partitions.

The Oracle documentation says this about the "Management of Global Partitioned Indexes":

"Global partitioned indexes are harder to manage than local indexes: When the data in an underlying table partition is moved or removed (SPLIT, MOVE, DROP, or TRUNCATE), all partitions of a global index are affected. Consequently global indexes do not support partition independence."

The key point is that the global index is invalidated if rows of data in a partitioned table are affected by DDL on a partition. Here is a little demonstration. I will create a table with a locally partitioned unique index, and a second non-partitioned index.

DROP TABLE t PURGE;

CREATE TABLE t
(a NUMBER NOT NULL
,b VARCHAR2(100) NOT NULL
)
PARTITION BY RANGE(a)
(PARTITION T1 VALUES LESS THAN (100)
,PARTITION T2 VALUES LESS THAN (200)
);

CREATE UNIQUE INDEX ia ON t (a)
LOCAL
(PARTITION i1
,PARTITION i2
);

CREATE INDEX ib ON t (b);

INSERT INTO t
SELECT ROWNUM, TO_CHAR(TO_DATE(ROWNUM,'j'),'Jsp')
FROM dba_objects
WHERE ROWNUM < 200;
COMMIT;

Now I will add and populate a third partition.

ALTER TABLE t
ADD PARTITION t3 VALUES LESS THAN (300);

INSERT INTO t
SELECT ROWNUM+199, TO_CHAR(TO_DATE(ROWNUM+199,'j'),'Jsp')
FROM dba_objects
WHERE ROWNUM+199 < 300;

COMMIT;

And now I will check on the indexes.

column table_name     format a5 heading 'Table|Name'
column index_name     format a5 heading 'Index|Name'
column partition_name format a4 heading 'Part|Name'
column partitioned              heading 'Part?#'
column status                   heading 'Index|Status'
SELECT table_name, index_name, partitioned, status
FROM   user_indexes where table_name = 'T' ORDER BY 1,2;
SELECT index_name, partition_name, status
FROM   user_ind_partitions WHERE index_name IN (SELECT index_name FROM user_indexes WHERE table_name = 'T') order by 1,2,3;

Note that the global index IB is still usable.

Table Index        Index
Name  Name  Part?# Status
----- ----- ------ ------
T     IA    YES    N/A
T     IB    NO     VALID

Index Part Index
Name  Name Status
----- ---- ------
IA    I1   USABLE
IA    I2   USABLE
IA    T3   USABLE

The interesting thing is that if I delete all the rows from a partition then I can drop it, and the index does not go unusable.

DELETE FROM t WHERE a>=200;
ALTER TABLE t DROP PARTITION t3;

Table Index        Index
Name  Name  Part?# Status
----- ----- ------ ------
T     IA    YES    N/A
T     IB    NO     VALID

Index Part Index
Name  Name Status
----- ---- ------
IA    I1   USABLE
IA    I2   USABLE

But if I don't delete all the rows from partition t3 before I drop it, then the global index goes unusable.

ALTER TABLE t DROP PARTITION t3;

Table Index        Index
Name  Name  Part?# Status
----- ----- ------ --------
T     IA    YES    N/A
T     IB    NO     UNUSABLE

Index Part Index
Name  Name Status
----- ---- --------
IA    I1   USABLE
IA    I2   USABLE

If I truncate a partition that still contains rows, then the global index immediately becomes unusable.

ALTER TABLE t TRUNCATE PARTITION t3;

Table Index        Index
Name  Name  Part?# Status
----- ----- ------ --------
T     IA    YES    N/A
T     IB    NO     UNUSABLE

Index Part Index
Name  Name Status
----- ---- --------
IA    I1   USABLE
IA    I2   USABLE

However, if I had deleted those rows first, the truncate would not have invalidated the index.
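To make that concrete, this sketch repeats the earlier pattern (it assumes partition t3 has been re-created and re-populated as before): the DML empties the partition first, so the subsequent TRUNCATE removes no rows and the global index IB stays usable.

DELETE FROM t WHERE a>=200;
COMMIT;
ALTER TABLE t TRUNCATE PARTITION t3;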

Addendum: As Randolf points out in his comment, it is possible to prevent the global index from being invalidated by specifying the UPDATE GLOBAL INDEXES clause.

ALTER TABLE t DROP PARTITION t1 UPDATE GLOBAL INDEXES;

However, this comes at a cost: Oracle now full-scans the partition being dropped and removes its entries from the global index. The recursive SQL statement below can be found in an Oracle trace.

insert /*+ RELATIONAL(T) PARALLEL(T,1) APPEND */ into "SYSADM"."T" partition ("T1")
select /*+ RELATIONAL(T) PARALLEL(T,1) */ *
from "SYSADM"."T" partition ("T1") delete global indexes


Conclusion

There are some things that I can do without invalidating global indexes:

  • I can add a new partition
  • I can delete all rows from an existing partition and drop it.
  • I can truncate a partition from which all the rows have been deleted.
  • I can drop a partition with rows if I specify the UPDATE GLOBAL INDEXES clause, but this could take time.

But

  • If I truncate a partition that does contain rows then I will immediately invalidate the index.
  • If I do any DDL that removes rows from a partitioned table, I will invalidate global indexes because they contain rowids that are invalid.

Ironically, one of the advantages of partitioning is that you can archive a whole partition with a quick DDL operation, rather than working through each row. But this would invalidate a global index. However, if I use something like the generic PeopleSoft archiving tool, which purges data with DML, then when the partition is empty I can drop the table partition without invalidating any global indexes!

The Thing (I Don't Like) About Video

Video is awesome. I like high-bandwidth communication. Even on the cheapest, most un-produced videos, I can see facial expressions and body language that I'd never be able to pick up from text. I can see candidness that's not going to come through in a document, even a blog that's written pretty much off the cuff. And videos with high production value, ...well of course it's awesome to watch a great short movie right at the tips of your fingers.

But...

But when you send me a 7:48 video, I have to budget 7:48 to watch it. (Well, more actually, because of the latency required to buffer it up.) When you send me a 13-page document, I can "read" it in 10 seconds if I want to. I can skim the first and final paragraphs really quickly and look for pictures or sidebars or quotes, and it takes practically no time for me to do it.

With a video, it's just more difficult to do that. I can watch the first 10 seconds and usually know whether I want to watch the remainder. But skimming through the whole video—like skipping to the end—is more difficult, because I have to sit there un-utilized while the whole video buffers up. Then I have to sit there while words come at me aurally, which is annoyingly sequential compared to reading buckets of text in one eyeful.

So, the bottom line is that the first 10 seconds of your video need to convince me to watch the remainder.

Or I won't.

Is it just me?

What's out there to make video browsing a better, more time-efficient, and more fulfilling experience?

NEXTGRES Gateway SQL*Plus Teaser…

Do you love SQL*Plus but hate that you can’t use it with other, non-Oracle databases?  Do you wish you could more easily migrate some of your MySQL or Postgres applications to Oracle?  If so, stay tuned, because my next few blog entries are going to show you how to do just that.  But first, I […]