A couple of weeks ago I posted a reference list of links to the bug-fix notes for several of the most recent versions of Oracle, and several of the links recorded a surprisingly large number of clicks very rapidly, especially the 18.104.22.168 link. As a follow-up on the difficulties of upgrading, and as an insight into the number of enhancements and fixes to the optimizer that take place, I decided to take a look at recent developments in the “fix control” list and the “optimizer environment” parameters. Here’s a breakdown of the number of entries in recent versions of Oracle.
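For reference, counts like these can be pulled from the relevant dynamic performance views. Here’s a minimal sketch (not necessarily the exact query behind the figures) using v$system_fix_control and v$sys_optimizer_env:

	rem  Fix-control entries, broken down by the optimizer
	rem  version in which each fix appeared:

	select
	        optimizer_feature_enable, count(*)
	from    v$system_fix_control
	group by
	        optimizer_feature_enable
	order by
	        optimizer_feature_enable
	;

	rem  Number of optimizer environment parameters:

	select  count(*)
	from    v$sys_optimizer_env
	;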
Here’s an interesting little conundrum about subquery factoring that hasn’t changed in the recent (22.214.171.124) patch. It came to me from Jared Still (a fellow member of the Oak Table Network) shortly after I’d made some comments about the patch. It’s an example based on the scott/tiger schema – which I’ve extracted from the script $ORACLE_HOME/rdbms/admin/utlsampl.sql (though the relevant script may be demobld.sql or scott.sql, depending on version).
As usual I’ve used an 8KB block size and a locally managed tablespace (LMT) with uniform 1MB extents and no ASSM to hold the data. I won’t reproduce the code that generates the schema, just the two versions of the query: first with, then without, subquery factoring:
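(The original pair of queries isn’t reproduced here, so the following is a hedged reconstruction of the shape of such a pair: the same aggregate against scott.emp, first written with a factored subquery, then with the subquery inlined.)

	with avg_sal as (
	        select  deptno, avg(sal) avg_sal
	        from    emp
	        group by
	                deptno
	)
	select  e.ename, e.sal, a.avg_sal
	from    emp e, avg_sal a
	where   a.deptno = e.deptno
	and     e.sal > a.avg_sal
	;

	select  e.ename, e.sal, a.avg_sal
	from    emp e,
	        (
	        select  deptno, avg(sal) avg_sal
	        from    emp
	        group by
	                deptno
	        ) a
	where   a.deptno = e.deptno
	and     e.sal > a.avg_sal
	;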
Here’s a model of a little problem I came across recently. It’s something I wrote about many years ago, and I thought I’d seen a note on Metalink explaining that the issue had been addressed; but the problem is still there, even in 126.96.36.199.
We start with a little data set (and it’s my standard setup of 8KB blocks, LMTs, 1MB uniform extents, and no ASSM):
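(The data set itself isn’t reproduced here, so the following is just a representative sketch in the same spirit; table and column names are illustrative.)

	create table t1
	as
	select
	        rownum                  id,
	        mod(rownum,100)         n1,
	        lpad(rownum,10,'0')     small_vc,
	        rpad('x',100)           padding
	from    all_objects
	where   rownum <= 10000
	;

	alter table t1 add constraint t1_pk primary key(id);

	begin
	        dbms_stats.gather_table_stats(
	                ownname         => user,
	                tabname         => 'T1',
	                method_opt      => 'for all columns size 1'
	        );
	end;
	/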
It’s always worth browsing through the list of Oracle’s bug fixes each time a new release or patch comes out, because it can give you clues about where to look for problems in your production release – and how to anticipate problems on the upgrade. This article is an example of a fix that I found while looking at the note for 188.8.131.52 quite recently.
You might think from the title that this little note is going to be about the index hash join. You would be half right: it’s also about how the optimizer seems to make a complete hash of dealing with index hash joins.
Let’s set up a simple data set and a couple of indexes so that we can take a closer look:
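(A minimal sketch of the kind of setup meant here, with illustrative names: one table, two single-column indexes, and a query that can be satisfied by joining the two indexes rather than visiting the table.)

	create table t1
	as
	select
	        rownum                  id,
	        mod(rownum,1000)        val,
	        rpad('x',100)           padding
	from    all_objects
	where   rownum <= 10000
	;

	create index t1_i1 on t1(id);
	create index t1_i2 on t1(val);

	execute dbms_stats.gather_table_stats(user,'T1')

	rem  Selecting only the two indexed columns allows the
	rem  optimizer to consider hashing the two indexes together;
	rem  the index_join hint makes the option explicit:

	select  /*+ index_join(t1 t1_i1 t1_i2) */
	        id, val
	from    t1
	where   id  between 1 and 100
	and     val between 1 and 10
	;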
Another random note that I made during the sessions I attended at OOW was about the SQL*Plus AUTOTRACE feature. As you're hopefully already aware, this feature has some significant shortcomings. The most obvious is that it doesn't pull the actual execution plan from the Shared Pool after executing the statement, but simply runs an EXPLAIN PLAN on the SQL text, which might produce an execution plan that differs from the actual one for various reasons.
Now the claim was made that, in addition to these shortcomings, the plan generated by the AUTOTRACE feature stays in the Shared Pool and is eligible for sharing, which would mean that other executions of the statement could be affected by a potentially bad execution plan generated via AUTOTRACE rather than being re-optimized on their own.
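One simple way to put the claim to the test (a sketch; the table name is illustrative): run a statement under AUTOTRACE, then execute it for real and pull the plan that was actually used from the Shared Pool with DBMS_XPLAN.DISPLAY_CURSOR, comparing the two plans.

	set autotrace traceonly explain
	select * from t1 where id = 1;
	set autotrace off

	rem  Now execute the statement for real, then fetch the plan
	rem  that was actually used from the Shared Pool:

	select * from t1 where id = 1;

	select * from table(dbms_xplan.display_cursor);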
Here is a summary of my findings from evaluating Incremental Partition Statistics, which were introduced in Oracle 11g.
The most important point to understand is that Incremental Partition Statistics are not "cost-free": anyone who tells you that you can gather statistics at the lowest level (partition, or sub-partition in the case of composite partitioning) without any noticeable overhead compared to non-incremental statistics at that level is not telling you the truth.
Although this might be obvious, I've personally heard someone make exactly such a claim, so it's probably worth mentioning.
In principle you need to test on your own system whether the overhead added to each statistics update at the lowest level outweighs the overhead of actually gathering statistics at the higher levels, in particular at the global level.
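For reference, this is how incremental statistics are typically enabled and gathered (a sketch; the table name is illustrative):

	begin
	        -- Ask Oracle to maintain partition-level synopses, so
	        -- that global statistics can be derived without
	        -- re-scanning the unmodified partitions.
	        dbms_stats.set_table_prefs(user, 'PART_TAB', 'INCREMENTAL', 'TRUE');

	        -- With the preference set, a gather with AUTO
	        -- granularity updates the synopses and aggregates
	        -- them up to the global level.
	        dbms_stats.gather_table_stats(
	                ownname         => user,
	                tabname         => 'PART_TAB',
	                granularity     => 'AUTO'
	        );
	end;
	/

The synopsis maintenance is exactly the extra overhead mentioned above: it is paid on every lowest-level gather, in exchange for avoiding full scans at the global level.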
Thought it was high time that I covered the subject of Index Organized Tables (IOTs) in a little detail. When used appropriately, they can be an extremely useful method of storing and accessing data. Hopefully by the end of this series, you’ll have a better understanding of IOTs, their respective strengths and weaknesses and so perhaps [...]
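(As a taster, here’s a minimal sketch of an IOT; the names are illustrative. The whole table is stored in a B-tree built on the primary key, so there is no separate heap segment.)

	create table iot_demo (
	        id      number,
	        val     varchar2(30),
	        constraint iot_demo_pk primary key (id)
	)
	organization index;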
In a recent OTN thread I was reminded of two facts about Dynamic Sampling that I already knew but had forgotten in the meantime:
1. The table-level dynamic sampling hint uses a different number of blocks for sampling than the session/cursor-level dynamic sampling hint. So even if, for example, level 5 is used in both cases, the number of sampled blocks will be different; this holds for most of the available levels (levels 0 and 10 are the obvious exceptions). Both flavours of the hint are shown in the sketch after this list.
2. The Dynamic Sampling code takes a different approach for partitioned objects when it finds that there are more partitions than blocks to sample according to the level (and the type: table/cursor/session) of Dynamic Sampling.
Note that all of this applies to the case where no statistics have been gathered for the table - I don't cover the case where Dynamic Sampling is used on top of existing statistics.
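For reference, the two flavours of the hint mentioned in point 1 look like this (a sketch; the table name is illustrative):

	rem  Cursor (statement) level: applies to all tables
	rem  in the statement.

	select /*+ dynamic_sampling(5) */ count(*) from t1;

	rem  Table level: applies to the named table only, and samples
	rem  a different number of blocks for the same nominal level.

	select /*+ dynamic_sampling(t1 5) */ count(*) from t1;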
A few days ago I published an example of the optimizer failing to handle an updateable join view because it didn’t recognise that a particular type of aggregate subquery would guarantee key-preservation. Here’s another example where the human eye can see key-preservation, but the optimizer can’t (even in 184.108.40.206). As usual we’ll start with some sample data – in this case two tables since I want to update from one table to the other.
	create table t1 (
	        id1     number,
	        id2     number,
	        val     number,
	        constraint t1_pk primary key (id1, id2)
	);

	insert into t1 values (1,1,99);
	commit;

	create table t2 (
	        id1     number,
	        id2     number,
	        id3     number,
	        val     number,
	        constraint t2_pk primary key (id1, id2, id3)
	);

	insert into t2 values (1,1,1,200);
	insert into t2 values (1,1,2,200);
	commit;
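Since the original update isn’t reproduced here, the following is a hedged sketch of the kind of statement in question: for each t1 row, pick the t2 row with the highest id3 for the matching (id1, id2). The human eye can see that the join returns at most one t2 row per t1 row, but the optimizer can’t prove key-preservation and rejects the update with ORA-01779.

	update (
	        select
	                t1.val  t1_val,
	                t2.val  t2_val
	        from    t1, t2
	        where   t2.id1 = t1.id1
	        and     t2.id2 = t1.id2
	        and     t2.id3 = (
	                        select  max(t2b.id3)
	                        from    t2 t2b
	                        where   t2b.id1 = t2.id1
	                        and     t2b.id2 = t2.id2
	                )
	)
	set     t1_val = t2_val
	;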