11gR2

View Data Volume Estimates

When the optimizer has to estimate the data volume (the BYTES column in the plan output), it usually bases this information on the column statistics, if applicable and available (think of complex expressions). However, whenever there is a VIEW operator in an execution plan that represents an unmerged view, the optimizer obviously "loses" this information and starts applying defaults that are based on the column definition. Depending on the actual content of the columns this can lead to dramatic differences in data volume estimates. Both under- and overestimates are possible, because for character-based columns these defaults seem to be based on an assumed 50% fill grade, so a VARCHAR2(100 BYTE) column counts as 50 bytes of data volume. For multi-byte character sets the same rule applies based on the maximum width of a column using the "char" semantics, so a VARCHAR2(1000 CHAR) column counts as 2000 bytes.
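This can be reproduced with a small sketch like the following (table and data are made up for illustration, not taken from the post): run the same query once merged and once with merging prevented via the NO_MERGE hint, and compare the BYTES column of the two plans.

create table t_demo
as
select rownum as id
     , cast('A' as varchar2(100)) as vc_col -- actual data is only 1 byte per row
from dual
connect by level <= 10000;

exec dbms_stats.gather_table_stats(user, 'T_DEMO')

-- merged: BYTES is derived from the column statistics (average column length)
explain plan for
select id, vc_col from t_demo;

select * from table(dbms_xplan.display);

-- unmerged view: BYTES should fall back to the defaults described above,
-- the VARCHAR2(100 BYTE) column counting as 50 bytes
explain plan for
select * from (select /*+ no_merge */ id, vc_col from t_demo);

select * from table(dbms_xplan.display);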

System Statistics Gathered in Exadata Mode – When Are They Relevant?

The aim of this post isn't to explain what "exadata mode" is. Hence, if you don't know what it is, have a look at this post published on Kerry Osborne's blog before continuing to read. The only thing I would like to add is that "exadata mode" is available as of 11.2.0.4 or when a patch implementing the enhancement associated with bug 10248538 is installed.

The key information I would like to share with you is that, in some situations, gathering system statistics in “exadata mode” is pointless. Let me explain why… But, before doing so, it’s important to review how the query optimizer computes the cost of full scans.

The key formula used by the query optimizer to compute the I/O cost of a full scan is the following:

io_cost = ceil ( blocks / mbrc * mreadtim / sreadtim ) + 1

Where:

blocks is the number of blocks of the segment below the high-water mark
mbrc is the multiblock read count used for costing
mreadtim is the average number of milliseconds needed for a multiblock read
sreadtim is the average number of milliseconds needed for a single-block read
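For reference, these inputs come from the system statistics and can be inspected in the data dictionary; the "exadata mode" gathering itself is a documented gathering mode of DBMS_STATS. A minimal sketch (the values shown obviously depend on your system):

select pname, pval1
from sys.aux_stats$
where pname in ('MBRC', 'MREADTIM', 'SREADTIM');

-- gather system statistics in "exadata mode"
-- (requires 11.2.0.4, or the patch for bug 10248538)
exec dbms_stats.gather_system_stats('EXADATA')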

The Broken Statistics: "parse count (total)" and "session cursor cache hits"

The values provided by the "parse count (total)" and "session cursor cache hits" statistics are subject to several bugs. And, what's worse, for years Oracle didn't seem to care about fixing them. That's my impression, at least.

Then, when a few weeks ago I read in the Oracle Support note 13837105.8 (Bug 13837105 – statistics "parse count (total)" and "session cursor cache hits" miscounted) that the bug introduced in 11.2.0.3 was fixed, I hoped that other bugs in this area had been fixed as well.

Unfortunately, that's not the case. What a disappointment!

Even though several posts have already been written about this topic (e.g. here and here; check the comments as well…), I thought it could be useful to summarize the current status.
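If you want to check the counters on your own system, the two statistics can be observed at the session level with a simple query like the following (a sketch based on the standard dynamic performance views):

select sn.name, ms.value
from v$statname sn, v$mystat ms
where ms.statistic# = sn.statistic#
and sn.name in ('parse count (total)', 'session cursor cache hits');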

TIMESTAMP WITH TIME ZONE Aggregation

The TIMESTAMP WITH TIME ZONE data type that got introduced a long time ago is known for some oddities; Tony Hasler, for example, has a nice summary of some of them here. Here is another oddity that shows up when trying to aggregate on such a data type. Have a look at the following simple example:


create table t
as
select
  rownum as id
, date '2000-01-01' + rownum - 1 as some_date
, cast(date '2000-01-01' + rownum - 1 as timestamp) as some_timestamp
, cast(date '2000-01-01' + rownum - 1 as timestamp with local time zone) as some_timestamp_with_local_tz
, cast(date '2000-01-01' + rownum - 1 as timestamp with time zone) as some_timestamp_with_timezone
from
  dual
connect by
  level <= 1000 -- the excerpt is truncated here; a row limit like this makes it runnable
;
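The excerpt breaks off after the table creation. The aggregation that triggers the oddity is not part of the excerpt; an aggregate over the TIMESTAMP WITH TIME ZONE column along the following lines is the kind of query in question (a sketch, not the original post's query):

select max(some_timestamp_with_timezone) as max_tstz
from t;

select some_timestamp_with_timezone, count(*)
from t
group by some_timestamp_with_timezone;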

Named Notation and SEM_MATCH Table Function

This is a short post about a strange behavior I discovered today…

The fact is that it is not really possible to use named notation with the SEM_MATCH table function. In fact, even though the parameters' names can be specified, the order of the parameters overrides the specification made with the named notation!?!

Here is an example:
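The example itself is cut off in this excerpt. The following sketch shows the kind of call that is affected (the model name and triple pattern are made up; the parameter names follow the documented SEM_MATCH signature): even though every argument is labelled with named notation, the arguments are effectively matched by position.

select s, p, o
from table(sem_match(
       query     => '(?s ?p ?o)',
       models    => sem_models('family'),
       rulebases => null,
       aliases   => null,
       filter    => null
     ));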

New Version Of XPLAN_ASH Tool - Video Tutorial

A new major release (version 3.0) of my XPLAN_ASH tool is available for download.

You can download the latest version here.

In addition to many changes to the way the information is presented and many other smaller changes to functionality, there is one major new feature: XPLAN_ASH now also supports S-ASH, the free ASH implementation.

If you run XPLAN_ASH in a S-ASH repository owner schema, it will automatically detect that and adjust accordingly.

XPLAN_ASH was tested against the latest stable version of S-ASH (2.3). There are some minor changes required to that S-ASH release in order to function properly with XPLAN_ASH. Most of them will be included in the next S-ASH release as they really are only minor and don't influence the general S-ASH functionality at all.

ITL Deadlocks (script)

A reader of this blog, VijayS, asked me to share the script I use to demo ITL deadlocks that I mentioned in this comment. Since other readers might be interested, here is the script.

ASM AU Size And LMT AUTOALLOCATE

When using Locally Managed Tablespaces (LMT) with variable, system-managed extent sizes (AUTOALLOCATE) and data files residing in ASM, the Allocation Unit (AU) size can make a significant difference to the algorithm that searches for free extents.

The free extent search algorithm, when searching for free extents >= the AU size, seems to only search for free extents on AU boundaries in order to avoid I/O splitting. Furthermore, the algorithm seems to use two extent sizes when searching for free extents: a "desired" (for example 8MB) and a "minimum acceptable" (for example 1MB) extent size. However, when performing the search, the "desired" size seems to be the one that limits the search to free extents on AU boundaries.

This can lead to some surprising side effects, in particular when using 4MB AUs. It effectively means that although you might have plenty of…
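To check the AU size of a diskgroup and to observe the extent sizes AUTOALLOCATE actually produces for a growing segment, queries along these lines can be used (a sketch against the standard views; the segment name is hypothetical):

select name, allocation_unit_size
from v$asm_diskgroup;

select extent_id, bytes/1024/1024 as mb
from user_extents
where segment_name = 'T'
order by extent_id;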

Oracle Forms and Reports Services 11gR2 on Oracle Linux 6…

Last year I wrote an article about the installation of Oracle Forms and Reports Services 11gR2 on Oracle Linux 5. I’ve now written the article for Oracle Forms and Reports Services 11gR2 on Oracle Linux 6. The latest patch of F&RS is certified for OL6, along with JDK6 and JDK7.

In addition to the installation articles, I’ve compiled a collection of random notes about post-installation configuration into a separate article. I keep adding to it every time I come across a new (for me) issue.

"Cost-free" joins - 2

In the previous post I've demonstrated an unexpected Nested Loop Join caused by an extreme data distribution. Although unexpected at first sight, the performance of the execution plan selected by the optimizer is decent - provided the estimates are in the right ballpark. Here is another case of an unexpected execution plan, this time about Merge Joins.

Merge Joins

In order to appreciate why the execution plan encountered is unexpected, first a quick summary of how Merge Joins work: a Merge Join is essentially a Nested Loop operation from one sorted row source into another sorted row source.
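To see this in a plan, a Merge Join can be forced with the USE_MERGE hint in a sketch like the following (table names and sizes are made up): the resulting plan should show SORT JOIN operations feeding a MERGE JOIN, matching the description above.

create table t1 as select rownum as id, rpad('x', 100) as pad from dual connect by level <= 1000;

create table t2 as select rownum as id, rpad('x', 100) as pad from dual connect by level <= 1000;

explain plan for
select /*+ leading(t1) use_merge(t2) */ *
from t1, t2
where t1.id = t2.id;

select * from table(dbms_xplan.display);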