Upgrades

One of the questions that came up at the Optimizer Round Table this year was about minimizing the performance-related hassle of upgrading from 11g to 12c. Dealing with changes in the optimizer is always an interesting problem, but in 12c this is made more challenging by the automatic dynamic sampling that can introduce a significant amount of extra work at (hard) parse time, then generate SQL Plan Directives, and finally generate extended (column group) statistics the next time you (or the automatic job) collect stats.
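If you want to see what 12c has been doing behind the scenes on your own schema, here's a minimal sketch of the sort of queries you might run, assuming you have access to the DBA views; the table name T1 is just a placeholder:

-- SQL plan directives recorded against your objects
select
        d.directive_id, o.object_name, o.subobject_name,
        d.type, d.state, d.reason
from
        dba_sql_plan_directives         d,
        dba_sql_plan_dir_objects        o
where
        o.directive_id = d.directive_id
and     o.owner        = user
order by
        d.directive_id
;

-- column group (extended) statistics that have appeared on a table
select  extension_name, extension
from    user_stat_extensions
where   table_name = 'T1'
;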

Closure

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c, but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics that might worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
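As a quick illustration of that constant-only restriction, here's a hedged sketch (the tables t1 and t2 and the column n1 are assumptions, not taken from the original example):

select  t1.*, t2.*
from    t1, t2
where   t1.n1 = t2.n1
and     t2.n1 = 99      -- Oracle can infer, and add, the predicate t1.n1 = 99
;

-- but given only column-to-column equalities, e.g.
--      t1.n1 = t2.n1 and t2.n1 = t3.n1
-- Oracle will not infer t1.n1 = t3.n1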

Upgrades

I have a simple script that creates two identical tables, collects stats (with no histograms) on the pair of them, then executes a join. Here’s the SQL to create the first table:
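A minimal sketch of the sort of table described, with assumed names and data, and with stats gathered without histograms, might look like this:

create table t1
as
select
        rownum                          id,
        trunc((rownum - 1)/100)         n1,
        rpad('x',100)                   padding
from
        dual
connect by
        level <= 10000
;

begin
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T1',
                method_opt      => 'for all columns size 1'     -- no histograms
        );
end;
/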

Upgrades

One of the worst problems with upgrades is that things sometimes stop working. A particular nuisance is the execution plan that suddenly stops appearing, to be replaced by an alternative plan that is much less efficient.

Apart from the nuisance of the time spent trying to force the old plan to re-appear, plus the time spent working out a way of rewriting the query when you finally decide the old plan simply isn’t going to re-appear, there’s also the worry about WHY the old plan won’t appear. Is it some sort of bug? Is it that some new optimizer feature has disabled some older optimizer feature? Or is it that someone in the optimizer group realised that the old plan was capable of producing the wrong results in some circumstances? It’s that last possibility that I find most worrying.

MySQL Upgrades

I read Wim Coekaerts’ post about MySQL 5.6.20-4 this morning. I logged on to my server and ran the following command as root:

# yum update -y

That’ll be the upgrade done then… :)

If you are using MySQL on Linux you can use the MySQL repository for your distribution, rather than the bundled MySQL version, to make sure you stay up to date with the latest and greatest. As long as you stay within a single point release (5.6, 5.7 etc.), upgrades should really be as simple as a “yum update”.

NVL() change

One of the problems with functions is that the optimizer generally has no idea of how a predicate based on function(col) might affect the cardinality. However, the optimizer group are constantly refining the algorithms to cover an increasing number of special cases more accurately. This is a good thing, of course, but it does mean that you might be unlucky on an upgrade where a better cardinality estimate leads to a less efficient execution plan. Consider, for example, the simple query (where d1 is a column of type date):

select	*
from	t1
where	nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate
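As an aside, if you want to see the estimate the optimizer produces for this predicate on your own system, a minimal sketch (assuming a table t1 with a date column d1 already exists) would be:

explain plan for
select  *
from    t1
where   nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate
;

select * from table(dbms_xplan.display);

Compare the Rows column of the reported plan with the number of rows the query actually returns.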

Now, there are many cases in many versions of Oracle where the optimizer will appear to calculate the cardinality of

nvl(columnX,{constant}) operator {constant}

as if it were:


Subquery Anomaly

Here’s an oddity that appeared on the OTN database forum last night:

We have this query in our application which works fine in 9i but fails in 11gR2 (on Exadata) giving an “ORA-00937: not a single-group group function” error….

… The subquery is selecting a column and it doesn’t have a group by clause at all. I am not sure how this is even working in 9i. I always thought that on a simple query using an aggregate function (without any analytic functions / clause), we cannot select a column without having that column in the group by clause. So, how 11g behaves was not a surprise, but I was surprised to see how 9i behaves. Can someone explain this behaviour?
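To see the general rule the poster is describing, here is a trivial illustration against the demo table scott.emp (this is not the poster's actual query, just the textbook case):

select  deptno, max(sal)
from    emp
;
-- fails with ORA-00937: not a single-group group function

select  deptno, max(sal)
from    emp
group by
        deptno
;
-- legal: the non-aggregated column appears in the group by clause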

Upgrades to 11g are finally complete

Just a little slice of reality to cut through all the 12c stuff that is floating around at the moment. I’ve just moved the last of our databases to 11g. Yay! As well as upgrading, we’ve been culling or consolidating old and unused stuff, which has drastically reduced and simplified our Oracle database landscape.

We currently have four projects running databases on HP-UX on Itanium (spit), one project on Solaris and the rest on Oracle Linux under VMware. If I had my way we would kick out HP-UX and Solaris and do everything on Oracle Linux.

We’ve still got one project on 11gR1, but that is being held back intentionally because of some issues with the vendor of the application that runs against it. Hopefully that will soon be on 11gR2 also.

Hash Clusters – 3

This note is a quick summary of an oddity that came to light after a Twitter conversation with Christian Antognini yesterday. First, a little test script to get things going:
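By way of illustration, a typical hash cluster setup might start something like the following sketch; the names and sizing are assumptions, not the original script:

create cluster hash_cluster
(
        hash_col        number(8,0)
)
size 100
single table
hashkeys 10000
;

create table t1
(
        hash_col        number(8,0),
        padding         varchar2(100)
)
cluster hash_cluster (hash_col)
;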