Friday Philosophy – It’s not “Why Won’t It Work!” it’s “What Don’t I Understand?”

I had a tricky performance problem to solve this week. Some SQL was running too slow to support the business need. I made the changes and additions I could see were needed to solve the problem and got the code running much faster – but it would not run faster consistently. It would run like a dream, then run slow, then run like a dream again 2 or 3 times and then run like a wounded donkey 3 or 4 times. It was very frustrating.

For many this would provoke the cry of “Why won’t it work!!!”. But I didn’t; I was crying “What don’t I understand???”. {I think I even did a bit of fist-pounding, but only quietly, as my boss was sitting at the desk opposite me.}

I think I’ve always been a bit like that in respect of “How Things Work”, but it has been enhanced in me by being blessed to work with, or meet, people for whom it is more important to understand why something is not working than to fix it.


There are many little bits and pieces lurking in the Oracle code set that would be very useful if only you had had time to notice them. Here’s one that seems to be virtually unknown, yet does a wonderful job of eliminating calls to decode().

The nvl2() function takes three parameters, returning the third if the first is null and the second if the first is not null. This is convenient in all sorts of examples where you might otherwise use an expression involving case or decode(), but most particularly it’s a nice little option if you want to create a function-based index that indexes only those rows where a column is null.

Here’s a code fragment to demonstrate the effect:
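The original fragment is not reproduced here, but a minimal sketch of the idea, with hypothetical table and column names, might look like this:

```sql
-- Hypothetical table: col2 is almost always populated, and we want a
-- cheap way to find the few rows where it is still null.
CREATE TABLE t1 (col1 NUMBER, col2 VARCHAR2(10));

-- nvl2(col2, null, col1) is NULL whenever col2 is not null, and rows
-- whose indexed expression is entirely null create no index entry,
-- so the index holds entries only for the "col2 is null" rows.
CREATE INDEX t1_null_col2 ON t1 (NVL2(col2, NULL, col1));

-- A query written with the matching expression can use the small index.
SELECT col1 FROM t1 WHERE NVL2(col2, NULL, col1) IS NOT NULL;
```

The pay-off is that the index stays tiny however large the table grows, because the "interesting" null rows are usually a small minority.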

Shrinking Tables to Aid Full Scans

{This blog is about shrinking tables where the High Water Mark is higher than it needs to be. Prompted by one of the comments, I wrote a follow-up post on finding the High Water Mark and tables that consist mostly of empty space, which would be candidates for shrinking.}

This blog is about one of those things I do as almost an autonomous “not thinking about it” performance housekeeping task, one which I have been meaning to mention for ages.

There can be quite a lot to gain by checking out full scans on “small” tables and seeing if they are as efficient as they can be. Sometimes they are, sometimes they are not. Often they are “good enough”. Occasionally they are awful.
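When a scan turns out to be reading mostly empty blocks below an inflated High Water Mark, the remedy the title refers to is a shrink. A minimal sketch, assuming a hypothetical table t1 stored in an ASSM (Automatic Segment Space Management) tablespace:

```sql
-- Shrinking requires row movement to be enabled on the table.
ALTER TABLE t1 ENABLE ROW MOVEMENT;

-- Optionally compact the rows first without moving the High Water Mark
-- (a lighter-weight step that can be done ahead of time)...
ALTER TABLE t1 SHRINK SPACE COMPACT;

-- ...then lower the High Water Mark and release the space.
ALTER TABLE t1 SHRINK SPACE;
```

The two-step form is useful on busy systems because the final HWM adjustment takes a brief exclusive lock, while the COMPACT phase does not.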

Recently I was just casting an eye over the “top 20” SQL on a system for any code generating a lot of consistent gets. I came across the following:
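The statement itself is not reproduced here, but the kind of check described can be sketched against the documented V$SQL view (consistent gets appear there as BUFFER_GETS):

```sql
-- Top 20 statements by total buffer gets, with a per-execution figure
-- to show whether the cost is volume or inefficiency.
SELECT *
  FROM (SELECT sql_id,
               buffer_gets,
               executions,
               ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec,
               SUBSTR(sql_text, 1, 60)                    AS sql_text
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 20;
```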

Statspack quiz

I’ve a level 5 Statspack report from a real production database with the following Top 5 Timed events section:


March 28, 2012 A recent question on the OTN forums asked which PLAN_HASH_VALUE appears in V$SQLAREA when there are multiple child cursors for a single SQL_ID value, when some child cursors have a different execution plan.  Certainly, this bit of information must be in the Oracle Database documentation.  Let’s check the V$SQLAREA documentation for Oracle Database 11.2: [...]
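One way to see the behaviour for yourself, rather than relying on the documentation alone, is to compare the two views directly for a SQL_ID known to have multiple child cursors (:sqlid below is a placeholder bind variable):

```sql
-- The per-child plan hash values for one statement...
SELECT sql_id, child_number, plan_hash_value
  FROM v$sql
 WHERE sql_id = :sqlid
 ORDER BY child_number;

-- ...versus the single PLAN_HASH_VALUE that V$SQLAREA reports
-- for the same statement.
SELECT sql_id, plan_hash_value
  FROM v$sqlarea
 WHERE sql_id = :sqlid;
```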

Sqoop /*+ parallel */

It’s often useful to keep one’s ear to the ground when one is responsible for database performance and/or efficiency — it’s a talent that I wish more DBAs had: a sense of curiosity about how the database is being used to support particular application use cases.

Today’s post deals with something rather basic, but it speaks to this idea of “agile” collaboration between developers and DBAs around something that has a lot of buzz right now: processing “Big Data” with Hadoop / MapReduce. In particular, a process that someone has already deemed to be “too big” to be accomplished within an existing Oracle database.

Coalesce Subquery Transformation - COALESCE_SQ

Oracle 11.2 introduced a set of new query transformations, among them the ability to coalesce subqueries, which means that multiple correlated subqueries can be merged into fewer subqueries.
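As a rough illustration (table and column names are invented), a query of the following shape is a candidate for the transformation, because both EXISTS subqueries are correlated to the outer table in the same way:

```sql
-- Two correlated EXISTS subqueries on the same table; with subquery
-- coalescing (COALESCE_SQ) the optimizer can merge them into a single
-- EXISTS whose predicate is the disjunction of the two filters, so
-- order_lines is probed once instead of twice.
SELECT o.order_id
  FROM orders o
 WHERE EXISTS (SELECT NULL FROM order_lines l
                WHERE l.order_id = o.order_id
                  AND l.status = 'OPEN')
    OR EXISTS (SELECT NULL FROM order_lines l
                WHERE l.order_id = o.order_id
                  AND l.quantity > 100);
```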

Timur Akhmadeev has already demonstrated the basic principles in a blog entry, but when I was recently involved in supporting a TPC-H benchmark for a particular storage vendor I saw quite an impressive application of this optimization that I would like to share here.

Fast Analytics of AWR Top Events

I’ve been working on a lot of good schtuff lately in the area of capacity planning. And I’ve greatly improved my time to generate workload characterization visualization and analysis using my AWR scripts, which I have enhanced to fit the analytics tool that I’ve been using… and that is Tableau.

So I’ve got a couple of performance and capacity planning use case scenarios which I will blog about in parts over the next few days or weeks. But before that I need to familiarize you with how I mine this valuable AWR performance data.

Let’s get started with the AWR top events, the same top events that you see in your AWR reports but presented in a time series manner across SNAP_IDs…
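As a starting point, here is a sketch of that kind of mining against the AWR history view DBA_HIST_SYSTEM_EVENT (note that querying AWR views requires the Diagnostics Pack licence, and that the time columns are cumulative, so consecutive SNAP_IDs have to be differenced):

```sql
-- Per-snapshot wait time for non-idle events, as a time series.
-- LAG() over SNAP_ID turns the cumulative counters into deltas.
SELECT snap_id,
       event_name,
       wait_class,
       time_waited_micro
       - LAG(time_waited_micro)
           OVER (PARTITION BY dbid, instance_number, event_name
                 ORDER BY snap_id) AS delta_time_micro
  FROM dba_hist_system_event
 WHERE wait_class <> 'Idle'
 ORDER BY snap_id, delta_time_micro DESC NULLS LAST;
```

Ranking the deltas within each SNAP_ID then reproduces the familiar "Top 5 Timed Events" shape, but across the whole retention window rather than one report interval.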

Hyper-Extended Oracle Performance Monitor 6.0 Beta

March 15, 2012 (Modified March 16, 2012) Several people expressed an interest in using the Beta version of my Hyper-Extended Oracle Performance Monitor 6.0 that has been under development for about a decade.  Rather than trying to find a way to deliver the Beta version of the program to those people who left comments in [...]

Thoughts on a Hyper-Extended Oracle Performance Monitor Beta

March 12, 2012 As long time readers of this blog might know, in my free time during roughly the last 10 years I have been working on a program named “Hyper-Extended Oracle Performance Monitor”.  Since 2005 or 2006 I have permitted a few people to try the beta versions of the program, thinking that I might [...]