Oakies Blog Aggregator

Friday Philosophy – Oracle Performance Silver Bullet


Silver Cartridge and Bullet

For as long as I have been working with Oracle technology {which is now getting towards two decades, and isn't that pause for thought} there has been a constant search for Performance Silver Bullets – some trick or change or special init.ora parameter {alter system set go_faster_flag='Y'} you can set to give you a guaranteed boost in performance. For all that time there has been only one.

There are a few performance Bronze Bullets…maybe Copper Bullets. The problem is, though, that the Oracle database is a complex piece of software and what is good for one situation is terrible for another. Often this is not even a case of “good 90% of the time, indifferent 9% of the time and tragic 1% of the time”. Usually it is more like 50%:30%:20%.


Cartridge with copper bullet & spent round

I've just been unfair to Oracle software actually; a lot of the problem is not the complexity of Oracle, it is the complexity of what you are doing with Oracle. There are the two extremes of OnLine Transaction Processing (many users running lots of short, simple, concurrent transactions that need to complete very quickly) and Data Warehousing (a small number of users processing vast amounts of data). You may well want to set certain initialisation parameters to favour quick response time (OLTP) or fastest processing time to completion (DW). Favouring one usually means a negative impact on the other, and many systems have both requirements in one. In between those extremes there are the dozens and dozens of special cases I have seen, and I am just one guy. People get their database applications to do some weird stuff.
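
To make that concrete, here is a minimal sketch of one parameter that pulls in opposite directions for the two workloads – the optimizer goal. The values shown are illustrative only, not a recommendation:

-- Favour quick first-response for interactive users (OLTP leaning):
alter system set optimizer_mode = first_rows_10;

-- Favour total throughput to completion (DW leaning):
alter system set optimizer_mode = all_rows;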

Partitioning is a bronze bullet. For many systems, partitioning the biggest tables makes them easier to manage, allows some queries to run faster and aids parallel activity. But sometimes (more often than you might think) partitioning can drop rather than increase query or DML performance. In earlier versions of Oracle, setting optimizer_index_caching and optimizer_index_cost_adj was often beneficial, and in Oracle 9/8/7 setting db_file_multiblock_read_count "higher" was good for DWs… Go back to Oracle 7 and doing stuff to push the buffer cache hit ratio towards 98% was generally good {and I will not respond to any comments citing Connor's magnificent "choose your BCHR and I'll achieve it" script}.
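
For the record, those old-style settings looked something like this – a sketch only; the values are purely illustrative and, as the paragraph above says, the advice is dated:

-- Tell the CBO that index blocks are likely cached and index access
-- is cheaper than its default assumptions (classic old-school OLTP tweak):
alter system set optimizer_index_caching = 90;
alter system set optimizer_index_cost_adj = 25;

-- The old DW favourite: larger multiblock reads for full scans:
alter system set db_file_multiblock_read_count = 128;
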
You know what? There is an old trick from Oracle 7 you could maybe still treat as a bronze bullet: put your online redo logs and key index tablespaces on the fastest storage you have, and split your indexes/tables/partitions across the faster/slower storage as fits. Is all your storage the same speed? Go buy some SSD and now it isn't…
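
A minimal sketch of that trick, assuming a hypothetical SSD-backed tablespace called FAST_SSD; the object name and file path are invented for illustration:

-- Move a hot index onto the (hypothetical) SSD-backed tablespace;
-- ONLINE keeps the index usable while the rebuild runs.
alter index app_owner.orders_pk rebuild tablespace fast_ssd online;

-- Add an online redo log group on the fastest storage (path invented):
alter database add logfile group 5 ('/ssd01/oradata/PROD/redo05a.log') size 512m;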


Cartridge with Wooden Bullet

Then there are bronze bullets that very often improve performance but whose impact can be catastrophic {let's call them wooden bullets :-) }. Like running your database in noarchivelog mode. That can speed up a lot of things, but if you find yourself needing to do a recovery and your last cold backup is not recent enough – catastrophe. A less serious but more common version of this is doing things nologging. "Oh, we can just re-do that after a recovery." Have you done a test recovery that involved that "oh, we can just do it" step? And will you remember it when you have a real recovery situation and the pressure is on? Once you have one of these steps you often end up with many of them. Will you remember them all?
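
One way to spot those wooden-bullet gaps before a real recovery is to hunt for NOLOGGING objects and unrecoverable datafiles; a sketch (needs DBA privileges):

-- Objects set to NOLOGGING: each one is a manual "oh, we can just
-- re-do that" step waiting for you after a media recovery.
select owner, table_name as object_name from dba_tables  where logging = 'NO'
union all
select owner, index_name                from dba_indexes where logging = 'NO';

-- Datafiles touched by NOLOGGING operations since their last backup:
select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_change# > 0;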

How many of you have looked at ALTER SYSTEM SET COMMIT_WRITE='BATCH,NOWAIT'? It could speed up response times and general performance on your busy OLTP system. And lose you data in a crash recovery. Don't even think about using this one unless you have read up on the feature, tested it, tested it again, and then sat and worried for a good while about what could possibly go wrong.
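
If you really must experiment with it, at least scope the test to a single session rather than the whole system; a sketch:

-- Asynchronous, batched commits for this session only - commits return
-- before the redo is on disk, so a crash can lose "committed" data.
alter session set commit_write = 'BATCH,NOWAIT';

-- ... run the test workload here ...

-- Back to the safe default for this session:
alter session set commit_write = 'IMMEDIATE,WAIT';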

That last point is maybe at the core of all these Performance Bronze Bullets. Each of these things may or may not work but you have to understand why and you have to understand what the payback is. What could now take longer or what functionality have I now lost? {hint, it is often recovery or scalability}.

So, what was that one Silver Bullet I tantalizingly left hanging out for all you people to wait for? You are not going to like this…

Look at what your application is doing and look at the very best that your hardware can do. Do you want 10,000 IOPS out of storage that consists of fewer than 56 spindles? Forget it; your hardware cannot do it, no matter what you tune or tweak or fiddle with {a rough arithmetic sketch follows the list below}. The one and only Performance Silver Bullet is to look at your system and your hardware configuration and work out what is being asked and what can possibly be delivered. Now you can look at:

  • What is being asked of it? Do you need to do all of that (answering that might involve turning some functionality off, if it is a massive drain and does very little to support your business)?
  • Are you doing stuff that really is not needed, like management reports that no one has looked at in the last 12 months?
  • Is your system doing a heck of a lot to achieve a remarkably small amount? Like several hundred buffer gets for a single indexed row? That could be a failure to do partition exclusion.
  • Could you do something with physical data positioning to speed things up, like my current blogging obsession with IOTs?
  • You can also look at what part of your hardware is slowing things down. Usually it is spindle count/RAID level, i.e. something dropping your IOPS. Ignore all sales blurb from vendors and do some real-world tests that match what your app does or wants to do.
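
Here is the rough arithmetic behind that spindle remark, as a sketch; the 180 IOPS-per-disk figure is an assumed ballpark for a 15K RPM drive, not a measurement:

-- ~180 random IOPS per 15K spindle is a common rule of thumb, so
-- 10,000 IOPS needs roughly 10000/180 = 56 spindles - before any
-- RAID write penalty makes matters worse.
select ceil(10000 / 180) as spindles_needed from dual;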

It’s hard work but it is possibly the only Silver Bullet out there. Time to roll up our sleeves and get cracking…

{Many Thanks to Kevin Closson for providing all the pictures – except the Silver Bullet, which he only went and identified in his comment!}

Exadata: It’s The World’s Fastest Database Machine And The Best For Oracle Database – Part I. Do As I Say, Not As I Do!

Two days ago Oracle published a video about the world's "fastest database machine." This, of course, refers to Exadata. Although the video has been online for two days, its view count is very low (fewer than 1,000). So I thought I'd refer to it and give it a boost.

This is actually an interesting video. The folks in the video—namely Juan, Kodi, Amit and Ron—are some of the class acts in the Exadata organization within Oracle Server Technologies. I have respect for these folks; I've known some of them for many, many years, and the rest at least since the start of my tenure in Juan Loaiza's organization back in 2007. They mean well, and I mean it when I say they are a collective class act. Having respect for these gentlemen doesn't mean I have to agree with everything they say. To that end I aim to respectfully offer some differing views on some of what has been said in the video.

I’d like to first offer my commentary regarding the messaging in the video after which I’ll provide a link.

  • The World's Fastest Database Machine. The first point I'll make is about the title of the video. Exadata is very good at a lot of things – obviously. Readers of this blog know my position on the matter quite well. That aside, I have a problem with the notion of referring to Exadata as "the world's fastest database machine" without any data to back up the claim. That was a point of contention I had when I was still with Oracle. Exadata is not the established fastest machine in any database category as per the standard in the matter – which at this time is Transaction Processing Performance Council (TPC) workloads. For that matter, even the lower-level Storage Performance Council workloads would be a starting point for validation of these claims (particularly the unstructured data claims made by Exadata marketing), but to date there are no audited industry-standard benchmark results with Exadata. Please don't get me wrong. I'm not harping on the lack of published benchmarks, for the many reasons I point out here. I'm only making the point that claims of world dominance should be backed up with industry-standard proof points. As for industry-standard benchmarks, the world's fastest database machine for OLTP is the SPARC SuperCluster. The world's fastest database machine for DW/BI/DSS is Dell kit with Exasol at every scale except 30 TB, where Oracle Database 10g on HP Superdome (Itanium) remains the top dog. I'll point out that the only independently validated proof point of Exadata, to date, was the Winter Corporation proof point I did back in 2009. With that memory in mind, I'm led to the next point of contention with the video.
  • Setting Out To Build The World's Fastest Database Machine. The stage for the video is set right at the beginning when Juan speaks of the motive behind building the "world's fastest database machine" when referring to Exadata. The problem is that the product name Exadata came long after what was actually being built. When Exadata was first released, the term only applied to the storage cells. The first embodiment of a Database Machine was the HP Oracle Database Machine, which included Exadata Storage Servers. When Sun Microsystems was acquired by Oracle, the Exadata brand was sort of promoted to mean everything in the rack and, of course, current work on ports of Exadata storage connectivity (iDB) to such platforms as HP-UX was dropped. So what's my point? There was, in fact, a port of iDB that allowed the HP-UX port of Oracle Database 11g instances to connect to Exadata Storage Servers, but the effort was killed and the mutual respect between long-time partners Oracle and HP crumbled. Personalities sometimes get in the way, it seems, but I digress. The very fact that such porting was in progress suggests that what ended up being the only embodiment of Exadata technology was clearly not always the plan for the technology, and thus the notion of "setting out to build the world's fastest" seems more like hindsight. In the "olden days" we looked at Exadata as optimal storage for Oracle Database – nothing more grandiose than that. But those were exciting and rewarding times nonetheless!
  • Ideal System For Running The Oracle Database. Juan points out that one of the goals in Exadata development was to create the ideal system for running the Oracle database. I think that is a good design center, but I stand fast in my position that the ideal system for Oracle database depends on the workload. There is no one-size-fits-all. The one-size-fits-all positioning is pervasive, though. Another member of Juan's team, Tim Shetler, garners the same level of esteem I have for those I've previously mentioned, but I don't always have to agree with him either. In this article in Database Trends and Applications, Tim puts it this way (emphasis added by me):

Our mission around Exadata is to create absolutely the best platform for running the Oracle Database.  Those words are carefully chosen because it is very focused. It is not the best platform for running databases. It is not the best platform for running data warehouses on Oracle. It is: Any application that uses the Oracle Database will run best if it uses the Exadata platform to host the Oracle Database.

Do As I Say, Not As I Do
The problem I have with the idea that Exadata is the best platform for Oracle Database, full stop, is that in spite of it being "the best platform for running databases", and best for "any application", Oracle IT doesn't even use Exadata for ERP. We know from reading Oracle Corporation's Mission Critical Systems Update (Google Docs View) that years after the production release of Exadata, Oracle IT migrated from older SPARC gear to an M9000. This is Oracle's central ERP system. Nothing is more critical than that. This may sound like FUD, but the migration started last September (2010) – years after Oracle approached customers to adopt Exadata technology – and the configuration is still being expanded. I quote:

The additional M9000 SPARC system installation began at midnight March 3rd, 2011, and was completed, in full, at 11:31am the next day, March 4th, 2011. There was no down time of the live GSI database/ERP systems during installation by the Oracle PDIT staff.

                                                            — Chris Armes, Sr. Director, Oracle Systems

Let me paraphrase that. The core ERP systems at Oracle (they are not Exadata) were still being built out as I was finishing my two weeks' notice of resignation from my position in the Exadata development organization! That hardware is going nowhere soon. Do As I Say, Not As I Do.

I’ll watch the view count on that YouTube video while I consider Part II in this series.

I was just joking about giving the video viewership a boost.

Here is the link: http://www.youtube.com/watch?v=ZHlFDgci9Fc


Trace3 customer experience with Delphix

People often ask me what I work on now, after having been at Oracle, Embarcadero and Quest. I'm now at a company called Delphix, where I work on performance monitoring and tuning. Here is a great customer case writeup from Trace3 about what Delphix does.

 

From Trace3:


 

“After their SAP administrator provisioned a 7.2 terabyte database in 15 minutes with no assistance from the storage group, UNIX group, DBAs, or backup/restore, they really understood the full value of the Delphix solution.”

PAUL MONTOYA – Professional Services Architect
Trace3

Every once in a while, a technology comes along that addresses our customers’ pain points so thoroughly that it pretty much sells itself. What if you could promise customers a solution that would allow them to provision full copies of multi-terabyte databases in minutes, with no impact to production, while achieving a huge consolidation of the underlying storage?

 

No doubt you’ve heard of server virtualization, the foundation of private cloud computing. It’s an easy sale: buy less hardware, provision resources faster, and reduce costs. Database virtualization offers those same benefits to customers that routinely work with large data sets, especially in dev/test environments. Right now, that technology is exclusively the domain of Delphix, a Trace3 technology partner.

 

Just a few years ago, many customers were reluctant to virtualize their servers because the technology was viewed by many as not ready for production. Today, virtual machines are ubiquitous. Showing customers the benefits of database virtualization now will not only generate sales, it will also build loyalty to Trace3, giving our customers confidence that we stay on the leading edge.

 

The featured case study explains how Trace3 worked closely with a semiconductor customer to embrace database virtualization. A successful proof of concept resulted in a 99% reduction in database refresh time—from 5 days to 15 minutes—while yielding a 45:1 storage consolidation ratio, for a projected payback of 1.9 months.

… article continued at Trace3


 

Of course you can check out my current company straight from the horse's mouth at Delphix.com

Oracle OpenWorld 2011: The Oracle Real-World Performance Group

The summer is flying by and in no time it will be October, which means Oracle OpenWorld 2011 should be on your radar. Once again the Oracle Real-World Performance Group will be hosting three sessions. For those unfamiliar with our presentations, you will get marketing-free, no-nonsense performance insight of the highest caliber from Oracle's most elite database performance engineers – the kind of thing hard-core Oracle people want to hear about. Hope to see you there!

style="cursor: default;" width="100%" border="1" cellpadding="1">
style="cursor: text; margin: 8px;" bgcolor="#eeeeee" width="120"">Session ID: style="cursor: text; margin: 8px;">13641 (Wednesday Oct. 5th, 10:00) style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Title: style="cursor: text; margin: 8px;">Real-World Performance: The Forklift Upgrade style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Abstract: style="cursor: text; margin: 8px;">Today the motivation to consolidate and migrate existing data and applications into the extreme-high-performance database environments of Oracle Exadata and Oracle Exalogic is being driven by a desire to reduce costs and deliver increased performance and service levels to users. The process is often called a forklift migration, because applications and data are simply picked up and lifted onto new platforms.In this session, Oracle’s Real World Performance group describes how best to maximize your investment and achieve world-class performance and discusses the challenges and compromises that need to be made in the process. style="cursor: text; margin: 8px;" bgcolor="#999999"> style="cursor: text; margin: 8px;" bgcolor="#999999"> style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session ID: style="cursor: text; margin: 8px;">13643 (Tuesday Oct. 4th 13:15) style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Title: style="cursor: text; margin: 8px;">Real-World Performance: How Oracle Does It style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Abstract: style="cursor: text; margin: 8px;">Oracle’s Real-World Performance Group has been achieving world-class performance for clients for more than 15 years. In this session, some of the senior engineers describe the techniques, philosophy, and tools they use to address performance challenges. The session is packed with demos and examples to convey the pragmatism required in real-world performance today. style="cursor: text; margin: 8px;" bgcolor="#999999"> style="cursor: text; margin: 8px;" bgcolor="#999999"> style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session ID: style="cursor: text; margin: 8px;">13640 (Thursday Oct. 6th 10:30) style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Title: style="cursor: text; margin: 8px;">Real-World Performance Questions and Answers style="cursor: text; margin: 8px;" bgcolor="#eeeeee">Session Abstract: style="cursor: text; margin: 8px;">This is your chance to pose specific questions to Oracle’s Real-World Performance Group. All questions need to be in writing (yes, cards will be provided) and should relate to database performance challenges. In the past, the best questions have been related to specific topics of interest to the audience.

The Dresden Files: Ghost Story…

Ghost Story is the latest book in The Dresden Files series by Jim Butcher.

Spoiler Alert. Don’t read on if you are currently reading the series and haven’t got to this book yet. It’s hard to talk about the book without doing a bit of a spoiler.

 

At the end of the last book Harry Dresden died. At the start of this book he comes back as a ghost and is sent back to Chicago to help his friends. The fact that there is another book planned makes you think one of two things is going to happen.

  1. He will be so successful as a ghost version of himself that he will be able to carry on in that capacity.
  2. He will somehow be resurrected.

This book is more predictable than any of the other books in the series, but that's not a bad thing. The outcomes of certain situations are predictable, but the journeys to those outcomes are far from predictable, which is half the fun. You are sitting there thinking, I know how this is meant to pan out, but how the hell is he going to get there? :)

It was a great story, and well deserving of its place in the series. I’ve now got a really long wait for the next book (Cold Days) to be released. :(

Cheers

Tim…




Baselines and SQL_ID

I ran across a great post by Marcin Przepiorowski (How to find SQL_ID and PLAN_HASH_VALUE in Oracle SQL Plan Management Baselines) thanks to Karl Arao's tweet. Marcin posted a bit of code for calculating a SQL_ID using the text of a SQL statement (based on some research by Tanel Poder). I was aware of Tanel's research but had not taken the time to figure out how to code it. The context of Marcin's post was wanting to relate a Baseline to a SQL_ID. This is something I've wanted to do in the past, so I was quite interested in the post. As a bonus, Marcin also demonstrated a technique to pull the PLAN_HASH_VALUE associated with a Baseline. Of course I can never leave well enough alone, so I re-arranged Marcin's code a little bit to suit my own purposes and created a function that returns either the PLAN_HASH_VALUE or the SQL_ID of a Baseline. Here's the code to create the function: create_baseline_info.sql and a script that uses it: baselines2.sql
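
For comparison, the stock way to peek inside a Baseline is DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE, which displays the stored plan but not the original statement's SQL_ID – exactly the gap the baseline_info() function fills. A minimal sketch, using the sql_handle that appears in the listing below:

-- Show the stored plan(s) for a Baseline via its handle:
select *
from   table(dbms_xplan.display_sql_plan_baseline(
         sql_handle => 'SQL_e86d1a732b7a5c25',
         format     => 'basic'));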

Here’s an example of how to use them:

 
SYS@SANDBOX1> select count(*) from kso.skew;
 
  COUNT(*)
----------
  32000004
 
SYS@SANDBOX1> @fsx
Enter value for sql_text: select count(*) from kso.skew
Enter value for sql_id: 
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
gm0w4as5hhr4m      0  578366071      1       1.30      8 Yes          71.78 select count(*) from kso.skew
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: gm0w4as5hhr4m
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  gm0w4as5hhr4m, child number 0
-------------------------------------
select count(*) from kso.skew
 
Plan hash value: 578366071
 
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |       |  6183 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |          |    32M|  6183   (1)| 00:01:15 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| SKEW     |    32M|  6183   (1)| 00:01:15 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
 
23 rows selected.
 
SYS@SANDBOX1> @create_baseline
Enter value for sql_id: gm0w4as5hhr4m
Enter value for plan_hash_value: 578366071
Enter value for fixed (NO): 
Enter value for enabled (YES): 
Enter value for plan_name (SQLID_sqlid_planhashvalue): 
sql_id: gm0w4as5hhr4m
plan_hash_value: 578366071
fixed: NO
enabled: YES
plan_name: SQLID_gm0w4as5hhr4m_578366071
sql_handle: SQL_e86d1a732b7a5c25
Baseline SQLID_gm0w4as5hhr4m_578366071 created.
 
PL/SQL procedure successfully completed.
 
SYS@SANDBOX1> select sql_handle, plan_name from dba_sql_plan_baselines where plan_name like upper('SQLID_gm0w4as5hhr4m_578366071');
 
SQL_HANDLE           PLAN_NAME
-------------------- ------------------------------
SQL_e86d1a732b7a5c25 SQLID_GM0W4AS5HHR4M_578366071
 
SYS@SANDBOX1> @baselines2
Enter value for sql_text: select count(*) from kso.skew%
Enter value for name: 
Enter value for plan_name: 
 
SQL_ID        SQL_TEXT                                           PLAN_HASH_VALUE SQL_HANDLE           PLAN_NAME                      ENABLED ACC FIX LAST_EXECUTED
------------- -------------------------------------------------- --------------- -------------------- ------------------------------ ------- --- --- ----------------
1jp7sjmt0wp1j select count(*) from kso.skew s , skew_gtt g where 2414665383      SQL_936b37ad684d18d4 SQL_PLAN_96utrppn4u66ncefcc71f YES     NO  NO
                                                                 2178955164      SQL_936b37ad684d18d4 SQL_PLAN_96utrppn4u66ne441e87d YES     NO  NO
gm0w4as5hhr4m select count(*) from kso.skew                      578366071       SQL_e86d1a732b7a5c25 SQLID_GM0W4AS5HHR4M_578366071  YES     YES NO
 
SQL> -- cool - it's a match

So in the previous listing I ran a select statement, found the SQL_ID and PLAN_HASH_VALUE for that statement (using fsx.sql), displayed the plan (using dplan.sql) and created a Baseline for the statement (using create_baseline.sql). I then verified that the Baseline existed in DBA_SQL_PLAN_BASELINES and ran the baselines2.sql script, which returns the SQL_ID and PLAN_HASH_VALUE using the baseline_info() function; as you can see, they matched the original query.

Trouble-shooting

How do you trouble-shoot a problem? It's not an easy question to answer when posed in this generic fashion; but perhaps it's possible to help people trouble-shoot by doing some examples in front of them. (This is why I've got so many statspack/AWR examples – just reading a collection of different problems helps you to get into the right mental habit.)

So here’s a problem someone sent me yesterday. Since it only took a few seconds to read, and included a complete build for a test case, with results, and since it clearly displayed an Oracle bug, I took a look at it. (I’ve trimmed the test a little bit, there were a few more queries leading up to the error):


create table person (id number(2), name varchar2(10)) ;

insert into person values (1, 'Alpha') ;
insert into person values (2, 'Bravo') ;
insert into person values (3, 'Charlie') ;
insert into person values (4, 'Charles') ;
insert into person values (5, 'Delta') ;

create or replace view vtest as
select id, 'C' as letter from person where name like 'C%' ;

select p.id, p.name, v.id, v.letter
from person p
left join vtest v on v.id = p.id
order by p.id ;

The problem was that 10.2.0.4 and 11.2.0.2 gave different results – and the 11.2.0.2 result was clearly wrong. So the question was: "is there something broken with outer joins on views, or possibly ANSI outer joins?" (The answer to the last question is always "probably" as far as I'm concerned, but I wouldn't turn that into a "yes" without checking first.) Here are the two results:

10.2.0.4:
========
        ID NAME               ID L
---------- ---------- ---------- -
         1 Alpha
         2 Bravo
         3 Charlie             3 C
         4 Charles             4 C
         5 Delta

11.2.0.2
========
        ID NAME               ID L
---------- ---------- ---------- -
         1 Alpha                 C
         2 Bravo                 C
         3 Charlie             3 C
         4 Charles             4 C
         5 Delta                 C

Clearly the extra ‘C’s in the letter column are wrong.

So what to do next? Knowing that Oracle transforms ANSI SQL before evaluating an execution plan, I decided to run the 10053 trace. Sometimes you get lucky and see the "unparsed SQL" in this trace file – a representation (though not necessarily a 100% exact image) of the statement for which Oracle will generate a plan. I was lucky; this was the unparsed SQL (cosmetically enhanced):


SELECT
	P.ID ID,
	P.NAME NAME,
	PERSON.ID ID,
	CASE  WHEN PERSON.ROWID IS NOT NULL THEN 'C' ELSE NULL END  LETTER
FROM
	TEST_USER.PERSON P,
	TEST_USER.PERSON PERSON
WHERE
	PERSON.ID  (+) = P.ID
AND	PERSON.NAME(+) LIKE 'C%'
ORDER BY
	P.ID
;
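
As an aside, for anyone wanting to reproduce this: the 10053 trace can be enabled per session with the standard event syntax before hard-parsing the statement; a sketch:

-- Switch on the CBO (10053) trace for this session:
alter session set events '10053 trace name context forever, level 1';

-- ... hard-parse / run the problem statement here ...

-- Switch it off again; the trace file appears in the session's trace directory:
alter session set events '10053 trace name context off';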

So I ran the unparsed query, and found that the same error appeared – so it wasn't about ANSI or views. Possibly, then, it was something about the CASE expression and/or the ROWID in the CASE expression, which I tested by adding three extra columns to the query:

        person.name,
        person.rowid,
        CASE  WHEN PERSON.name IS NOT NULL THEN 'C' ELSE NULL END  LETTER

With these extra columns I got the following results from the query:

        ID NAME               ID NAME       ROWID              L L
---------- ---------- ---------- ---------- ------------------ - -
         1 Alpha                                               C
         2 Bravo                                               C
         3 Charlie             3 Charlie    AAAT7gAAEAAAAIjAAC C C
         4 Charles             4 Charles    AAAT7gAAEAAAAIjAAD C C
         5 Delta                                               C

So the CASE did the right thing with the person.name column, but the wrong thing with the person.rowid column.
Time to get onto MOS (Metalink).

I searched the bug database with the key words: case rowid null.
This gave me 2,887 hits, so I added the expression (with the double quotes in place) "outer join".
This gave me 110 hits, so from the "product category" I picked "Oracle Database Products".
This gave me 80 hits, and the first one on the list was:

Bug 10269193: WRONG RESULTS WITH OUTER JOIN AND CASE EXPRESSION OPTIMIZATION CONTAINING ROWID

The text matched my problem, so job done – except that the bug is reported as not fixed until 12.1.

This isn’t a nice bug, of course, because the particular problem can be generated automatically in the transformation of ANSI outer joins to Oracle outer joins, so you can’t just change the code.

In passing, it's taken me 31 minutes to write this note – that's 10 minutes longer than it took to pin down the bug, but I have to say I got lucky on two counts: first, that the "unparsed SQL" was available; second, that my choice of key words for MOS got me to the bug so quickly (the search is where I usually find I waste most time).

A Day of Real World Performance Raffle...

If you haven't heard - I'm participating in a series of seminars entitled "Real World Performance". It is a rather unique forum - with three simultaneous presenters and three screens. I find it more interesting than a seminar delivered by a single person - all day long. We haven't had anyone fall asleep yet - there is enough moving around and back and forth on the stage to keep you engaged all day long.

To get things jump started, I'm running a raffle of sorts. I'm going to give away the coolest t-shirt you'll ever own:


and a signed copy of

All you have to do is sign up for the Real World Performance Day and select "Tom Kyte's Blog" from the "How did you hear about this event" element on the registration form. I'll be sending a t-shirt and signed book to the first ten people who sign up this way. I'll be sending them out at the end of August. Apologies in advance to the Germany, Australia and Scotland Real World participants - they are using a different registration system and I won't be able to track your sign-ups.

Here's a list of some podcasts we've done on the RWP tour - please listen to get an idea of what takes place:

Tom Kyte, Oracle performance specialist and host of "AskTom.Oracle.Com", tells us what's in store at the IOUG's Real World Performance Day.

Andy Flower, President of the Independent Oracle Users Group (IOUG), discusses plans for COLLABORATE 11 and what folks can expect when the Real World Performance Tour rolls to their town.

Hope to see you at one of them - we'll be doing more locations next year, so if you don't see a city near you, it might still be on the drawing board.

How Do You Contribute?

Do you know your organization's business? Making yourself valuable to the organization is easier if you understand how you contribute to its success. Talk to your customers, read annual reports, get involved in understanding what your organization does for a living. If you work for a corporation, how do you contribute to the bottom line? If you work for a government or non-profit organization, how does what you do help it achieve its goals? Once you know what makes your organization succeed, you can make sure you are a contributor and increase your value.

Great IT groups have become part of the core business by learning what customers need to do and then building the tools to help the business run better. IT is rarely a revenue-producer, but recently smart organizations have been leveraging handheld technology to enable direct communication with customers and users. iPads, tablets, and smart phones can be leveraged to provide rich customer experiences. This means that claims adjusters can not only record claim information and photos but instantly determine how quickly solutions are available and at what cost. I was recently at a busy coffee shop where a mobile-device-equipped teenager met us in the long line and took orders that were ready by the time we reached the counter; the shop offered efficient assembly-line speed but managed to keep a folksy neighborhood atmosphere. Know your business and help make it better!

The New Order Oracle Coding Challenge 2

August 2, 2011 (Back to the Previous Post in the Series) (Forward to the Next Post in the Series) In the previous article in this series we tried to develop different methods for identifying special numbers between 1 and 1,000,000, such that the number formed by reversing the order of the digits will evenly divide into the original number.  [...]