Oakies Blog Aggregator

I’ve Moved!

If you are reading this it means that you have successfully followed the trail of breadcrumbs to my new blogging home, part of the new Scale Abilities website. Thank you for following! From a blogging perspective, nothing has changed – I will be writing the same kind of content as before, with just as much [...]

Evil things are happening in Oracle

Relax, I’m talking about the Oracle Database kernel here, not the corporation ;-)

Here are a couple more reasons not to play around with undocumented debug events unless you’re really sure why and how they would help to solve your specific problem (and you’ve gotten a blessing in some form from Oracle Support too):

$ oerr ora 10665
 10665, 00000, "Inject Evil Literals"
 // *Cause:  Event 10665 is set to some number > 0, causing 1/(value-1) of all
 //          literals to be replaced by 2000 letter 'A's.  A value of 1 does
 //          not corrupt anything.
 // *Action: never set this event

$ oerr ora 10668
 10668, 00000, "Inject Evil Identifiers"
 // *Cause:  event 10668 is set to some number > 0, causing 1/(value-1) of all
 //          identifiers to be replaced by a maximum amount of x's.  It is
 //          common for an identifier to be parsed once with a max of 30 bytes,
 //          then reparsed later with a max of 4000, so it may not be possible
 //          to inject such an identifier without the aid of this event.  A
 //          value of 1 causes no identifiers to be corrupted.
 // *Action: never set this event


Some events are meant to be left alone. You don’t want to wake up the evil sleeping deep in the core of the Oracle Kernel-land!
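For context, events (documented or not) are switched on with the standard event syntax. As a sketch only – using the widely documented SQL trace event 10046, emphatically not the evil events above:

```sql
-- Enable extended SQL trace (event 10046, level 12 = binds + waits)
-- for the current session
alter session set events '10046 trace name context forever, level 12';

-- ... run the statements you want traced ...

-- Switch the event off again
alter session set events '10046 trace name context off';
```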


By the way, Karl Arao once managed to capture what this evil creature looks like:


Irrational Ratios

I’ve pointed out in the past how bad the Instance Efficiency ratios are at highlighting a performance problem. Here’s a recent example from OTN repeating the point. The question, paraphrased, was:

After going through AWR reports (Instance Efficiency Percentages) I observed they have low Execute to Parse % but high Soft Parse %.
Please share if you had faced such issue and any suggestions to solve this

The first problem with this question is that ratios hide information.
The second problem is that we don’t know what values the OP considers to be “low” or “high” in this context.
The third is that there’s no reason why any such juxtaposition of ratios should implicitly mean there’s a problem.

The silly thing about the question is that the ratios are derived values, and actually share a common component, so it would be much easier to look at the absolute figures to see if they mean anything. Fortunately someone asked for more information and we got the following extract from an AWR. This was from a typical report (one hour snapshot interval) on a machine with 8 CPUs.

Load Profile
                                          Per Second       Per Transaction
                                    ---------------       ---------------
               Redo size:                  11,685.79              3,660.98
               Logical reads:              71,445.74             22,382.86
               Block changes:                  70.89                 22.21
               Physical reads:                 58.63                 18.37
               Physical writes:                 2.80                  0.88
               User calls:                    652.93                204.55
               Parses:                         48.39                 15.16
               Hard parses:                     0.33                  0.10
               Sorts:                           6.90                  2.16
               Logons:                          0.23                  0.07
               Executes:                       52.71                 16.51
               Transactions:                    3.19

            % Blocks changed per Read:     0.10    Recursive Call %:    30.48
            Rollback per transaction %:    2.57       Rows per Sort:    29.66

    Instance Efficiency Percentages (Target 100%)
             Buffer Nowait %:              100.00       Redo NoWait %:  100.00
             Buffer  Hit   %:               99.92    In-memory Sort %:  100.00
             Library Hit   %:               98.47        Soft Parse %:   99.32
             Execute to Parse %:             8.19         Latch Hit %:   99.63
             Parse CPU to Parse Elapsd %:   89.90     % Non-Parse CPU:   99.62

(Forgetting the ratios for a moment – what’s the strangest thing about the Load Profile?)

The ratios quoted can be derived from the above as follows:

Execute to Parse %:  100 * (Executes - Parses) / Executes
Soft Parse %:        100 * (Parses - Hard Parses) / Parses
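To make the point concrete, here is a quick sketch (in Python – my illustration, not from the original post) that recomputes both ratios from the per-second figures in the Load Profile above:

```python
# Per-second figures taken from the Load Profile above
parses = 48.39
hard_parses = 0.33
executes = 52.71

# Execute to Parse %: 100 * (Executes - Parses) / Executes
execute_to_parse = 100 * (executes - parses) / executes

# Soft Parse %: 100 * (Parses - Hard Parses) / Parses
soft_parse = 100 * (parses - hard_parses) / parses

print(f"Execute to Parse %: {execute_to_parse:.2f}")  # ~8.20 (report shows 8.19, rounding)
print(f"Soft Parse %:       {soft_parse:.2f}")        # 99.32
```

The small discrepancy against the report’s 8.19 comes from the Load Profile itself quoting already-rounded per-second values.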

So why not look at Parses, Hard Parses, and Executes instead of confusing yourself with ratios?

Better still, you could just about squeeze some information from the % Non-Parse CPU figure, namely: 99.62% of your CPU time is spent on things other than parsing. So IF your largest time component is CPU, and IF you eliminated all the parsing costs, then you would make virtually no difference to the performance. (And the first IF is why you wouldn’t really bother with the ratio – you should be looking elsewhere for where the CPU time is spent.)

Coming back to the Parses – we see about 48 per second (which doesn’t sound a lot, really, for a system that’s been engineered with 8 CPUs) and 0.33 hard parses per second (i.e. virtually none). If we look at the Executes it’s about 53 per second (again, not a lot, but slightly more than Parses). So where’s the problem? There isn’t one – not in the parses and executes, at least.


Let’s go back to the strangest thing in the Load Profile – how do you do 48 Parses and 53 Executes but manage to total 653 User Calls per second? (… to be continued)


DV8: Can we talk about this?

One of my friends is a drama teacher, so every once in a while I get invited to see productions that are touring locally. Last night I went to see some physical theatre by a company called DV8. The show was called “Can we talk about this?” and was about “freedom of speech, censorship and Islam”. I’m not a political or religious person, so the subject matter was not something I’m really into. Actually, I didn’t even know what the production was about before it started. If I had I would probably have thought, “sounds too heavy”, and avoided it. Added to that, I’m not really an arty kind of guy, so I guess these facts make me about the worst possible person to sit in a show like this. So what did I think?

The human body is totally amazing. The way some of the performers move is quite bizarre and the level of fitness they have is incredible. To put it into context, it’s like doing a conference presentation while taking part in a circuits class and not having to stop for breath or water breaks. Flipping amazing!

I can’t say I could totally appreciate all the aspects of the production. I’m just not that into it, but it is good to take yourself out of your comfort zone occasionally.

Hopefully I’ll be back to reviewing crappy movies soon… :)



Whatever Happened to Software Quality?

In today’s hurry-up, time’s-a-wasting environment I’m afraid I see quality sacrificed in the name of expediency all too often. Those of us who’ve been around for a while have learned that designing systems to be scalable, extensible, and maintainable is important to long-term success. Most software lives far beyond the life predicted by those who create it, and we should make sure it is robust and ready to go the distance (I ran into a fellow who maintains the same COBOL program his father worked on in the ’90s and his grandfather wrote in the ’70s!).

I also watch helplessly as clients build new Java, .NET, and other applications without regard to adequate testing, extensibility, or maintainability. In effect they’ve created an entire new generation of “legacy” code. Yes, people give lip-service to test plans, but in most cases tests do not provide the coverage necessary: people test the parts of the code, and the integrations with other systems, where they expect issues, leaving many parts of systems to be tested thoroughly only by the end users. Ouch! Today complete tools are available for all types of testing, and many are open-source and free to license.

Information professionals must be accountable to our organizations. Our software must be demonstrably complete and error free. When regulatory or business issues cause application changes, the necessary suite of tests must be available, then added to and passed along. Deployment packages should include test plans, test cases, and where possible test code. Do yourself and your employer a favor: design tests as part of your plan, be sure your software is tested thoroughly, and pass those tests along to those who follow you.
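As a minimal sketch of shipping test code alongside the software (my example, not the author’s – the function and its business rule are hypothetical):

```python
def normalize_account_id(raw: str) -> str:
    """Hypothetical business rule: account IDs are upper-case,
    stripped of surrounding whitespace and internal dashes."""
    return raw.strip().upper().replace("-", "")

# Tests shipped alongside the code, so the next maintainer can
# change the rule and immediately see what still holds.
def test_normalize_account_id():
    assert normalize_account_id("  ab-123 ") == "AB123"
    assert normalize_account_id("AB123") == "AB123"
    assert normalize_account_id("a-b-c") == "ABC"

test_normalize_account_id()
```

The point is not the trivial function but the habit: the tests travel with the deployment package, so the people who follow you inherit the safety net as well as the code.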

New Website Launched

Welcome to the new website! As part of this migration all the company information and employee blogs have been consolidated into a single site, allowing simple searching across the whole company. Not much more to say, really. If you find anything broken on the site, please let us know or email the webmaster, we’d love [...]

Browser Stats…

Website stats are an odd thing. If you look at them too often you start to think they matter, which is why I avoid looking at them for months on end. :)

I’ve seen a couple of browser share articles recently, so I decided to check out the spread for my website. The top 3 were a little predictable.

On the 14th March 2011, the numbers were:

  • Internet Explorer: 42.32%
  • Firefox: 38.68%
  • Chrome: 15.19%

So IE and Firefox are declining while Chrome is on the rise.

The breakdown for IE is a little scary.

So there are about 9% of techies still using IE 6. :(

Look away from the stats… Look away from the stats…



Update: For Frits Hoogland, here is the Browser-OS breakdown.

Public Appearances

I've just received confirmation that I have been accepted as a speaker for the HotSOS Symposium 2012 in March next year, so here's a quick update on my upcoming public appearances:

Next week the DOAG conference 2011 (Nürnberg, Germany) will begin. It's an impressive conference with a large number of tracks, although a lot of them are not database-centred.

Nevertheless it's certainly one of the conferences to go to if you speak German and are interested in Oracle database technology (or the ever increasing range of Oracle technology in general).

Since I've again failed to negotiate a training day this year (something I hopefully will manage next year), I've decided to compensate a little: I plan to give at least two additional sessions at the DOAG "Unconference". These will be so-called "Optimizer hacking sessions", where I simply start up a SQL prompt and explore some of the most important aspects of the Cost Based Optimizer. The session's title is "Optimizer issues - How to detect and prevent suboptimal execution plans". I hope this will be fun and educational for all of us - it is meant to be an interactive session where you're encouraged to participate by asking questions that we will try to answer by performing live exploration of the database. I've got plenty of material to talk about, starting from basic SQL statement performance troubleshooting and extending to more advanced topics like histograms, cardinality estimates, virtual columns, extended statistics, the clustering factor etc. Furthermore I plan to demonstrate some cool things that you probably haven't seen yet, so I believe it's going to be a lot of fun.

If you've already received the printout with the schedule (I never understand why DOAG publishes these printouts so early, given all the possible changes to the schedule before the actual start of the conference): I agreed with Heli Helskyaho from Finland, when we met at this year's OOW, to swap our presentation slots at DOAG, so my presentation about the Cost-Based Optimizer called "Query Transformations" will not take place on Wednesday at 9 am, but on Thursday at 3 pm, room "Kiew".

Unfortunately DOAG didn't accept the second paper I submitted, which would have been the introductory session about how to read and understand execution plans, so I'll only give the more advanced session about the most important transformations that the optimizer applies to a query, why you should care, and how to control them if necessary.

So this is what my preliminary schedule for DOAG looks like:

Thursday, November 17
12 pm: DOAG Unconference - Optimizer hacking session
2 pm: DOAG Unconference - Optimizer hacking session
3 pm: DOAG Conference - Query Transformations, room "Kiew"

I won't be at the UKOUG conference (December 5-7, Birmingham, UK) this year, but this doesn't mean that I don't recommend going there. In fact I believe this year's conference has one of the most impressive lists of speakers ever, including a number of top speakers from Oracle USA whom you won't meet anywhere else in Europe, so if you can, get there - it certainly will be a top experience.

In March 2012 the 10th HotSOS Symposium (Dallas, Texas) will be held - and I'll be speaking there for the first time (about the CBO and how it calculates joins over histograms, by the way). It's a conference that I particularly look forward to for several reasons: this year marks the 10th anniversary of the conference, so some special activities are planned, and since it is the only Oracle conference in the world dedicated to performance, the OakTable member "density" will naturally be extremely high.

I hope to meet some of you at one of those events (well, except UKOUG, that is)!

Why OEM drives me crazy sometimes

OEM just seems to have too many brittle esoteric configuration files and process dependencies. Ideally I just want to connect with system/password and go. Is that too simple to ask for?

Today I tried out OEM and got the general broken page:

And my first reaction was just to give up and move on, but then I noticed the error message sounded somewhat simple:

ORA-28001: the password has expired (DBD ERROR: OCISessionBegin)

Hmm, maybe this *is* easily fixable. Well, guess again. Luckily someone has documented the fix well.

After the “fix” EM still wouldn’t work because DBSNMP’s password had expired as well, but at least it gave a password screen to change the password. So to simply launch OEM to get to the performance page after my system password had expired, I had to change the passwords for not one but three users: SYSTEM, SYSMAN and DBSNMP. That’s crazy enough, but the worst part is that SYSMAN’s password is hardcoded, though encrypted, in a flat file that has to be found (good luck) and edited.

PS, to turn off password expiration, which is on by default in 11g but was off by default in 10g:

alter profile default limit password_life_time unlimited;
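As a quick sanity check (my addition, not from the original post), the standard DBA_USERS dictionary view shows which accounts have already expired, before OEM trips over them:

```sql
-- List accounts whose passwords have expired (or are in the grace period)
select username, account_status, expiry_date
  from dba_users
 where account_status like '%EXPIRED%';
```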




Revived NOUG DBA SIG Events

Congratulations to the North East Oracle User Group in the Boston area, which has restarted the DBA SIG. This was a highly successful program that used to be held after work hours on a weekday in the Oracle building in Burlington. I was privileged to have presented there from 2004 until its sad demise in 2009. Now I am honored to be invited as the first speaker in the revived program. I am presenting the session "Addressing Performance Issues during Change with Real Application Testing, Intelligent Stats and SQL Plan Baselines with Live Demos". More information here.

When: Nov 16th 6:30 PM to 9:00 PM
Where: Doubletree by Hilton at 5400 Computer Drive, Westborough, MA 01581
Food and drinks will be served.
The event is free to all NOUG members.

I will give away several Oracle books at the event for the best questions, etc.

If you plan on attending, the organizers respectfully request that you RSVP to immediately. They need a reasonably accurate headcount to order food and drinks, which is yet another reason to attend.

Abstract: Change is inevitable - be it applying a patchset or creating an index. In this session you will learn how to harness the power of three major features of the Oracle database to improve performance during any type of change, or at any other time. You will learn, with plenty of demos, how to configure and use Database Replay and SQL Performance Analyzer to predict performance, use extended statistics to make the optimizer more intelligent, and use SQL Plan Baselines to keep performance consistent while remaining open to further improvements.
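To give a flavour of one of the features mentioned (a sketch of mine, not from the abstract - the table and columns are hypothetical examples), extended statistics on a column group are created with the documented DBMS_STATS API:

```sql
-- Create a column-group extension so the optimizer can estimate
-- the combined selectivity of the two (hypothetical) columns
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'SALES',
         extension => '(CUST_ID, PROD_ID)')
  from dual;

-- Gather stats so the new column group actually receives statistics
exec dbms_stats.gather_table_stats(user, 'SALES')
```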

As has been the norm, I plan to explain these concepts and techniques with lots of live demos. If you are in the Boston area, I sincerely hope to see you all there. I have nothing but pleasant memories from every one of the 5 times I have presented at that venue, and expect nothing less this time.