November 21, 2010 (Back to the Previous Post in the Series) (Forward to the Next Post in the Series) In the previous article in this series I compared the Standard Edition of Oracle 10.2.0.5 with the Enterprise Edition of Oracle 10.2.0.5 when a simple SQL statement was executed, looking for examples of different performances in the [...]
… but only a reference.
Arup Nanda, who wrote a series on 10g New Features and then 11g New Features for Oracle Magazine, has just published a short note pointing out that Oracle has been busy doing a little house-keeping on their website(s), and this has resulted in his articles moving.
If you want to read some really good material (the how and the why, as well as the what) on the more significant features of 10g and 11g, then follow this link to Arup’s Blog.
November 21, 2010 (Forward to the Next Post in the Series) As I am sure that quite a few people are aware, the Standard Edition of Oracle Database is likely to be suitable for many of the smaller databases that you might find in your organization, as long as the following are not needed: Parallel execution [...]
The week before the UKOUG conference (or more accurately - UKOUG Conference Series Technology and E-Business Suite 2010. Maybe I'll just stick to UKOUG) and, whilst it's an unusual year in as much as I'm not presenting, some traditions hold true. I have a stinking cold - my third in the past month - which proves it can't just be the stress of work associated with upcoming presentations. I'm starting to wonder about Vitamin D deficiency as it seems to be a factor in both psoriasis flare-ups (bad one at the moment) and a poor immune system. Or maybe it's that damn moustache trapping an unusual number of germs?
Whatever, I'll still be heading to Brum, but only for Monday and Tuesday. With no free place on offer, I decided to only take a couple of days away from work this time. Regardless, there's plenty to look forward to although my own personal contribution might not be what I'm most looking forward to! Once Lisa Dobson and Debra Lilley got wind of my fund-raising efforts, it was always going to end in tears. I believe I'll have a part to play in Monday's closing keynote, so please give generously. My agenda looks something like this but I somehow doubt I'll make all these presentations.
09:00 - 10:00 What's new in Oracle Database Application Development - Tom Kyte
10:25 - 11:10 Sane SAN 2010 - James Morle
11:20 - 12:20 Oracle 'Hidden' Features: Features You Probably Didn't Know About and are FREE - Graham Wood
11:45 - 12:45 Roundtable: Exadata Database Machine - Joel Goodman
13:20 - 14:05 Practical Use of Active Session History - David Kurtz
14:15 - 15:15 Performance Stories from Exadata Migrations - Tanel Poder
15:40 - 16:40 Oracle Optimizer - Top Tips to get Optimal SQL Execution all the Time - Maria Colgan
16:50 - 17:35 Customising ASH and AWR: Beyond ashrpt.sql and awrrpt.sql - Daniel Fink
17:50 - 18:50 Keynote Presentation - David Callaghan plus Movember Shave-off
18:50 - 20:00 Monday evening networking
08:45 - 09:30 Keynote presentation - Graham Wood
09:40 - 10:40 Stabilising Statistics - Tony Hasler
09:40 - 10:40 What else can I do with AWR/ASH data - Tom Kyte
11:05 - 12:05 The Optimizer – what you can expect after upgrading to Oracle Database 11g - Maria Colgan
12:15 - 13:00 Time Series Analysis Techniques for Statspack or AWR - Wolfgang Breitling
14:05 - 14:50 Co-operating with the database - Jonathan Lewis
15:00 - 15:45 Analysis of Oracle ASM Failability - Alex Gorbachev
16:10 - 17:10 Using Constraints to Improve Performance - Joze Senegacnik
17:20 - 18:05 How to Build a System that Doesn't Scale - Graham Wood
18:05 - 20:00 Tuesday evening networking - Sponsored by OTN
As far as actual speaking engagements go, next up will be Hotsos Symposium 2011 - 6th-10th March 2011. Hotsos is one of my favourite conferences so I'm looking forward to that. It also gives me a chance to put the subject of statistics on partitioned objects to bed, for a while at least. (Oh, and expect a moustache to appear on that page at some point!) The more the blog posts have expanded into a sprawling mess and the more people I've pointed towards those posts, the more I've realised that I should write a white paper on the subject to tidy up the mess and hopefully act as a more useful reference. I'll then base the presentation on the paper. That's the plan anyway.
Then I have a couple of other overseas events. First is the OUG Ireland Conference on 30th March 2011. I don't know too much about this one yet, other than that I've offered to give a presentation on Real Time SQL Monitoring that I've done a couple of times at my current client site. I've presented in the past at an OUG Ireland event, but it was more than a few years ago!
Then it's time to go back on a ferry! The last such conference I attended was a blast so I'm expecting to have a great time at the Norwegian Oracle User Group (OUGN) Spring Conference - 7th-9th April 2011. Here's Mark Rittman's post about an earlier conference.
Lots to look forward to, then, not least the removal of my facial hair.
Thank you to all those who attended my seminars in Bucharest, and thank you for spending two days of your time with me. I hope it has been worthwhile. I sincerely apologize for the delayed start on the first day. It was entirely my fault - I messed up the time difference calculations.
Apart from the fact that the “Rows” figure for the FILTER operation at line 6 is blank, what’s the obvious error in this extract from an execution plan:
-------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  |
-------------------------------------------------------------------------
...
|   5 |     NESTED LOOPS                    |                 |  3864 |
|   6 |      FILTER                         |                 |       |
|   7 |       HASH JOIN OUTER               |                 |  3864 |
|   8 |        HASH JOIN OUTER              |                 |   282K|
|   9 |         TABLE ACCESS BY INDEX ROWID | PRODUCT         |   282K|
|  10 |          INDEX RANGE SCAN           | PRD_SUPP_I1     |   282K|
|  11 |         VIEW                        |                 |  2293K|
|  12 |          HASH GROUP BY              |                 |  2293K|
|  13 |           PARTITION LIST SINGLE     |                 |  5790K|
|  14 |            TABLE ACCESS FULL        | PRODUCT_PRICING |  5790K|
|  15 |        VIEW                         |                 |  2307K|
|  16 |         HASH GROUP BY               |                 |  2307K|
|  17 |          PARTITION LIST SINGLE      |                 |  5703K|
|  18 |           TABLE ACCESS FULL         | PRODUCT_PRICING |  5703K|
...
-------------------------------------------------------------------------
Once again I am reminded of two things – it’s important to be precise in your use of language if you want people to understand the question; and you can see a lot if you look carefully.
If you start to think about the activity that the plan represents, and the SQL that might have produced it, there are some ideas you might get about re-writing the query to be more efficient – but the point I was trying to make is that there is clearly an error in the content that the optimizer is displaying. The error suggests either that the optimizer has done the wrong arithmetic, or that the output is not a correct copy of the results produced by the optimizer.
The answer I was expecting comes from line 7. Stripping the error back to the bare minimum we see this:
--------------------------------------------------------------------------
| Id  | Operation                       | Name | Rows  |
--------------------------------------------------------------------------
...
|   7 |  HASH JOIN OUTER                |      |  3864 |
|   8 |   rowsource 1 (HASH JOIN OUTER) |      |   282K|
|  15 |   rowsource 2 (VIEW)            |      |  2307K|
...
--------------------------------------------------------------------------
As Milo points out in comment 3, in an outer join the result set cannot have fewer rows than the “preserved” rowsource (which, in this case, is the result set from line 8). I mentioned the fact that the “Rows” figure for the FILTER operation at line 6 was blank – it’s just possible that the optimizer has overwritten the figure in line 7 with the figure that should have been in line 6; there are cases where a FILTER operation and the operation you would normally think of as its first child are combined, so it’s possible that a little storage glitch has appeared in some cases where the combination rule doesn’t apply.
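Milo's point can be illustrated with a skeleton query (the table and column names here are invented for illustration):

```sql
-- t1 plays the role of the preserved rowsource (line 8 in the plan).
-- Every row of t1 appears in the output, whether or not it finds a
-- match in t2, so the outer join cannot return fewer rows than t1
-- supplies.  An estimate of 3,864 rows from a preserved input
-- estimated at 282K rows is therefore internally inconsistent.
select t1.id, t2.val
from   t1
left outer join t2
on     t2.id = t1.id
;
```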
Someone did mention the FILTER operation and pointed out that it wasn’t filtering any data. The commonest forms of FILTER operation essentially check that some predicate is true for each row in their first child rowsource – and it is possible for someone to write code that has a filter that doesn’t eliminate any rows. In fact, though, this plan is probably saying: “line 7 will produce 282K rows, and the filter at line 6 will reduce that to 3,864.” (There’s also a comment about a “group by” not reducing the size of the rowsource – the comment was caused by a parallax error, but it is possible, of course, for Oracle to decide that a “group by” is going to produce an output with just as many rows as the input.)
Sean Molloy’s opening comment asks how you can get two different estimates from the same tablescan – and follows up with one answer, which is that since we are looking at PARTITION LIST SINGLE the two tablescans could be of different partitions. But it would only be a puzzle if there were no predicates on the tablescans, and, as Pavol points out in comment 7, there are no “star flags” in the Id column to suggest the presence of any predicates – indeed there are no stars anywhere – yet there must be some predicates in the plan, since you can’t do a hash join, index range scan, or filter without one. As Timur points out, you don’t get the predicate section in the report from dbms_xplan.display_awr(), so you don’t get the stars.
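By way of contrast, here's a sketch of the call that does report the predicate section (and hence the stars) while the cursor is still in the library cache – the sql_id shown is a made-up placeholder:

```sql
-- display_cursor() includes the "Predicate Information" section and
-- the '*' flags against the Id column; display_awr() cannot, because
-- AWR does not capture the predicate information with the plan.
select *
from   table(
         dbms_xplan.display_cursor(
           sql_id          => '0gzbmfbusy5t1',   -- hypothetical sql_id
           cursor_child_no => 0,
           format          => 'typical'
         )
       )
;
```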
Speaking of missing information, Dave Costa in comment 4 suggests that the user made an error in choosing which bits of the plan to copy. I did the choosing – how could it possibly be wrong! Given the number of times I’ve said “you must include the predicate section”, why isn’t it there? (Answer: it’s display_awr.) In fact lines 5 and 6 were redundant as far as the “obvious problem” was concerned – but I thought that the blank “Rows” figure and the reappearance of the same cardinality might be a helpful visual clue.
Several people have commented on the rationale for code that does two outer join aggregations on the same table. It does look a little unusual, but it’s not possible to come to any conclusion about whether it’s a good thing or a bad thing without knowing the data and the intent of the SQL. For example the intent could be something like:
select product_name, min(offer_price), max(offer_price) ... from
(You can assume that in the actual code, the min() and max() would be hidden inside a couple of inline views)
In this case code which visits the product_pricing table twice might be a very good idea – because there is a special “index (min/max) range scan” optimisation that works very well if you (a) have the right indexes in place, (b) use a nested loop, and (c) ask for min, or max, but not both. Perhaps our problem is that the optimizer is doing a hash join when it should be doing a nested loop.
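As a sketch of that first possibility – the column names beyond those visible in the plan are my invention – the query might be written with two scalar subqueries, each eligible for the “index (min/max) range scan” provided product_pricing is indexed on (product_id, offer_price):

```sql
-- Each scalar subquery asks for only min() OR max(), so each probe
-- can use the index (min/max) range scan rather than aggregating the
-- whole partition.  Assumed index: (product_id, offer_price).
select p.product_name,
       (select min(pp.offer_price)
          from product_pricing pp
         where pp.product_id = p.product_id)  min_price,
       (select max(pp.offer_price)
          from product_pricing pp
         where pp.product_id = p.product_id)  max_price
from   product p
;
```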
Another possibility is that we have something like:
select product_name, min(gbp_offer_price), min(usd_offer_price) ... from
(Again, you can assume the min() and max() would be hidden inside inline views, and the different column names would be derived names rather than coming from two separate columns in a table.)
Notice that the plan shows list partitioning – maybe we have one currency per partition, and we’ve written the query to maximise the benefits of partition elimination (as well as leaving the data nicely normalised, thus maximising efficiency of maintenance).
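Purely as an illustration of that idea (all the names and currency values below are invented), the underlying table might look something like this:

```sql
-- One list partition per currency: a predicate such as
-- "currency = 'GBP'" allows the optimizer to use PARTITION LIST
-- SINGLE, as at lines 13 and 17 of the plan.
create table product_pricing (
  product_id   number,
  currency     varchar2(3),
  offer_price  number
)
partition by list (currency) (
  partition p_gbp values ('GBP'),
  partition p_usd values ('USD')
);
```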
Bottom line on the SQL – in this case I wasn’t asking people to guess what was wrong with the code; but it’s very interesting to see how many different topics of thought can come out from a starting point of a few lines extracted from a plan.
Seems like December came quickly this year… and UKOUG is only one week from Monday!
This will be my first year attending UKOUG – and I will be giving a new presentation called “Large Scale ASM Adoptions and Lessons Learned.” I was personally involved in a very large ASM adoption project and I’m also talking to a few other acquaintances with similar experiences. I will be summarizing our collective stories and lessons learned in this presentation. My session will be Monday morning (Nov 29) at 10:25 am – please stop by!
However there’s another way you are invited to participate in my UKOUG session. Do you know anybody who has been involved in an ASM adoption? Have you been involved in one? I have created a web survey with the questions that I’m asking my acquaintances about their experiences with ASM.
You can find the survey here: http://www.ardentperf.com/asm-survey
If you can find 20 or 30 minutes then please share your own experiences and feedback! I would like to gather as many stories as possible and summarize them all in a single presentation.
Do me a favor and help spread the word to anyone who’s been involved in an ASM adoption! (Especially at large companies!)
Also, try to send me your feedback by next Tuesday so that I can include it when I finalize my slides on Wednesday. And – of course – I will send you a link to the slides after UKOUG. Thanks!
November 18, 2010 I just browsed through Metalink (MOS) to see which operating system platforms are currently supported for Oracle Database 10.2.0.5. I noticed that the 10.2.0.5 patchset for 64 bit Linux was updated yesterday, and more digging revealed that all of the 10.2.0.5 patches for the other supported operating systems were also recently updated. Take [...]
In part I of my recent blog series on Linux hugepages and modern Oracle releases I closed the post by saying that future installments would materialize if I found any pitfalls. I don’t like to blog about bugs, but in cases where there is little material on the matter provided elsewhere I think it adds value. First, however, I’d like to offer links to parts I and II in the series:
The pitfall I’d like to bring to readers’ attention is a situation that can arise in the case where the Oracle Database 11g Release 2 (11.2.0.2) parameter use_large_pages is set to “only”, thus forcing the instance either to allocate all of its shared memory from the hugepages pool or to fail to boot. As I pointed out in parts I and II this is a great feature. However, after an instance is booted it stands to reason that other processes (e.g., Oracle instances) may in fact use hugepages, thus drawing down the number of free hugepages. In fact, it stands to reason that other uses of hugepages could totally deplete the hugepages pool.
So what happens to a running instance that successfully allocated its shared memory from the hugepages pool and hugepages are later externally drawn down? The answer is nothing. An instance can plod along just fine after instance startup even if hugepages continue to get drawn down to the point of total depletion. But is that the end of the story?
What Goes Up, Must (be able to) Come Down
OK, so for anyone that finds themselves in a situation where an instance is up and happy but HugePages_Free is zero the following is what to expect:
$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 29 17:32:32 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> HOST grep -i huge /proc/meminfo
HugePages_Total:    4663
HugePages_Free:        0
HugePages_Rsvd:       10
Hugepagesize:       2048 kB

SQL> shutdown immediate
ORA-01034: ORACLE not available
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
Additional information: 1
Additional information: 6422533
SQL>
Pay particular attention to the fact that sqlplus is telling us that it is attached to an idle instance! I assure you, this is erroneous. The instance is indeed up.
Yes, this is bug 10159556 (I filed it, for what it’s worth). The solution is to have ample hugepages as opposed to precisely enough. Note that in another shell a privileged user can dynamically allocate more hugepages (even a single hugepage) and the instance will then be able to be shut down cleanly. As an aside, an instance in this situation can be shut down with abort. I don’t mean to insinuate that this is some sort of zombie instance that will not go away.
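If you find yourself in this state, a quick way to confirm the numbers before growing the pool is sketched below. The awk parsing runs here against a captured copy of the /proc/meminfo output from the session above; vm.nr_hugepages is the standard (root-only) Linux sysctl for resizing the pool.

```shell
# Sample values from the session above; on a live system read the
# real file instead:  grep -i huge /proc/meminfo
meminfo='HugePages_Total:    4663
HugePages_Free:        0
HugePages_Rsvd:       10
Hugepagesize:       2048 kB'

# Pull out the free and total page counts (field 2 of each line).
free=$(printf '%s\n' "$meminfo" | awk '/HugePages_Free/ {print $2}')
total=$(printf '%s\n' "$meminfo" | awk '/HugePages_Total/ {print $2}')
echo "free=$free total=$total"

# As root, adding even one page to the pool is enough to let the
# instance shut down cleanly (the allocation may fail if memory is
# badly fragmented):
#   sysctl -w vm.nr_hugepages=$((total + 1))
```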
Filed under: oracle