Oakies Blog Aggregator

Statistics on Partitioned Tables - Part 6b - COPY_TABLE_STATS

Sigh ... these posts have become a bit of a mess.

There are so many different bits and pieces I want to illustrate and I've been trying to squeeze them in around normal work. Worse still, because I keep leaving them then coming back to them and re-running tests it's easy to lose track of where I was, despite using more or less the same test scripts each time (any new scripts tend to be sections of the main test script). I suspect my decision to only pull out the more interesting parts of the output has contributed to the difficulties too, but with around 18.5 thousand lines of output, I decided that was more or less essential.

It has got so bad that I noticed the other day that there were a couple of significant errors in the last post which are easy to miss when you're looking at detailed output and must be even less obvious if you're looking at it for the first time.

The fact that no-one said much about these errors reinforces my argument with several bloggers that fewer people read and truly absorb the more technical stuff than they think. They just pick up the messages they need and take more on trust than you might imagine!

So what were the errors? Possibly more important, why did they appear? The mistakes are often as instructive as the successes.

Error 1

This is the tail-end of the subpartition stats at the end of part 5

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS
------------------------------ ------------------------------ --- -------------------- ----------
TEST_TAB1                      P_20100209_GROT                NO  28-FEB-2010 21:41:47          3
TEST_TAB1                      P_20100209_HALO                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_JUNE                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_OTHERS              NO  28-FEB-2010 21:41:50          3

Compared to the supposedly same section produced at the start of part 6a :-

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS           
------------------------------ ------------------------------ --- -------------------- ----------           

TEST_TAB1                      P_20100209_GROT                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_HALO                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_JUNE                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_OTHERS              YES 28-MAR-2010 15:38:33          3           

Spot the difference? Forget the timestamps for now, although I hope it's clear that :-

- The stats were gathered at different times
- I really need to get my blogging act together ;-)

Instead, notice that GLOBAL_STATS is set to NO in part 5 and YES in part 6a. How could that happen?

The first thing to note is that it's probably not as significant as it first appears because what does it mean for Subpartition stats to be Global when there are no underlying sub-components of a Subpartition? In fact I'd argue that all Subpartition stats are Global by implication but I may be missing something important. (Comments welcome ...)

Instead I'll focus on how you can manage to get the two different results. The output from part 5 was the result of gathering statistics on a load table (LOAD_TAB1) and then exchanging it with the relevant subpartition of TEST_TAB1 (as shown in part 5). When you do that, the GLOBAL_STATS flag will be set to NO.

If I take the alternate approach of exchanging LOAD_TAB1 with the subpartition of TEST_TAB1 and only then gathering statistics on the subpartition of TEST_TAB1 then GLOBAL_STATS will be YES for that subpartition. That's the most obvious reason I can think of for the discrepancy but I can't be certain because the log files that I took the output from are history now.
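
The two orderings look roughly like this (a sketch only, using the object names from this test; the exact parameters in my original scripts may have differed):

```sql
-- (a) gather-then-exchange: stats arrive via the exchange, GLOBAL_STATS = NO
exec dbms_stats.gather_table_stats(ownname => 'TESTUSER', tabname => 'LOAD_TAB1');
alter table test_tab1 exchange subpartition p_20100209_grot with table load_tab1;

-- (b) exchange-then-gather: stats gathered on the subpartition itself, GLOBAL_STATS = YES
alter table test_tab1 exchange subpartition p_20100209_grot with table load_tab1;
exec dbms_stats.gather_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1',
                                   partname => 'P_20100209_GROT',
                                   granularity => 'SUBPARTITION');
```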

At some point, when ripping parts of a master script out to run in isolation for each blog post, I've changed the stats gathering approach from gather-then-exchange to exchange-then-gather. The output shown in part 5 was correct so I've updated part 6a to reflect that and added a note.

Error 2

I think this one is worse, because it's down to me mixing up some pastes: the original results looked wrong when they were, in fact, right. It's extremely rare for me to edit results and I regret doing it here. Whenever you start tampering with the evidence, you're asking for trouble!

When I'd been pasting in the final example output, showing that subpartition stats had been copied for the new P_20100210_GROT subpartition, I saw another example when the subpartition stats hadn't been copied, so I decided I was mistaken and fixed the post. But the original was correct so I've put it back the way it should be and added a further note.

If you weren't confused already, you have my permission to be utterly confused now ;-)


On a more serious note, let's recap what I'm trying to do here and what does and doesn't work.

I've added a new partition for the 20100210 reporting date and I tried to copy the partition and subpartition stats from the previous partition (P_20100209) to the new partition. I attempted a Partition-level copy

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                                      srcpartname => 'P_20100209', dstpartname => 'P_20100210');

PL/SQL procedure successfully completed.

in the expectation that DBMS_STATS would copy the partition stats as well as all of the subpartition stats from the P_20100209 partition. I'll repeat how the stats looked here ...

SQL> select  table_name, global_stats, last_analyzed, num_rows
  2  from dba_tables
  3  where table_name='TEST_TAB1'
  4  and owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     GLO LAST_ANALYZED          NUM_ROWS                                       
------------------------------ --- -------------------- ----------                                       
TEST_TAB1                      NO                                                                        

SQL> select  table_name, partition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_partitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     PARTITION_NAME                 GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131                     NO                                         
TEST_TAB1                      P_20100201                     NO                                         
TEST_TAB1                      P_20100202                     NO                                         
TEST_TAB1                      P_20100203                     NO                                         
TEST_TAB1                      P_20100204                     NO                                         
TEST_TAB1                      P_20100205                     NO                                         
TEST_TAB1                      P_20100206                     NO                                         
TEST_TAB1                      P_20100207                     NO                                         
TEST_TAB1                      P_20100209                     NO  28-MAR-2010 15:38:38         12        
TEST_TAB1                      P_20100210                     NO                                         

10 rows selected.

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131_GROT                NO                                         
TEST_TAB1                      P_20100131_HALO                NO                                         


TEST_TAB1                      P_20100209_GROT                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_HALO                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_JUNE                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_OTHERS              NO  28-MAR-2010 15:38:33          3        
TEST_TAB1                      P_20100210_GROT                NO  28-MAR-2010 15:38:33          3    
TEST_TAB1                      P_20100210_HALO                NO                                         
TEST_TAB1                      P_20100210_JUNE                NO                                         
TEST_TAB1                      P_20100210_OTHERS              NO                                         

40 rows selected.

So here's where we are

- No sign of partition statistics for P_20100210, despite the P_20100209 'source' partition having valid stats.
- The subpartition stats have been copied from P_20100209_GROT to P_20100210_GROT.
- The subpartition stats have not been copied for the other three P_20100210 subpartitions.

Weird, right? I've checked this over and over and I'm pretty sure I'm right, but decided to upload the entire script and output here in case others can spot some mistake I'm making.

Getting back down to earth, though, this isn't the end of the world. It just means that for our particular stats collection strategy of loading data into load tables, exchanging them with subpartitions and then copying earlier stats, we just need to make sure we're working at the subpartition level all the time and that's what I'll look at next.
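
DBMS_STATS.COPY_TABLE_STATS accepts subpartition names in srcpartname/dstpartname, so a subpartition-level copy might look something like this (a sketch only, one call per subpartition; not the tested output shown above):

```sql
-- Copy stats subpartition-by-subpartition rather than at the partition level
exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1',
                                 srcpartname => 'P_20100209_GROT',
                                 dstpartname => 'P_20100210_GROT');
```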

Finally, I'll re-emphasise that this is not the only strategy and it's fair to say it's flushing out some unusual effects that you might never see if you work primarily with Partition-level stats!

Statistics on Partitioned Tables - Contents

When Jonathan Lewis decided it was time to post a list of the Partition Stats posts on his blog and Noons suggested I made them easier to track down, I listened. So this post will link to the others and, at least in the short term, I've also included links in the side-bar on the right over there.

- Default options - GLOBAL AND PARTITION
- Estimated Global Stats
- Stats Aggregation Problems I
- Stats Aggregation Problems II
- Minimal Stats Aggregation

To be continued ....

P.S. It was a lot easier doing this than actually finishing off Part 6 and I'm insanely busy at the moment.
P.P.S. My ultimate intention is to re-write the blog posts as a more considered, easier-to-read article but who knows when that might happen?

Graphing AWR Data in Excel

I often use data collected by the Oracle Automatic Workload Repository (AWR) to help diagnose performance problems. However, I often work on performance problems with application teams rather than the DBAs, and it is surprising how often I don't have access to Oracle Enterprise Manager.
Somebody might say that the system was slow at a particular time. I want to get an overview of the database at that time, and I might also want to compare it to another time when the system wasn't slow. Later I may generate AWR reports using particular pairs of snapshots, but I need something to direct me to when an issue occurred, and hence which snapshots to compare.
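
(For reference, once an interesting pair of snapshots has been identified, the standard AWR report can be generated from SQL*Plus; a sketch, assuming a typical installation:)

```sql
-- Prompts for report type, number of days of snapshots, begin/end snap IDs
@?/rdbms/admin/awrrpt.sql
```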
When dealing with performance problems the question is nearly always the same. "How long did I spend doing what?". Amongst lots of metrics, AWR collects snapshots of how much time system-wide has been spent waiting on each wait event.
To give me an overview of that data, I have created an Excel spreadsheet that will query the AWR repository, retrieve up to a week of that data into an Excel pivot table, and then graph the top wait events. Click on the thumbnail on the left to see an example.
For me, this approach has a number of advantages:
  • First: don't drown in numbers. The eye is very good at picking out patterns and irregularities. You can then drill into the numbers behind the chart.
  • The chart provides a quick overview of the behaviour of the database. You can quickly see if there is anything to examine in more detail, or whether the database is exhibiting its normal characteristics.
  • The AWR data is extracted into the spreadsheet, so I can keep a copy of the spreadsheet for reference. I generally like to collect AWR snapshots every 15 minutes. One week of data, at that frequency, turns into about 6Mb of spreadsheet.
  • Excel charts can easily be put into documents, presentations or e-mails. Excel is very flexible, and it is easy to format the chart to show only what is important.
  • It's free (that is to say that you won't need to go and buy anything else). All you need is Excel and an Oracle client.
Over time I have produced a variety of spreadsheets that collect all sorts of metrics from the Oracle database and others that collect specific metrics from PeopleSoft applications.

To be fair, AWR's forerunner, Statspack, collects a lot of similar data; the essential difference is that AWR is a licensed option. This technique could also be used to extract data from the Statspack repository, but most of my customers are licensed for AWR.

You can download a sample Excel workbook and query from my website. Go to one of the sheets with data and 'Refresh Data'. The first time, you will probably be prompted to select an ODBC source. You will also be prompted for the login credentials.

In a subsequent blog posting I will discuss a real example of where this technique helped me to resolve a problem.

It is fairly straightforward to create spreadsheets that query databases, but here are some links that should help to get you going if you are not familiar with the process.

  • From the Microsoft website: 'Use Microsoft Query to retrieve external data'. This is based on Excel 2007, but it explains how Excel calls MSQuery, which then calls the ODBC driver. However, I recommend that you do not use the Query Wizard. I prefer to get the query working in SQL*Plus and then copy it into MSQuery.
  • Querying External Data in Excel. This is a straightforward recipe for using MSQuery.
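
To give a flavour of the SQL involved, something like the following returns per-snapshot wait-time deltas for the last week (a sketch only, not the exact query shipped in the workbook; AWR stores cumulative values, so the previous snapshot is subtracted, and querying DBA_HIST views requires the Diagnostics Pack licence):

```sql
select s.snap_id, s.end_interval_time, e.event_name,
       e.time_waited_micro
       - lag(e.time_waited_micro)
         over (partition by e.instance_number, e.event_name
               order by s.snap_id) as time_waited_delta
from   dba_hist_system_event e
join   dba_hist_snapshot     s
on     s.snap_id         = e.snap_id
and    s.dbid            = e.dbid
and    s.instance_number = e.instance_number
where  s.end_interval_time > sysdate - 7
order  by s.snap_id, e.event_name;
```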

Update 16.6.2010: I have released a new version with an adjustment to the query used in the spreadsheet.  AWR snapshots happen at slightly different times on different RAC nodes, although they have the same SNAP_ID.  This was causing multiple rows for the same snapshot in the pivot tables, and so the charts did not show correctly aggregated wait events across RAC nodes.

Time Matters: Throughput vs. Response Time - Part 2

In one of my Hotsos Symposium 2010 posts I mentioned that Peter Stalder had plugged some test results from an earlier blog post into Neil Gunther's Universal Scalability Law to see how well the model applied. Peter's posted his slides now and I've added the URL to the comments thread of the original post so people can see another perspective.

He also pointed out a recent blog post discussing similar subjects at Neil Gunther's blog although I must admit I've only had a quick glance at it because I'm up to my eyeballs in mail at the moment :-(

Oracle Latch Contention Troubleshooting

I wrote a latch contention troubleshooting article for the IOUG Select journal last year (it was published earlier this year). I have uploaded it to tech.E2SN too, and I recommend you read it if you want to become systematic about latch contention troubleshooting:

I’m working on getting commenting & feedback working at the tech.E2SN site too, but for now you can comment here at this blog entry…



Words fail me

I just stumbled across this fine example of the art of writing news articles when you clearly don’t understand the subject. The article has a date of 2 days ago on the front page, though curiously a September 2009 date on the article itself. The article purports to discuss the Omotion feature of Oracle RAC One Node. My [...]

Session Snapper v3.11 – bugfix update – now ASH report works properly on Oracle 10.1 too

This is an updated version of Snapper, which works ok on Oracle 10.1 now as well (9i support is coming some time in the future :)

Thanks to Jamey Johnston for sending me the fix info (and saving me some time that way :)

So if you have some problems with Snapper on Oracle 10.1, please make sure you have the latest version v3.11, which you can get from here:

The output below is from Snapper 3.11 on Oracle, the ASH columns in the bottom part of the output are displayed correctly now:

SQL> @snapper ash,ash1,ash2,ash3,stats,gather=t 15 1 all
Sampling with interval 15 seconds, 1 times...

-- Session Snapper v3.11 by Tanel Poder @ E2SN

    SID, USERNAME  , TYPE, STATISTIC                               ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
     52, SYSTEM    , TIME, PL/SQL execution elapsed time           ,         53968,      3.6ms,      .4%, |          |
     52, SYSTEM    , TIME, DB CPU                                  ,         10000,   666.67us,      .1%, |          |
     52, SYSTEM    , TIME, sql execute elapsed time                ,        118225,     7.88ms,      .8%, |@         |
     52, SYSTEM    , TIME, DB time                                 ,        118632,     7.91ms,      .8%, |@         |
     54, SYSTEM    , TIME, hard parse elapsed time                 ,        289905,    19.33ms,     1.9%, |@         |
     54, SYSTEM    , TIME, parse time elapsed                      ,        528034,     35.2ms,     3.5%, |@         |
     54, SYSTEM    , TIME, PL/SQL execution elapsed time           ,       5010579,   334.04ms,    33.4%, |@@@@      |
     54, SYSTEM    , TIME, DB CPU                                  ,      10660000,   710.67ms,    71.1%, |@@@@@@@@  |
     54, SYSTEM    , TIME, sql execute elapsed time                ,      12920952,    861.4ms,    86.1%, |@@@@@@@@@ |
     54, SYSTEM    , TIME, DB time                                 ,      12937606,   862.51ms,    86.3%, |@@@@@@@@@ |
     54, SYSTEM    , TIME, sequence load elapsed time              ,          1079,    71.93us,      .0%, |          |
     56, (MMNL)    , TIME, background cpu time                     ,           940,    62.67us,      .0%, |          |
     56, (MMNL)    , TIME, background elapsed time                 ,           940,    62.67us,      .0%, |          |
     58, (MMON)    , TIME, background cpu time                     ,           158,    10.53us,      .0%, |          |
     58, (MMON)    , TIME, background elapsed time                 ,           158,    10.53us,      .0%, |          |
     64, (RBAL)    , TIME, background cpu time                     ,            86,     5.73us,      .0%, |          |
     64, (RBAL)    , TIME, background elapsed time                 ,            86,     5.73us,      .0%, |          |
     68, (CJQ0)    , TIME, background cpu time                     ,           820,    54.67us,      .0%, |          |
     68, (CJQ0)    , TIME, background elapsed time                 ,           820,    54.67us,      .0%, |          |
     70, (SMON)    , TIME, background cpu time                     ,           141,      9.4us,      .0%, |          |
     70, (SMON)    , TIME, background elapsed time                 ,           141,      9.4us,      .0%, |          |
     71, (CKPT)    , TIME, background cpu time                     ,         14515,   967.67us,      .1%, |          |
     71, (CKPT)    , TIME, background elapsed time                 ,         14515,   967.67us,      .1%, |          |
     72, (LGWR)    , TIME, background cpu time                     ,       1530000,      102ms,    10.2%, |@         |
     72, (LGWR)    , TIME, background elapsed time                 ,       1954778,   130.32ms,    13.0%, |@@        |
     73, (DBW0)    , TIME, background cpu time                     ,         10000,   666.67us,      .1%, |          |
     73, (DBW0)    , TIME, background elapsed time                 ,        268787,    17.92ms,     1.8%, |@         |
     74, (MMAN)    , TIME, background cpu time                     ,           141,      9.4us,      .0%, |          |
     74, (MMAN)    , TIME, background elapsed time                 ,           141,      9.4us,      .0%, |          |
     75, (PMON)    , TIME, background cpu time                     ,          1636,   109.07us,      .0%, |          |
     75, (PMON)    , TIME, background elapsed time                 ,          1636,   109.07us,      .0%, |          |
--  End of Stats snap 1, end=2010-03-27 16:37:13, seconds=15

Active% | SQL_ID          | EVENT                     | WAIT_CLASS
    61% | 6d0z2j01c8ytc   | ON CPU                    | ON CPU
    22% |                 | log file parallel write   | System I/O
     7% | 6d0z2j01c8ytc   | db file sequential read   | User I/O
     3% | 0zkt25f36kbzd   | ON CPU                    | ON CPU
     3% |                 | db file parallel write    | System I/O
     2% | g1xapjmt4vm5c   | ON CPU                    | ON CPU
     2% |                 | ON CPU                    | ON CPU
     2% | gaxwgwd72b3pn   | ON CPU                    | ON CPU
     1% | 4ftbahd08ab2a   | ON CPU                    | ON CPU
     1% | c69wrxcndxuzw   | ON CPU                    | ON CPU

Active% | EVENT                     | WAIT_CLASS
    76% | ON CPU                    | ON CPU
    22% | log file parallel write   | System I/O
     9% | db file sequential read   | User I/O
     3% | db file parallel write    | System I/O
     3% | db file scattered read    | User I/O
     1% | direct path write temp    | User I/O

Active% |    SID | SQL_ID
    69% |     54 | 6d0z2j01c8ytc
    23% |     72 |
     3% |     54 | 0zkt25f36kbzd
     3% |     73 |
     3% |     54 | 8qs4shjvhk2w4
     2% |     54 | g1xapjmt4vm5c
     2% |     54 | gaxwgwd72b3pn
     1% |     54 | 3w6304ztrww4h
     1% |     54 | b86h705svfmjz
     1% |     54 | drppqann6dwfa

    69% | N/A        | N/A        | 6d0z2j01c8ytc
    27% | N/A        | N/A        |
     3% | N/A        | N/A        | 0zkt25f36kbzd
     3% | N/A        | N/A        | 8qs4shjvhk2w4
     2% | N/A        | N/A        | g1xapjmt4vm5c
     2% | N/A        | N/A        | gaxwgwd72b3pn
     1% | N/A        | N/A        | 3w6304ztrww4h
     1% | N/A        | N/A        | b86h705svfmjz
     1% | N/A        | N/A        | drppqann6dwfa
     1% | N/A        | N/A        | c69wrxcndxuzw

--  End of ASH snap 1, end=2010-03-27 16:37:13, seconds=15, samples_taken=96

PL/SQL procedure successfully completed.



“Ideas build off ideas”… making use of Social Networking sites


In this post (which has been looong overdue :p ) I’ll show you how you could make use of Social Networking Sites to make knowledge sharing and meeting people more fun. As you can see from the illustration above, I’ve made use of the following sites/services to build a dynamic and multidimensional means of communication:

  • Facebook
  • Twitter
  • FeedBurner RSS
  • FeedBurner Mail Subscription
  • Google Sites
  • LinkedIn

The story behind it…

I have been a DBA for 4+ years, and started with zero knowledge about Oracle. I joined SQL*Wizard, which has a great training program and exposed me to a lot of difficult DBA situations. After a while I got my OCA and OCP, then got interested in Linux and got my RHCT… well, hopefully RHCE soon. All of my accumulated knowledge came from three things. First: lots of reading/research. Second: doing test cases….. Third:…..guess what….

it’s the community.

And speaking of community, we have our local users group here, called the Philippine Oracle Users Group (POUG), which has been inactive for a long time (speaking from my own experience). I joined the POUG Yahoo Group back in November 2007, and ever since I joined there was no activity… Months passed, and I gained more knowledge and experience as a DBA, joined other forums, participated in online communities and mailing lists, and even followed other users group activities. Then one day, I was checking on some of the Yahoo Groups I’ve joined, and a couple of mouse clicks brought me back to the POUG Yahoo Group, where I noticed the last message was still from 2007…. Yes, I felt bad…. I think the community spirit among technology groups in the US and other countries seems to be stronger… I admire those guys, and I want to have that here, locally. It’s just difficult to motivate people to share ideas and help each other.

BTW the latest news about POUG… it is closing down as a corporation. There could be a number of reasons and one of them could be the inactive status. Whatever are the reasons for closing it, I am sure I’m not alone wanting to have a local community…

It got me thinking, why not do some social experiment? If I could pool a number of Oracle users (DBAs/developers) and feed them updates about the latest in Oracle, plus helpful articles, links, blogs, videos, or any technical info that would be useful in their everyday tasks, that would stir up curiosity, and I might find like-minded people who could help me build (or revive) the local users group/community. I’d like to have an environment similar to Oracle-L or China’s ITPUB, where you can find lots of good stuff, good questions, good replies, no one ranting about something (though this can’t be completely avoided), somebody sharing a problem and then another guy (expert or newbie) sharing his solution, and nobody thinking of competition (company A vs. company B, or consultant A or B vs. C)…

just pure technology…(but yes, preferably there should be real interaction)

All of these inspired me to create a Facebook page, which I called “Oracle Users – Philippines” (sorry, I can’t think of a better name…)
