Oakies Blog Aggregator

You Buy a NUMA System, Oracle Says Disable NUMA! What Gives? Part III.

By The Way, How Many NUMA Nodes Is Your AMD Opteron 6100-Based Server?
In my ongoing series about Oracle Database 11g configuration for NUMA systems, I've spoken of the enabling parameter and how it changed from _enable_NUMA_optimization (11.1) to _enable_NUMA_support (11.2). For convenience's sake, I'll point to the other two posts in the series for folks [...]

WLS + Oracle benchmark – 2

Another press release from Oracle on the same topic: a benchmark of WLS 10.3.3 (not released yet) + Oracle on a Dell PowerEdge R910 Server with four 8-core X7560 beasts (announced today by Intel). The benchmark was done by Oracle itself (the previous ones were conducted by Cisco & HP). Results and FDA are here. DB & WLS configurations are [...]

heap block compress

In a recent note showing how an index could become much larger than the underlying table because of the different ways that Oracle handles deletion from table and index blocks, I pointed out that Oracle would have to pick a moment to replace rows marked for deletion with a stub, or place-holder, showing where the [...]

Unique Bitmap Indexes Part II (You Can’t Do That)

As discussed in the previous post, a Bitmap index on a unique column will be larger than a corresponding Btree index due to the additional overheads associated with each index entry (such as the additional rowid, the additional column length bytes and the bitmap column itself). Oracle therefore attempts to protect you from explicitly creating such [...]

The New OakTable Website is (almost) Alive

Thanks to the almost endless energy of James Morle, the OakTable Network has had a web presence for several years. Lately we decided that it had to be adjusted to better fit the current needs and style of the internet. Thanks to Kurt van Meerbeeck's (aka the "dude" guy) incredible wizardry, two weeks later we have a new placeholder for our musings and rants…

Although we still have to update the DNS record to the new location, if you want you can get a (pre)view here:

We have updated, or are in the process of updating, the site with the following features and/or content:

  • Upcoming Events
  • Book Reviews
  • Articles
  • Contributed Files
  • Media, among others YouTube clips
  • Twitter feeds, trends/search
  • Polls/Voting
  • OakTable Member Blog posts (and/or merged from the original posts)
  • A Tag Cloud
  • OakTable Blog Aggregator

Let us hear what you think.

On behalf of the OakTable Network members.


Philosophy – 9

There is an old joke about an engineer, a mathematician, and a philosopher sitting together in a train travelling from London (England) to Cardiff (Wales).
As the train crosses the border, the engineer glances out of the window and exclaims: “Oh, look! Welsh sheep are black”.
The mathematician responds: “No; all you can say is that [...]

Statistics on Partitioned Tables - Part 6b - COPY_TABLE_STATS

Sigh ... these posts have become a bit of a mess.

There are so many different bits and pieces I want to illustrate and I've been trying to squeeze them in around normal work. Worse still, because I keep leaving them then coming back to them and re-running tests it's easy to lose track of where I was, despite using more or less the same test scripts each time (any new scripts tend to be sections of the main test script). I suspect my decision to only pull out the more interesting parts of the output has contributed to the difficulties too, but with around 18.5 thousand lines of output, I decided that was more or less essential.

It has got so bad that I noticed the other day that there were a couple of significant errors in the last post which are easy to miss when you're looking at detailed output and must be even less obvious if you're looking at it for the first time.

The fact that no-one said much about these errors reinforces my argument with several bloggers that fewer people read and truly absorb the more technical stuff than they think. They just pick up the messages they need and take more on trust than you might imagine!

So what were the errors? Possibly more important, why did they appear? The mistakes are often as instructive as the successes.

Error 1

This is the tail-end of the subpartition stats at the end of part 5:

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS
------------------------------ ------------------------------ --- -------------------- ----------
TEST_TAB1                      P_20100209_GROT                NO  28-FEB-2010 21:41:47          3
TEST_TAB1                      P_20100209_HALO                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_JUNE                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_OTHERS              NO  28-FEB-2010 21:41:50          3

Compare that to the supposedly identical section produced at the start of part 6a:

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS           
------------------------------ ------------------------------ --- -------------------- ----------           

TEST_TAB1                      P_20100209_GROT                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_HALO                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_JUNE                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_OTHERS              YES 28-MAR-2010 15:38:33          3           

Spot the difference? Forget the timestamps for now, although I hope it's clear that:

- The stats were gathered at different times
- I really need to get my blogging act together ;-)

Instead, notice that GLOBAL_STATS is set to NO in part 5 and YES in part 6a. How could that happen?

The first thing to note is that it's probably not as significant as it first appears because what does it mean for Subpartition stats to be Global when there are no underlying sub-components of a Subpartition? In fact I'd argue that all Subpartition stats are Global by implication but I may be missing something important. (Comments welcome ...)

Instead I'll focus on how you can manage to get the two different results. The output from part 5 was the result of gathering statistics on a load table (LOAD_TAB1) and then exchanging it with the relevant subpartition of TEST_TAB1 (as shown in part 5). When you do that, the GLOBAL_STATS flag will be set to NO.

If I take the alternate approach of exchanging LOAD_TAB1 with the subpartition of TEST_TAB1 and only then gathering statistics on the subpartition of TEST_TAB1 then GLOBAL_STATS will be YES for that subpartition. That's the most obvious reason I can think of for the discrepancy but I can't be certain because the log files that I took the output from are history now.

At some point when ripping parts of a master script to run in isolation for each blog post I've changed the stats gathering approach from gather-the-exchange to exchange-then-gather. The output shown in part 5 was correct so I've updated part 6a to reflect that and added a note.
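To make the two orderings concrete, here's a hypothetical sketch of both (the object names match the test scripts in this series, but the exact subpartition used is my assumption; your exchange clause will name whichever subpartition you loaded):

```sql
-- Ordering used in part 5: gather stats on the load table first,
-- then exchange it into the subpartition. The stats arrive via the
-- exchange, so GLOBAL_STATS shows as 'NO' for that subpartition.
EXEC dbms_stats.gather_table_stats(ownname => 'TESTUSER', tabname => 'LOAD_TAB1');
ALTER TABLE test_tab1 EXCHANGE SUBPARTITION p_20100209_grot WITH TABLE load_tab1;

-- Ordering that crept into part 6a: exchange first, then gather
-- directly against the subpartition, which sets GLOBAL_STATS = 'YES'.
ALTER TABLE test_tab1 EXCHANGE SUBPARTITION p_20100209_grot WITH TABLE load_tab1;
EXEC dbms_stats.gather_table_stats(ownname     => 'TESTUSER', -
                                   tabname     => 'TEST_TAB1', -
                                   partname    => 'P_20100209_GROT', -
                                   granularity => 'SUBPARTITION');
```

Either ordering leaves the same NUM_ROWS in place; only the GLOBAL_STATS flag differs, which is why the discrepancy is so easy to miss in a long listing.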

Error 2

I think this one is worse, because it's down to me mixing up some pastes because the original results looked wrong when they were, in fact, right. It's extremely rare for me to edit results and I regret doing it here. Whenever you start tampering with the evidence, you're asking for trouble!

When I'd been pasting in the final example output, showing that subpartition stats had been copied for the new P_20100210_GROT subpartition, I saw another example when the subpartition stats hadn't been copied, so I decided I was mistaken and fixed the post. But the original was correct so I've put it back the way it should be and added a further note.

If you weren't confused already, you have my permission to be utterly confused now ;-)


On a more serious note, let's recap what I'm trying to do here and what does and doesn't work.

I've added a new partition for the 20100210 reporting date and I tried to copy the partition and subpartition stats from the previous partition (P_20100209) to the new partition. I attempted a Partition-level copy

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                                      srcpartname => 'P_20100209', dstpartname => 'P_20100210');

PL/SQL procedure successfully completed.

in the expectation that DBMS_STATS would copy the partition stats as well as all of the subpartition stats from the P_20100209 partition. I'll repeat how the stats looked here ...

SQL> select  table_name, global_stats, last_analyzed, num_rows
  2  from dba_tables
  3  where table_name='TEST_TAB1'
  4  and owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     GLO LAST_ANALYZED          NUM_ROWS                                       
------------------------------ --- -------------------- ----------                                       
TEST_TAB1                      NO                                                                        

SQL> select  table_name, partition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_partitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     PARTITION_NAME                 GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131                     NO                                         
TEST_TAB1                      P_20100201                     NO                                         
TEST_TAB1                      P_20100202                     NO                                         
TEST_TAB1                      P_20100203                     NO                                         
TEST_TAB1                      P_20100204                     NO                                         
TEST_TAB1                      P_20100205                     NO                                         
TEST_TAB1                      P_20100206                     NO                                         
TEST_TAB1                      P_20100207                     NO                                         
TEST_TAB1                      P_20100209                     NO  28-MAR-2010 15:38:38         12        
TEST_TAB1                      P_20100210                     NO                                         

10 rows selected.

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131_GROT                NO                                         
TEST_TAB1                      P_20100131_HALO                NO                                         


TEST_TAB1                      P_20100209_GROT                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_HALO                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_JUNE                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_OTHERS              NO  28-MAR-2010 15:38:33          3        
TEST_TAB1                      P_20100210_GROT                NO  28-MAR-2010 15:38:33          3    
TEST_TAB1                      P_20100210_HALO                NO                                         
TEST_TAB1                      P_20100210_JUNE                NO                                         
TEST_TAB1                      P_20100210_OTHERS              NO                                         

40 rows selected.

So here's where we are

- No sign of partition statistics for P_20100210, despite the P_20100209 'source' partition having valid stats.
- The subpartition stats have been copied from P_20100209_GROT to P_20100210_GROT.
- The subpartition stats have not been copied for the other three P_20100210 partitions.

Weird, right? I've checked this over and over and I'm pretty sure I'm right, but decided to upload the entire script and output here in case others can spot some mistake I'm making.

Getting back down to earth, though, this isn't the end of the world. It just means that for our particular stats collection strategy of loading data into load tables, exchanging them with subpartitions and then copying earlier stats, we just need to make sure we're working at the subpartition level all the time and that's what I'll look at next.
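As a rough sketch of what "working at the subpartition level" might look like (this is my assumption about the approach, not tested output from the series; COPY_TABLE_STATS does accept subpartition names in srcpartname/dstpartname, but as these posts show, its behaviour can vary by version and patch level):

```sql
-- Copy stats subpartition by subpartition, rather than relying on a
-- partition-level copy to cascade down to the subpartitions.
BEGIN
  FOR r IN (SELECT subpartition_name
            FROM   dba_tab_subpartitions
            WHERE  table_owner = 'TESTUSER'
            AND    table_name  = 'TEST_TAB1'
            AND    subpartition_name LIKE 'P\_20100209\_%' ESCAPE '\')
  LOOP
    dbms_stats.copy_table_stats(
      ownname     => 'TESTUSER',
      tabname     => 'TEST_TAB1',
      srcpartname => r.subpartition_name,
      dstpartname => REPLACE(r.subpartition_name, '20100209', '20100210'));
  END LOOP;
END;
/
```

The loop simply derives each destination name from its source, which only works because the naming convention here encodes the reporting date.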

Finally, I'll re-emphasise that this is not the only strategy and it's fair to say it's flushing out some unusual effects that you might never see if you work primarily with Partition-level stats!

Statistics on Partitioned Tables - Contents

When Jonathan Lewis decided it was time to post a list of the Partition Stats posts on his blog and Noons suggested I made them easier to track down, I listened. So this post will link to the others and, at least in the short term, I've also included links in the side-bar on the right over there.

- Default options - GLOBAL AND PARTITION
- Estimated Global Stats
- Stats Aggregation Problems I
- Stats Aggregation Problems II
- Minimal Stats Aggregation

To be continued ....

P.S. It was a lot easier doing this than actually finishing off Part 6 and I'm insanely busy at the moment.
P.P.S. My ultimate intention is to re-write the blog posts as a more considered, easier-to-read article but who knows when that might happen?

Graphing AWR Data in Excel

I often use data collected by the Oracle Automatic Workload Repository (AWR) to help to diagnose performance problems. However, I often work on performance problems with application teams, rather than the DBAs. It is surprising how often I don't have access to Oracle Enterprise Manager.
Somebody might say that the system was slow at a particular time. I want to get an overview of the database at that time, and I might also want to compare it to another time when the system wasn't slow. Later I may generate AWR reports using particular pairs of snapshots, but I need something to direct me to when an issue occurred, and hence which snapshots to compare.
When dealing with performance problems the question is nearly always the same. "How long did I spend doing what?". Amongst lots of metrics, AWR collects snapshots of how much time system-wide has been spent waiting on each wait event.
[Thumbnail of AWR wait chart] To give me an overview of that data, I have created an Excel spreadsheet that will query the AWR repository, retrieve up to a week of that data into an Excel pivot table, and then graph the top wait events. Click on the thumbnail on the left to see an example.
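The workbook's query is along these lines (a simplified sketch, not the exact query in the download; note that TIME_WAITED_MICRO is cumulative since instance startup, so the spreadsheet still has to difference consecutive snapshots to get per-interval wait time):

```sql
-- Wait time per event per snapshot for the last 7 days, summed across
-- RAC instances so each SNAP_ID appears once per event.
SELECT s.snap_id,
       MAX(s.end_interval_time)       AS snap_time,
       e.event_name,
       SUM(e.time_waited_micro) / 1e6 AS time_waited_secs
FROM   dba_hist_snapshot     s
JOIN   dba_hist_system_event e
       ON  e.snap_id         = s.snap_id
       AND e.dbid            = s.dbid
       AND e.instance_number = s.instance_number
WHERE  s.end_interval_time > SYSDATE - 7
GROUP  BY s.snap_id, e.event_name
ORDER  BY s.snap_id, time_waited_secs DESC;
```

Feeding this into a pivot table with SNAP_TIME on one axis and EVENT_NAME on the other gives the stacked-area chart shown in the thumbnail.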
For me, this approach has a number of advantages:
  • First, don't drown in numbers. The eye is very good at picking out patterns and irregularities. You can then drill into the numbers behind the chart.
  • The chart provides a quick overview of the behaviour of the database. You can quickly see if there is anything to examine in more detail, or whether the database is exhibiting its normal characteristics.
  • The AWR data is extracted into the spreadsheet, so I can keep a copy of the spreadsheet for reference. I generally like to collect AWR snapshots every 15 minutes. One week of data, at that frequency, turns into about 6Mb of spreadsheet.
  • Excel charts can easily be put into documents, presentations or e-mails. Excel is very flexible, it is easy to format the chart to show only what is important.
  • It's free (that is to say that you won't need to go and buy anything else). All you need is Excel and an Oracle client.
Over time I have produced a variety of spreadsheets that collect all sorts of metrics from the Oracle database and others that collect specific metrics from PeopleSoft applications.

To be fair, AWR's forerunner, Statspack, collects a lot of similar data; the essential difference is that AWR is a licensed option. This technique could also be used to extract data from the Statspack repository, but most of my customers are licensed for AWR.

You can download a sample Excel workbook and query from my website. Go to one of the sheets with data and 'Refresh Data'. The first time, you will probably be prompted to select an ODBC source. You will be prompted for the login credentials.

In a subsequent blog posting I will discuss a real example of where this technique helped me to resolve a problem.

It is fairly straightforward to create spreadsheets that query databases, but here are some links that should help to get you going if you are not familiar with the process.

  • From the Microsoft website: 'Use Microsoft Query to retrieve external data'. This is based on Excel 2007, but it explains how Excel calls MSQuery, which then calls the ODBC driver. However, I recommend that you do not use the Query Wizard; I prefer to get the query working in SQL*Plus and then copy it into MSQuery.
  • Querying External Data in Excel. This is a straightforward recipe for using MSQuery.

Update 16.6.2010: I have released a new version with an adjustment to the query used in the spreadsheet.  AWR snapshots happen at slightly different times on different RAC nodes, although they have the same SNAP_ID.  This was causing multiple rows for the same snapshot in the pivot tables, and so the charts did not show correctly aggregated wait events across RAC nodes.

Time Matters: Throughput vs. Response Time - Part 2

In one of my Hotsos Symposium 2010 posts I mentioned that Peter Stalder had plugged some test results from an earlier blog post into Neil Gunther's Universal Scalability Law to see how well the model applied. Peter's posted his slides now and I've added the URL to the comments thread of the original post so people can see another perspective.

He also pointed out a recent blog post discussing similar subjects at Neil Gunther's blog although I must admit I've only had a quick glance at it because I'm up to my eyeballs in mail at the moment :-(