Oakies Blog Aggregator

RAC, parallel query and udpsnoop

I presented about various performance myths in my ‘battle of the nodes’ presentation. One of the myths was about how spawning parallel query slaves across multiple RAC instances can cause a major bottleneck in the interconnect. In fact, that myth was a direct result of a lessons-learnt presentation from a client engagement. The client was suffering from performance issues, with enormous global cache waits running into 30+ms average response time for global cache CR traffic and crippling application performance. Essentially, their data warehouse queries were running hundreds of parallel queries concurrently, with slaves spawning across three RAC instances.

Of course, I had to hide the client details and simplified the problem into a test case to explain the myth. Looks like either a) my test case is bad, or b) I encountered some sort of bug in the 9.2.0.5 version, or c) I made a mistake in my analysis somewhere. Most likely it is the last one :-( . Greg Rahn questioned that example, and this topic deserves more research to understand it a little bit further. At this point, I don’t have 9.2.0.5 and the database is on 10.2.0.4, so we will test this in 10.2.0.4.

udpsnoop

UDP is one of the protocols used for cache fusion traffic in RAC, and it is the Oracle recommended protocol. For this article, the UDP traffic size must be measured. Measuring global cache traffic using AWR reports was not precise, so I decided to use a DTrace toolkit script, udpsnoop.d, to measure the traffic between the RAC nodes. There are two RAC nodes in this setup. You can read more about udpsnoop.d, and the script can be downloaded from the DTrace toolkit. Its output is of the form:

PID        LADDR           LPORT           DR         RADDR           RPORT                 SIZE
---------- --------------- --------------- ---------- --------------- --------------- -----------
15393      1.1.59.192      38395           ->         2.1.59.192      40449                 8240
...

In the output above, PID 15393 sent a UDP packet of size 8240 from IP address 192.59.1.1 to 192.59.1.2, with local port 38395 and remote port 40449. As UDP traffic flies between the nodes, udpsnoop.d prints the traffic to a file. So, I start collecting udpsnoop output before the script runs and end that collection immediately after the script is complete. Of course, we need to aggregate this data and play with it a little bit, so we will also create an external table based upon this output file.

REM This is to read the file as an external table
drop table  external_udpsnoop;

create table external_udpsnoop
(
  c_uid varchar2(10),
  pid varchar2(10),
  laddr varchar2(15),
  lport varchar2(15),
  dr   varchar2(10),
  raddr varchar2(15),
  rport varchar2(15),
  c_size   varchar2(10),
  cmd  varchar2(15)
)
organization external (
  type oracle_loader
  default directory UTL_FILE_DIR
  access parameters (
      records delimited by newline
     badfile APPS_DATA_FILE_DIR:'xtern_rpt.bad'
      logfile APPS_DATA_FILE_DIR:'xtern_rpt.log'
      discardfile APPS_DATA_FILE_DIR:'xtern_rpt.dsc'
      fields terminated by whitespace
      missing field values are null
   )
  location ('udpsnoop_ra_join_2_8th_iter.lst')
 )
  reject limit 1000
/
REM Reject limit is high since there are a few packets with some junk output, possibly due to unstable fbt probes in DTrace.
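Once the external table is in place, the captured traffic can be aggregated with a query along the following lines (just a sketch; pid and c_size are stored as strings, and rows whose dr column is not the '->' arrow are assumed to be junk and filtered out):

REM rough aggregation of captured UDP traffic per sending PID and remote port
select pid, laddr, raddr, rport,
       count(*)               packets,
       sum(to_number(c_size)) bytes
from   external_udpsnoop
where  dr = '->'
group by pid, laddr, raddr, rport
order by bytes desc;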

Test case #1: Hash join – slaves from all instances

First, let’s test a hash join to show how UDP traffic flows between these ports. In the test case below, we use a big table and join 10 million rows to another 10 million rows. rownum is used so that the script completes in a decent time; otherwise, it ran for a few hours before running into errors. The SQL selects a few non-indexed columns so that it must do full table scans, and it also has hints requesting 16 slaves, 8 from each of the 2 instances.

Both instances will participate in this PQ operation as parallel_instance_group is set to ALL at session level.

alter session set parallel_instance_Group='ALL';
select /*+ parallel ( t1, 8,2)   parallel (t2, 8, 2)  */
min (t1.CUSTOMER_TRX_LINE_ID +t2.CUSTOMER_TRX_LINE_ID ) , max ( t1.SET_OF_BOOKS_ID+t2.set_of_books_id  ), avg(t1.SET_OF_BOOKS_ID +t2.set_of_books_id),
	avg( t1.QUANTITY_ORDERED + t2.QUANTITY_ORDERED ), max(t1.ATTRIBUTE_CATEGORY  ), max(t2.attribute1) , max(t1.attribute2)
from
  (select * from BIG_TABLE where rownum <=100000000)t1 ,
  (select * from BIG_TABLE where rownum <=100000000)t2
where t1.CUSTOMER_TRX_LINE_ID = t2.CUSTOMER_TRX_LINE_ID
;

PQ in operation

We will also use yet another script to make sure that the SQL is indeed getting 8 slaves in each instance. I don’t remember where I got this SQL to pull slave information (maybe from Doug Burns, thanks!), but anyway, I modified it a little bit for RAC.

   select
      s.inst_id,
      decode(px.qcinst_id,NULL,s.username,
            ' - '||lower(substr(s.program,length(s.program)-4,4) ) ) "Username",
      decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
      to_char( px.server_set) "Slave Set",
      to_char(s.sid) "SID",
      decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
      px.req_degree "Requested DOP",
     px.degree "Actual DOP", p.spid
   from
     gv$px_session px,
     gv$session s, gv$process p
   where
     px.sid=s.sid (+) and
     px.serial#=s.serial# and
     px.inst_id = s.inst_id
     and p.inst_id = s.inst_id
     and p.addr=s.paddr
  order by 5 , 1 desc
SQL> /
   INST_ID Username     QC/Slave   Slave Set  SID    QC SID Requested DOP Actual DOP SPID
---------- ------------ ---------- ---------- ------ ------ ------------- ---------- ------------
         1 SYS          QC                    10931  10931                           7366
         1  - p000      (Slave)    1          10925  10931             16         16 24762
         1  - p001      (Slave)    1          10956  10931             16         16 24764
         1  - p002      (Slave)    1          10955  10931             16         16 24766
         1  - p003      (Slave)    1          10918  10931             16         16 24768
         1  - p004      (Slave)    1          10941  10931             16         16 24778
         1  - p005      (Slave)    1          10916  10931             16         16 24781
         1  - p006      (Slave)    1          10945  10931             16         16 24787
         1  - p007      (Slave)    1          10922  10931             16         16 24795
         2  - p000      (Slave)    1          10943  10931             16         16 16920
         2  - p001      (Slave)    1          10961  10931             16         16 16923
         2  - p002      (Slave)    1          10920  10931             16         16 16970
         2  - p003      (Slave)    1          10946  10931             16         16 16972
         2  - p004      (Slave)    1          10935  10931             16         16 16974
         2  - p005      (Slave)    1          10934  10931             16         16 16976
         2  - p006      (Slave)    1          10899  10931             16         16 16988
         2  - p007      (Slave)    1          10940  10931             16         16 16991

         1 SYS          QC                    10927  10927                           9476
         2  - pz99      (Slave)    1          10890  10927              2          2 17723
         1  - pz99      (Slave)    1          10912  10927              2          2 25875
20 rows selected.

From the output above, 8 slaves were allocated from instance 1 and 8 from instance 2, with the query coordinator (7366) running in instance 1. The sample above also captured my own session accessing the gv$ views (notice pz99; slaves for gv$ access use a different PQ slave naming convention from 10.2 onwards).

Results

We have established that slaves were allocated from multiple instances and that udpsnoop is capturing the UDP packet sizes between these instances. We also have an external table mapped to the udpsnoop output file so that we can query this data. The script completed in 1500 seconds. I mapped the output of the PX slaves query above to the UDP external table, and here is the table I put together to show the PQ slaves and their UDP traffic.

  INST Username     QC/Slave   Slave SID   QC SID Req. Actual   SPID  LADDR      RADDR      RPORT SIZE
                                 Set              DOP  DOP
------ ------------ ---------- ----- ----  ------ ---  ------- ------ ---------- ---------- ----- ---------
     1 SYS          QC               10931  10931               7366
     1  - p000      (Slave)    1     10925  10931  16       16 24762
     1  - p001      (Slave)    1     10956  10931  16       16 24764
     1  - p002      (Slave)    1     10955  10931  16       16 24766
     1  - p003      (Slave)    1     10918  10931  16       16 24768
     1  - p004      (Slave)    1     10941  10931  16       16 24778
     1  - p005      (Slave)    1     10916  10931  16       16 24781
     1  - p006      (Slave)    1     10945  10931  16       16 24787
     1  - p007      (Slave)    1     10922  10931  16       16 24795
     2  - p000      (Slave)    1     10943  10931  16       16 16920  2.1.59.192 2.1.59.192 62783 127068484
     2  - p001      (Slave)    1     10961  10931  16       16 16923  2.1.59.192 2.1.59.192 62783 126904080
     2  - p002      (Slave)    1     10920  10931  16       16 16970  2.1.59.192 2.1.59.192 62783 127767353
     2  - p003      (Slave)    1     10946  10931  16       16 16972  2.1.59.192 2.1.59.192 62783 128154145
     2  - p004      (Slave)    1     10935  10931  16       16 16974  2.1.59.192 2.1.59.192 62783 128096875
     2  - p005      (Slave)    1     10934  10931  16       16 16976  2.1.59.192 2.1.59.192 62783 126057311
     2  - p006      (Slave)    1     10899  10931  16       16 16988  2.1.59.192 2.1.59.192 62783 128228830
     2  - p007      (Slave)    1     10940  10931  16       16 16991  2.1.59.192 2.1.59.192 62783 127471579

A few important points to make here:

  1. In this case, all slaves running in node 2 were talking to one UDP port in node 1 (port 62783). lsof shows that PID 7366 (the query coordinator) was listening on that UDP port. The point is that these slaves from node 2 were sending packets to the coordinator.
  2. Interestingly, there is no UDP traffic from instance 1 to instance 2. I think that is due to the nature of the aggregation in the SQL.
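
For reference, the per-slave SIZE totals in the table above were obtained by joining the PX slave listing with the external table. A rough sketch of such a join (assuming the udpsnoop capture was taken on the node where the sending slaves run, so the pid column matches their SPID):

select p.inst_id, p.spid, u.raddr, u.rport,
       sum(to_number(u.c_size)) total_bytes
from   gv$process p, external_udpsnoop u
where  u.pid = p.spid
and    u.dr  = '->'
group by p.inst_id, p.spid, u.raddr, u.rport
order by p.inst_id, p.spid;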

A few minutes later..

Interestingly, a few minutes later, while I was watching the UDP traffic, a few other processes kicked in and started generating UDP traffic. I re-queried the database to see what these processes were. The query had allocated 16 more slaves, 8 more running in node 1 and 8 more running in node 2 [processes p008-p015 below]. These slaves were talking to a different UDP port, 62989, on which coordinator process 7366 in node 1 was also listening.

  INST Username     QC/Slave   Slave SID   QC SID Req. Actual   SPID  LADDR      RADDR      RPORT SIZE
                                 Set              DOP  DOP
------ ------------ ---------- ----- ----  ------ ---  ------- ------ ---------- ---------- ----- ---------
     1 SYS          QC               10931  10931               7366
     1  - p000      (Slave)    1     10925  10931  16       16 24762
     1  - p001      (Slave)    1     10956  10931  16       16 24764
     1  - p002      (Slave)    1     10955  10931  16       16 24766
     1  - p003      (Slave)    1     10918  10931  16       16 24768
     1  - p004      (Slave)    1     10941  10931  16       16 24778
     1  - p005      (Slave)    1     10916  10931  16       16 24781
     1  - p006      (Slave)    1     10945  10931  16       16 24787
     1  - p007      (Slave)    1     10922  10931  16       16 24795

     1  - p008      (Slave)    1     10958  10931  16       16 24798
     1  - p009      (Slave)    1     10938  10931  16       16 24818
     1  - p010      (Slave)    1     10965  10931  16       16 24836
     1  - p011      (Slave)    1     10953  10931  16       16 24838
     1  - p012      (Slave)    1     10946  10931  16       16 24841
     1  - p013      (Slave)    1     10929  10931  16       16 24843
     1  - p014      (Slave)    1     10919  10931  16       16 24853
     1  - p015      (Slave)    1     10942  10931  16       16 24855

     2  - p000      (Slave)    1     10943  10931  16       16 16920  2.1.59.192 2.1.59.192 62783 127068484
     2  - p001      (Slave)    1     10961  10931  16       16 16923  2.1.59.192 2.1.59.192 62783 126904080
     2  - p002      (Slave)    1     10920  10931  16       16 16970  2.1.59.192 2.1.59.192 62783 127767353
     2  - p003      (Slave)    1     10946  10931  16       16 16972  2.1.59.192 2.1.59.192 62783 128154145
     2  - p004      (Slave)    1     10935  10931  16       16 16974  2.1.59.192 2.1.59.192 62783 128096875
     2  - p005      (Slave)    1     10934  10931  16       16 16976  2.1.59.192 2.1.59.192 62783 126057311
     2  - p006      (Slave)    1     10899  10931  16       16 16988  2.1.59.192 2.1.59.192 62783 128228830
     2  - p007      (Slave)    1     10940  10931  16       16 16991  2.1.59.192 2.1.59.192 62783 127471579

     2  - p008      (Slave)    1     10911  10931  16       16 16993  2.1.59.192 2.1.59.192 62989 182053370
     2  - p009      (Slave)    1     10949  10931  16       16 16995  2.1.59.192 2.1.59.192 62989 182490908
     2  - p010      (Slave)    1     10951  10931  16       16 17000  2.1.59.192 2.1.59.192 62989 181899025
     2  - p011      (Slave)    1     10890  10931  16       16 17007  2.1.59.192 2.1.59.192 62989 181858294
     2  - p012      (Slave)    1     10972  10931  16       16 17009  2.1.59.192 2.1.59.192 62989 182104499
     2  - p013      (Slave)    1     10950  10931  16       16 17011  2.1.59.192 2.1.59.192 62989 182334705
     2  - p014      (Slave)    1     10902  10931  16       16 17013  2.1.59.192 2.1.59.192 62989 181611641
     2  - p015      (Slave)    1     10955  10931  16       16 17023  2.1.59.192 2.1.59.192 62989 181816693

In real life..

Summing this up, approximately 2.4GB of UDP traffic was generated by one parallel query. Can you imagine what will happen if this inter-instance parallelism is allowed in data warehouse queries scanning many tables and partitions with many hash joins? Obviously, this can saturate the interconnect quickly, and performance will suffer. Our solution was to disallow parallel queries spanning multiple instances, so that all slaves run within one instance boundary; the effect of this change was immediately visible in the client environment. Back to our test: parallel_execution_message_size was set to 8192. Increasing this parameter will decrease the elapsed time a little bit, but we are worried about saturating the interconnect, not just the elapsed time of that query.

Further, I ran this query a few times with parallel_instance_group set to one instance and then to all instances. Spawning slaves across multiple instances, in fact, increases the elapsed time too.
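
For the single-instance runs, the slaves were restricted to one instance group. A minimal sketch of that setup, assuming an instance group named INST1 has been defined for node 1 via the instance_groups initialization parameter (the group name here is my assumption, not taken from the original test):

REM assumed init.ora/spfile setting on node 1: instance_groups = 'INST1'
alter session set parallel_instance_group = 'INST1';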

Parallel_instance_group :ALL
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.20       0.26          0         76          0           0
Fetch        2   1481.76    1509.95     701158         76          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4   1481.96    1510.22     701158        152          0           1

parallel_instance_group :INST1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.20       0.23          0         76          0           0
Fetch        2   1321.05    1331.67     701344         76          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4   1321.25    1331.90     701344        152          0           1

What about the original example?

Of course, let’s talk about that original example also. In that example, there was just one table and the data was aggregated.

select /*+ parallel ( t1, 8,2)  */
min (t1.CUSTOMER_TRX_LINE_ID +t1.CUSTOMER_TRX_LINE_ID ) , max ( t1.SET_OF_BOOKS_ID+t1.set_of_books_id  ), avg(t1.SET_OF_BOOKS_ID +t1.set_of_books_id),
        avg( t1.QUANTITY_ORDERED + t1.QUANTITY_ORDERED ), max(t1.ATTRIBUTE_CATEGORY  ), max(t1.attribute1) , max(t1.attribute2)
from
  BIG_TABLE t1
;

Measuring the UDP traffic, it is visible that for this huge table the interconnect traffic is kept minimal. It looks like there are some optimization techniques for this single-table aggregation query that keep the cache fusion traffic to a minimal level, just 2152 per slave. This convinces me that the SQL in that myth was simply a bad example, but the myth is still a myth. I should have used the original SQL joining multiple tables with a hash join for my presentation, but as a consultant, I have a responsibility to keep client information confidential and protected. At the end of the day, they pay for my bread.

   INST_ID Username     QC/Slave   Slave Set  SID    QC SID Requested DOP Actual DOP SPID		  Size
---------- ------------ ---------- ---------- ------ ------ ------------- ---------- ------------ -------
         1 SYS          QC                    10933  10933                           3314
         1  - p000      (Slave)    1          10958  10933             16         16 24762
         1  - p001      (Slave)    1          10948  10933             16         16 24764
         1  - p002      (Slave)    1          10953  10933             16         16 24766
         1  - p003      (Slave)    1          10925  10933             16         16 24768
         1  - p004      (Slave)    1          10916  10933             16         16 24778
         1  - p005      (Slave)    1          10938  10933             16         16 24781
         1  - p006      (Slave)    1          10951  10933             16         16 24787
         1  - p007      (Slave)    1          10946  10933             16         16 24795

         2  - p000      (Slave)    1          10949  10933             16         16 16920            2152
         2  - p001      (Slave)    1          10937  10933             16         16 16923            2152
         2  - p002      (Slave)    1          10946  10933             16         16 16970            2152
         2  - p003      (Slave)    1          10956  10933             16         16 16972            2152
         2  - p004      (Slave)    1          10902  10933             16         16 16974            2152
         2  - p005      (Slave)    1          10981  10933             16         16 16976            2152
         2  - p006      (Slave)    1          10899  10933             16         16 16988            2152
         2  - p007      (Slave)    1          10927  10933             16         16 16991            2152

         1 SYS          QC                    10945  10945                           3527
         1  - pz99      (Slave)    1          10942  10945              2          2 25875
         2  - pz99      (Slave)    1          10962  10945              2          2 17723           72344

Summary

In summary, having too many parallel query slaves spawning across multiple instances can cripple the interconnect. There are some optimization techniques that seem to help in the case of single-table aggregation, but of course that must be considered an exception. I have modified the presentation a little bit below, and will do a second, complete update on this presentation later:

Battle of the nodes RAC performance myths doc
Battle of the nodes RAC performance myths ppt

Distributed Oracle Buffer Cache

If you’ve read EnterpriseDB’s latest press release, you’ll notice a new feature called Infinite Cache.  While it may sound revolutionary, it is not a new technology.  In fact, it was done for MySQL quite a while ago under the Waffle Grid project. While I consider this type of caching relatively simple, I don’t want Oracle to be left out.  As […]

Temporary Tablespace Groups

Oracle 10g introduced the concept of temporary tablespace groups.

These allow you to group multiple temporary tablespaces into a single group and assign a user this group of tablespaces instead of a single temporary tablespace.

This raises some interesting questions, and for some of these I don't find answers in the official documentation. Some of these questions are:

- Can a single workarea execution allocate space from more than one temporary tablespace, e.g. to support large serial sort operations?

A workarea belongs to a single operation of an execution plan. There are several different types of operations that require a workarea, among them are sorts, hash joins, group bys and sort/merge joins.

This workarea can fit into available PGA memory, but can also spill to disk in case there is insufficient memory available to support the operation.

Furthermore, this implies that the execution of a single SQL statement can require multiple workareas; e.g. a quite simple statement might need two workareas for two hash joins and a third one for a subsequent sort order by operation.

Note that there are other types of operations that don't require a workarea, e.g. a nested loop join doesn't require a workarea (and therefore will never acquire temporary space).

Details about workareas can be obtained from various dynamic performance views, e.g. V$SQL_WORKAREA, V$SQL_WORKAREA_ACTIVE and V$SQL_WORKAREA_HISTOGRAM.

- Can multiple workareas of a single session allocate space from different tablespaces?

- According to the documentation different sessions of the same user can use different temporary tablespaces from the group. Is this correct?

- What about parallel execution? The documentation says that the parallel slaves can use different temporary tablespaces. Is this correct?
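
While running tests like the one below, the workarea views mentioned above can be used to watch the active workareas and any spills to temp. For example, a sketch against V$SQL_WORKAREA_ACTIVE:

select sid, operation_type, policy,
       actual_mem_used, tempseg_size, tablespace
from   v$sql_workarea_active
order by sid, operation_id;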

Here's a simple testcase that generates two small temporary tablespaces and assigns them to the same group. A test user is assigned first a single temporary tablespace and afterwards the group to see if this makes any difference.

It uses a carefully crafted table for which a sort operation doesn't fit into a single small temporary tablespace but is supposed to fit if both temporary tablespaces of the group can be used.

The session is deliberately using a very small sort_area_size in manual workarea policy mode to force the sort operation to spill to disk.

The following are the results from 11.1.0.7 Win32. Similar results can be seen on 10.2.0.4 Win32. The database used an 8KB default block size and a locally managed tablespace with manual segment space management (no ASSM) for the data, but that shouldn't matter much here.

SQL>
SQL> drop tablespace temp1_grp including contents and datafiles;

Tablespace dropped.

SQL>
SQL> drop tablespace temp2_grp including contents and datafiles;

Tablespace dropped.

SQL>
SQL> create temporary tablespace temp1_grp
2 tempfile 'C:\ORACLE\ORADATA\ORCL11\TEMP1_GRP.DBF'
3 size 2M reuse uniform size 100K
4 tablespace group temp_grp;

Tablespace created.

SQL>
SQL> create temporary tablespace temp2_grp
2 tempfile 'C:\ORACLE\ORADATA\ORCL11\TEMP2_GRP.DBF'
3 size 2M reuse uniform size 100K
4 tablespace group temp_grp;

Tablespace created.

SQL>
SQL> drop user tempgrp_user cascade;

User dropped.

SQL>
SQL> create user tempgrp_user identified by tempgrp_user;

User created.

SQL>
SQL> grant create session to tempgrp_user;

Grant succeeded.

SQL>
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL>
SQL> drop table test_temp_grp purge;

Table dropped.

SQL>
SQL> create table test_temp_grp
2 as
3 select
4 sys.dbms_random.string('U', 140) as object_name
5 from
6 dual
7 connect by
8 rownum <= 10000;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'TEST_TEMP_GRP')

PL/SQL procedure successfully completed.

SQL>
SQL> select blocks from dba_tables where table_name = 'TEST_TEMP_GRP';

BLOCKS
----------
205

SQL>
SQL> grant select on test_temp_grp to tempgrp_user;

Grant succeeded.

SQL>
SQL> create or replace synonym tempgrp_user.test_temp_grp for test_temp_grp;

Synonym created.

SQL>
SQL> -- This is the behaviour when using traditional tablespaces
SQL> alter user tempgrp_user temporary tablespace temp1_grp;

User altered.

SQL>
SQL> connect tempgrp_user/tempgrp_user
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> variable r refcursor
SQL>
SQL> variable t refcursor
SQL>
SQL> -- Force sort on disk
SQL> alter session set workarea_size_policy = manual sort_area_size = 65536;

Session altered.

SQL>
SQL> exec open :r for select * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Sort doesn't fit into available temp space
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> exec open :r for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- This works
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> connect cbo_test/cbo_test
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> -- Switch to tablespace group
SQL> alter user tempgrp_user temporary tablespace temp_grp;

User altered.

SQL>
SQL> connect tempgrp_user/tempgrp_user
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> -- Force sort on disk
SQL> alter session set workarea_size_policy = manual sort_area_size = 65536;

Session altered.

SQL>
SQL> exec open :r for select * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Sort still doesn't fit into available temp space (but it should as you'll see later)
SQL> -- so obviously a single workarea can't use space from different tablespaces
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> -- What if we have multiple workareas per session
SQL> exec open :r for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- First one works
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> exec open :t for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Second one fails
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :t into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> -- So it looks like a session is limited to a particular tablespace taken from the group
SQL> -- What happens to a second session of the same user
SQL> -- One tablespace is still in use by cursor R
SQL> -- Check V$SORT_USAGE if interested
SQL>
SQL> -- Run the same query in second session
SQL> -- You might need multiple attempts to get this working
SQL> -- Depending on the tablespace taken from the group
SQL> -- and then close second session to free temp space
SQL> -- Press ENTER to continue
SQL>
SQL> pause

SQL>
SQL> -- This works if the second session is assigned a different tablespace taken from the group
SQL> -- So different sessions of the same user will potentially be assigned to different tablespaces
SQL>
SQL> -- What about parallel execution
SQL> exec close :r

PL/SQL procedure successfully completed.

SQL>
SQL> exec open :r for select /*+ parallel(test_temp_grp, 2) */ * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- This works
SQL> -- So the PX sessions can be assigned to different tablespaces taken from the group
SQL> -- Check V$SORT_USAGE if interested
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> spool off

So in summary the following can be deduced:

1. A single workarea execution can't use space from more than a single tablespace, even when the user is assigned a temporary tablespace group. Or in other words: a single workarea execution can only allocate a single temporary segment, and segments in general can't span multiple tablespaces in Oracle, i.e. they must fit into a single tablespace.

2. Even multiple workareas of a single session can't use space from the multiple temporary tablespaces available. So this looks like a session attribute, i.e. it is assigned to the session, and all workareas of that session will use the single temporary tablespace assigned (taken from the group).

3. Multiple sessions of the same user can be assigned to different temporary tablespaces taken from the group.

4. Parallel execution slave sessions can do the same, they can be assigned to different temporary tablespaces taken from the group.

So in order to leverage the power of temporary tablespace groups you need either multiple sessions running serial SQL or the parallel execution option.
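
To see which temporary tablespace from the group a given session or PX slave has actually been assigned, something like the following can be used (a sketch; the view is V$SORT_USAGE in older releases and V$TEMPSEG_USAGE from 10g onwards):

select s.sid, s.serial#, u.tablespace, u.segtype, u.blocks
from   v$tempseg_usage u, v$session s
where  u.session_addr = s.saddr
order by s.sid, u.tablespace;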

Continuing with part 2 of the Helsinki presentation

The second part of my two-hour Helsinki presentation brings the message that in order to avoid PL/SQL spaghetti when taking the "fat database" approach, one must employ a layered PL/SQL code architecture inside the DBMS. This means that UI-code, BL-code and DL-code should not be merged together inside single pieces of PL/SQL code blocks, but instead remain completely separated. Now, for UI-code

on the importance of being agile

If you're expecting something in favor of Agile development, you're on the wrong blog. You'll notice that my 'agile' uses a little 'a' - I'm referring to the adjective agile, defined on answers.com as: 1. Characterized by quickness, lightness, and ease of movement; nimble. 2. Mentally quick or alert: an agile mind. Or in the Merriam-Webster dictionary: 1 : marked by ready ability to move with

Oracle Interface for Google Visualization API (external data source interface)

I have been looking for a while at the Google Visualization API and finally found a way to produce the required JSON to generate graphs directly and indirectly from an Oracle database. I will make a white paper available on how to do this. Check out this link: http://lab4.oraperf.com/demogoogle.html. The procedure handles the SQL that the Google API uses (not all functionality yet; pivot, offset and format are not yet supported).

More on this later.

The Helsinki Platform

Nice picture from "OraDude" showing the Fat Database, or Helsinki's first observation.

Dynamic sampling and partitioned tables

------------------------------------------------------------------------------

Update January 2010: A thread on OTN mentioned this blog post and another, actually contradicting blog post by Asif Momen.

So why are these two blog posts coming to different conclusions regarding Dynamic Sampling and partitions with missing statistics?

This is the good thing with documented test cases - I reproduced what Asif has done and found out that the significant difference between these two test cases is the existence of global level statistics.

In my test case below, I have explicitly gathered statistics only at the partition level, and there are no statistics at the global/table level (which can be seen from the output of the query against user_tab_statistics below).

Asif has actually gathered statistics on global/table level which can be seen from his blog post.

So the conclusion seems to be: if you prune to a single partition, but this partition has no statistics, then dynamic sampling will be used if no global/table level statistics are available. If global/table level statistics are available, the optimizer won't perform dynamic sampling and will revert to these global/table level statistics instead.

Oddly, this obviously doesn't apply to the subpartition/partition level case: repeating a similar setup with subpartitions having no statistics but statistics available at the partition level, dynamic sampling was still used (tested on 11.1.0.7 Win32).
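
A quick way to check whether global/table level statistics exist (and hence whether dynamic sampling will be skipped) is to look at the table-level row in USER_TAB_STATISTICS, for example (a sketch using the WR_TEST table from the test case below):

select object_type, num_rows, last_analyzed
from   user_tab_statistics
where  table_name = 'WR_TEST'
and    object_type = 'TABLE';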

------------------------------------------------------------------------------

Dynamic sampling for tables with missing statistics is enabled by default from Oracle 10g on (OPTIMIZER_DYNAMIC_SAMPLING = 2). You can get the same behaviour in Oracle 9i by increasing the default dynamic sampling level of 1 to at least 2, by the way, at system, session or statement level (OPTIMIZER_DYNAMIC_SAMPLING parameter or the DYNAMIC_SAMPLING hint at statement level). For more information, see the documentation.
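
For reference, the relevant controls look like this (a sketch; the table alias and level are just examples):

-- session level (9i and later)
alter session set optimizer_dynamic_sampling = 2;

-- statement level via hint
select /*+ dynamic_sampling(t 2) */ count(*)
from   wr_test t;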

It's an interesting question what happens if you have a partitioned table where statistics are missing for only some of the partitions or subpartitions, while others have statistics gathered.

Does dynamic sampling selectively kick in depending on which partition is accessed, or does it simply check whether the table itself has statistics or not?

The following testcase which works only on 11.1 and later since it's using list/range composite partitioning for the subpartition specific tests shows the results of 11.1.0.7 on Win32:

SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL>
SQL> -- Range partitioning testcase
SQL> CREATE TABLE wr_test
2 ( test_id
3 , trade_date
4 , CONSTRAINT test_pk PRIMARY KEY (trade_date, test_id) using index local)
5 PARTITION BY RANGE (trade_date)
6 ( PARTITION p_jan VALUES LESS THAN (DATE '2009-02-01')
7 , PARTITION p_feb VALUES LESS THAN (DATE '2009-03-01')
8 , PARTITION p_mar VALUES LESS THAN (DATE '2009-04-01') )
9 AS
10 SELECT ROWNUM AS test_id
11 , DATE '2009-02-01' + trunc(dbms_random.value(0, 59)) as trade_date
12 FROM dual
13 connect by level <= 1000;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'WR_TEST', partname=>'p_feb', granularity=>'partition')

PL/SQL procedure successfully completed.

SQL>
SQL> select
2 partition_name
3 , num_rows
4 from
5 user_tab_statistics
6 where
7 table_name = 'WR_TEST';

PARTITION_NAME NUM_ROWS
------------------------------ ----------

P_JAN
P_FEB 491
P_MAR

4 rows selected.

SQL>
SQL> -- Dynamic sampling is selectively used on partitions without statistics
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-03-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1136113187

--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 16 | 352 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 16 | 352 | 2 (0)| 00:00:01 | 3 | 3 |
|* 2 | TABLE ACCESS FULL | WR_TEST | 16 | 352 | 2 (0)| 00:00:01 | 3 | 3 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("TRADE_DATE"=TO_DATE(' 2009-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
- dynamic sampling used for this statement

18 rows selected.

SQL>
SQL> -- No dynamic sampling with statistics in place
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-02-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3091737428

--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18 | 198 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 18 | 198 | 2 (0)| 00:00:01 | 2 | 2 |
|* 2 | INDEX RANGE SCAN | TEST_PK | 18 | 198 | 2 (0)| 00:00:01 | 2 | 2 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("TRADE_DATE"=TO_DATE(' 2009-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

14 rows selected.

SQL>
SQL> drop table wr_test purge;

Table dropped.

SQL>
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL>
SQL> -- composite partitioning testcase
SQL> CREATE TABLE wr_test
2 ( test_id
3 , trade_date
4 , CONSTRAINT test_pk PRIMARY KEY (trade_date, test_id) using index local)
5 partition by list (test_id)
6 SUBPARTITION BY RANGE (trade_date)
7 (
8 partition p_default values (default)
9 ( SUBPARTITION p_jan VALUES LESS THAN (DATE '2009-02-01')
10 , SUBPARTITION p_feb VALUES LESS THAN (DATE '2009-03-01')
11 , SUBPARTITION p_mar VALUES LESS THAN (DATE '2009-04-01') )
12 )
13 AS
14 SELECT ROWNUM AS test_id
15 , DATE '2009-02-01' + trunc(dbms_random.value(0, 59)) as trade_date
16 FROM dual
17 connect by level <= 1000;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'WR_TEST', partname=>'p_feb', granularity=>'subpartition')

PL/SQL procedure successfully completed.

SQL>
SQL> select
2 partition_name
3 , subpartition_name
4 , num_rows
5 from
6 user_tab_statistics
7 where
8 table_name = 'WR_TEST';

PARTITION_NAME SUBPARTITION_NAME NUM_ROWS
------------------------------ ------------------------------ ----------

P_DEFAULT
P_DEFAULT P_JAN
P_DEFAULT P_FEB 491
P_DEFAULT P_MAR

5 rows selected.

SQL>
SQL> -- Dynamic sampling also is selectively used on SUBpartitions without statistics
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-03-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1060835009

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 16 | 352 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION LIST SINGLE | | 16 | 352 | 2 (0)| 00:00:01 | 1 | 1 |
| 2 | PARTITION RANGE SINGLE| | 16 | 352 | 2 (0)| 00:00:01 | 3 | 3 |
|* 3 | TABLE ACCESS FULL | WR_TEST | 16 | 352 | 2 (0)| 00:00:01 | 3 | 3 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("TRADE_DATE"=TO_DATE(' 2009-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
- dynamic sampling used for this statement

19 rows selected.

SQL>
SQL> -- No dynamic sampling with statistics in place
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-02-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1060835009

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18 | 198 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION LIST SINGLE | | 18 | 198 | 2 (0)| 00:00:01 | 1 | 1 |
| 2 | PARTITION RANGE SINGLE| | 18 | 198 | 2 (0)| 00:00:01 | 2 | 2 |
|* 3 | TABLE ACCESS FULL | WR_TEST | 18 | 198 | 2 (0)| 00:00:01 | 2 | 2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("TRADE_DATE"=TO_DATE(' 2009-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

15 rows selected.

SQL>
SQL> -- Different treatment of subpartitions in pre-10.2.0.4
SQL> alter session set optimizer_features_enable = '10.2.0.3';

Session altered.

SQL>
SQL> -- Now uses partition-level statistics
SQL> -- These are missing
SQL> -- Therefore dynamic sampling
SQL> -- Although the subpartition accessed has statistics
SQL> -- But these are not used by pre-10.2.0.4 optimizer code
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-02-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1060835009

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23 | 506 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION LIST SINGLE | | 23 | 506 | 2 (0)| 00:00:01 | 1 | 1 |
| 2 | PARTITION RANGE SINGLE| | 23 | 506 | 2 (0)| 00:00:01 | 2 | 2 |
|* 3 | TABLE ACCESS FULL | WR_TEST | 23 | 506 | 2 (0)| 00:00:01 | 2 | 2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("TRADE_DATE"=TO_DATE(' 2009-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
- dynamic sampling used for this statement

19 rows selected.

SQL>
SQL> -- Gathering statistics on partition level
SQL> exec dbms_stats.gather_table_stats(null, 'WR_TEST', partname=>'p_default', granularity=>'partition', method_opt=>'for all columns size 1')

PL/SQL procedure successfully completed.

SQL>
SQL> -- No longer using dynamic sampling
SQL> explain plan for
2 select * from wr_test where trade_date = date '2009-02-01';

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3944471208

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 17 | 187 | 2 (0)| 00:00:01 | | |
| 1 | PARTITION LIST SINGLE | | 17 | 187 | 2 (0)| 00:00:01 | 1 | 1 |
| 2 | PARTITION RANGE SINGLE| | 17 | 187 | 2 (0)| 00:00:01 | 2 | 2 |
|* 3 | INDEX RANGE SCAN | TEST_PK | 17 | 187 | 2 (0)| 00:00:01 | 2 | 2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("TRADE_DATE"=TO_DATE(' 2009-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

15 rows selected.

SQL>

So, as can be seen, in 11.1 dynamic sampling is selectively used, depending on what kind of partition pruning is recognized by the optimizer at parse time and whether statistics have been gathered for that partition; this also applies to the subpartition level. The same can be seen in 10.2.0.4, apart from the severe bug regarding single subpartition pruning in the 10.2.0.4 patch set release, as shown here.

As already mentioned a couple of times here on my blog, the optimizer code of pre-10.2.0.4 versions doesn't use subpartition level statistics even when pruned to a single subpartition and always reverts to the partition level.

Diagnosing and Resolving “gc block lost”

Last week, one of our clients had a sudden slowdown of all of their applications, which are running on a two-node RAC environment.

Below is the summary of the setup:
– Server and Storage: SunFire X4200 with LUNs on EMC CX300
– OS: RHEL 4.3 ES
– Oracle 10.2.0.3 (database and clusterware)
– Database Files, Flash Recovery Area, OCR, and Voting disk are located on OCFS2 filesystems
– Application: Forms and Reports (6i and also lower)

As per the DBA, the workload on the database was normal and there were no changes on the RAC nodes or the applications. Hmm, I can't really tell, because I haven't looked into their workload before, so I don't have past data to compare against.

I first set up OS Watcher on both nodes (with a 60-second snapshot interval) to have an end-to-end view of server performance while doing the diagnosis on the database. It's a cool tool that automatically gathers a lot of info when you start it (CPU, IO, memory, network, process info); it also has a graphing facility which makes it faster to detect spikes and trends.

Then, I looked into the database parameters and compared them with the RDA of the database that I had from before, just to check whether any parameter changes might have caused the sudden slowdown. Hmm... there was nothing new.

Then, I queried V$SESSION:

SQL> select count(*) from v$session;      -- server1

  COUNT(*)
----------
       385

SQL> select count(*) from v$session;     -- server2

  COUNT(*)
----------
    49

I was surprised! The session count on both servers shows that many more users are connected to server1 (almost 89% of the total users). This could be because:
1) the clients have lower versions (< SQL*Plus 8.1 or OCI8, see Note 97926.1) that may not support TAF (FAILOVER_MODE) and load balancing (LOAD_BALANCE), or
2) they are using TNS entries that explicitly connect to server1.
Since the RAC database has different applications connected to it, knowing which applications are on server1 and how many sessions they have will be enough to determine which users of particular applications could be moved to server2, to at least balance the number of users on both nodes. But doing this may still not solve the issue, as I don't have enough information that this is the root cause of the problem.

The query below shows that none of the users have FAILOVER capabilities, which could be attributed to the two points I mentioned above:

select distinct s.inst_id instance_id, s.failover_type failover_type, s.failover_method
failover_method, s.failed_over failed_over
from gv$session s;
INSTANCE_ID FAILOVER_TYPE        FAILOVER_METHOD      FAILED_OVER
----------- -------------------- -------------------- --------------------
          1 NONE                 NONE                 NO
          2 NONE                 NONE                 NO

The query below shows the distinct applications/modules and their session counts, which are all on server1. You'll also notice that most of them are inactive (some of these users should connect to server2 to balance the load across both nodes).

SQL> select distinct inst_id, module, program, status, count(*)
from gv$session
group by inst_id, module, program, status
order by 1,2,3,4;

   INST_ID MODULE                         PROGRAM                        STATUS       COUNT(*)
---------- -------------------------------------------------- -------------------------------------------------- ---------- ----------
     1 F45RUN32.EXE                       F45RUN32.EXE                   INACTIVE       23       

... output snipped ....

     1 xxxPgm                         xxxPgm                         INACTIVE       21      

... output snipped ....

     1 c:\xxxwin95\BIN\F45RUN32.EXE L:\xxx\VS\FORMS\MAI   c:\xxxwin95\BIN\F45RUN32.EXE L:\xxx\VS\FORMS\MAI   INACTIVE       44
     1 c:\xxxwin95\BIN\F45RUN32.EXE L:\MAIN.FMX       c:\xxxwin95\BIN\F45RUN32.EXE L:\MAIN.FMX       INACTIVE       31
     1 c:\xxxwin95\BIN\F45RUN32.EXE w:\xxxMENU.fmx       c:\xxxwin95\BIN\F45RUN32.EXE w:\xxxMENU.fmx     INACTIVE       14
     1 c:\xxxwin95\BIN\F45RUN32.EXE w:\xxxlogin.fmx       c:\xxxwin95\BIN\F45RUN32.EXE w:\xxxlogin.fmx   INACTIVE       96  

... output snipped ....

     1 c:\xxxwin95\BIN\R25SRV32.exe               c:\xxxwin95\BIN\R25SRV32.exe           INACTIVE       54      

... output snipped ....

     1 ifweb90.exe                        ifweb90.exe                    INACTIVE       11

The data below shows some samples from DBA_HIST_ACTIVE_SESS_HISTORY, graphed using Tanel Poder’s Perfsheet (which can be found here and here), to clearly show the distribution of modules and the number of users. Most of the users on server1 use “c:\xxxwin95\BIN\F45RUN32.EXE w:\xxxlogin.fmx”.
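
The samples behind the graph can be pulled with a query roughly like this (a sketch; the one-day time window is just an example):

select instance_number, module,
       count(distinct session_id) sessions,
       count(*)                   samples
from   dba_hist_active_sess_history
where  sample_time > sysdate - 1
group by instance_number, module
order by samples desc;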

[Figure: GcBlocksLost_1_SessionCount - session count per module and instance]

The graph above may lead us to the conclusion that the significant number of users on server1 contributes to the slowdown of the transactions. But as per the DBA, there were no changes made, and they were running the same transactions as before on server1 with acceptable response times. Also, OS Watcher shows that CPU utilization (peaking at 60% and mostly at 40%) and the run queue were low, disk IO utilization and service time were average (always below 10ms), there was enough memory on the server (3GB free out of 12GB; the SGA is 2GB), and there was no swapping.

Hmm... drilling down on the wait interface, at the session level for the sessions doing the important business operations, will give us a definite conclusion about what the real bottleneck on the database is.

The graph below is another sample from DBA_HIST_ACTIVE_SESS_HISTORY; it shows that server1 (in the blue box) is suffering from “gc cr block lost” and “gc cr multi block request” waits from 7am to 4pm. Metalink Doc ID 563566.1, “gc lost blocks diagnostics”, indicates that this is a cluster problem, which could be a problem with the network interconnect.
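
A similar breakdown by wait event can be taken from the same view, for example (a sketch):

select instance_number, wait_class, event, count(*) samples
from   dba_hist_active_sess_history
where  sample_time > sysdate - 1
and    session_state = 'WAITING'
group by instance_number, wait_class, event
order by samples desc;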

[Figure: GcBlocksLost_2_server1waits - wait events on server1 over time]

To confirm the bottleneck shown by the DBA_HIST_ACTIVE_SESS_HISTORY samples above, the ADDM and AWR reports from a peak period (2-3 and 3-4 pm) were analyzed. Below is the output of the ADDM and AWR reports:

ADDM output:

          DETAILED ADDM REPORT FOR TASK 'TASK_137854' WITH ID 137854
          ----------------------------------------------------------

              Analysis Period: 28-MAY-2009 from 13:58:59 to 14:59:12
         Database ID/Instance: 3967623528/2
      Database/Instance Names: xxx
                    Host Name: xxx
             Database Version: 10.2.0.3.0
               Snapshot Range: from 15642 to 15643
                Database Time: 3710 seconds
        Average Database Load: 1 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 52% impact (1917 seconds)
------------------------------------
Cluster multi-block requests were consuming significant database time.

   RECOMMENDATION 1: SQL Tuning, 37% benefit (1389 seconds)
      ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
         "0h0fn2d19adtk". Look for an alternative plan that does not use
         object scans.
         RELEVANT OBJECT: SQL statement with SQL_ID 0h0fn2d19adtk

         ... output snipped ... 

   RECOMMENDATION 2: SQL Tuning, 8.4% benefit (310 seconds)
      ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
         "8jd43xr4rp00u". Look for an alternative plan that does not use
         object scans.
         RELEVANT OBJECT: SQL statement with SQL_ID 8jd43xr4rp00u

         ... output snipped ... 

   RECOMMENDATION 3: SQL Tuning, 4.6% benefit (172 seconds)
      ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
         "6wpgf3s0vzuks". Look for an alternative plan that does not use
         object scans.
         RELEVANT OBJECT: SQL statement with SQL_ID 6wpgf3s0vzuks

         ... output snipped ... 

   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Inter-instance messaging was consuming significant database
               time on this instance. (94% impact [3488 seconds])
         SYMPTOM: Wait class "Cluster" was consuming significant database
                  time. (94% impact [3488 seconds])

FINDING 2: 51% impact (1874 seconds)
------------------------------------
Global Cache Service Processes (LMSn) in other instances were not processing
requests fast enough.

   RECOMMENDATION 1: DB Configuration, 51% benefit (1874 seconds)
      ACTION: Increase throughput of the Global Cache Service (LMSn)
         processes. Increase the number of Global Cache Service processes by
         increasing the value of the parameter "gcs_server_processes".
         Alternatively, if the host is CPU bound consider increasing the OS
         priority of the Global Cache Service processes.
      RATIONALE: The value of parameter "gcs_server_processes" was "2" during
         the analysis period.

   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Inter-instance messaging was consuming significant database
               time on this instance. (94% impact [3488 seconds])
         SYMPTOM: Wait class "Cluster" was consuming significant database
                  time. (94% impact [3488 seconds])

FINDING 3: 36% impact (1330 seconds)
------------------------------------
Cluster communications that were retried due to lost blocks consumed
significant database time.

   RECOMMENDATION 1: Host Configuration, 36% benefit (1330 seconds)
      ACTION: Check the configuration of the cluster interconnect. Check OS
         setup like adapter setting, firmware and driver release. Check that
         the OS's socket receive buffers are large enough to store an entire
         multiblock read. The value of parameter
         "db_file_multiblock_read_count" may be decreased as a workaround.
      RATIONALE: The instance was consuming 477 kilo bits per second of
         interconnect bandwidth.

   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Inter-instance messaging was consuming significant database
               time on this instance. (94% impact [3488 seconds])
         SYMPTOM: Wait class "Cluster" was consuming significant database
                  time. (94% impact [3488 seconds])

#######################################################################

          DETAILED ADDM REPORT FOR TASK 'TASK_137874' WITH ID 137874
          ----------------------------------------------------------

              Analysis Period: 28-MAY-2009 from 14:59:12 to 15:58:44
         Database ID/Instance: 3967623528/2
      Database/Instance Names: xxx
                    Host Name: xxx
             Database Version: 10.2.0.3.0
               Snapshot Range: from 15643 to 15644
                Database Time: 1563 seconds
        Average Database Load: .4 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 42% impact (654 seconds)
-----------------------------------
Cluster communications that were retried due to lost blocks consumed
significant database time.

   RECOMMENDATION 1: Host Configuration, 42% benefit (654 seconds)
      ACTION: Check the configuration of the cluster interconnect. Check OS
         setup like adapter setting, firmware and driver release. Check that
         the OS's socket receive buffers are large enough to store an entire
         multiblock read. The value of parameter
         "db_file_multiblock_read_count" may be decreased as a workaround.
      RATIONALE: The instance was consuming 134 kilo bits per second of
         interconnect bandwidth.

   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Inter-instance messaging was consuming significant database
               time on this instance. (88% impact [1371 seconds])
         SYMPTOM: Wait class "Cluster" was consuming significant database
                  time. (88% impact [1372 seconds])

AWR output:

WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxxx         3967623528 xxxxxx              2 10.2.0.3.0  YES xxx

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     15642 28-May-09 13:58:58        57      27.0
  End Snap:     15643 28-May-09 14:59:12        64      28.7
   Elapsed:               60.23 (mins)
   DB Time:               61.83 (mins)

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
gc cr multi block request            19,644       1,902     97   51.3    Cluster
gc cr block lost                      1,011       1,117   1104   30.1    Cluster
enq: WF - contention                    626         268    429    7.2      Other
gc current block lost                   186         203   1089    5.5    Cluster
cr request retry                        678         162    240    4.4      Other

Per Wait Class
~~~~~~~~~~~~~~~~~~
                                                                  Avg
                                       %Time       Total Wait    wait     Waits
Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
-------------------- ---------------- ------ ---------------- ------- ---------
Cluster                        34,450    7.0            3,385      98       6.8
Other                          91,292   64.7              467       5      18.1
User I/O                       26,510     .0              102       4       5.3
Concurrency                    29,668     .0               54       2       5.9
System I/O                     13,360     .0               11       1       2.6
Commit                          1,313     .0                4       3       0.3
Application                     2,374     .0                2       1       0.5
Network                       129,774     .0                0       0      25.7
Configuration                      20     .0                0       0       0.0

#######################################################################

WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxxx         3967623528 xxxxxx              2 10.2.0.3.0  YES xxx

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     15643 28-May-09 14:59:12        64      28.7
  End Snap:     15644 28-May-09 15:58:43        64      29.6
   Elapsed:               59.53 (mins)
   DB Time:               26.05 (mins)

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
gc cr block lost                        389         429   1103   27.5    Cluster
gc cr multi block request             2,360         422    179   27.0    Cluster
enq: WF - contention                    510         212    416   13.6      Other
gc current block lost                   188         204   1084   13.0    Cluster
cr request retry                        326         181    554   11.6      Other

Per Wait Class
~~~~~~~~~~~~~~~~~~
                                                                  Avg
                                       %Time       Total Wait    wait     Waits
Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
-------------------- ---------------- ------ ---------------- ------- ---------
Cluster                        10,239    8.9            1,247     122       2.1
Other                         153,082   71.9              468       3      31.0
Application                     1,784   18.0              161      90       0.4
Concurrency                    33,014     .0               53       2       6.7
User I/O                        4,246     .0               27       6       0.9
System I/O                     13,648     .0               14       1       2.8
Commit                          1,546     .0                4       3       0.3
Network                       138,333     .0                1       0      28.0
Configuration                      16     .0                0       4       0.0

The only thing left to do was to drill down on the complaining users and check the network performance and the interconnect... unfortunately, the users were going home and I couldn't see any significant database activity. So I called it a night, analyzed my activity log, and read Metalink Doc ID 563566.1. I also advised the DBA to divide the users across the nodes.
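
For the interconnect drill-down, the lost block statistics can also be tracked directly from the instance statistics, for example (a sketch; statistic names as they appear in 10.2):

select inst_id, name, value
from   gv$sysstat
where  name in ('gc blocks lost', 'gc blocks corrupt',
                'gc cr blocks received', 'gc current blocks received')
order by inst_id, name;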

The next afternoon it was time to do the drill-down. You can see in the graph below (a sample from GV$SESSION) that the users were already distributed, but there is still a performance problem. I told you this would not solve the issue!

Two weeks to go for ODTUG

ODTUG Kaleidoscope 2009 is approaching soon. I'll be hosting two presentations. Fat Databases: a Layered Approach - this will basically be the Helsinki Declaration talk, only crammed into just one hour. I'll probably skip the four observations and go straight to the WoD application and its code classification (DL, BL and UI-code). And close with a short demo by building a page (with Apex of course)