Determining optimal Amazon S3 transfer parallelism

Amazon’s Simple Storage Service (S3) is a robust, inexpensive and highly-available internet data storage service.  At Blue Gecko, we occasionally help our customers design and implement S3-based backup strategies.

Star Transformation

A little while ago I published a note explaining how it was possible to find queries which ran faster if you manually de-coupled the index and table accesses. Here’s a further example that came up in discussion on a client site recently. The query looks something like this (at least in concept, although it was a little more complex, with some messy bits around the edges):

select
	ord.*
from
	products	prd,
	customers	cst,
	orders		ord
where
	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;

There are indexes on products(id), customers(id) and orders(id), as well as indexes on orders(id_prd) and orders(id_cst). Basically it’s a collection of primary key and “foreign key” indexes – except there are no defined constraints and none of the indexes is unique.

The production “orders” table is very large (hundreds of millions of rows). There are lots of customers – and the orders for each customer are scattered throughout the entire orders table; similarly there are lots of products, and the orders for each product are scattered throughout the orders table. What execution plans (with the right indexes, of course) might you get for this query? There are two obvious possibilities:

-----------------------------------------------------------------
| Id  | Operation           | Name      | Rows  | Bytes | Cost  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT    |           |    11 |  1584 |   492 |
|*  1 |  HASH JOIN          |           |    11 |  1584 |   492 |
|*  2 |   TABLE ACCESS FULL | CUSTOMERS |   100 |   900 |    14 |
|*  3 |   HASH JOIN         |           |  3314 |   436K|   477 |
|*  4 |    TABLE ACCESS FULL| PRODUCTS  |   100 |   900 |    14 |
|   5 |    TABLE ACCESS FULL| ORDERS    |  1000K|   120M|   446 |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ORD"."ID_CST"="CST"."ID")
   2 - filter("CST"."GRP"=50)
   3 - access("ORD"."ID_PRD"="PRD"."ID")
   4 - filter("PRD"."GRP"=50)

-----------------------------------------------------------------------------
| Id  | Operation                      | Name       | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |    11 |  1584 | 10223 |
|   1 |  NESTED LOOPS                  |            |       |       |       |
|   2 |   NESTED LOOPS                 |            |    11 |  1584 | 10223 |
|   3 |    NESTED LOOPS                |            |  3314 |   436K|  3614 |
|*  4 |     TABLE ACCESS FULL          | PRODUCTS   |   100 |   900 |    14 |
|   5 |     TABLE ACCESS BY INDEX ROWID| ORDERS     |    33 |  4158 |    36 |
|*  6 |      INDEX RANGE SCAN          | ORD_PRD_FK |    33 |       |     2 |
|*  7 |    INDEX RANGE SCAN            | CST_I1     |     1 |       |     1 |
|*  8 |   TABLE ACCESS BY INDEX ROWID  | CUSTOMERS  |     1 |     9 |     2 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("PRD"."GRP"=50)
   6 - access("ORD"."ID_PRD"="PRD"."ID")
   7 - access("ORD"."ID_CST"="CST"."ID")
   8 - filter("CST"."GRP"=50)

The first option is to hash the two dimension tables into memory then probe each in turn after a tablescan of the orders table – and this could be a very effective choice in some cases; the second option is to drive from one of the dimension tables into the orders table, picking up a relatively large number of rows, then discard a large fraction of those rows by indexing (or possibly doing a hash join) into the second dimension table. (Obviously the roles of products and customers could be reversed in the nested loop plan I’ve shown).

Both plans are likely to be rather expensive – a full scan on orders could be a huge amount of I/O in the form of “db file scattered read” waits; the indexed access path from one dimension to the orders table could be a huge amount of I/O in the form of “db file sequential read” waits collecting rows we won’t eventually need. In the right circumstances (with the right data patterns), then, we might like to see a star transformation. But there are two possible problems:

  • It is “common knowledge” that you need primary keys on the dimension tables to do a star transformation. (In fact I was sure of this until about 30 minutes ago but then I discovered I was wrong.)
  • It is also “common knowledge” that you need bitmap indexes on the fact table to do a star transformation. Again this is wrong (but fortunately I’ve known about that for years).

So if we enable star transformations (alter session set star_transformation_enabled=true) we can get a plan that looks like this:

----------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                       | Rows  | Bytes | Cost  |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                            |     1 |   136 |   102 |

|   1 |  TEMP TABLE TRANSFORMATION            |                            |       |       |       |
|   2 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6601_17047D9 |       |       |       |
|*  3 |    TABLE ACCESS FULL                  | CUSTOMERS                  |   100 |   900 |    14 |
|   4 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6601_17047D9 |       |       |       |
|*  5 |    TABLE ACCESS FULL                  | PRODUCTS                   |   100 |   900 |    14 |

|*  6 |   HASH JOIN                           |                            |     1 |   136 |    74 |
|*  7 |    HASH JOIN                          |                            |     1 |   131 |    71 |

|   8 |     TABLE ACCESS BY INDEX ROWID       | ORDERS                     |    11 |  1400 |    68 |
|   9 |      BITMAP CONVERSION TO ROWIDS      |                            |       |       |       |
|  10 |       BITMAP AND                      |                            |       |       |       |

|  11 |        BITMAP MERGE                   |                            |       |       |       |
|  12 |         BITMAP KEY ITERATION          |                            |       |       |       |
|  13 |          TABLE ACCESS FULL            | SYS_TEMP_0FD9D6600_17047D9 |     1 |    13 |     2 |
|  14 |          BITMAP CONVERSION FROM ROWIDS|                            |       |       |       |
|* 15 |           INDEX RANGE SCAN            | ORD_CST_FK                 |       |       |     3 |

|  16 |        BITMAP MERGE                   |                            |       |       |       |
|  17 |         BITMAP KEY ITERATION          |                            |       |       |       |
|  18 |          TABLE ACCESS FULL            | SYS_TEMP_0FD9D6601_17047D9 |     1 |    13 |     2 |
|  19 |          BITMAP CONVERSION FROM ROWIDS|                            |       |       |       |
|* 20 |           INDEX RANGE SCAN            | ORD_PRD_FK                 |       |       |     3 |

|  21 |     TABLE ACCESS FULL                 | SYS_TEMP_0FD9D6600_17047D9 |   100 |   500 |     2 |
|  22 |    TABLE ACCESS FULL                  | SYS_TEMP_0FD9D6601_17047D9 |   100 |   500 |     2 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("CST"."GRP"=50)
   5 - filter("PRD"."GRP"=50)
   6 - access("ORD"."ID_PRD"="C0")
   7 - access("ORD"."ID_CST"="C0")
  15 - access("ORD"."ID_CST"="C0")
  20 - access("ORD"."ID_PRD"="C0")

In this example you can see that Oracle has decided to use a hash join to do a “join-back” to the dimension tables and has decided to extract the subset of rows (grp = 50) from each of those tables and save them in a global temporary table. This join-back is a little odd since the query isn’t selecting any data from those two tables so we shouldn’t need to do it – but I think that may be a side effect of not having the primary keys declared.

I’ve broken the plan up a little bit to make it a little easier to see the critical steps:

  • Lines 11 – 15: we select the index ranges we want from the customer index on orders, convert to bitmaps and merge.
  • Lines 16 – 20: we do the same for the products index on orders.
  • Lines 8 – 10: we do the “bitmap and” between the two constructed bit strings, convert to rowids, and pick up exactly the rows we want.
  • Lines 6, 7, 21 and 22: two hash joins to the temporary tables which represent the subsets of rows we have from customers and products.
  • Lines 1 – 5: this is where we started, extracting the two subsets into a form of internalised global temporary table.
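In this test the optimizer picked the star transformation as soon as the feature was enabled. If it’s legal in your case but the optimizer doesn’t choose it, a gentler option to try before rewriting anything by hand is the documented star_transformation hint – a sketch only, and the effect may still depend on your star_transformation_enabled setting:

select
	/*+ star_transformation */
	ord.*
from
	products	prd,
	customers	cst,
	orders		ord
where
	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;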

If this doesn’t work well enough for you – and, allowing for a few little tweaks here and there, it really should – or if there is something about your code that makes it impossible for Oracle to do a star transformation, here’s an example of taking total control by restructuring the SQL to do a similar operation:

select
	ord.*
from
	(
	select
		/*+
			leading(prd ord)
			use_nl(ord)
			no_merge
		*/
		ord.rowid 	prid
	from
		products	prd,
		orders		ord
	where
		prd.grp = 50
	and	ord.id_prd = prd.id
	)	prid,
	(
	select
		/*+
			leading(cst ord)
			use_nl(ord)
			no_merge
		*/
		ord.rowid 	crid
	from
		customers	cst,
		orders		ord
	where
		cst.grp = 50
	and	ord.id_cst = cst.id
	)	crid,
	orders	ord
where
	prid.prid = crid.crid
and	ord.rowid = prid.prid
;

With the no_merge hints we produce two result sets: prid – the rowids from the orders table for all rows which match the products we are interested in; and crid – the rowids from the orders table for all rows which match the customers we are interested in.

We then do a join between crid and prid – which has to leave us with just those rowids for the rows which represent the customers we’re interested in who have placed orders for products we’re interested in. So we can then do a join, by rowid, to the orders table to collect the data for those orders. Here’s the plan:

--------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    11 |  1650 |   570 |
|   1 |  NESTED LOOPS               |            |    11 |  1650 |   570 |
|*  2 |   HASH JOIN                 |            |    11 |   264 |   559 |
|   3 |    VIEW                     |            |  3361 | 40332 |   277 |
|   4 |     NESTED LOOPS            |            |  3361 | 87386 |   277 |
|*  5 |      TABLE ACCESS FULL      | CUSTOMERS  |    98 |   882 |    81 |
|*  6 |      INDEX RANGE SCAN       | ORD_CST_FK |    34 |   578 |     2 |
|   7 |    VIEW                     |            |  3390 | 40680 |   281 |
|   8 |     NESTED LOOPS            |            |  3390 | 88140 |   281 |
|*  9 |      TABLE ACCESS FULL      | PRODUCTS   |   100 |   900 |    81 |
|* 10 |      INDEX RANGE SCAN       | ORD_PRD_FK |    34 |   578 |     2 |
|  11 |   TABLE ACCESS BY USER ROWID| ORDERS     |     1 |   126 |     1 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PRID"."PRID"="CRID"."CRID")
   5 - filter("CST"."GRP"=50)
   6 - access("ORD"."ID_CST"="CST"."ID")
   9 - filter("PRD"."GRP"=50)
  10 - access("ORD"."ID_PRD"="PRD"."ID")

The SQL has obviously become more complex – and complexity is generally something to be wary of as it’s an easy step from “complex” to “wrong”. So don’t try something like this unless you’re sure it’s necessary, and then document what you’re doing very carefully.

Footnote:

If you want to play around with this example, here’s the code to generate the tables. I was using my standard LMT, 1MB uniform extents, 8KB block size, freelist management model. The database was version 11.1.0.6 and CPU costing was disabled.

create table products
as
select
	rownum			id,
	mod(rownum,300)		grp,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	all_objects
where
	rownum <= 30000;

create index prd_i1 on products(id);
-- alter table products add constraint prd_pk primary key(id);

create table customers
as
select
	rownum			id,
	mod(rownum,300)		grp,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	all_objects
where
	rownum <= 30000;

create index cst_i1 on customers(id);
-- alter table customers add constraint cst_pk primary key(id);

create table orders
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 1000)
select
	rownum			id,
	trunc(dbms_random.value(1,30000))	id_prd,
	trunc(dbms_random.value(1,30000))	id_cst,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum  <= 1000000;

create index ord_prd_fk on orders(id_prd);
create index ord_cst_fk on orders(id_cst);
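If your version doesn’t collect statistics as part of the create table as select, you will probably also want stats on the three tables before testing. A sketch (not part of the original script – adjust to taste):

begin
	dbms_stats.gather_table_stats(user, 'PRODUCTS',  method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'CUSTOMERS', method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'ORDERS',    method_opt => 'for all columns size 1');
end;
/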

Updated XML Content & Paper section

Once per year I try to update the “XML Content” page that, in principle, should contain all my XML-relevant thoughts and “how to” posts, so I can find my gained “wisdom” a bit more quickly (getting old and stuff). This year I had only 14 or so such XML-related posts, which was easily explained when I updated my “Paper” page, where among other things I keep a list of “presentations” done… Oops… I have been busy…

Anyway. Enjoy the updated XML content overview reference page; it now contains 100+ posts on specific XML(DB)-related howtos, solutions, approaches, ideas, etc.

 

More CR

Following on from yesterday’s post on consistent reads, I thought I’d make the point that the way you work can make an enormous difference to the amount of work you do. Here’s a silly little demo (in 10.2.0.3):

drop table t1 purge;
create table t1 (id number, n1 number);
insert into t1 values (1,0);
insert into t1 values (2,0);
commit;

execute dbms_stats.gather_table_stats(user,'t1')
execute snap_my_stats.start_snap

begin
	for i in 1..1000 loop
		update t1 set n1 = i where id = 1;
	end loop;
end;
/

execute snap_my_stats.end_snap

set doc off
doc

Output - 10.2.0.3 (some rows deleted)
Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                   11
user calls                                                                   6
recursive calls                                                          1,068
recursive cpu usage                                                          7
session logical reads                                                    4,041
CPU used when call started                                                   7
CPU used by this session                                                     7
DB time                                                                      6
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                          3,011
consistent gets from cache                                               3,011
consistent gets - examination                                                4
db block changes                                                         2,015
change write time                                                            4
free buffer requested                                                    1,014
switch current to new buffer                                             1,000
calls to kcmgas                                                          1,014
calls to get snapshot scn: kcmgss                                        3,009
redo entries                                                               960
redo size                                                              295,160
undo change vector size                                                111,584
no work - consistent read gets                                           1,004
table scans (short tables)                                               1,001
table scan rows gotten                                                   2,002
table scan blocks gotten                                                 1,001
buffer is pinned count                                                   1,000
execute count                                                            1,009

#

I’ve created two rows in a table, then updated one of them 1,000 times – using a table scan to do the update. I haven’t yet committed my transaction. At this point I’m going to use a second session to run the same update loop on the second row in the table:

begin
	for i in 1..1000 loop
		update t1 set n1 = i where id = 2;
	end loop;
end;
/

Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                    8
user calls                                                                   6
recursive calls                                                          1,009
recursive cpu usage                                                        170
session logical reads                                                  965,999
CPU used when call started                                                 172
CPU used by this session                                                   172
DB time                                                                    173
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                        964,969
consistent gets from cache                                             964,969
consistent gets - examination                                          961,965
db block changes                                                         3,016
consistent changes                                                   1,001,000
free buffer requested                                                    1,015
CR blocks created                                                        1,001
calls to kcmgas                                                          1,015
calls to get snapshot scn: kcmgss                                        3,008
redo entries                                                             1,936
redo size                                                              358,652
undo change vector size                                                111,608
data blocks consistent reads - undo records applied                  1,001,000
cleanouts and rollbacks - consistent read gets                           1,001
immediate (CR) block cleanout applications                               1,001
active txn count during cleanout                                         2,000
cleanout - number of ktugct calls                                        1,001
IMU CR rollbacks                                                        41,041
table scans (short tables)                                               1,001
table scan rows gotten                                                   2,002
table scan blocks gotten                                                 1,001
execute count                                                            1,006

Many of the statistics are (virtually) identical (e.g. “execute count”, “db block gets”, “free buffer requested”); some show an increase by 1,000 (often from zero) – largely because we have to worry 1,000 times about cleaning out the current block and creating a read-consistent version so that we can check if it can be updated.

But the most noticeable changes are in the CPU time and “consistent gets”, because of the 1,000 times we have to apply 1,000 undo records to the block as we create the read-consistent version of the block. The CPU time has gone from 7 (hundredths of a second) to 172 because of (roughly) 1,000,000 “consistent gets – examination”. As I mentioned yesterday, this matches closely to “data blocks consistent reads – undo records applied”, so we know why they are happening. Watch out in your batch jobs – if you have a lot of concurrent update activity going on, a significant fraction of the workload may be the constant re-creation of CR clones.

However, there is another interesting detail to watch out for – what happens if I change the update execution path from a tablescan to an indexed access path:

create table t1 (id number, n1 number);
insert into t1 values (1,0);
insert into t1 values (2,0);
commit;

execute dbms_stats.gather_table_stats(user,'t1')
create index t1_i1 on t1(id);                            --  Make indexed access path available.

Then with an index hint in my update code, I get the following effects (having done the same update loop on row 1 in the first session, of course):


begin
	for i in 1..1000 loop
		update /*+ index(t1) */ t1 set n1 = i where id = 2;     -- indexed access path hinted
	end loop;
end;
/

Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                    7
user calls                                                                   6
recursive calls                                                          1,006
recursive cpu usage                                                         11
session logical reads                                                    2,036
CPU used when call started                                                  11
CPU used by this session                                                    11
DB time                                                                     11
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                          1,006
consistent gets from cache                                               1,006
consistent gets - examination                                                6
db block changes                                                         2,015
free buffer requested                                                       14
shared hash latch upgrades - no wait                                     1,000
calls to kcmgas                                                             14
calls to get snapshot scn: kcmgss                                        1,004
redo entries                                                               960
redo size                                                              295,144
undo change vector size                                                111,608
index crx upgrade (positioned)                                           1,000
index scans kdiixs1                                                      1,000
execute count                                                            1,005

The difference is astonishing – where did all the ‘create CR copy’ activity go?

I’ve pointed out before now that choosing a different execution plan for an update can have a big impact on performance – this is just another example demonstrating the point.

Consistent Reads

Here’s a quick demo to make a point about consistent reads (prompted by a question on the Oracle-L mailing list):

SQL> drop table t1;

Table dropped.

SQL> create table t1 (n1 number);

Table created.

SQL> insert into t1  values(0);

1 row created.

SQL> begin
  2  for i in 1..1000 loop
  3  update t1 set n1 = i;
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

Note that I haven’t issued a commit in this session, and all I’ve got is a single row in the table (and because it’s my usual demo setup of locally managed tablespaces with uniform extents of 1MB using freelist management I know that that one row is in the first available block of the table).

How much work is a second session going to do to scan that table?

SQL> alter system flush buffer_cache;
SQL> execute snap_my_stats.start_snap
SQL> select * from t1;
SQL> set serveroutput on size 1000000 format wrapped
SQL> execute snap_my_stats.end_snap
---------------------------------
Session stats - 18-Apr 11:33:01
Interval:-  2 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
session logical reads                                                      967
consistent gets                                                            967
consistent gets from cache                                                 967
consistent gets - examination                                              964
consistent changes                                                       1,001
CR blocks created                                                            1
data blocks consistent reads - undo records applied                      1,001
IMU CR rollbacks                                                            41

The snap_my_stats package is similar in concept to Tom Kyte’s “runstats” or Tanel Poder’s “snapper” program to capture changes in values in the dynamic performance views over short time periods. In this case I’ve deleted all but a few of the larger changes, and a couple of small changes.
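If you don’t have any of those tools to hand, a minimal substitute (purely illustrative – the table name is made up) is to copy your session statistics into a table before the test and report the deltas afterwards:

-- sketch only: snapshot the session statistics, run the test, report the deltas
create table session_stats_snap as
select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	sn.statistic# = ms.statistic#;

--	... run the statement(s) you want to measure ...

select	sn.name, ms.value - s.value	delta
from	v$statname sn, v$mystat ms, session_stats_snap s
where	sn.statistic# = ms.statistic#
and	sn.name = s.name
and	ms.value - s.value != 0
order by
	delta
;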

The figure that stands out (probably) is the “session logical reads” – we’ve done 967 logical I/Os to scan a table of just one block. The reason for this is that we’ve created a read-consistent copy of that one block (“CR blocks created” = 1), and it has taken a lot of work to create that copy. We’ve had to apply 1,001 undo records (“data blocks consistent reads – undo records applied” = 1001).

Most of those undo records come from individual accesses (which are of the cheaper “consistent gets – examination” type that only need a single get on the “cache buffers chains” latch) to undo blocks, following the “UBA (undo block address)” pointer in the relevant ITL entry of the table block, but since this is a 10g database the last few undo records come out of the “In-memory Undo” of the other session. Basically the cloning operation is something like this:

  1. Get current block
  2. Notice uncommitted transaction
  3. Clone current block – then ignore current block
  4. Get Undo block indicated by uncommitted ITL and apply undo change vector
  5. Repeat step four – 1,000 times
  6. Block is now clear of all uncommitted transactions
  7. There are no committed transactions with a commit SCN later than the start of query
  8. Display contents of block

It is an interesting point that as the first session created undo records it would pin and fill undo blocks – so would only do a few current gets (one for each block) on the undo blocks it was using. As another process reverses out the changes in a CR clone it has to get and release each undo block every time it wants a single undo record … applying undo records introduces far more latch and buffer activity than the original generation of the undo did.

Footnote

It’s worth knowing that there are three statistics relating to applying undo records:

transaction tables consistent reads - undo records applied        Estimating "old" commit SCNs during delayed block cleanout
data blocks consistent reads - undo records applied               Creating CR clones
rollback changes - undo records applied                           The result of a real "rollback;"
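If you want to keep an eye on these three figures for your own session, here is a quick sketch of a query against the dynamic performance views:

select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	sn.statistic# = ms.statistic#
and	sn.name in (
		'transaction tables consistent reads - undo records applied',
		'data blocks consistent reads - undo records applied',
		'rollback changes - undo records applied'
	);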

See this posting on comp.databases.oracle.server for a rough description of transaction table consistent reads; and this elderly posting highlighting a benefit of knowing about rollback changes.

Footnote 2

The second step in the list of actions is: “Notice uncommitted transaction”. It’s probably worth pointing out that another part of the ITL entry holds the transaction id (“xid”) which implicitly identifies the undo segment and transaction table slot in that segment that has been used to hold the transaction state. The current contents of that slot allow Oracle to determine whether or not (and when, if necessary) the transaction was committed.
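As an aside, for your own active transaction you can see the three components of that xid (undo segment number, transaction table slot, and sequence number) with a query like this sketch:

select	xidusn, xidslot, xidsqn
from	v$transaction
where	addr = (
		select	taddr
		from	v$session
		where	sid = sys_context('userenv', 'sid')
	);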

Brain Teaser: 10046 Extended SQL Trace Shows “EXEC #435118472:c=15600,e=510”, How is that Possible?

April 14, 2011 From time to time I find interesting details about Oracle Database. I am in the process of organizing material for a presentation, and I encountered something interesting when testing Oracle Database 11.2.0.2 with patch 3 applied. I traced the execution of an application and then processed the 660MB trace file using a trace file [...]

HOWTO: Partition Binary XML based on a Virtual Column

This one is long overdue. There is a partitioning example for binary XML on this website based on Range, Hash and List partitioning, but, seen from an XML point of view, that is an incorrect way to do it, because the partition column is a regular column and not a virtual column as described in the Oracle XML DB Developer’s Guide for 11.2. The examples given in that post partition the table on the ID column but do not partition the XML based on a value/element IN the XML document. The following will.

So here is a small example of how it can be done based on a virtual column. Be aware that you should support these virtual columns with at least an index or XMLIndex structure for performance reasons (see the sketch after the CREATE TABLE statement below).

CREATE TABLE binary_part_xml OF XMLType
  XMLTYPE STORE AS SECUREFILE BINARY XML
  VIRTUAL COLUMNS
  (
    LISTING_TYPE AS (XMLCast(XMLQuery('/LISTING/@TYPE'
         PASSING OBJECT_VALUE RETURNING CONTENT) AS VARCHAR2(100)))
  )
  PARTITION BY LIST (LISTING_TYPE)
  (
    PARTITION health VALUES ('Health'),
    PARTITION law_firms VALUES ('Law Firm')
  );
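For example (a sketch – the index name is made up), a simple local b-tree index on the virtual column; depending on your queries a (local) XMLIndex on OBJECT_VALUE may be the better choice:

CREATE INDEX binary_part_xml_type_i
  ON binary_part_xml (listing_type)
  LOCAL;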

M.

Reminder and Public Appearances 2011

First, a reminder – my Advanced Oracle Troubleshooting v2.0 online seminar starts next week already. Last chance to sign up – I can accept registrations until Sunday :-)

I won’t do another AOT seminar before Oct (or Nov) this year. More details and sign-up here:

I have rescheduled my Advanced SQL Tuning and Partitioning & Parallel Execution for Performance seminars too. I will do them in September/October. Unfortunately I’m too busy right now to do them before the summer.

Public Appearances:

  • I will be speaking at the UKOUG Exadata Special Event in London on 18th April
  • I have submitted a few papers for Oracle OpenWorld in San Francisco as well (end of Sep/beginning of Oct), all about Exadata. Let’s see how it goes, but I’ll be there anyway, which means that I’ll probably show up at the Oracle Closed World event too!

And that’s all the travel I will do this year…

Virtual Conferences:

I’ll soon announce the 2nd EsSN virtual conference too ;-)

Free online stuff:

Perhaps in a month or so I will do another hacking session (I’ll plan 2 hours this time, 1 hour isn’t nearly enough for going deep). The topic will probably be about low-level details of SQL plan execution internals… stay tuned!

Latch contention troubleshooting case study and Flashback Database performance issues with LOBs

Steve Bamber has written up a case study of library cache latch contention troubleshooting of an Apex application with LatchProf. I’m happy that others also see the value and have had success with my new LatchProf-based latch contention troubleshooting approach, which takes into account both sides of the contention story (latch waiters and latch holders/blockers) as opposed to the guesswork used previously (hey, if it’s shared pool latch contention – it must be about bad SQL not using bind variables …. NOT always…)

Anyway, I’m happy. If you have success stories with LatchProf, please let me know!

As a second topic of interest, Laimutis Nedzinskas has written some good notes about the effect and overhead of the Flashback Database option when you are using and modifying (nocache) LOBs. We’ve exchanged some mails on this topic and yeah, my clients have sure seen some problems with this combination as well. You basically want to keep your LOBs cached when using Flashback Database…
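If you hit this, the usual fix (a sketch – the table and column names here are invented) is simply to switch the LOB to CACHE storage:

-- hypothetical example: change an existing nocache LOB column to CACHE so that
-- changes go through the buffer cache instead of direct (nocache) writes
alter table document_store modify lob (doc_payload) (cache);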

Oracle Troubleshooting TV Show: Season 1, Episode 01 ;-)

Ok, it’s official – the first and only Oracle Troubleshooting TV show is live now!

The first show is almost 2 hours about the ORA-4031 errors and shared pool hacking. It’s a recording of the US/EMEA timezone online hacking session I did some days ago.

There are a couple of things to note:

  1. The text still isn’t as sharp as in the original recording, but it’s much better than in my previous upload attempts and is decently readable. I’ll try some more variations with my next shows so I hope the text quality will get better! Or maybe I should just switch to GUI tools or powerpoint slides? ;-)
  2. You probably should view this video in full screen (otherwise the text will be tiny and unreadable)
  3. There’s advertising in the beginning (and maybe end) of this show! I’ll see how much money I’ll make out of this – maybe these shows will start contributing towards the awesome beer selection I’ll have in my fridge some day (right now I have none). Viewing a 30-sec advert is a small price to pay for 2 hours of kick-ass shared pool hacking content !!!
  4. You can download the scripts and tools used in the demos from http://tech.e2sn.com/oracle-scripts-and-tools/
  5. Make sure you check out my online Oracle troubleshooting seminars too (this April and May already)

View the embedded video below or go to my official Oracle Troubleshooting TV show channel:

http://tanelpoder.blip.tv

Enjoy!