Transactions 2

Here’s a little follow-on from Friday’s posting. I’ll start it off as a quiz, and follow up tomorrow with an explanation of the results (though someone will probably have given the correct solution by then anyway).

I have a simple heap table t1(id number(6,0), n1 number, v1 varchar2(10), padding varchar2(100)). The primary key is the id column, and the table holds 3,000 rows where id takes the values from 1 to 3,000. There are no other indexes. (I’d show you the code, but I don’t want to make it too easy to run it; I want you to try to work the answer out in your heads.)
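If you do want to recreate the setup after you’ve had a go at the quiz, something along the following lines would match the description; the exact contents of n1, v1 and padding are not significant, only the shape of the table, the 3,000 values of id, and the primary key matter.

create table t1 (
	id	number(6,0),
	n1	number,
	v1	varchar2(10),
	padding	varchar2(100)
);

insert into t1
select
	rownum, rownum, 'x', rpad('x',100,'x')
from
	dual
connect by
	level <= 3000
;

alter table t1 add constraint t1_pk primary key(id);

commit;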

I run the following pl/sql block.

begin
	--
	--	Going to fail on primary key constraint
	--
	for i in 1..200 loop
		begin
			insert into t1 values(50,i,'x','x');
			commit;
		exception
			when others then null;
		end;
	end loop;

	--
	--	Explicit rollback
	--
	for i in 1..40 loop
		begin
			insert into t1 values(4000 + i,i,'x','x');
			rollback;
		end;
	end loop;

	--
	--	Explicit commit
	--
	for i in 1..50 loop
		begin
			insert into t1 values(4500 + i,i,'x','x');
			commit;
		end;
	end loop;

	--
	--	Rolling back to savepoint
	--
	for i in 1..70 loop
		begin
			savepoint A;
			insert into t1 values(4600 + i,i,'x','x');
			rollback to savepoint A;
		end;
	end loop;

end;
/

If (from another session) you take an AWR (or Statspack) snapshot before and after running the block, what will you see for the following statistics:

user commits
user rollbacks
transaction rollbacks
rollback changes - undo records applied

and, for a bonus point, what will the value of “Rollback per transaction %” be?

Note – since the snapshots are global, your results will be affected by other work on the system; in particular I noticed that simply taking two AWR snapshots in a row on the instance I used resulted in 8 user commits.
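If you want to check your answers and you’re licensed to use AWR, the two snapshots can be taken from a second session with calls like the following, before running the standard awrrpt.sql report. If you’d rather skip the report you could simply capture the relevant v$sysstat figures before and after the block and take the differences (still instance-wide, of course).

execute dbms_workload_repository.create_snapshot

--	run the pl/sql block, then take the second snapshot

execute dbms_workload_repository.create_snapshot

select	name, value
from	v$sysstat
where	name in (
		'user commits',
		'user rollbacks',
		'transaction rollbacks',
		'rollback changes - undo records applied'
	)
;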

 

Update

And the answers – cut from an AWR report – are:


Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
rollback changes - undo records                 420           25.2           4.3
transaction rollbacks                           240            4.7           2.5
user commits                                     58            1.1           0.6
user rollbacks                                   40            0.8           0.4

 Rollback per transaction %:   40.82       Rows per Sort:   122.70

The 40 “user rollbacks” come from the loop with the explicit rollback. At the same time those 40 rollbacks introduced 80 “rollback changes - undo records applied”: one for the primary key index and one for the table for each call to rollback. Since these were “real” rollbacks they also account for 40 of the “transaction rollbacks”.

The 50 “user commits” come from the loop with the explicit commit and account for most of the figure of 58; the picture is slightly blurred by the fact that simply taking the AWR snapshots introduced a few extra commits of their own.
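If you want a cross-check that isn’t affected by whatever else the instance is doing, you could read your own session’s figures from v$sesstat before and after running the block. A minimal sketch of the query:

select	sn.name, ss.value
from
	v$sesstat	ss,
	v$statname	sn
where	sn.statistic# = ss.statistic#
and	ss.sid = to_number(sys_context('userenv','sid'))
and	sn.name in (
		'user commits',
		'user rollbacks',
		'transaction rollbacks',
		'rollback changes - undo records applied'
	)
order by
	sn.name
;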

The 40.82% is “user rollbacks” divided by (“user rollbacks” + “user commits”), i.e. 40 / (40 + 58), expressed as a percentage.

The “rollback to savepoint” calls haven’t contributed to the user rollbacks – even though, in this case, each savepoint was taken before its transaction had started, so the 70 calls have rolled back 70 transaction starts, and a check of various other statistics would show a TX enqueue being requested and released and transactions rotating through the undo segments. Despite this clearing of the TX locks, the rollbacks to savepoint don’t count towards transaction rollbacks; they have, however, introduced a further 140 “rollback changes - undo records applied” (again, one for the table and one for the index for each call).
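If you want to see the evidence of those short-lived transactions for yourself, a simple (if crude) check is to capture the TX enqueue figures and the undo segment activity before and after the savepoint loop; for example:

select	eq_type, total_req#, succ_req#
from	v$enqueue_stat
where	eq_type = 'TX'
;

select	usn, xacts, gets, writes
from	v$rollstat
;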

Finally, the failed attempts to insert duplicate keys have, in these circumstances, introduced 200 transaction rollbacks (though not user rollbacks – we didn’t ask for them explicitly). They are also responsible for the final 200 “rollback changes - undo records applied” that we need to account for. The changes, very specifically, are the changes to the table: Oracle does actually have to insert the row into the table before trying to insert the index entry (after all, the index entry needs to know the rowid), and so it has to reverse that insert when it discovers that the relevant entry already exists in the primary key index. The requirement to insert and then roll back is one of the reasons why it is often better, in a “batch merge by pl/sql loop”, to try an update and then insert if sql%rowcount is zero, rather than trying to insert, capturing the exception, and then updating. It’s also a good reason for creating the unique indexes on a table before creating the non-unique indexes: Oracle maintains indexes in the order they were created, so you want it to find the duplicate key error as soon as possible to minimise the amount of change and rollback.
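For anyone who hasn’t met it, the “update first” strategy looks something like the following sketch (the values are purely illustrative):

begin
	update t1 set
		n1 = 99
	where
		id = 50;

	if sql%rowcount = 0 then
		insert into t1 values(50, 99, 'x', 'x');
	end if;

	commit;
end;
/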

If you want to experiment further, there are a couple of slightly different variations of the code that exhibit interesting variations in results. There are also other events that can cause transaction rollbacks to appear without user rollbacks being invoked.