Here’s a quick demo to make a point about consistent reads (prompted by a question on the Oracle-L mailing list):
SQL> drop table t1;

Table dropped.

SQL> create table t1 (n1 number);

Table created.

SQL> insert into t1 values(0);

1 row created.

SQL> begin
  2    for i in 1..1000 loop
  3      update t1 set n1 = i;
  4    end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.
Note that I haven’t issued a commit in this session, and all I’ve got is a single row in the table (and because it’s my usual demo setup of locally managed tablespaces with uniform extents of 1MB using freelist management I know that that one row is in the first available block of the table).
How much work is a second session going to do to scan that table?
SQL> alter system flush buffer_cache;
SQL> execute snap_my_stats.start_snap
SQL> select * from t1;
SQL> set serveroutput on size 1000000 format wrapped
SQL> execute snap_my_stats.end_snap

---------------------------------
Session stats - 18-Apr 11:33:01
Interval:-  2 seconds
---------------------------------
Name                                                   Value
----                                                   -----
session logical reads                                    967
consistent gets                                          967
consistent gets from cache                               967
consistent gets - examination                            964
consistent changes                                     1,001
CR blocks created                                          1
data blocks consistent reads - undo records applied    1,001
IMU CR rollbacks                                          41
The snap_my_stats package is similar in concept to Tom Kyte’s “runstats” or Tanel Poder’s “snapper” program to capture changes in values in the dynamic performance views over short time periods. In this case I’ve deleted all but a few of the larger changes, and a couple of small changes.
The figure that stands out (probably) is the “session logical reads” – we’ve done 967 logical I/Os to scan a table of just one block. The reason for this is that we’ve created a read-consistent copy of that one block (“CR blocks created” = 1), and it has taken a lot of work to create that copy. We’ve had to apply 1,001 undo records (“data blocks consistent reads – undo records applied” = 1,001).
Most of those undo records come from individual accesses to undo blocks (which are of the cheaper “consistent gets – examination” type that only need a single get on the “cache buffers chains” latch), following the “UBA (undo block address)” pointer in the relevant ITL entry of the table block, but since this is a 10g database the last few undo records come out of the “In-memory Undo” of the other session. Basically the cloning operation is something like this:

1. Clone the current version of the block
2. Notice uncommitted transaction in an ITL entry of the clone
3. Follow the UBA pointer in that ITL entry to the relevant undo record
4. Apply the undo record to the clone
5. Repeat from step 2 until the clone is consistent with the required SCN
It is an interesting point that as the first session created undo records it would pin and fill undo blocks – so would only do a few current gets (one for each block) on the undo blocks it was using. As another process reverses out the changes in a CR clone it has to get and release each undo block every time it wants a single undo record … applying undo records introduces far more latch and buffer activity than the original generation of the undo.
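As a toy illustration of the cloning operation (every name and structure here is invented for the sketch – real Oracle blocks, ITL entries and undo records are far more complex), the key point is that the clone is rolled back one undo record at a time, so the cost of the read scales with the length of the undo chain:

```python
# Toy model of building a "consistent read" (CR) clone of a block.
# In the demo, one transaction inserted a row then updated it 1,000
# times without committing, so a reader must reverse out all 1,001
# changes before it can see the (empty) committed version of the block.

undo_records_applied = 0  # models "data blocks consistent reads - undo records applied"

def create_cr_clone(block, undo_chain):
    """Clone the current block, then apply undo records (newest first)
    until the uncommitted transaction's changes are all reversed out."""
    global undo_records_applied
    clone = dict(block)                      # clone the current version of the block
    for undo in undo_chain:                  # follow the undo chain, one record per step
        if undo["op"] == "update":
            clone["n1"] = undo["old_value"]  # apply one undo record
        else:
            clone.clear()                    # reverse the original insert: row vanishes
        undo_records_applied += 1
    return clone

# Current block holds the last updated value; the undo chain holds the
# 1,000 prior values (999 down to 0) plus one record undoing the insert.
block = {"n1": 1000}
undo_chain = [{"op": "update", "old_value": v} for v in range(999, -1, -1)]
undo_chain.append({"op": "reverse_insert"})

cr_block = create_cr_clone(block, undo_chain)
print(cr_block)                # {} - the committed (empty) version of the block
print(undo_records_applied)    # 1001
```

The current block is untouched throughout – only the clone changes – which is why the reading session does all this work without blocking, or being blocked by, the updating session.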
It’s worth knowing that there are three statistics relating to applying undo records:
transaction tables consistent reads - undo records applied
        Estimating "old" commit SCNs during delayed block cleanout

data blocks consistent reads - undo records applied
        Creating CR clones

rollback changes - undo records applied
        The result of a real "rollback;"
The second step in the list of actions is: “Notice uncommitted transaction”. It’s probably worth pointing out that another part of the ITL entry holds the transaction id (“xid”) which implicitly identifies the undo segment and transaction table slot in that segment that has been used to hold the transaction state. The current contents of that slot allow Oracle to determine whether or not (and when, if necessary) the transaction was committed.
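To make that lookup concrete, here is a toy model (all names and values invented) of how an xid in an ITL entry identifies a transaction table slot, and how the slot’s contents answer the “committed or not, and when” question:

```python
# Toy model: xid = (undo segment number, slot number, wrap/sequence number).
# The transaction table of each undo segment has a fixed number of slots,
# so slots get reused; the wrap number tells a reader whether the slot
# still describes the transaction named in the ITL entry.

transaction_tables = {
    5: {  # undo segment 5 (invented contents)
        12: {"wrap": 9, "committed": True,  "commit_scn": 1_234_567},
        13: {"wrap": 4, "committed": False, "commit_scn": None},
    },
}

def transaction_state(xid):
    """Return the state of the transaction identified by xid."""
    usn, slot, wrap = xid
    entry = transaction_tables[usn][slot]
    if entry["wrap"] != wrap:
        # Slot reused since this ITL entry was written: the transaction
        # must have committed some time ago (exact SCN no longer in the slot).
        return ("committed, slot overwritten", None)
    if entry["committed"]:
        return ("committed", entry["commit_scn"])
    return ("active - undo must be applied", None)

print(transaction_state((5, 12, 9)))  # ('committed', 1234567)
print(transaction_state((5, 13, 4)))  # ('active - undo must be applied', None)
print(transaction_state((5, 12, 3)))  # ('committed, slot overwritten', None)
```

The “slot overwritten” branch is where the first statistic in the list above comes into play: when the slot has been reused, Oracle may have to apply undo records to the transaction table itself to estimate an upper bound for the old commit SCN.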