Oakies Blog Aggregator

Friday Philosophy – The Answer To Everything

For those of us acquainted with the philosophical works of Douglas Adams, we know that the answer to everything is 42.

mdw1123> select all knowledge from everything
  2  /

 KNOWLEDGE
----------
        42

The above is a real SQL statement (run on version 11.2.0.3, just in case you wanted to know :-) ).

This was prompted by a silly discussion at lunchtime about the answer to everything and databases, and I wondered aloud how you could get Oracle to respond with 42 when you “selected all from everything”. My colleagues looked at me like I was an idiot and said “create a table called everything with a column called all and select it”. Yeah, of course, and I laughed. So much for being an expert at Oracle, huh?

Well, I tried. It did not work:

mdw1123> create table EVERYTHING (ALL number not null)
  2  /
create table EVERYTHING (ALL number not null)
                         *
ERROR at line 1:
ORA-00904: : invalid identifier

Damn. It’s a reserved word. But for what? Off the top of my head I could not remember what ALL is used for in Oracle select syntax. Never mind, I could get around the issue to some degree by putting quotes around the column name (and just for fun, I made the column name lowercase too – this is how you create lowercase column names, but you have to be careful with this, as you will see below):

mdw1123> create table everything ("all" number not null)
  2  /
mdw1123> desc everything
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- --------

 all                                                               NOT NULL NUMBER

mdw1123> insert into everything values (42)
  2  /
mdw1123> select "all" from everything
  2  /

       all
----------
        42

-- but be careful of case
mdw1123> select "ALL" from everything
  2  /
select "ALL" from everything
       *
ERROR at line 1:
ORA-00904: "ALL": invalid identifier

I was not happy with this though; I was having to put the quotes in my statement and be careful about the syntax.

So, what is the word ALL used for? A quick check of the SQL reference manual gave the answer:

It is the opposite of DISTINCT and it is the default, so we never have to put it in the statement.
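
For instance (a throwaway illustration, not part of the original test – DUAL has a single column DUMMY containing the value X):

select all dummy from dual connect by level <= 3;
select dummy from dual connect by level <= 3;
select distinct dummy from dual connect by level <= 3;

The first two statements are identical as far as Oracle is concerned and return three identical rows; only DISTINCT changes the result, collapsing them down to one.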

With the above in mind I was able to quickly come up with something close to, but not quite, what I had originally asked for. See below for how.

mdw1123> create table everything (KNOWLEDGE NUMBER NOT NULL)
  2  /

mdw1123> insert into everything values (42)
  2  /

mdw1123> select all knowledge from everything
  2  /

 KNOWLEDGE
----------
        42
mdw1123>

Having said it was not quite what I had originally set out to do, I actually prefer this version.

Of course, I cleaned up after myself. It feels odd typing in commands whose English meaning is not what you actually want to do – I could not get over the nagging feeling that the statement below was going to make a lot of data disappear :-) :

mdw1123> drop table everything purge;

Oracle Documentation: The broken links fiasco continues…

So I was just patting myself on the back for finishing my website clean up, then I happened on a few pages with broken links to Oracle documentation. That annoyed me, but I figured I better do a quick scan to see how many broken external links I had. The first attempt was a complete fail because the tool I used clicked all my Google Adsense adverts, making me a DotCom millionaire in about 3 minutes. I wrote to Google and apologised profusely. In my defense, the tool I used was right at the top of the list in the Chrome Web App Store…

Once I got a link checker that didn’t put me at risk of a jail sentence, things got a little more depressing. A very large number of my articles contain broken links to Oracle documentation. As I started looking at links it became apparent that Oracle have used at least 3 main URLs for documentation over the years:

The versions listed are based on the links I’ve added in my articles. If you check today, all/most docs come from the “http://docs.oracle.com” address.

This in itself shouldn’t present a problem, because any company with an involvement in the web knows that URLs should never change. If by chance you do have to change something, you put a redirect in place. The problem is, Oracle don’t do this, or at least not consistently. Check out the following three URLs:

They are the same document, just using the three base URLs I mentioned previously. If you click them, you’ll notice the first one fails and the following two work. My guess is Oracle have created a 301 permanent redirect from http://download.oracle.com/docs to http://docs.oracle.com, but not bothered to maintain the http://download-west.oracle.com/docs URL, thereby breaking just about every link to its docs on the internet that references anything older than about 11gR1. That includes forums (including their own), blog posts, documents containing URLs etc. It’s just a nightmare.

So PLEASE Oracle:

  • Stop changing URLs.
  • When you do change them, PLEASE use rewrites/redirects properly.
  • Remember, your rewrites/redirects should be permanent, not just long enough for search engines to update their indexes.

This would solve the vast majority of my gripes about the links to the Oracle docs…

Notes:

  • For those not familiar with web servers, this kind of rewrite/redirect for a whole domain name is really simple. It’s one line in your “.htaccess” file (see the sketch after these notes), not a separate one for each page, so I’m not asking for the world here. :)
  • I am aware there are other issues with changing URLs at Oracle that a blanket redirect would not solve. I’m not even going to start on whitepapers and PDFs…
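
For what it’s worth, such a one-liner could look something like this under Apache (a hedged sketch only – the exact path mapping from the old hosts to docs.oracle.com is my guess, not something Oracle has published):

# In the .htaccess (or virtual host config) of the old documentation host:
# permanently redirect every old /docs/... URL to the same path on docs.oracle.com
RedirectMatch 301 ^/docs/(.*)$ http://docs.oracle.com/$1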

Cheers

Tim…




Autotrace Polluting The Shared Pool?

Introduction

Another random note that I made during the sessions attended at OOW was about the SQL*Plus AUTOTRACE feature. As you're hopefully already aware, this feature has some significant shortcomings, the most obvious being that it doesn't pull the actual execution plan from the Shared Pool after executing the statement but simply runs an EXPLAIN PLAN on the SQL text, which might produce an execution plan that differs from the actual one for various reasons.

Now the claim was made that, in addition to these shortcomings, the plan generated by the AUTOTRACE feature will stay in the Shared Pool and is eligible for sharing, which would mean that other statement executions could be affected by a potentially bad execution plan generated via AUTOTRACE rather than getting re-optimized on their own.

That claim initially struck me as odd, because so far I was under the impression that the shortcoming of AUTOTRACE was simply that it uses the EXPLAIN PLAN facility to get the execution plan details - and I don't think that any plan generated by EXPLAIN PLAN is eligible for sharing with actual statement executions. After thinking about it for a while I realized, however, that some interesting side effects are possible - but it depends on how you actually use AUTOTRACE.

Using Default AUTOTRACE

So, in order to see what AUTOTRACE does behind the scenes, I decided to trace AUTOTRACE. Here is what I tried:

set echo on timing on

alter session set tracefile_identifier = 'autotrace';

alter session set sql_trace = true;

set autotrace on

var n number

exec :n := 1

select * from dual where 1 = :n;

select * from dual where dummy = 'X';

And that's a snippet from the corresponding SQL trace file:

.
.
.
=====================
PARSING IN CURSOR #7 len=31 dep=0 uid=91 oct=3 lid=91 tim=651497870527 hv=868568466 ad='7ff0ce23638' sqlid='b9j0230twamck'
select * from dual where 1 = :n
END OF STMT
PARSE #7:c=0,e=460,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497870525
=====================
.
.
.
EXEC #7:c=0,e=1306,p=1,cr=3,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=651497871918
FETCH #7:c=0,e=654,p=2,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=3752461848,tim=651497872660
STAT #7 id=1 cnt=1 pid=0 pos=1 obj=0 op='FILTER (cr=3 pr=2 pw=0 time=0 us)'
STAT #7 id=2 cnt=1 pid=1 pos=1 obj=116 op='TABLE ACCESS FULL DUAL (cr=3 pr=2 pw=0 time=0 us cost=2 size=2 card=1)'
FETCH #7:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=3752461848,tim=651497873015
CLOSE #7:c=0,e=16,dep=0,type=0,tim=651497876511
.
.
.
=====================
PARSING IN CURSOR #9 len=79 dep=0 uid=91 oct=3 lid=91 tim=651497880846 hv=3377064296 ad='7ff0ce196a8' sqlid='1tfgxbv4nmub8'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6499083' FOR select * from dual where 1 = :n
END OF STMT
PARSE #9:c=0,e=583,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497880843
=====================
.
.
.
=====================
PARSING IN CURSOR #2 len=74 dep=0 uid=91 oct=3 lid=91 tim=651497888595 hv=920998108 ad='7ff0cdd8b00' sqlid='3s1hh8cvfan6w'
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', :1))
END OF STMT
PARSE #2:c=0,e=264,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497888593
=====================
.
.
.
PARSING IN CURSOR #7 len=36 dep=0 uid=91 oct=3 lid=91 tim=651498044006 hv=3267611628 ad='7ff0cdbd0f8' sqlid='4k6g7vr1c7kzc'
select * from dual where dummy = 'X'
END OF STMT
PARSE #7:c=0,e=1071,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=651498044003
EXEC #7:c=0,e=34,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=272002086,tim=651498044138
FETCH #7:c=0,e=60,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=272002086,tim=651498044289
STAT #7 id=1 cnt=1 pid=0 pos=1 obj=116 op='TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=0 us cost=2 size=2 card=1)'
FETCH #7:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=272002086,tim=651498044616
CLOSE #7:c=0,e=28,dep=0,type=0,tim=651498062083
.
.
.
=====================
PARSING IN CURSOR #2 len=84 dep=0 uid=91 oct=50 lid=91 tim=651498073656 hv=290419607 ad='7ff0cdb8a28' sqlid='5jx46tw8nywwr'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6499083' FOR select * from dual where dummy = 'X'
END OF STMT
PARSE #2:c=0,e=1295,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=651498073653
=====================
.
.
.
=====================
PARSING IN CURSOR #9 len=74 dep=0 uid=91 oct=3 lid=91 tim=651498076015 hv=920998108 ad='7ff0cdd8b00' sqlid='3s1hh8cvfan6w'
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', :1))
END OF STMT
PARSE #9:c=0,e=254,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651498076013
=====================
.
.
.

So that looks pretty much like the expected behaviour I've mentioned above - AUTOTRACE executes the statements and afterwards runs an EXPLAIN PLAN to show the execution plan.

As a side note, it's interesting that the SQL trace doesn't contain the queries used to gather the delta of the session statistics. The reason is simple: they are not issued by this session. SQL*Plus temporarily establishes a second session for that purpose, using one of the modes provided by the OCI that allows creating a second session on the same connection / process. You can tell this by looking at the corresponding V$SESSION.PADDR and the matching entry in V$PROCESS: both sessions use the same process entry (dedicated server model). By the way, I've adopted the same approach for SQLTools++, the GUI that I maintain, for all activities that could potentially interfere with the main session, like collecting the session statistics delta or calling DBMS_XPLAN.DISPLAY_CURSOR.
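
If you want to verify this yourself, something along these lines should do (just a quick sketch, not from the original post) - run it from the main session while AUTOTRACE with statistics is enabled and, under the dedicated server model, you should see two sessions sharing the same PADDR:

select sid, serial#, program, paddr
from   v$session
where  paddr = (select paddr
                from   v$session
                where  sid = sys_context('userenv', 'sid'));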

So when using AUTOTRACE in this way, the only potential threat comes from the actual execution of the statement - but this is no different from executing a statement in any other way. Of course you'll appreciate that using an odd bind value for the execution performed as part of the AUTOTRACE activity could theoretically lead to issues with the shared usage of that cursor afterwards - but again, this is nothing special to AUTOTRACE.

The potentially "wrong" execution plan reported via EXPLAIN PLAN cannot cause problems for other cursors, simply because it is generated via EXPLAIN PLAN. To make this point clear, here is another script that demonstrates two things:

- How AUTOTRACE can lie
- How EXPLAIN PLAN cursors are unshared by default

-- Demonstrate that AUTOTRACE can lie
set echo on linesize 200 pagesize 0 trimspool on tab off

drop table t;

purge table t;

create table t
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;

exec dbms_stats.gather_table_stats(null, 't')

create index t_idx on t (id);

-- Compare the execution plan
-- reported by AUTOTRACE
-- to the one reported by DBMS_XPLAN.DISPLAY_CURSOR
set autotrace on

var n number

exec :n := 500000

select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;

set autotrace off

select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;

select * from table(dbms_xplan.display_cursor(null, null));

-- Demonstrate that EXPLAIN PLAN cursors get special treatment
-- They are unshared by default
set echo off timing off feedback off long 1000000 longchunksize 1000000

spool %TEMP%\explain_plan_example.sql

select * from (
select
sql_fulltext
from
v$sqlstats
where
sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
and sql_text not like '%v$sql%'
order by
last_active_time desc
)
where
rownum <= 1
;

spool off

-- Each execution of the same parent EXPLAIN PLAN cursor
-- leads to a new child cursor
set echo on feedback on timing on pagesize 999

@%TEMP%\explain_plan_example
/

/

column sql_id new_value sql_id

select * from (
select
sql_id
from
v$sqlstats
where
sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
and sql_text not like '%v$sql%'
order by
last_active_time desc
)
where
rownum <= 1
;

select
sql_id
, child_number
, explain_plan_cursor
from
v$sql_shared_cursor
where
sql_id = '&sql_id';

set serveroutput on

@sql_shared_cursor &sql_id

So if you run this script you'll see an example where AUTOTRACE gets it wrong, because the plan generated via EXPLAIN PLAN is different from the actual plan used. Furthermore, the plan generated via EXPLAIN PLAN can only match other EXPLAIN PLAN cursors, and on top of that these are unshared by default - so no threat to any other SQL issued is possible.

Here's a sample output I got from 11.2.0.1:

SQL> -- Demonstrate that AUTOTRACE can lie
SQL> set echo on linesize 200 pagesize 0 trimspool on tab off
SQL>
SQL> drop table t;

Table dropped.

Elapsed: 00:00:00.03
SQL>
SQL> purge table t;

Table purged.

Elapsed: 00:00:00.04
SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as filler
6 from
7 dual
8 connect by
9 level <= 1000000
10 ;

Table created.

Elapsed: 00:00:02.38
SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.40
SQL>
SQL> create index t_idx on t (id);

Index created.

Elapsed: 00:00:01.63
SQL>
SQL> -- Compare the execution plan
SQL> -- reported by AUTOTRACE
SQL> -- to the one reported by DBMS_XPLAN.DISPLAY_CURSOR
SQL> set autotrace on
SQL>
SQL> var n number
SQL>
SQL> exec :n := 500000

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;

no rows selected

Elapsed: 00:00:01.51

Execution Plan
----------------------------------------------------------
Plan hash value: 2383791439

---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 5175K| 162 (0)| 00:00:02 |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
4 - access("ID">TO_NUMBER(:N))

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
15390 consistent gets
15385 physical reads
0 redo size
304 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

SQL>
SQL> set autotrace off
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;

no rows selected

Elapsed: 00:00:00.98
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 8q13ghbwgsmkv, child number 0
-------------------------------------
select /* FIND_ME */ * from ( select * from t where id > :n ) where
rownum > 1

Plan hash value: 4220795399

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4204 (100)| |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
|* 3 | TABLE ACCESS FULL| T | 500K| 50M| 4204 (1)| 00:00:51 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
3 - filter("ID">:N)

22 rows selected.

Elapsed: 00:00:00.12
SQL>
SQL> -- Demonstrate that EXPLAIN PLAN cursors get special treatment
SQL> -- They are unshared by default
SQL> set echo off timing off feedback off long 1000000 longchunksize 1000000
EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1

SQL>
SQL> @%TEMP%\explain_plan_example
SQL> EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1
5
SQL> /

Explained.

Elapsed: 00:00:00.00
SQL>
SQL> /

Explained.

Elapsed: 00:00:00.00
SQL>
SQL> column sql_id new_value sql_id
SQL>
SQL> select * from (
2 select
3 sql_id
4 from
5 v$sqlstats
6 where
7 sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
8 and sql_text not like '%v$sql%'
9 order by
10 last_active_time desc
11 )
12 where
13 rownum <= 1
14 ;

SQL_ID
-------------
ctms62wkwp7nz

1 row selected.

Elapsed: 00:00:00.03
SQL>
SQL> select
2 sql_id
3 , child_number
4 , explain_plan_cursor
5 from
6 v$sql_shared_cursor
7 where
8 sql_id = '&sql_id';

SQL_ID CHILD_NUMBER E
------------- ------------ -
ctms62wkwp7nz 0 N
ctms62wkwp7nz 1 Y
ctms62wkwp7nz 2 Y

3 rows selected.

Elapsed: 00:00:00.03
SQL>
SQL> set serveroutput on
SQL>
SQL> @sql_shared_cursor &sql_id
SQL> declare
2 c number;
3 col_cnt number;
4 col_rec dbms_sql.desc_tab;
5 col_value varchar2(4000);
6 ret_val number;
7 begin
8 c := dbms_sql.open_cursor;
9 dbms_sql.parse(c,
10 'select q.sql_text, s.*
11 from v$sql_shared_cursor s, v$sql q
12 where s.sql_id = q.sql_id
13 and s.child_number = q.child_number
14 and q.sql_id = ''&1''',
15 dbms_sql.native);
16 dbms_sql.describe_columns(c, col_cnt, col_rec);
17
18 for idx in 1 .. col_cnt loop
19 dbms_sql.define_column(c, idx, col_value, 4000);
20 end loop;
21
22 ret_val := dbms_sql.execute(c);
23
24 while(dbms_sql.fetch_rows(c) > 0) loop
25 for idx in 1 .. col_cnt loop
26 dbms_sql.column_value(c, idx, col_value);
27 if col_rec(idx).col_name in ('SQL_ID', 'ADDRESS', 'CHILD_ADDRESS',
28 'CHILD_NUMBER', 'SQL_TEXT') then
29 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
30 ' = ' || col_value);
31 elsif col_value = 'Y' then
32 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
33 ' = ' || col_value);
34 end if;
35 end loop;
36 dbms_output.put_line('--------------------------------------------------');
37 end loop;
38
39 dbms_sql.close_cursor(c);
40 end;
41 /
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DD87E70
CHILD_NUMBER = 0
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCD0D10
CHILD_NUMBER = 1
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCAAA20
CHILD_NUMBER = 2
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10
SQL>

Other Autotrace Options

The perhaps less expected aspect comes into the picture if you attempt to use AUTOTRACE differently. There are various options, and with a particular combination AUTOTRACE doesn't really execute the statement but only reports the execution plan. So if you change the first example above from:

SET AUTOTRACE ON

to

SET AUTOTRACE TRACEONLY EXPLAIN

then have a close look at the SQL trace generated:

.
.
.
=====================
PARSING IN CURSOR #2 len=45 dep=0 uid=91 oct=3 lid=91 tim=416642144779 hv=3626603586 ad='7ff13a1c8b0' sqlid='9pj321gc2m522'
select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #2:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=416642144777
CLOSE #2:c=0,e=14,dep=0,type=0,tim=416642145372
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=93 dep=0 uid=91 oct=3 lid=91 tim=416642148753 hv=2987003528 ad='7ff13cd8ea0' sqlid='fu0myxft0n3n8'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #3:c=0,e=689,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=416642148749
=====================
.
.
.
=====================
PARSING IN CURSOR #6 len=50 dep=0 uid=91 oct=3 lid=91 tim=416642233676 hv=37196885 ad='7ff138c8570' sqlid='f8cyn9w13g52p'
select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #6:c=0,e=116,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=272002086,tim=416642233673
CLOSE #6:c=0,e=32,dep=0,type=0,tim=416642237105
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=98 dep=0 uid=91 oct=50 lid=91 tim=416642243694 hv=390050481 ad='7ff1374bcf8' sqlid='8vvq0ncbmzcpj'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #3:c=0,e=1261,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=416642243691
=====================
.
.
.

Can you spot the difference? SQL*Plus now only parses the SQL statement - it doesn't execute it - before running the EXPLAIN PLAN command.

Let's see what happens if the second example from above gets executed with the AUTOTRACE TRACEONLY EXPLAIN option:

-- Demonstrate that AUTOTRACE TRACEONLY EXPLAIN
-- can cause problems for other SQL executions
set echo on linesize 200 pagesize 0 trimspool on tab off

drop table t;

purge table t;

create table t
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;

exec dbms_stats.gather_table_stats(null, 't')

create index t_idx on t (id);

set autotrace traceonly explain

var n number

exec :n := 500000

select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;

set autotrace off

select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;

-- Now the execution plan generated by the PARSE call issued by SQL*Plus
-- will be re-used by the subsequent executions
select * from table(dbms_xplan.display_cursor(null, null));

Here's again a sample output from 11.2.0.1:

SQL> drop table t;

Table dropped.

SQL>
SQL> purge table t;

Table purged.

SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as filler
6 from
7 dual
8 connect by
9 level <= 1000000
10 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')

PL/SQL procedure successfully completed.

SQL>
SQL> create index t_idx on t (id);

Index created.

SQL>
SQL> set autotrace traceonly explain
SQL>
SQL> var n number
SQL>
SQL> exec :n := 500000

PL/SQL procedure successfully completed.

SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2383791439

---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 5175K| 162 (0)| 00:00:02 |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
4 - access("ID">TO_NUMBER(:N))

SQL>
SQL> set autotrace off
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;

no rows selected

SQL>
SQL> -- Now the execution plan generated by the PARSE call issued by SQL*Plus
SQL> -- will be re-used by the subsequent executions
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 8q13ghbwgsmkv, child number 0
-------------------------------------
select /* FIND_ME */ * from ( select * from t where id > :n ) where
rownum > 1

Plan hash value: 2383791439

---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 162 (100)| |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
4 - access("ID">:N)

23 rows selected.

SQL>

So that's interesting: By using the TRACEONLY EXPLAIN option I now ended up with a potentially "wrong" execution plan that is actually eligible for sharing with other executions.

What surprised me most was that I had expected a bind variable type mismatch (CHAR vs. NUMBER - check the "Predicate Information" sections) between the parse and the execution, and therefore a re-optimization that actually peeked at the bind variables rather than re-using and sharing the existing cursor. But obviously the cursor was eligible for sharing. Very likely this is because the parse call didn't actually bind any variables, hence the mentioned mismatch wasn't possible.
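
If you want to double-check that the later execution really did pick up the cursor created by the parse-only call, a quick look at V$SQL should do (just a sketch - the sql_id is the one shown in the DBMS_XPLAN.DISPLAY_CURSOR output above):

select sql_id, child_number, plan_hash_value, parse_calls, executions
from   v$sql
where  sql_id = '8q13ghbwgsmkv';

A single child with plan hash value 2383791439 and an execution count covering the runs performed after SET AUTOTRACE OFF would confirm the sharing.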

Summary

So in summary I think what can be said is this:

- Don't use AUTOTRACE if you want to get the actual execution plan

- The potentially "wrong" execution plans reported by AUTOTRACE usually do not represent a threat, because these are EXPLAIN PLAN cursors

- The potential threat of the AUTOTRACE variants that actually execute the statement is that the plan used by this actual execution is definitely eligible for sharing with other executions - but this is no different from any other execution, so there is nothing special about AUTOTRACE here either

- There is a potential threat when using the AUTOTRACE TRACEONLY EXPLAIN option: the parse-only (but not execute) behaviour could leave undesirable cursors behind that are eligible for sharing. This applies in particular to SQL statements using bind variables

Ouch!

Here’s a set of Instance Activity stats I’ve never seen before, and I’d rather never see again. From an active standby running 11.1.0.7 on AIX:

select
        name, value
from    v$sysstat
where
        name in (
                'consistent gets - examination',
                'consistent gets',
                'session logical reads'
        )
or      name like 'transaction tables%'
;

NAME                                                                            VALUE
---------------------------------------------------------------- --------------------
session logical reads                                                 102,731,023,313
consistent gets                                                       102,716,499,376
consistent gets - examination                                          98,170,595,252
transaction tables consistent reads - undo records applied             96,590,314,116
transaction tables consistent read rollbacks                                2,621,019

5 rows selected.

The instance has been up for about 60 hours – and 95% of the work it has done has been trying to find the commit times for transactions affecting blocks that are in need of cleanout. If you look at the two statistics about the transaction tables (those are the things in the undo segment header blocks) you can see that the average work done to find a commit time was a massive 48,000 visits to undo blocks.
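
For what it’s worth, the ratio behind that “average work” figure can be pulled straight from the same two statistics (a quick sketch, not part of the original post):

select round(applied.value / nullif(rollbacks.value, 0)) as avg_undo_visits_per_rollback
from   v$sysstat applied,
       v$sysstat rollbacks
where  applied.name   = 'transaction tables consistent reads - undo records applied'
and    rollbacks.name = 'transaction tables consistent read rollbacks';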

The solution was fairly simple – kill all the reports which had been running for the last six hours, because they were the ones that were causing a problem, while simultaneously suffering most from the problem – at the time I killed the worst offender it was managing to read about 50 blocks per minute from the database, and doing about 100,000 buffer visits to undo blocks per second.

You probably won’t see this ever again, but if you do, a quick check is:

select * from v$sess_io order by consistent_changes; 

Repeat a couple of times and check if any sessions are doing a very large number (viz: tens of thousands) of consistent changes per second.

Too Many Flashbacks

With apologies to Dr. Seuss.

Did I ever tell you the makers of RAC
had seven features and named each flashback?

Well they did, and it wasn’t a smart thing to do.
You see, when the customers wanted a clue
as to how to keep data from getting deleted
the RAC folks said “flashback” and customers heeded.

They turned on all seven of those flashback features
Each one was a slightly dissimilar creature.
Some used the UNDO, some used flashback files
Some just renamed tables to bin$ styles.

One was a place you keep things for recovering
Another was just for forensic discovering
With so many features called by the same name
when things broke no one knew just which one they should blame.

On a Friday three minutes past seventeen hundred,
users ask, “What’s the deal? Our data’s been plundered.”
“It looks like all names in the customer table,
are now ‘John Q Public,’ a certain mislabel.”

It was Nimrod, an intern fresh from his instruction.
What he thought was just test was really production.
“No Problem,” says Morton, the wise DBA,
“Flashback is on. I’ll restore it today.”

First Morton asked Nimrod what time he committed
his update that had any where clause omitted.
“I ran it at just past noon yesterday lunch,
It ran for so long that I went for a munch.”

By now it had been almost thirty one hours
but Morton knew that he could call on his powers,
and on the mere fact that the undo_retention
was set to two days, a quite lengthy extension.

But woe! When he tried to engage Flashback Query,
he got “snapshot too old” and it ruined his theory
that undo_retention makes Oracle hold
all undo data no matter how old.

It turns out that there was no undo_guarantee
or autoextend which would also be key
to use all Flashback features reliant on Undo
A realization he would slowly come to.

Now Flashback Transaction and Versions Between
were out of the picture, although unforeseen.
But he still had four more kinds of Flashback to try
So he thought them each through, to see if they might fly.

Undropping the table would use flashback syntax,
But that wouldn’t help, since the table was intact.
With database flashback all could be reverted
But subsequent changes would then be subverted

Do you know who was governor of California
When Flashback Data Archive came out and I’ll warn ya,
it will not help poor Morton, though it’s called Total Recall.
He’d have had to enable it for tables to see all.

The last of the flashbacks was the old FRA,
the Flashback Recovery Area they say.
And it’s just a directory where things are kept
Like logs and old backups made while Morton slept.

So after all that there was nary a way
to use Flashback of any sort to save the day.
Old-fashioned LogMiner was what Morton used
To restore all the rows that poor Nimrod abused.

With so many flashbacks and so much confusion
I bet that Oracle regrets the profusion
of so many things that they call the same name.
But now it’s too late and there’s no one to blame.


Nologging redo size

It is fairly easy to calculate an hourly or daily redo rate using AWR data. For example, my script awr_redo_size.sql can be used to calculate the daily redo rate, and awr_redo_size_history.sql can be used to calculate the hourly redo rate. The hourly redo rate is especially useful, since you can export it to an Excel spreadsheet and graph it to see the redo rate trend.
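
The idea behind the AWR approach is simply to take the delta of the cumulative ‘redo size’ statistic between consecutive snapshots. A minimal sketch (not the author’s script; it requires the Diagnostics Pack licence, and deltas spanning an instance restart should be ignored):

select sn.end_interval_time,
       round((st.value - lag(st.value) over (partition by st.dbid, st.instance_number
                                             order by sn.snap_id)) / 1048576) as redo_mb
from   dba_hist_sysstat  st,
       dba_hist_snapshot sn
where  sn.snap_id         = st.snap_id
and    sn.dbid            = st.dbid
and    sn.instance_number = st.instance_number
and    st.stat_name       = 'redo size'
order by sn.snap_id;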

Update: I added another script, redo_size_archived_log.sql, to calculate the redo rate if you don’t have an AWR license.

Introduction to Direct Mode Writes

Direct mode operations write directly into the database files, skipping the buffer cache. Minimal redo (aka invalidation redo) is generated if the database is not in force logging mode. Keeping the database out of force logging mode is peachy as long as you don’t use Data Guard, Streams, or GoldenGate.

Then, suddenly, the business decides to use one of these log-mining-based replication products. This means that you must turn on force logging at the database level so that the replication tools can capture (or just replay, in the case of Data Guard) the redo information correctly and consistently.

But what if your application performs a high amount of direct mode operations, such as insert /*+ append */ operations? Now you need to estimate the redo size to identify the effect of FORCE LOGGING mode, and that estimation gets a little tricky.

Direct writes

During a direct mode operation, blocks are pre-formatted and written directly to disk, bypassing the buffer cache. If the database is altered to force logging mode, direct mode operations still write the blocks directly. But in addition to the direct writes, these operations then generate redo for the blocks written directly - almost the equivalent of writing the whole block into the redo log files. This increases the redo size.

There are a few statistics capturing direct mode writes. Using these statistics, we can estimate the redo size for direct mode operations.

Statistics

The statistic ‘physical writes direct’ mostly comprises three components, as given below.

physical writes direct = <writes to data files due to direct mode operations>
                       + physical writes direct to temporary tablespace
                       + physical writes direct (LOB)

To identify the size of direct writes to data files, excluding temp files, the formula becomes trivial:

Physical writes to datafile = block_size * ( physical writes direct -
                                                physical writes direct to temporary tablespace )
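
As a rough cross-check against the current instance counters, the same formula can be applied directly to V$SYSSTAT (a sketch only, not the author’s script; it assumes an 8K block size):

select round(8192 * (dpw.value - tmp.value) / 1048576) as direct_writes_to_datafiles_mb
from   v$sysstat dpw,
       v$sysstat tmp
where  dpw.name = 'physical writes direct'
and    tmp.name = 'physical writes direct to temporary tablespace';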

Script awr_redo_nologging_size.sql uses this formula to estimate the amount of redo that would be generated if the database is altered to FORCE LOGGING mode. One caution with this script: it assumes a uniform block size (whatever you specify; 8192 is the default). If you use multiple block sizes in your database, specify the biggest block size in use (or the average!). The script will then overestimate, but that is better than underestimating.

awr_redo_nologging_size.sql v1.00 by Riyaj Shamsudeen @orainternals.com

To generate Report about Redo rate from AWR tables

Enter the block size(Null=8192):
Enter past number of days to search for (Null=30):21

DB_NAME   REDO_DATE                redo_size (MB) phy_writes_dir (MB) phy_writes_dir_temp(MB)
--------- ------------------- ------------------- ------------------- -----------------------
...
TEST1      01/09/2012 00:00:00          554,967.92        4,337,470.54            4,048,463.09
TEST1      01/10/2012 00:00:00          725,161.69        7,631,308.52            7,311,254.35
TEST1      01/11/2012 00:00:00        1,417,910.43       11,022,558.04           10,424,339.66
TEST1      01/12/2012 00:00:00          162,109.27        2,756,108.79            2,658,140.35
TEST1      01/13/2012 00:00:00          736,137.74        5,449,356.39            5,107,896.82
TEST1      01/14/2012 00:00:00          880,102.10        3,494,355.88            3,119,470.18
...

In the output above, notice the line for 01/11/2012. The estimated total redo size is ~1,417 GB if we alter the database to FORCE LOGGING mode at the database level. Of that, roughly 600 GB of redo would be generated by direct mode operations, from the calculation: 11,022 GB written by direct mode operations minus the 10,424 GB adjustment for direct writes to the temporary tablespace (over 10 TB of writes to temp).

Example #2

In this example, notice 28-DEC-11. About 62 GB of redo is estimated if we alter the database to force logging mode; of that, just ~600 MB would be generated by direct mode operations.

DB_NAME   REDO_DATE            redo_size (MB) phy_writes_dir (MB) phy_writes_dir_temp(MB)
--------- --------------- ------------------- ------------------- -----------------------
...
TEST2     24-DEC-11                 19,149.11            2,796.57                2,361.68
TEST2     25-DEC-11                 18,362.74            1,630.83                1,379.95
TEST2     26-DEC-11                 60,097.50            3,867.92                3,303.37
TEST2     27-DEC-11                 55,696.98            3,266.89                2,756.84
TEST2     28-DEC-11                 62,971.37            4,650.37                4,096.75
TEST2     29-DEC-11                 62,167.32            3,839.07                3,255.70
TEST2     30-DEC-11                 64,072.57            4,462.38                3,788.39
...

Summary

In summary, we can estimate the amount of redo that would be generated if we alter the database to FORCE LOGGING mode. This is a very useful estimate when implementing these replication tools.

Thanks to Kirti Deshpande and Kalyan Maddali for testing out my script. Of course, any mistake in the script is mine, only mine.

Non-Specific Index Hints

January 24, 2012 (Modified January 25, 2012) As I read the “Troubleshooting Oracle Performance” book for the second time a couple of months ago, I made a note on page 385 that it is possible to specify table columns in an index hint, rather than specifying specific index names (or just specifying the table name).  This might be [...]
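
For illustration only (a hypothetical table and column, not taken from the book or the original article), the column-based form of the INDEX hint looks like this - the optimizer is then free to pick any suitable index on the listed column(s) rather than one named index:

select /*+ index(e (department_id)) */ *
from   employees e
where  department_id = 10;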

Dropped Partitions do not go in the Recycle Bin

If you alter table TEST drop partition Q1, does it go in the recycle bin?

That is the question I was asked today. “Of course it….” Pause. More pause.

No, I did not know. I knew I’d seen partitions in the recyclebin on Oracle but I thought there was no option to state PURGE when you dropped a partition.

So, a quick test was needed.

First I tried a drop of a partition I knew I could live without {NB this is on version 11.2.0.3, I am 99.8% sure this is the same on 10}

mdw1123> alter table person_call drop partition d_20111205 purge
  2  /
alter table person_call drop partition d_20111205 purge
                                                  *
ERROR at line 1:
ORA-14048: a partition maintenance operation may not be combined with other operations


mdw1123> alter table person_call drop partition d_20111205
  2  /

Table altered.

mdw1123> select count(*) from dba_recyclebin
  2  /
Any Key>

  COUNT(*)
----------
         0

1 row selected.

That’s pretty conclusive then: Oracle does not like the PURGE option when dropping a partition, which suggests partitions do not go into the recyclebin, and in fact my recyclebin remains empty when I correctly drop the partition. {NB I had purged the recyclebin before this test}.

However, I want to make sure that it is indeed just the partition operation that does not go to the recyclebin (you can of course turn off the recyclebin and I have not proven that I have not done this). So:


mdw1123> create table mdw (id number,vc1 varchar2(10))
  2  partition by range (id)
  3  (partition p1 values less than (10)
  4  ,partition p2 values less than (20)
  5  ,partition p3 values less than (30)
  6  ,partition pm values less than (maxvalue)
  7  )
  8
mdw1123> /
Table created.

mdw1123> insert into mdw
  2  select rownum,'AAAAAAAA'
  3  from dual
  4  connect by level <40
  5  /
39 rows created.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> alter table mdw drop partition p3
  2  /
Table altered.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> drop table mdw
  2  /
Table dropped.

mdw1123> select * from dba_recyclebin;
Any Key>
OWNER                          OBJECT_NAME                    ORIGINAL_NAME
------------------------------ ------------------------------ --------------------------------
OPERATION TYPE                      TS_NAME                        CREATETIME
--------- ------------------------- ------------------------------ -------------------
DROPTIME               DROPSCN PARTITION_NAME                   CAN CAN    RELATED BASE_OBJECT
------------------- ---------- -------------------------------- --- --- ---------- -----------
PURGE_OBJECT      SPACE
------------ ----------
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      TABLE                                                    2012-01-24:16:13:55
2012-01-24:16:15:33    2787393                                  YES YES      77672       77672
       77672

4 rows selected.

So, if you go and drop the wrong partition, you can’t get it back without some sort of restore or recovery (as there is nothing to get it back from), but you can if you drop the whole table. And I was not going mad: I had seen partitions in the recyclebin (there is, after all, a PARTITION_NAME column in the view).

This seems a little odd to me; I would imagine that dropping the wrong partition is exactly the sort of thing you would want to fix with flashback (as you can with an accidentally dropped table), but there you go. You can’t.

Back to the day job….

Website Updates: Cleanup Complete…

I wrote last month about some problems with my website caused by an issue with my hosting provider. It’s been a very long and arduous process, but I think I’ve finally finished the last of the cleanup. It’s certainly not something I would like to repeat. Having said that, there are lots of positives about the process.

  • When I started the website, over 11 years ago, my knowledge of HTML was very limited. Going back through all the pages to clean up the logical corruption gave me an opportunity to clean up the HTML as well. I don’t think anyone on the outside looking in would notice the difference, but it makes me happier to know things are a little more in order.
  • I used my web stats to decide which pages to sort out first, figuring it would make sense to focus on the stuff that people are actually reading first. That in itself was a bit of a revelation. There are a lot of very old articles on the site and I kinda figured people don’t look at them much these days. It appears that’s not exactly true. This presented a couple of problems. First, some articles were really bad! I managed to turn a blind eye to this when I thought people weren’t reading them, but once I saw people were still hitting them I decided to rewrite them. That took a lot more time than I expected. In other cases, all that was required were more obvious links to the updated content. Many articles now have “Related articles” links at the top.
  • I use a home-grown CMS to manage the site. This process has forced me to improve it and fix some of the things that were bugging me.

Even with all the positives, I still feel a little frustrated by all this. The time this has eaten up has impacted badly on my to-do list. I started this process the evening after I attended Cary Millsap‘s Mastering Oracle Trace Data course. As a result, I’ve not reviewed the material from the course, or even downloaded the free tools yet. That’s pretty high up my list.

Cheers

Tim…

 




Kscope 12 – Here we Come!

San Antonio is the site for Kscope12. I recently had a chance to visit the venue, the JW Marriott Hill Country in San Antonio. I thought the conference committee must have been exaggerating, but this is truly a huge, beautiful place. We’re reserving an extra room the weekend before so that some of my family can join us in reveling in the water park, golf course, and nature preserve. Of course, the conference will be outstanding, but this venue is so cool you might find it tempting to “play hooky” and miss a few sessions!