Here’s a little detail that appeared in 11gR2 that may help you answer questions about open cursors. Oracle has added a “cursor type” column to the view v$open_cursor, so you can now see which cursors are being held open by the pl/sql cursor cache, which are held open by the session cursor cache, and so on through the various other reasons why Oracle may take a short-cut when you fire a piece of SQL at it.
The following is the output showing the state of a particular session just after it has started up in SQL*Plus and called a PL/SQL procedure to run a simple count:
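If you want to run a similar test yourself, a query along the following lines gives the breakdown by cursor type for a single session; the sid value here is just a placeholder for whatever your test session reports:

```sql
select
        cursor_type, count(*)
from
        v$open_cursor
where
        sid = 133               -- placeholder: use your session's sid
group by
        cursor_type
order by
        cursor_type
;
```

You should see values such as 'OPEN', 'SESSION CURSOR CACHED' and 'PL/SQL CURSOR CACHED' appearing in the cursor_type column.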
I don’t think this is likely to happen on a production system (until 12c) – but look what you can do if you try hard enough:
  1  select
  2  index_name, column_name from user_ind_columns
  3  where
  4  table_name = 'T1'
  5  order by
  6* index_name , column_position
SQL> /

INDEX_NAME           COLUMN_NAME
-------------------- --------------------
T1_I1                N1
                     V1
T1_I2                N1
                     V1

4 rows selected.
That’s a straight cut-n-paste from an Oracle 18.104.22.168 SQL*Plus session. (You can tell I typed it in real time because I missed the return before the FROM, and couldn’t be bothered to go back and do it again ;) )
By some strange coincidence, the “London Bus” effect perhaps, there have been three posts on the OTN database forum in the last couple of days relating to deadlocks; and they have prompted me to indulge in a little rant about the myth of Oracle and deadlock detection; it’s the one that goes:
“Oracle detects and resolves deadlocks automatically.”
Oracle may detect deadlocks automatically, but it doesn’t resolve them; it simply reports the deadlock (by raising error ORA-00060 in one session and rolling back that session’s current statement), then leaves the deadlocked sessions stuck until the session that received the error, or some external agent, resolves the problem.
Consider the following example (which, I have to admit, I wrote without access to a live instance):
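A minimal sketch of the classic two-session pattern that produces ORA-00060 runs like this (the table, column names and values are purely illustrative):

```sql
-- Session 1                                    -- Session 2
update t1 set n1 = 101 where id = 1;
                                                update t1 set n1 = 202 where id = 2;
update t1 set n1 = 101 where id = 2;
-- session 1 now waits on session 2's lock
                                                update t1 set n1 = 202 where id = 1;
-- session 2 now waits on session 1's lock: deadlock.
-- After a few seconds one session (say session 1) receives:
--    ORA-00060: deadlock detected while waiting for resource
-- Its CURRENT statement is rolled back, but its first update (and its
-- lock on id = 1) is still in place, and session 2 is still waiting.
```

The point to notice is that after the ORA-00060 both sessions still hold their first-round locks; nothing moves until the session that got the error commits or rolls back, or something external kills one of the sessions.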
Here’s a little follow-on from Friday’s posting. I’ll start it off as a quiz, and follow up tomorrow with an explanation of the results (though someone will probably have given the correct solution by then anyway).
I have a simple heap table t1(id number(6,0), n1 number, v1 varchar2(10), padding varchar2(100)). The primary key is the id column, and the table holds 3,000 rows where id takes the values from 1 to 3,000. There are no other indexes. (I’d show you the code, but I don’t want to make it too easy to run the code, I want you to try to work it out in your heads).
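For reference, the description above translates into DDL along these lines; this is a sketch reconstructed from the prose, not the actual test script, and the values used to populate n1, v1 and padding are my assumption:

```sql
create table t1 (
        id      number(6,0),
        n1      number,
        v1      varchar2(10),
        padding varchar2(100),
        constraint t1_pk primary key (id)
);

insert into t1
select
        rownum, rownum, lpad(rownum,10), rpad('x',100)
from
        all_objects
where
        rownum <= 3000
;
commit;
```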
I run the following pl/sql block.
It’s very easy to get a lot of information from an AWR (or Statspack) report – provided you remember what all the numbers represent. From time to time I find that someone asks me a question about some statistic and my mind goes completely blank about the exact interpretation; but fortunately it’s always possible to cross check because so many of the statistics are cross-linked. Here’s an example of a brief mental block I ran into a few days ago – I thought I knew the answer, but realised that I wasn’t 100% sure that my memory was correct:
In this Load Profile (for an AWR report of 60.25 minutes), what does that Transactions figure actually represent ?
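The figure is derived from a couple of Instance Activity statistics, so it can always be cross-checked. In a live session the current values come from v$sysstat; the report works with the deltas between two snapshots:

```sql
select
        name, value
from
        v$sysstat
where
        name in ('user commits', 'user rollbacks')
;

-- The "Transactions" (per second) figure in the Load Profile is:
--      (delta of 'user commits' + delta of 'user rollbacks') / elapsed seconds
```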
The example I gave last week, showing how a SORT operation in an execution plan might include the work of resolving function calls in your SQL and might therefore be reporting much higher resource utilisation than expected, reminded me of some problems I’ve had with gaps in execution plans in the past. So I thought I’d give a little demonstration of the way in which the completeness of execution plans has developed over time.
We’ll start with the same two tables I had in last week’s demo.
Here’s a little quirk of execution plans that came up recently on the Oak Table network. If you call a function in a query and do some sorting with the results, where does the work of calling the function get reported in the execution plan if you trace the query or look at the in-memory rowsource execution stats? Let’s take a look at a simple example:
create table t1
as
select
        rownum                  id,
        lpad(rownum,200)        padding
from
        all_objects
where
        rownum <= 2500
;

create table t2
as
select  *
from    t1
;

-- collect stats

create or replace function f (i_target in number)
return number
as
        m_target        number;
begin
        select  max(id)
        into    m_target
        from    t1
        where   id <= i_target;

        return m_target;
end;
/
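A query of the kind we can then examine might look like the following; this is my sketch of the sort of test involved, not necessarily the exact statement:

```sql
set serveroutput off
alter session set statistics_level = all;

select
        f(id)
from
        t2
order by
        f(id)
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
```

With statistics_level set to all, dbms_xplan.display_cursor() reports the rowsource execution stats, so you can see whether the time spent in the 2,500 function calls is attributed to the SORT ORDER BY operation or to the TABLE ACCESS FULL beneath it.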
Warning – this is a catch question, and I haven’t given you enough information to have any idea of the right answer; though, by telling you that I haven’t given you enough information to have any idea of the right answer, you now have some information that might help you to get closer to the right answer.
I have a simple heap table with no indexes. Immediately after flushing the buffer_cache I’ve run a query that looks like this:
select max(column_ZZZ) from table_X;
The most significant session stats for this operation are as follows:
A recent (Jan 2013) post on the OTN database forum reported a performance problem on Oracle 22.214.171.124 (so no AWR), and posted a complete statspack report to one of the public file-sharing sites. It’s been some time since I did a quick run through the highlights of trouble-shooting with statspack, so I’ve picked out a few points from this one to comment on.
As usual, although this specific report comes from Statspack, the same analysis would apply to a more modern AWR report; I will, though, make a couple of comments at the end about the extra material that an AWR report would have supplied by default, material that would have made it easier for us to help the OP.
When doing updates with statements that use correlated subqueries, it’s important to make sure that your brackets (parentheses) are in the right place. Here are two statements that look fairly similar but have remarkably different results – and sometimes people don’t realise how different the statements are:
update t1 set
        padding = (
                select  t2.padding
                from    t2
                where   t2.id = t1.id
        )
where
        t1.small_vc <= lpad(20,10,'0')
;

update t1 set
        padding = (
                select  t2.padding
                from    t2
                where   t2.id = t1.id
                and     t1.small_vc <= lpad(20,10,'0')
        )
;
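The critical difference is easy to demonstrate. The first statement updates only the rows of t1 that satisfy the small_vc predicate; the second statement updates every row in t1, and for any row where the (now correlated) predicate inside the subquery is not satisfied the subquery returns no row, so padding is set to NULL. A quick check along these lines makes the point:

```sql
-- Run each version in turn (with a rollback in between),
-- then count the rows that have lost their padding:

select
        count(*)
from
        t1
where
        padding is null
;

-- After version 1 the count reflects only rows within the small_vc
-- limit that had no match in t2; after version 2 it also includes
-- every row that failed the small_vc test.
```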