The new release of DB Optimizer, version 3.0, from Embarcadero has an awesome new feature: explain plan overlay onto VST diagrams!
Let’s take an example query:
SELECT COUNT (*) FROM a, b, c WHERE b.val2 = 100 AND a.val1 = b.id AND b.val1 = c.id;
There are indexes on b.id and c.id. Diagramming the query in DB Optimizer gives
The red lines with crow's feet mean that, as far as the definitions go, the relations could be many-to-many.
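For reference, a minimal sketch of the schema behind this example (the column types and index names are my assumptions; only the indexed columns b.id and c.id and the filter column b.val2 come from the text):

```sql
-- Hypothetical DDL for the example tables; types and names are assumptions.
CREATE TABLE a (val1 NUMBER);
CREATE TABLE b (id NUMBER, val1 NUMBER, val2 NUMBER);
CREATE TABLE c (id NUMBER);

-- The two indexes mentioned in the text
CREATE INDEX b_id_idx ON b (id);
CREATE INDEX c_id_idx ON c (id);
```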
The question is: what is the optimal execution path for this query?
One of the best execution plans is to start at the table with the most selective filter.
There is one filter in the diagram, represented by the green F on table B. Table B has a filter criterion in the query: b.val2 = 100.
OK, table B is where we start the query. Now where do we go from B? Who is the parent and who is the child? It's not defined in the constraints or indexes on these tables, so it's hard for us to know. Guess what? It's also hard for Oracle to figure out. Well, what does Oracle decide to do? This is where the cool part of DB Optimizer 3.0 comes in.
The super cool thing with DB Optimizer 3.0 is that we can overlay the diagram with the actual execution path (I think this is so awesome).
From the diagram we can see Oracle starts with B and joins to A. The result of this is joined to C. Is this the optimal path?
Well, let’s keep the same indexes and just add some constraints:
alter table c add constraint c_pk_con unique (id);
alter table b add constraint b_pk_con unique (id);
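As a quick sanity check, the new constraints can be confirmed in the data dictionary (a sketch; the constraint names come from the ALTER statements above):

```sql
-- Verify the unique constraints are in place (constraint_type 'U' = unique)
SELECT table_name, constraint_name, constraint_type
FROM   user_constraints
WHERE  constraint_name IN ('C_PK_CON', 'B_PK_CON');
```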
Now let’s diagram the query with DB Optimizer:
We can now see which table is the parent and which is the child, so we can determine the optimal query path: start at B, the only filter, join to the child C, then to the parent A. Now what does Oracle do with the added constraint info:
Guess what? The execution plan has changed with the addition of the constraints, and Oracle's execution path goes from a suboptimal plan to the optimal one. The moral of the story is to make sure to define constraint information, because it helps the optimizer. But what I wanted to show here is the explain plan overlay on the diagram, which makes comparing execution plans much easier. Putting the queries' VST diagrams side by side, along with the overlay of the execution path, we can clearly and quickly see the differences:
I plan to blog more about this awesome feature. It’s really cool.
Here is an example from an article by Jonathan Lewis. The query Jonathan discusses is:
SELECT order_line_data
FROM   customers cus
       INNER JOIN orders ord      ON ord.id_customer = cus.id
       INNER JOIN order_lines orl ON orl.id_order    = ord.id
       INNER JOIN products prd1   ON prd1.id         = orl.id_product
       INNER JOIN suppliers sup1  ON sup1.id         = prd1.id_supplier
WHERE  cus.location = 'LONDON'
AND    ord.date_placed BETWEEN '04-JUN-10' AND '11-JUN-10'
AND    sup1.location = 'LEEDS'
AND    EXISTS (SELECT NULL
               FROM   alternatives alt
                      INNER JOIN products prd2  ON prd2.id = alt.id_product_sub
                      INNER JOIN suppliers sup2 ON sup2.id = prd2.id_supplier
               WHERE  alt.id_product = prd1.id
               AND    sup2.location != 'LEEDS')
which, diagrammed, looks like:
There are multiple filters, so we need to know which one is the most selective to know where to start. We ask DB Optimizer to display the statistics as well (blue below a table is the filter percentage, green above is the number of rows in the table, and the numbers on the join lines are the rows returned by a join of just those two tables).
Now that we can determine a candidate for best optimization path, does Oracle take it?
Can you find the optimization error?
Dark green is where execution starts. There are two starts: one for the main query body and one for the subquery.
The red is where query execution ends.
(also see this older blog on designing the VST: http://dboptimizer.com/2010/08/03/product-design-vst/)
PS: a big part of this work is by the lead developer Matt Vegh. Many thanks to Matt for this awesome work.
PPS: another example from Karl Arao
The dark green nodes are starts, so there are 4 separate starts. We can see how the result sets from each start are joined with each successive table join set. The red is the final step.
This is the first part in a series of blog posts that shed some light on the enhancements Oracle has introduced in recent releases regarding the optimization of logical I/O.
Before we can appreciate the enhancements, though, we need to understand the baseline. This is what this blog post is about.
The example used throughout this post is based on a simple Nested Loop Join which is one area where Oracle has introduced significant enhancements.
It started its life as a comparison of using unique vs. non-unique indexes as part of a Nested Loop Join and their influence on performance and scalability.
This comparison on its own is very educational and also allows one to demonstrate and explain some of the little details regarding logical I/O.
The excellent Madrid from the OTN forums has a nice post here on a particular listener error caused by not following the install instructions precisely. I thought that I'd dig in a little further to illustrate the interplay between the O/S and our wonderful C program that is oracle.exe. The message Hector got was [...]
In this first installment on metric tables we'll look at system statistics. In future posts we'll look at waits (events), file metrics, and possibly some other metrics after that. Here are the tuning metrics tables (SQL stats are not in "metric" tables per se):
(*DBA_HIST_…_HISTORY views are sort of confusing. As far as I remember, they were storing alert history, but apparently they are used for adaptive thresholds – an area for future investigation)
I’ve noticed a number of people posting queries using DBA_HIST_SYSSTAT instead of DBA_HIST_SYSMETRIC_SUMMARY which leads me to believe that there is some confusion or lack of information on the metric tables.
Oracle 10g introduced metric tables, which compute deltas and rates of statistics, hugely simplifying the ability to answer simple questions like "what is the I/O rate on my database right now?" Before 10g, this question was surprisingly tedious to answer. One would have to query v$sysstat, for example:
select value from v$sysstat where name='physical reads';
but querying v$sysstat just once fails to answer that question; instead it answers "How much I/O has been done since the database was started?" To answer the original question, one would have to query v$sysstat twice and take the delta between the two values:
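A sketch of that pre-10g approach, sampling twice and computing the rate by hand (the one-minute sleep via dbms_lock.sleep is my choice of mechanism and requires execute privilege on dbms_lock):

```sql
-- Pre-10g style: sample v$sysstat twice and compute the delta manually.
variable v1 number
variable v2 number
begin
  select value into :v1 from v$sysstat where name = 'physical reads';
  dbms_lock.sleep(60);   -- wait one minute between samples
  select value into :v2 from v$sysstat where name = 'physical reads';
end;
/
-- rate = delta / elapsed seconds
select (:v2 - :v1) / 60 as phys_reads_per_sec from dual;
```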
Getting these deltas and rates could be a pesky task, especially when working with a customer over the phone. Then in 10g Oracle introduced metric tables, which answer the question in one single query:
select VALUE, METRIC_UNIT, INTSIZE_CSEC from v$sysmetric where metric_name='Physical Reads Per Sec';
     VALUE METRIC_UNIT       INTSIZE_CSEC
---------- ----------------- ------------
  654.6736 Reads Per Second          5959
  134.9835 Reads Per Second          1515
Notice that the query returns two rows. The first row is for the last minute (i.e. 59.59 seconds) and the second row is for the last 15 seconds (i.e. 15.15 seconds). Oracle collects both the deltas and the rates for 60-second and 15-second intervals.
Oracle keeps the average, maximum and minimum of the values for the last hour in V$SYSMETRIC_SUMMARY, which one can query like:
select MAXVAL, MINVAL, AVERAGE, STANDARD_DEVIATION
from   V$SYSMETRIC_SUMMARY
where  metric_name = 'Physical Reads Per Sec';

    MAXVAL     MINVAL    AVERAGE STANDARD_DEVIATION
---------- ---------- ---------- ------------------
3.71784232          0 .076930034         .478529283
Also, for the last hour Oracle stores the 60-second intervals (and, for the last 3 minutes, the 15-second intervals) in V$SYSMETRIC_HISTORY.
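As a sketch, that rolling history can be listed from V$SYSMETRIC_HISTORY (the group_id values are my assumption about which group holds the 60-second samples):

```sql
-- Recent samples of one metric; group_id 2 is assumed to be the
-- 60-second "long duration" group (group_id 3 the 15-second group).
select begin_time, value
from   v$sysmetric_history
where  metric_name = 'Physical Reads Per Sec'
and    group_id = 2
order  by begin_time;
```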
One issue with using these v$sysmetric views is becoming familiar with the statistic names, which are different from those in v$sysstat. We can look at V$METRICNAME (and V$METRICGROUP for the group names) for the statistic definitions.
For trending data over multiple days, the view DBA_HIST_SYSMETRIC_SUMMARY can be used. The view has a history of all the System Metrics Long Duration statistics. If you store multiple databases in the same AWR repository you can check the statistics available to a particular DBID with the view DBA_HIST_METRIC_NAME. The view DBA_HIST_SYSMETRIC_SUMMARY can be queried easily for trending metrics, such as the simple query for bytes read by the database per second:
select dbid,
       to_char(begin_time, 'YYYY/MM/DD HH24:MI'),
       round(average)
from   dba_hist_sysmetric_summary
where  metric_name = 'Physical Read Total Bytes Per Sec'
/* and DBID=[dbid if share repository] */
order  by begin_time;
Compare this to the same query on DBA_HIST_SYSSTAT (note there are a lot of stats in v$sysstat)
with stats as (
   select sn.dbid,
          st.stat_name,
          to_char(cast(begin_interval_time as date), 'YYYY/MM/DD HH24:MI') btime,
          Lag(st.value) OVER (PARTITION BY st.stat_name, st.dbid
                              ORDER BY sn.snap_id) value_beg,
          st.value value_end,
          (cast(end_interval_time as date) -
           cast(begin_interval_time as date)) * (24*60*60) delta
   from   DBA_HIST_SYSSTAT  st,
          DBA_HIST_SNAPSHOT sn
   where  sn.snap_id = st.snap_id
   and    sn.dbid    = st.dbid
   and    st.stat_name = 'physical read total bytes'
   order  by begin_interval_time
)
select dbid,
       btime,
       round((value_end - value_beg) / delta) rate_per_sec
from   stats
where  (value_end - value_beg) > 0;
It's a bit disconcerting to note that the above two queries don't return exactly the same data on my laptop. If they were roughly the same that would be fine, and in general the stats are similar, but there are cases where they differ dramatically. I don't see anything obvious in the way the queries are written. It possibly has to do with database bounces or the way the database is affected by the laptop's sleep and hibernate modes. I will have to look into this further.
One trick to make the data easy to load into Excel is to use the HTML output format and spool to a file with an ".html" extension:
set markup html on
spool output.html
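A fuller sketch of the SQL*Plus session (the query and file name are just placeholders), remembering to turn spooling and markup back off:

```sql
-- SQL*Plus: write query results as an HTML table that Excel can open directly.
set markup html on
spool output.html

select begin_time, metric_name, round(value) value
from   v$sysmetric
where  metric_name = 'Physical Reads Per Sec';

spool off
set markup html off
```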
List of DBA_HIST views
An excellent comment/question by mdinh made me realise my demos in Part I might be a little extreme in returning 0 rows and perhaps give the false impression that Not Equal conditions are only considered or applicable if no rows are returned. This is not the case and with the bitmap index now considered with Not [...]
July 6, 2011 Since there were so many unique solutions to the last blog article that posed a SQL challenge, I thought that I would try another blog article that asks a similar type of question. Assume that someone showed you the following output:

 C2   D
--- ---
100   0
150  50
200  50
201   1
[...]
A question about partitioning came up on OTN a few days ago – but the really interesting part of the thread was about the choice of indexes, and how the choice of partitioning, combined with the data patterns, could make a big difference to the choice of indexes. I think the comments I made about indexes are worth seeing, so I’ve linked to the thread.
In other words – FREE STUFF!!!
Riyaj Shamsudeen does a free RAC hacking session on 12 July!
He will demonstrate how the LMS background process works, with the help of OS tracing tools like truss and DTrace.
Sign up here!
A video recording of the session can be found here: http://www.vimeo.com/26798681
I will be participating in an Exadata Virtual Conference which has been organized by Tanel Poder on August 3rd and 4th. This conference will follow the same format as the one Tanel and I did last year with Jonathan Lewis and Cary Millsap. It will be two half days which should provide some flexibility for people with busy schedules. The online format allows all participants to interact directly via chat while the speakers are presenting and then via voice during question and answer sessions. This is a great opportunity to talk to some guys that have done a bunch of work with Exadata. Andy Colvin will be discussing patching which has been problematic for some shops. Andy has done more patching than anyone I know.
You are probably aware of the 1-day event happening on July 7th, Thursday, 9AM-5PM EDT (both virtually and physically). I will be talking about advanced UNIX tools to debug issues. You can find details of the 1-day event here.
Here is the outline of my presentation:
Advanced tools and techniques in Unix environment – Riyaj Shamsudeen – 07/07/2011, 3:15pm – 4:00pm EDT
Unix environments provide a rich set of tools to debug performance issues. Even if an issue is complex to understand, if we use the right tool for the job we can identify its root cause quickly. In this presentation, the speaker will demo tools such as truss/strace, pfiles, pmap, prstat, vmstat, mpstat, etc. to show how you can debug complex issues. The speaker will also introduce Solaris DTrace for understanding performance issues.
Why not join us?