performance

Performance Efficiency – Comparing Performance Across the Stack

I started writing this post in early January this year, and before it gets completely old and (maybe) uninteresting I’m simply posting it now, wondering what you might think. Of course I wanted it to be the ultimate, perfect braindump, but I realize I can’t get my head around all the issues anyway and [...]

Why is a Full Table Scan Selected – Will the Execution Plan Provide a Clue?

June 20, 2012 I have not had much time to respond in OTN threads recently, although I do still occasionally read threads on the forum.  I was a little surprised by one of the late responses in one of the recent threads, where one of the responders suggested actually testing the problem with the assistance of [...]
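
For reference (this is not from the post itself), the usual way to pull the actual execution plan with its predicate section, which is often where the clue to a full table scan shows up, is:

-- run the statement of interest in the session first, then:
select	*
from	table(dbms_xplan.display_cursor(format => 'TYPICAL'));	-- 'TYPICAL' includes the Predicate Information section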

Consultancy

I have a permanent job in NetCracker’s System Performance group. Recently I was offered a one-day job outside, on-site at another company, which coincidentally has an office close to NetCracker’s Moscow office. It was an opportunity to apply my skills in a completely different situation, which I couldn’t miss; plus, I’d never done public presentations before, and this was a good occasion to practice. Here I’d like to write down some notes on how the event went.

An Organizational Constraint that Diminishes Software Quality

One of the biggest problems in software performance today occurs when the people who write software are different from the people who are required to solve the performance problems that their software causes. It works like this:

  1. Architects design a system and pass the specification off to the developers.
  2. The developers implement the specs the architects gave them, while the architects move on to design another system.
  3. When the developers are “done” with their phase, they pass the code off to the production operations team. The operators run the system the developers gave them, while the developers move on to write another system.

The process is an assembly line for software: architects specialize in architecture, developers specialize in development, and operators specialize in operating. It sounds like the principle of industrial efficiency taken to its logical conclusion in the software world.

Subpartition stats

You might have expected the following query ought to run reasonably efficiently, after all it seems to be targeted very accurately at precisely the few rows of information I’m interested in:

select
	column_name,
	avg_col_len
from
	dba_subpart_col_statistics
where
	owner             = 'TEST_USER'
and	table_name        = 'TEST_COMP'
and	subpartition_name = 'P_MAX_D'
;

I’m after some subpartition column stats (so that I can work out whether a subpartition of a local index on a composite-partitioned table is roughly the right size), and I’m querying the view by the only columns that seem to be there to let me access the data efficiently. Unfortunately the execution plan isn’t doing what I need it to do. The following plan comes from a small 11.2.0.3 database with up-to-date statistics:
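
The plan itself is in the full post. For context, here is a minimal sketch of the kind of sizing arithmetic these column stats feed into; the index columns, row count, and packing factor below are illustrative assumptions, not from the original post:

select
	round(
		(sum(avg_col_len) + 11)		-- rough bytes per index entry: key columns plus rowid and row overhead
		* 1000000			-- assumed subpartition row count
		/ (8192 * 0.7)			-- 8KB blocks at roughly 70% packing
	)	estimated_leaf_blocks
from
	dba_subpart_col_statistics
where
	owner             = 'TEST_USER'
and	table_name        = 'TEST_COMP'
and	subpartition_name = 'P_MAX_D'
and	column_name in ('DT', 'GRP')		-- hypothetical index columns
;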

Execution Plan Changes when the OPTIMIZER_FEATURES_ENABLED Parameter is Changed, But Why?

May 30, 2012 A question appeared on the OTN Database forums yesterday that has yet to receive an answer.  The question was essentially, why did the execution plan change when the OPTIMIZER_FEATURES_ENABLED parameter was adjusted from 11.2.0.2 to 11.1.0.7, and why did the execution performance improve as a result of making the change?  A DBMS_XPLAN [...]
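
For reference, the comparison described in the thread can be reproduced along these lines; note that the parameter’s actual name is OPTIMIZER_FEATURES_ENABLE, and the statement below is a generic placeholder rather than the SQL from the thread:

alter session set optimizer_features_enable = '11.1.0.7';

explain plan for
select * from t1 where id = :b1;	-- placeholder statement

select * from table(dbms_xplan.display);

alter session set optimizer_features_enable = '11.2.0.2';
-- repeat the explain plan / display steps and compare the two plans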

Logical tuning

Here’s a model of a problem I solved quite recently at a client site. The client’s query was much more complex and the volume of data much larger, but this tiny, two table, example is sufficient to demonstrate the key principle. (Originally I thought I’d have to use three tables to model the problem, which is why you may find my choice of table names a little odd). I ran this example on 11.2.0.2 – which was the client version:

TKPROF Elapsed Time Challenge – the Elapsed Time is Half of the Wait Event Time

May 20, 2012 An interesting quirk was recently brought to my attention by Mich Talebzadeh.  He generated a 10046 trace at level 8 for a session, executed some SQL statements, disabled the trace, and then processed the resulting trace file with TKPROF.  His TKPROF output included the following: UPDATE TESTWRITES SET PADDING1 = RPAD('y',4000,'y') WHERE [...]
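
For reference, the tracing workflow described looks roughly like this (the identifier and file names are illustrative, not from the post):

alter session set tracefile_identifier = 'demo_10046';
alter session set events '10046 trace name context forever, level 8';

-- ... execute the SQL statements of interest ...

alter session set events '10046 trace name context off';

-- then, at the OS prompt:
-- tkprof <trace_file>.trc output.txt sys=no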

IOsaturationtoolkit-v2 with Exadata IORM and AWESOME text graph

I’ve got a new version of the IOsaturation toolkit, which you can download here: http://karlarao.wordpress.com/scripts-resources/. It has a cool script called “smartscanloop” that shows you the Smart Scan MB/s per database across the Exadata compute nodes. It samples every 2 seconds, so that’s pretty fine-grained, near-real-time performance data as a text graph. Very useful for doing IORM demos and for monitoring which database is currently hogging the IO resources; and since it’s presented in a consolidated view, you don’t have to go to each database’s Enterprise Manager performance page and have a bunch of browser windows open.
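
The toolkit’s own code isn’t shown here; as a minimal sketch of the underlying idea, Smart Scan throughput can be derived by sampling the relevant statistic and taking deltas over the 2-second interval:

select	value
from	v$sysstat
where	name = 'cell physical IO interconnect bytes returned by smart scan';

-- sample twice, 2 seconds apart; (delta / 2) / 1048576 gives Smart Scan MB/s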

Table High Water Mark and How Empty the Table Is

In a recent post about reducing the high water mark in a table by shrinking it, someone asked if you could tell how much “space” there was in a table that could be recovered by compressing it.

Yes. As pseudo-code:

  1. Take number_of_rows * average_row_length: that is how much data there is.
  2. Divide that by the block size, to give how many blocks-worth of data you have.
  3. Divide this by an “overhead” fudge factor of about 20% (see later) to allow for wasted space.
  4. Compare the result to the number of blocks used by the table.

Here is some code to do it, showing some extra data and how we calculate the effective percentage of space used (pct_spc_used). It runs against an example table, ‘HWM’.
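
The post’s full script isn’t included in this excerpt; here is a minimal sketch of the calculation just described, assuming an 8KB block size and dividing by 0.8 to allow the ~20% overhead (the table name HWM is from the post, the rest is illustrative):

select
	num_rows,
	avg_row_len,
	blocks						blocks_used,
	round(num_rows * avg_row_len / 8192)		data_blocks,	-- blocks-worth of raw data
	round(100 * (num_rows * avg_row_len / 8192)
	          / (0.8 * blocks), 1)			pct_spc_used	-- 0.8 allows ~20% overhead
from
	user_tables
where
	table_name = 'HWM'
;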