Oakies Blog Aggregator

OUI Updates Update

We’ve now received a response to the issue described in the previous entry. In short, OUI updates at install time are broken (apparently fixed in 11.2.0.3, but no updates are available for that release yet to check). The response begins: “Hi, The OUI looks for the metadata for the patches. There is an unpublished bug 11709996 on [...]

Idle Thoughts – SSD, Redo Logs, and Sector Size

December 18, 2011. It seems that I have been quite busy lately with computer-related tasks that are not associated with Oracle Database, making it difficult to slow down and focus on items that have a foundation in logic. Today I have had some spare time to dig further into the recently released “Oracle Core” [...]

Ad: The Oracle Query Optimizer 2-Day Seminar

On 31 January and 1 February 2012 I will present a two-day seminar about the Oracle query optimizer in Ballerup (DK). The event is organized by Miracle A/S. The content, which is based on chapters 2, 4, 5, 6, 9 and 10 of my book, is the following:

  • Chapter 1 describes the life cycle of SQL statements and when the database engine can share cursors.
  • Chapter 2 describes the aim and architecture of the query optimizer.
  • Chapters 3 and 4 discuss the statistics used by the query optimizer to carry out its work.
  • Chapter 5 describes the initialization parameters influencing the behavior of the query optimizer and how to set them.
  • Chapter 6 outlines different methods of obtaining execution plans, as well as how to read them and recognize inefficient ones.
  • Chapter 7 describes how to take advantage of available access structures in order to access data stored in a single table efficiently.
  • Chapter 8 goes beyond accessing a single table, by describing how to join data from several tables together.

The flyer and this page provide detailed information about the seminar.

ASH Exploding Across Versions

I sometimes forget which versions of ASH have which fields, so here is a bit of a cheat sheet. (See https://sites.google.com/site/oraclemonitor/dba_hist_active_sess_history for some queries on ASH; the 11.2 columns are documented at http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/dynviews_...)

Columns present in all of 10.2.0, 10.2.0.3, 11.1.0 and 11.2.0:

SAMPLE_ID, SAMPLE_TIME, SESSION_ID, SESSION_SERIAL#, SESSION_TYPE, USER_ID, QC_INSTANCE_ID, QC_SESSION_ID, SERVICE_HASH, PROGRAM, MODULE, ACTION, CLIENT_ID, SQL_ID, SQL_CHILD_NUMBER, SQL_OPCODE, SQL_PLAN_HASH_VALUE, XID, FORCE_MATCHING_SIGNATURE, EVENT, EVENT_ID, EVENT#, SEQ#, P1, P2, P3, WAIT_TIME, SESSION_STATE, TIME_WAITED, CURRENT_OBJ#, CURRENT_FILE#, CURRENT_BLOCK#, P1TEXT, P2TEXT, P3TEXT, WAIT_CLASS, WAIT_CLASS_ID, BLOCKING_SESSION, BLOCKING_SESSION_STATUS, BLOCKING_SESSION_SERIAL# (note that BLOCKING_SESSION and BLOCKING_SESSION_STATUS swap positions between the 10.2 and 11.x column orderings)

Added in 10.2.0.3:

PLSQL_ENTRY_OBJECT_ID, PLSQL_ENTRY_SUBPROGRAM_ID, PLSQL_OBJECT_ID, PLSQL_SUBPROGRAM_ID

Added in 11.1.0:

QC_SESSION_SERIAL#, CONSUMER_GROUP_ID, TOP_LEVEL_SQL_ID, TOP_LEVEL_SQL_OPCODE, SQL_PLAN_LINE_ID, SQL_PLAN_OPERATION, SQL_PLAN_OPTIONS, SQL_EXEC_ID, SQL_EXEC_START, IN_CONNECTION_MGMT, IN_PARSE, IN_HARD_PARSE, IN_SQL_EXECUTION, IN_PLSQL_EXECUTION, IN_PLSQL_RPC, IN_PLSQL_COMPILATION, IN_JAVA_EXECUTION, IN_BIND, IN_CURSOR_CLOSE, CURRENT_ROW#, FLAGS, REMOTE_INSTANCE#

Added in 11.2.0:

MACHINE, PORT, IN_SEQUENCE_LOAD, TOP_LEVEL_CALL#, TOP_LEVEL_CALL_NAME, IS_SQLID_CURRENT, SQL_OPNAME, TM_DELTA_TIME, TM_DELTA_CPU_TIME, TM_DELTA_DB_TIME, DELTA_TIME, DELTA_READ_IO_REQUESTS, DELTA_WRITE_IO_REQUESTS, DELTA_READ_IO_BYTES, DELTA_WRITE_IO_BYTES, DELTA_INTERCONNECT_IO_BYTES, PGA_ALLOCATED, TEMP_SPACE_ALLOCATED, BLOCKING_INST_ID, BLOCKING_HANGCHAIN_INFO, TIME_MODEL, IS_AWR_SAMPLE, ECID, CAPTURE_OVERHEAD, REPLAY_OVERHEAD, IS_CAPTURED, IS_REPLAYED

For reference, the 10.1 columns were:

SAMPLE_ID, SAMPLE_TIME, SESSION_ID, SESSION_SERIAL#, SESSION_TYPE, USER_ID, QC_SESSION_ID, QC_INSTANCE_ID, SERVICE_HASH, PROGRAM, MODULE, ACTION, CLIENT_ID, SQL_ID, SQL_CHILD_NUMBER, SQL_OPCODE, SQL_PLAN_HASH_VALUE, EVENT, EVENT_ID, EVENT#, SEQ#, P1, P2, P3, WAIT_TIME, SESSION_STATE, TIME_WAITED, CURRENT_OBJ#, CURRENT_FILE#, CURRENT_BLOCK#
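
If you would rather pull the list straight from your own database, a query along these lines does the job (a minimal sketch; it assumes you can see the SYS-owned dictionary views):

-- List the ASH history columns your version exposes, in view order.
select column_name
from   dba_tab_columns
where  owner      = 'SYS'
and    table_name = 'DBA_HIST_ACTIVE_SESS_HISTORY'
order  by column_id;

Swap the table_name for 'V_$ACTIVE_SESSION_HISTORY' to check the in-memory view instead.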

I Wish

A couple of days ago I wrote about some things I’d like to see in index statistics, which means changing dbms_stats.gather_index_stats(). Here’s an idea for dbms_stats.gather_table_stats(). I’d like to see the number of chained and migrated rows counted separately in dba_tables when we run the packaged procedure. The optimizer will use information about chained or migrated rows, but the information is only collected if you use the analyze command (and even then the two figures are summed into the single value chain_cnt).
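
To make the gap concrete, here’s a minimal sketch (using a hypothetical table t1) of the difference between the two calls:

-- dbms_stats leaves chain_cnt unset (NULL, or whatever it was before):
execute dbms_stats.gather_table_stats(user, 'T1')
select chain_cnt from user_tables where table_name = 'T1';

-- analyze populates it, but chained and migrated rows are summed into one figure:
analyze table t1 compute statistics;
select chain_cnt from user_tables where table_name = 'T1';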

I have to say that it does look as if the information that Oracle would need to count (and distinguish between) chained rows and migrated rows is already available in the blocks. If we check a bit of a block dump we can see that various flags seem to tell us everything that the code would need to know. Here’s an extract from a table block dump, showing the start of three rows (technically row pieces) in the block.

tab 0, row 0, @0x1c79
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x0180000b.0

tab 0, row 1, @0x1952
tl: 807 fb: --H-FL-- lb: 0x0  cc: 4
col  0: [200]
 78 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 ...

tab 0, row 2, @0x162b
tl: 807 fb: --H-FL-- lb: 0x0  cc: 4
col  0: [200]
 78 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 ...
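
(For reference, a dump like the one above can be produced along these lines; the file and block numbers below are hypothetical, and you would derive them from a rowid in your own table.)

-- Find the file and block holding a row of interest:
select dbms_rowid.rowid_relative_fno(rowid) file_no,
       dbms_rowid.rowid_block_number(rowid) block_no
from   t1
where  rownum = 1;

-- Then dump that block to the session trace file:
alter system dump datafile 5 block 11;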

Notice particularly the “flag byte (fb)”, and the presence of one row reporting a “next rowid (nrid)”.
Amongst other things, the flag byte tells us what the row pieces represent.

H – this is a row header
F – this is the first row piece of a row
L – this is the last row piece of a row

These three flag values start us in the right direction. If we have a simple migrated row then we will have a head piece (in the original row position), pointing to the complete row. The head piece will have an H flag byte, and an nrid pointing to the actual row; when we get to the actual row its flag byte will show only the FL bits, and there will be an hrid (head rowid) associated with it, pointing back to the original location.

If we have a simple chained row, the first section will have the H and F bits set, and an nrid entry; any “middle” row pieces will have an nrid, but no flag bits set, and the last row piece will have the L bit set, and no nrid or hrid.

If a row has migrated, and then chained after migrating, the first piece will have the H bit set, and an nrid entry; the next piece will have the F bit set and both an hrid and nrid; any middle pieces will have an nrid but no flag bits set; and the last piece will have the L bit set, but no nrid or hrid.

There may be other alternatives, of course, e.g. a row that first chains on an update, leaving a few columns behind, and then migrates those columns out of the initial location; then there may be other variations when rows shrink and start to “unchain” or “migrate back (partially)”; and I haven’t even started to think about rows with more than 254 columns, which have “intra-block” chaining – i.e. several row pieces in the same block. But looking at the flags and the information about head/next row pieces, it looks as if a full scan of a table could identify the number of chained rows and the number of migrated rows separately. If a piece has the H bit only, the row has migrated (but may also have chained); if it has H and F, the row has chained (only). Pieces with no flag bits set, or the L bit only, don’t add much value unless we want to work out an average number of pieces per chained row. Pieces with F and L are the migrated rows, and pieces with F only belong to rows which have migrated and then chained.

Inevitably there could be accounting errors on sampling, though. We might find 10 rows with H only, but only 7 rows with FL; what assumption should we make about the three outstanding rows? Might they be just migrated, or could they be migrated and chained? In view of the room for error, it might be sensible to implement the counts only as part of the approximate NDV mechanism, which runs only if you choose the auto sample size option, and always seems to do a full segment scan.

Related articles:

  • Another difference between analyze and dbms_stats
  • Anomalies with counting continued row activity

All Things Oracle

Last year I wrote a few articles for Simpletalk, a web service created by Redgate for users of SQL Server. This year, Redgate is setting up a similar service called “All things Oracle” (I’ve added a link in my blogroll) for Oracle users, and I’ve volunteered to write articles for them occasionally.

Some of the stuff they publish will be complete articles on their website, some will be short introductions with links to the authors’ own websites. My first article for them has just been posted – it’s an article that captures a couple of key points from the optimizer presentation I did at the UKOUG conference a couple of weeks ago.

Friday Philosophy – In Search of a Woodlouse

I don’t carry business cards around with me. I just never, ever think to get some done (either properly or with my trusty printer) and maybe this says something about my personal failings to sell myself. If anyone wants to contact me I tell them my email address and if they look confused I just say “ahh, Google me”. You see, having a very odd Surname means I am easy to find. {Reading this back I guess it could be interpreted as saying “I am so famous you will find me” but that is way, way, way from my meaning – I am going on the very unusual name that I have and nothing other than that!}

If you google (Bing, Alta Vista, whatever) “Widlake” you will get, probably on the first page: Brian Widlake, who was a BBC journalist and had a key interview with Nelson Mandela; Widlake Farm B&B down in Looe, Cornwall; a court case with BAA (nothing to do with me); an accountancy firm called Holmes Widlake; and me! Hey, not bad for some IT geek! It shows there are not many “Widlake”s out there.

If you search on “Martin Widlake” it’s pretty much just lonely little me. This is good as it means I am easily found. In the past I’ve searched on the names of friends and found it really hard to identify the exact person as there are so many people called “Kate Bush” or “Nicole Kidman” or “Stephen Hawking”.

However, my suggestion is seriously flawed and I should know this due to a conversation I have at least once a week. “And your name is, Sir?” “Martin Widlake”. Pause, faint sounds of rustling paper… “I’m sorry, could you say that again?” “Martin Widlake, with an ‘I’” (rustle rustle rustle) “I’m sorry sir, I can’t find any ‘Woodlock’/‘Woodlake’/‘Woodleg’/‘Wideleg’/‘Wiglig’ at all.” {choose word of choice, there are several more}. Careful spelling ensues and even then, something in the brain of some people cannot shake off the “Wood” and get to “Wid”. And yes, I know about the Martin/Martyn confusion and the suggestion about the ‘I’.

I had someone come up to me at the UKOUG conference last week and say they had tried to track me down after last year’s event and could not. No “Martin Woodlouse” to be found. *sigh*.

“martin oracle” does not help; it finds that toe-rag Martin Bach {OK, I admit it, Martin Bach is pretty damned hot at Oracle, and oh so annoyingly a nice bloke}, Martin Nash in Oracle Corp {fair enough, and again a nice bloke}, James Martin the cook {what the…? but that will please the realcuddleytoys}, and an Oracle religious association, but I ain’t going anywhere near that… I’m a page or two in, which is not bad actually; I can be happy with that.

My wife has it just as bad. She had a nice, obvious Surname {Parker} before I conned her into marrying me {and I did suggest we adopt her Surname when we married}. She joined one company a few years back where, due to her speaking a couple of Eastern European languages, they decided she was (phonetically) “Susan Vidlaaackay”. They seemed to find her real Surname more confusing than their assumption.

So, I am easy to find, but only if you actually know me and my odd Surname. Otherwise, “Martin Woodlock”, “Martin Woodlake”, “Martin Woodleg”, “Martin Wiglake”, “Martin Widesnake” {if only}, “Martin Wiglig”, “Martin Wideneck”, “Martin Wicklick”, “Martin Widelake”, “Martin Windleg”, “Martin Woodlouse” and (my favourite) “Marvin Wetleg” are all terms I somehow need to get into web search engines, if I want people to find me with ease.

Does anyone know of any other takes on my name that people think they know me by? Any rude suggestions or ones based on my being shorter than R2-D2 will be deleted with prejudice!

Oracle Nostalgia

When preparing the material for my “Oracle Lego – an introduction to Database Design” presentation for the UKOUG last week, I was looking back at my notes from a course on the topic from “a few years back”. There were a few bits which made me smile.

Oracle’s [SQL] implementation conforms to ANSI standard, although referential integrity will not be enforced until version 7

Any other old geezers having flashbacks? I am so glad my first major Oracle development project swapped to using a Beta of V7 very early, so we had the integrity turned on during most of development. I had to help a few projects go from V6 to V7 and turn on the RI – it was usually very, very painful. Or impossible. I always think back to those nightmare experiences when some bright spark suggests turning off referential integrity for “ease of development” or “performance” reasons. There are good performance reasons for altering how you implement RI but, as I said during my presentation on database design, I have never, ever, ever seen a system with RI turned off that did not have damaged data.
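
(As an aside: if you find yourself having to switch RI on against existing, possibly damaged, data today, the usual gentler route looks something like the sketch below, with hypothetical table and column names; NOVALIDATE trusts the old data while enforcing the rule for new changes.)

-- Enforce the rule for future DML without checking the existing rows:
alter table child add constraint child_parent_fk
  foreign key (parent_id) references parent (id)
  enable novalidate;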

Oracle’s optimiser is rule-based. Designing efficient queries involves taking advantage of the optimiser behaviour

You can tell this course was run in the UK due to the lack of ‘z’ in ‘optimiser’ :-) . How many of us can now make a stab at the seven or eight significant rules from the 15 (16, 17 as versions advanced) in the list? Several rules were to do with Clusters, so you did not care. Let’s think, what were the main things to keep in mind…

  • Most significant table last in the FROM clause, ordering upwards in the order you wanted to visit the tables {most significant being the one you felt you could most efficiently do the first filter against}.
  • WHERE clauses ordered downwards in the order you wanted them to be applied.
  • Order of preference to identify a row was something like ROWID, primary key, unique key, full non-unique key, partial unique key, partial non-unique key, full index scan, full table scan.
  • Disable index access by adding 0 to numeric columns and concatenating null to varchars (see the sketch below).

I’ve not checked back in the manuals (I have a set of the V7 on my laptop) so I’m probably wrong.
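
From memory, that last trick looked something like this (the table and column names are just for illustration):

-- RBO-era tricks to stop the optimiser using an index on a predicate:
select * from emp where empno + 0 = 7369;        -- numeric column
select * from emp where ename || '' = 'SMITH';   -- character column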

Storage….selecting suitable values for storage parameters … will improve the final performance of the database

Considering the “suitable values for storage parameters” was perhaps my first real conscious step into being a performance/design guy {I was lucky to be on a project where designing for the RBO and matching those rules was just part of being any developer}. Calculating rows-per-block, initial and next extent, pctincrease (not always zero, you know), initrans/maxtrans, segment-to-tablespace size… I learnt all about that and had spreadsheets for it all.

Now of course, all of the above about storage (and RBO) has pretty much disappeared. Oracle has made some of the contents of my brain redundant.

But some things have not changed at all in 18 years:

Users can be relied upon to know what they do NOT want, not what they want, which {unfortunately} is the premise from which analysis starts.

I think the above is the fundamental issue from which all iterative design methodologies spring, i.e. do not believe what the user says they want; show them something and fix it. It is probably human nature that we are not well able to express what we want, but have no problem pointing out that something is not at all what we want :-) . Add in all the issues of forgetting about the exceptions, assumed knowledge, and incompatible vocabularies (the words your users say to you are as confusing as the techno-babble you fire back at them), and all analysis is fundamentally flawed.

Do some analysis – but then prototype like crazy. With real users.

OUI Updates feature

One of the nice new features of the 11.2 installer for the Database (and indeed the installer for Enterprise Manager 12c) is the ability to tell the setup software to download any post-release recommended updates for the product and to apply them at install time. The installer first of all prompts you for your [...]

Thanks Jonathan ...

... and Amit, of course!

I mentioned during my SPM presentation that Amit Poddar's excellent paper and presentation on One-Pass Distinct Sampling and Incremental Statistics had gone missing from the web, but Jonathan Lewis is now hosting the files on his original post about Amit's presentation.