Little sleeps

A peripheral question in a recent comment (made in response to me asking whether a loop had been written with a sleep time of 1/100th or 1/1000th of a second) asked “How do you sleep for 1/1000th of a second in pure PL/SQL?”

The answer starts with “How pure is pure?” Here’s a “pure” PL/SQL solution that “cheats” by calling one of the routines in Oracle’s built-in Java library:
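The extract cuts off before the code, but the canonical trick (a sketch, assuming Java is installed in the database – the procedure name milli_sleep is my own invention) is to publish java.lang.Thread.sleep(), which takes its sleep time in milliseconds, through a PL/SQL call specification:

rem
rem  dbms_lock.sleep() only offers 1/100th second granularity;
rem  java.lang.Thread.sleep(long) works in milliseconds.
rem

create or replace procedure milli_sleep( i_milliseconds in number )
as language java
name 'java.lang.Thread.sleep(long)';
/

rem  sleep for 1/1000th of a second
execute milli_sleep(1)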

Nologging

Bobby Durrett recently published a note about estimating the volume of non-logged blocks written by an instance with the aim of getting some idea of the extra redo that would be generated if a database were switched to “force logging”.

Since my most recent blog notes have included various extracts and summaries from the symbolic dumps of redo logs, it occurred to me that another strategy for generating the same information would be to dump the redo generated by Oracle when it wants to log some information about non-logged blocks. This may sound like a contradiction, of course, but it’s the difference between data and meta-data: if Oracle wants to write data blocks to disc without logging their contents, it needs to write a note into the redo log saying “there is no log of the contents of these blocks”.
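As a sketch of the idea (hedged: the layer/opcode numbers are my assumption – layer 19 is the “direct loader” layer, and I believe opcode 2 is the “invalidate block range” record that nologging writes leave behind), you can ask Oracle to dump just those records from a log file:

rem  dump only the direct-loader invalidation records;
rem  the file name is a placeholder for a suitable online or archived log

alter system dump logfile '/u01/oradata/orcl/redo01.log'
        layer 19 opcode 2
;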

Update restarts

Somewhere, I think, I’ve published a note about an anomaly that’s been bugging me since at least Oracle 10g – but if it is somewhere on the Internet it’s hiding itself very well and I can’t find it, though I have managed to find a few scripts on my laptop that make a casual reference to the side effects of the problem. [Ed: a tweet from Timur Akhmadeev has identified a conversation in the comments on an older post that could explain why I thought I’d written something about the issue.]

Anyway, I’ve decided to create some new code and write the article (all over again, maybe). The problem is a strange overhead that can appear when you do a simple but large update driving off a tablescan.
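The detail of the new code isn’t in this extract, but the way you’d normally spot such an overhead is to snapshot a couple of session statistics around the update – a minimal sketch (the statistic names are my usual suspects, not a list taken from the article):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'redo entries',
                'db block changes',
                'table scan rows gotten'
        )
order by
        sn.name
;

rem  run the big update, then repeat the query and compare the deltas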

Quiz Night

Upgrades cause surprises – here’s a pair of results from a model that I constructed more than 15 years ago, and ran today on 12.2, then modified and ran again, then ran on 11.2.0.4, then on 12.1.0.2. It’s very simple: I just create a table, gather stats, then update every row.
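The results themselves aren’t reproduced in this extract, but the framework is easy to sketch (the table name, sizing and column definitions here are my own placeholders):

create table t1
as
select
        rownum                  id,
        lpad(rownum,10,'0')     small_vc
from
        all_objects
where
        rownum <= 50000
;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'
        );
end;
/

update t1 set small_vc = upper(small_vc);
commit;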

sql_patch

This note is a short follow-up to a note I wrote some time ago about validating foreign key constraints, where I examined the type of SQL Oracle generates internally to do the validation between parent and child tables. In that article I suggested (before testing) that you could create an SQL patch for the generated SQL to override the plan taken by Oracle – a plan dictated to some extent by hints (including a “deprecated” ordered hint) embedded in the code. I did say that the strategy might not work for SQL optimised by SYS, but it turned out that it did.

Here’s a little script I ran to test a few variations on the theme:
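The script itself isn’t included in this extract, but the core call would look something like the following sketch – note that create_sql_patch moved from dbms_sqldiag_internal to dbms_sqldiag (and gained a sql_id overload) in 12.2, so the exact call is version-dependent, and the sql_id and hint text below are placeholders:

declare
        m_patch_name    varchar2(64);
begin
        m_patch_name := dbms_sqldiag.create_sql_patch(
                sql_id          => '&m_sql_id',
                hint_text       => 'use_hash(@"SEL$1" "C"@"SEL$1")',
                name            => 'fk_validation_patch'
        );
        dbms_output.put_line('Patch created: ' || m_patch_name);
end;
/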

Join View

It’s strange how one thing leads to another when you’re trying to check some silly little detail. This morning I wanted to find a note I’d written about the merge command and “stable sets”, and ended up at a draft about updatable join views that I’d started in 2016 in response to a question on OTN (as it was at the time); that draft, in turn, led me to a model I’d written in 2008 showing that the manuals were wrong.
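For anyone who hasn’t met them: an updatable join view is a view joining two (or more) tables where DML is allowed only against columns of a “key-preserved” table. A minimal sketch (my own table and view names):

create table parent (
        id      number          primary key,
        pname   varchar2(20)
);

create table child (
        id      number          primary key,
        p_id    number          references parent,
        cname   varchar2(20)
);

create or replace view v_pc as
select
        c.id, c.cname, p.pname
from
        child   c,
        parent  p
where
        c.p_id = p.id
;

rem  child is the key-preserved table, so this update is legal ...
update v_pc set cname = upper(cname);

rem  ... but updating pname through the view raises ORA-01779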

Parse Solution

In the “Parse Puzzle” I posted a couple of days ago I showed a couple of extracts from an AWR report that gave contradictory results about the time the instance spent in parsing and hard parsing, and also revealed an amazing factor of 4 difference between the DB Time and the “SQL ordered by Elapsed Time”. My example was modelling a real-world anomaly I had come across, but was engineered to exaggerate the effect to make it easy to see what was going on.

Parse Puzzle

Here are some details from an AWR report covering a few minutes in the lifetime of an instance of 18.3. It’s a carefully constructed demonstration, and all I’ve done is take an AWR snapshot, execute a single SQL statement, then take another snapshot, so the only thing captured by the report is the work done in that brief time interval. The purpose of the exercise is to demonstrate how some Oracle features can make a complete nonsense of the AWR. (I have, as I often do, produced a model that reproduces an effect that can appear in production but exaggerates the effect to make it more clearly visible.)
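The mechanics of the capture are trivial – schematically (the awrrpt path is the standard one; adjust to taste):

execute dbms_workload_repository.create_snapshot

rem  execute the single target statement here

execute dbms_workload_repository.create_snapshot

rem  then generate the report covering the two snapshots
@?/rdbms/admin/awrrpt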

First the Time Model statistics:

LOB reads

This is a blog note that started life in September 2001 (which was before I started blogging, I think), got drafted as a blog in January 2014 because of a question on the Oracle-L list server, and has finally got to publication following a question on the ODC database forum. (But the comments in the blog don’t look as if they are relevant to the question.)

The question on Oracle-L was: Why is the same CLOB datablock read multiple times?

The answer is basically: client programs tend to set a “sensible” fetch buffer size for LOBs, and if the buffer size is less than the size of the LOB then the block(s) holding the LOB have to be visited multiple times; for nocache LOBs that means the block(s) will be read from disc multiple times.

This can be seen quite easily in SQL*Plus, which has two helpful parameters (one dating back to at least v5):
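The extract stops before naming them, but the two parameters are presumably long (the ancient one) and longchunksize – something like the following, where a longchunksize much smaller than the CLOB forces repeated visits to (and, for a nocache LOB, repeated reads of) the same blocks. The table and column names are placeholders:

rem  long          - maximum amount of the LOB to retrieve
rem  longchunksize - size of each piecewise fetch

set long 32768
set longchunksize 4096

select c1 from t1;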

Redo Dumps

A thread started on the Oracle-L list-server a few days ago asking for help analysing a problem where a simple “insert values()” statement (one that handled millions of rows per day) was running very slowly. There are many reasons why this might happen, ranging from the trivial (someone has locked the table in exclusive mode), through the slightly subtle (we’re trying to insert a row that collides on a uniqueness constraint with an uncommitted insert from another session) and the subtle (Oracle has to read through the undo to check current versions of blocks against read-consistent versions), to the esoteric (the ASSM space management blocks are completely messed up again).
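Given the title of the post, the diagnostic presumably ended up as a symbolic dump of the redo generated around one of the slow inserts – schematically (the substitution variable is a placeholder for the file name returned by the first query):

rem  identify the current online redo log file ...

select  member
from    v$logfile
where   group# = (
                select  group#
                from    v$log
                where   status = 'CURRENT'
        )
;

rem  ... and dump it symbolically to the session trace file

alter system dump logfile '&m_logfile';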