Oakies Blog Aggregator

Quiz Night

If you want to be good at trouble-shooting one of the most important skills you have to learn is how to examine the evidence. Here’s a section of a trace file that was published recently on the OTN database forum. There’s an interesting detail that you might pick out – how long will it take you [...]

Life Cycle of a Process Request

Oracle's Flashback Query facility lets you query a past version of a row by using the information in the undo segment. The VERSIONS option lets you see all the versions that are available. Thus, it is possible to write a simple query to retrieve all the values that changed on a process request record through its life cycle.

The Oracle parameter undo_retention determines how long that data remains in the undo segment. In my example, it is set to 900 seconds, so I can only query versions in the last 15 minutes. If I attempt to go back further than this I will get an error.
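To confirm the current setting, a quick check against V$PARAMETER (or SHOW PARAMETER in SQL*Plus) looks like this; it is a minimal sketch that simply reads the configured value:

-- configured undo retention in seconds
SELECT value FROM v$parameter WHERE name = 'undo_retention';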

column prcsinstance heading 'P.I.' format 9999999
column rownum heading '#' format 9
column versions_starttime format a22
column versions_endtime format a22

SELECT rownum, prcsinstance
, begindttm, enddttm
, runstatus, diststatus
, versions_operation, versions_xid
, versions_starttime, versions_endtime
FROM sysadm.psprcsrqst
VERSIONS BETWEEN timestamp
systimestamp - INTERVAL '15' MINUTE AND
systimestamp
WHERE prcsinstance = 2920185
/

# P.I. BEGINDTTM ENDDTTM RU DI V VERSIONS_XID VERSIONS_STARTTIME VERSIONS_ENDTIME
-- -------- ------------------- ------------------- -- -- - ---------------- --------------------- ----------------------
1 2920185 10:51:13 11/06/2010 10:52:11 11/06/2010 9 5 U 000F00070017BD63 11-JUN-10 10.52.10 AM
2 2920185 10:51:13 11/06/2010 10:52:11 11/06/2010 9 5 U 001A002C001CB1FF 11-JUN-10 10.52.10 AM 11-JUN-10 10.52.10 AM
3 2920185 10:51:13 11/06/2010 10:52:11 11/06/2010 9 7 U 002C001F000F87C0 11-JUN-10 10.52.10 AM 11-JUN-10 10.52.10 AM
4 2920185 10:51:13 11/06/2010 10:52:11 11/06/2010 9 1 U 000E000A001771CE 11-JUN-10 10.52.10 AM 11-JUN-10 10.52.10 AM
5 2920185 10:51:13 11/06/2010 10:52:02 11/06/2010 9 1 U 002A000F00125D89 11-JUN-10 10.52.01 AM 11-JUN-10 10.52.10 AM
6 2920185 10:51:13 11/06/2010 7 1 U 0021000B00132582 11-JUN-10 10.51.10 AM 11-JUN-10 10.52.01 AM
7 2920185 6 1 U 0004002000142955 11-JUN-10 10.51.10 AM 11-JUN-10 10.51.10 AM
8 2920185 5 1 I 0022002E0013F260 11-JUN-10 10.51.04 AM 11-JUN-10 10.51.10 AM

Now, I can see each of the committed versions of the record. Note that each version is the result of a different transaction ID.
Reading up from the last and earliest row in the report, you can see the history of this process request record.

  • At line 8 it was inserted (the value of pseudocolumn VERSIONS_OPERATION is 'I') at RUNSTATUS 5 (queued) by the component the operator used to submit the record.
  • At line 7, RUNSTATUS was updated to status 6 (Initiated) by the process scheduler.
  • At line 6 the process begins and updates the BEGINDTTM with the current database time, and sets RUNSTATUS to 7 (processing).
  • At line 5 the process completes, updates ENDDTTM to the current database time, and sets RUNSTATUS to 9 (success).
  • At line 4 the ENDDTTM is updated again. This update is performed by the Distribution Server process in the Process Scheduler domain as report output is posted to the report repository.  Note that the value is 1 second later than the VERSIONS_ENDTIME, therefore this time stamp is based on the operating system time for the host running the process scheduler. This server's clock is slightly out of sync with that of the database server.
  • At lines 3 to 1 there are three further updates, during which the distribution status is updated twice more.

For me, the most interesting point is that ENDDTTM is updated twice; first with the database time at which the process ended, and then again with the time at which any report output was successfully completed.

I frequently want to measure the performance of a process. I often write scripts that calculate the duration of the process as the difference between ENDDTTM and BEGINDTTM, but now it is clear that this also includes the time taken to post the report and log files to the report repository.
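For illustration, that naive calculation looks something like this (a sketch only; the 86400 factor converts the DATE difference from days to seconds, as in the query further below):

SELECT prcsinstance
,      (enddttm - begindttm)*86400 duration_secs
FROM   sysadm.psprcsrqst
WHERE  prcsinstance = 2920185
/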

For Application Engine processes, you can still recover the time when the process ended. If batch timings are enabled and written to the database, the BEGINDTTM and ENDDTTM are logged in PS_BAT_TIMINGS_LOG.

select * from ps_bat_timings_log where process_instance = 2920185

PROCESS_INSTANCE PROCESS_NAME OPRID RUN_CNTL_ID
---------------- ------------ ------------------------------ ------------------------------
BEGINDTTM ENDDTTM TIME_ELAPSED TIME_IN_PC TIME_IN_SQL TRACE_LEVEL
------------------- ------------------- ------------ ---------- ----------- -----------
TRACE_LEVEL_SAM
---------------
2920185 XXX_XXX_XXXX 52630500 16023
10:51:12 11/06/2010 10:52:02 11/06/2010 49850 35610 13730 1159
128

You can see above that ENDDTTM is the time when the process ended.

That gives me some opportunities. For Application Engine programs, I can measure the amount of time taken to post report content separately from the process execution time. This query shows me that this particular process took 49 seconds, but the report output took a further 9 seconds to post.

SELECT r.begindttm begindttm
, NVL(l.enddttm, r.enddttm) enddttm
, (NVL(l.enddttm, r.enddttm)-r.begindttm)*86400 exec_secs
, r.enddttm posttime
, (r.enddttm-NVL(l.enddttm, r.enddttm))*86400 post_secs
FROM sysadm.psprcsrqst r
LEFT OUTER JOIN sysadm.ps_bat_timings_log l
ON l.process_instance = r.prcsinstance
WHERE r.prcsinstance = 2920185

BEGINDTTM ENDDTTM EXEC_SECS POSTTIME POST_SECS
------------------- ------------------- ---------- ------------------- ----------
10:51:13 11/06/2010 10:52:02 11/06/2010 49 10:52:11 11/06/2010 9

For more detail on the Flashback Query syntax see the Oracle SQL Reference.

Related-Combine Operation „UNION ALL (RECURSIVE WITH)“

To make the interpretation of execution plans easier, in chapter 6 of TOP I defined three types of operations: standalone operations, unrelated-combine operations, and related-combine operations. For the combine operations I also added a list of all operations of each type. Since in 11.2 a new related-combine operation is available, I decided to write this short [...]
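For context, the new operation appears in execution plans for recursive subquery factoring; a query of the following shape (table and column names are purely illustrative) is the kind of statement that produces a UNION ALL (RECURSIVE WITH) step in an 11.2 plan:

WITH t(n) AS (
  SELECT 1 FROM dual        -- anchor member
  UNION ALL
  SELECT n + 1 FROM t       -- recursive member
  WHERE  n < 10
)
SELECT n FROM t
/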

What I learned about deadlines...

I learned that I am not the only one :) Seth's blog is one of the ones I read every time. They are short, to the point and almost always meaningful to me. Deadlines are the greatest motivator for me - if I do not have a deadline for something, I can pretty much guarantee you I will not finish it. I set my own little deadlines for things just to get finished. Whenever someone asks me to do something for them - write a foreword, make a recommendation, whatever - I typically say "sure and what is the drop dead date". If they know me, they'll give me a date before the true 'drop dead' just to have it in a timely fashion (because the odds they see it before then are slim to none).

Speaking of deadlines, I just finished the 2nd edition of Expert Oracle Database Architecture. Right now, this minute. Just have to dot I's and cross T's now - a few final copy edits and it'll be done. This will be the blurb on the back of the book (which you can expect to see soon).

Expert Oracle Database Architecture

Dear Reader,
I have a simple philosophy when it comes to the Oracle database: you can treat it as a black box and just stick data into it, or you can understand how it works and exploit it fully. If you choose the former, you will, at best, waste money and miss the potential of your IT environment. At worst, you will create nonscalable and incorrectly implemented applications—ones that damage your data integrity and, in short, give incorrect information. If you choose to understand exactly how the Oracle database platform should be used, then you will find that there are few information management problems that you cannot solve quickly and elegantly.

Expert Oracle Database Architecture is a book that explores and defines the Oracle database. In this book I’ve selected what I consider to be the most important Oracle architecture features, and I teach them in a proof-by-example manner, explaining not only what each feature is, but also how it works, how to implement software using it, and the common pitfalls associated with it. In this second edition, I’ve added new material reflecting the way that Oracle Database 11g Release 2 works, updated stories about implementation pitfalls, and new capabilities in the current release of the database. The number of changes between the first and second editions of this book might surprise you. Many times as I was updating the material – I myself was surprised to discover changes in the way Oracle worked that I was not yet aware of. In addition to updating the material to reflect the manner in which Oracle Database 11g Release 2 works – I’ve added an entirely new chapter on data encryption. Oracle Database 10g Release 2 added a key new capability – transparent column encryption – and Oracle Database 11g Release 1 introduced transparent tablespace encryption. This new chapter takes a look at the implementation details of these two key features as well as manual approaches to data encryption.

This book is a reflection of what I do every day. The material within covers topics and questions that I see people continually struggling with, and I cover these issues from a perspective of "When I use this, I do it this way." This book is the culmination of many years’ experience using the Oracle database, in myriad situations. Ultimately, its goal is to help DBAs and developers work together to build correct, high-performance, and scalable Oracle applications.

Thanks and enjoy!

Cursor_sharing: a picture is worth a 1000 words

Anyone who has been around Oracle performance over the years knows the grief that hard-parsing SQL queries can cause on highly concurrent applications. The number one reason for hard parsing is applications that don't use bind variables. Without bind variables, queries that would otherwise be shared get recompiled because their text is different, and Oracle treats them as different queries. Oracle addressed this issue with a parameter called cursor_sharing. The parameter cursor_sharing has three values:

  • exact - the default; share only statements with identical text
  • similar - replace literals with bind variables, but if there is a histogram on the column, keep the literal in place
  • force - replace literals with bind variables and use the existing plan if one exists
Here is what the load looks like going from the default, exact, to the value force, for a load of the same query run without bind variables:

It looks like a significant load saving - impressive!
Now, many people tell me they think there are bugs with "force" and that you should use "similar" instead. The value similar does a similar thing, but if there are histograms on the column then Oracle will attempt, in certain cases, to use different plans for different values. Sounds cool, huh? Well, there are bugs. Here is the same load with similar:
If we look at the different child cursors for this statement, we find that Oracle, instead of sharing the children, creates a different one for each execution:
This bug still seems to exist on 11gR2:
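To check this yourself, a query along these lines against V$SQL counts the child cursors under each parent for the test statement (a minimal sketch; the SQL_TEXT filter is just illustrative and assumes cursor_sharing has replaced the literals):

SELECT sql_id, COUNT(*) child_cursors, SUM(executions) total_execs
FROM   v$sql
WHERE  sql_text LIKE 'SELECT count(*) FROM t1 where c1 <%'
GROUP  BY sql_id
ORDER  BY child_cursors DESC
/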
Here is the code for the examples I used (run by 8 users on 10g and 12 users on 11g):
--alter session set cursor_sharing=exact;
--alter session set cursor_sharing=force;
--alter session set cursor_sharing=similar;
declare
  l_cursor integer default 0;
  stmt     varchar2(400);
  ret      number;
BEGIN
  -- hparse is a sequence used to give each session a different random seed
  select hparse.nextval into ret from dual;
  dbms_random.seed(ret);
  FOR i IN 1..1000 LOOP
    l_cursor := dbms_sql.open_cursor;
    -- build a statement with two random literals so the text differs every time
    stmt := 'SELECT count(*) FROM t1 where c1 < '||
            dbms_random.value()||' and c2 < '||dbms_random.value();
    execute immediate stmt into ret;
    dbms_sql.close_cursor(l_cursor);
  END LOOP;
END;
/
Table t1 has no histograms. In the case above it had one row, but results were similar with no rows:
create table t1 (c1 number, c2 number);
insert into t1 values (0,0);
commit;
The issue should be addressed in 11g with a combination of cursor_sharing and adaptive cursor sharing.
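As a quick way to see adaptive cursor sharing at work on 11g, the IS_BIND_SENSITIVE and IS_BIND_AWARE columns in V$SQL show whether the optimizer is tracking bind values for each child cursor (again a minimal sketch, with an illustrative text filter):

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT count(*) FROM t1 where c1 <%'
/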

More VST Notation

Just got DB Optimizer 2.5 out the door two weeks ago and now work is going fast on 2.5.1 slated for beginning of August. Already added EXISTS and NOT EXISTS notation into the diagrams:

SELECT
cs.customerid,
cs.firstname,
cs.lastname,
mr.rentalid,
mr.duedate,
mr.totalcharge,
ri.itemnumber
FROM
MOVIES.customer cs,
MOVIES.movierental mr,
MOVIES.rentalitem ri
WHERE
LENGTH (cs.lastname) = 5 AND
cs.zip > 75062 AND
1 < cs.customerid + 2 AND
cs.phone BETWEEN 9625569900 AND 9999569900 AND
ROUND (ri.rentalid) > 10 AND
TRUNC (ri.itemnumber) > 1 AND
mr.totalcharge > (SELECT AVG (totalcharge)
FROM MOVIES.movierental) AND
ri.moviecopyid NOT IN (SELECT mc.moviecopyid
FROM MOVIES.moviecopy mc
WHERE
mc.copyformat = 'vhs' AND
mc.copycondition = 'old' AND
mc.movieid IN (SELECT mt.movieid
FROM MOVIES.movietitle mt
WHERE
mt.year < 1990 AND
mt.rating IN ('pg', 'r') AND
mt.categoryid IN (SELECT mc.categoryid
FROM MOVIES.moviecategory mc
WHERE mc.rentalprice = (SELECT MAX (rentalprice)
FROM MOVIES.moviecategory
WHERE categoryid = mc.categoryid)))) AND
mr.CUSTOMERID = cs.CUSTOMERID AND
ri.RENTALID = mr.RENTALID

Continued Rows

Do you think you know how Oracle records access to migrated or chained rows? You may need to check your theories. Here’s a little demonstration that may amuse you. It uses an 8KB block size, a locally managed tablespace (LMT) with uniform extent sizes of 1MB, and freelist management rather than automatic segment space [...]
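If you want to recreate a similar environment, a tablespace with uniform 1MB extents and freelist (manual) segment space management might be created along these lines; the name and datafile path are illustrative, and this assumes the database default block size is already 8KB:

CREATE TABLESPACE demo_8k
  DATAFILE '/u01/oradata/demo_8k_01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M    -- uniform 1MB extents
  SEGMENT SPACE MANAGEMENT MANUAL;           -- freelists rather than ASSM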

Do It Yourself Exadata Performance! Really? Part II.

Yes, shamelessly, I’m still groveling for more votes. My Suggest-A-Session vote count is at a modest 45! Come on folks, I know we can do better than that. If you don’t get there and vote I fear there is no way for me to get a podium for this topic unless you all collectively petition [...]

Open Storage S7000 with Exadata… a good fit for ETL/ELT operations.

I have worked on Exadata V2 performance projects with Kevin Closson for nearly a year now and have had the opportunity to evaluate several methods of loading data into a data warehouse. The most common, and by far the fastest method, involves the use of “External Tables”. External tables allow the user to define a table object made up of text files that live on a file system.   Using External Tables allows for standard SQL parallel query operations to be used to load data into permanent database tables.

SQL> alter session enable parallel dml ;
SQL> insert /*+ APPEND */ into mytable select * from ext_tab ;
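For completeness, ext_tab in the example above would be an external table defined over the staged flat files, roughly along these lines (the directory, file names, columns, and delimiter are illustrative; this is a sketch, not the definition used in the tests below):

CREATE DIRECTORY stage_dir AS '/mnt/stage';   -- e.g. an NFS share on the staging server

CREATE TABLE ext_tab (
  id    NUMBER,
  col1  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('file001.dat', 'file002.dat')
)
PARALLEL;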

With the size and power of Exadata, businesses are creating larger and larger data warehouses. There will often be dozens of machines that collect and stage data for ingest by the data warehouse.  So this means the staging area for these flat-files must be huge, real fast, and accessible from multiple networks.

What options are available for staging input files?

With Exadata V2, or any RAC environment, flat-file data has to be present on all nodes in order to fully utilize parallel query.  The natural first choice with Exadata V2 is to use DBFS.

DBFS comes with Exadata and allows for easy clustering across all Exadata database nodes.  The real data store for DBFS is a set of database tables residing in a tablespace within the database machine.  The DBFS client program is then used to mount the DBFS filesystem so that it appears to the Linux user as just another file system.  This allows file system data to be managed just like any other database data while using the full power of Exadata.  DBFS is quite fast and works well for housing external tables, but it does cut down on the storage available for the data warehouse.  Also, since DBFS is simply a client on an Exadata database node, it uses CPU resources on the database machine to initially transfer or create the flat files.

Open Storage S7000: a natural staging area for Exadata

If you want to extend the amount of storage available to stage data for your warehouse, then the S7000 is an excellent choice.  The S7000 can stage files off traditional networks using 1gigE and 10gigE connections.  This allows multiple machines to seamlessly connect to the S7000 in order to stage data for ingest.  This activity has no effect on the Exadata users, since the S7000 is a self-contained storage server – unlike DBFS, which uses CPU cycles from the database grid to manage and store the flat-file data.

Once the data is on the S7000, we can use IPoIB and connect directly into the high-bandwidth Infiniband network that is part of Exadata V2.  This allows the S7000 to be positioned neatly between Exadata and the traditional gigE networks.

What about performance?

As part of a larger project, I was able to run a quick test.  I had the following:

  • S7410 with 12 drives
  • 128 x 1GB files on a share
  • 8 db nodes active (x4170) with the share mounted on all nodes.

I created an external table across all the files and performed two tests:

  1. Select count(*).
    SQL> select count(*) from ext_tab;
  2. Insert as Select “IAS”
    SQL> alter session enable parallel dml ;
    SQL> insert /*+APPEND */ into mytable select * from ext_tab;

Both when querying and when loading data with "IAS", I was able to get 1.2GB/sec of throughput, the same as I saw in my earlier tests with the S7000 and 10gigE.  That is over 4TB/hr with just one head node for the S7410.  With a clustered configuration and multiple mount points, the load rate could be increased even further.

Summary

The Sun Open Storage S7410 server is an excellent choice for managing file system data.  With the ability to connect to multiple networks, it is a perfect fit to stage data for Exadata environments as well.

Filed under: Exadata, Oracle, Storage

Upcoming Index Seminar Dates (The Tourist)

I’ve just finalised some European dates for my Oracle Index Internals and Best Practices Seminar for later in the year. Location and dates are as follows:

  • Turkey (Istanbul): 18-19 October 2010
  • Germany (Munich): 21-22 October 2010
  • Czech Republic (Prague): 25-26 October 2010

As usual, always a good idea to book early to avoid disappointment. [...]