Oakies Blog Aggregator

The Helsinki approaches to WoD application development

[continuing from my previous post] In a very similar way as I did here for MVC, the Helsinki UI/BL/DL code classes can be mapped across the client, middle and data tiers too. What I do differently here compared to the earlier display of MVC mapping across the tiers is that whenever the M is distributed across two tiers, I divide the M into BL and DL. The guideline of how to split up the M is now...

Read Consistency, "ORA-01555 snapshot too old" errors and the SCN_ASCENDING hint

Oracle's read consistency model uses a true multi-versioning approach that allows readers not to block writers and writers not to block readers. Obviously this great feature, which allows highly concurrent processing, doesn't come for free, since the information needed to build multiple versions of the same data has to be stored somewhere.

Oracle uses the so-called undo information not only to roll back ongoing transactions but also to re-construct old versions of blocks if required. Put very simply, when reading data Oracle knows the point in time (which corresponds to an internal counter called the SCN, System Change Number) that the data needs to be consistent with. In the default READ COMMITTED isolation mode this point in time is defined when a statement starts to execute; you could also say that at the moment a statement starts to run, its result is pre-ordained. When Oracle processes a block it checks whether the block is "old" enough. If it discovers that the block content is too new (it has been changed by other sessions, but the current access is not supposed to see this updated content according to the point in time assigned to the statement execution), it creates a copy of the block and uses the information available in the corresponding undo segment to re-construct an older version of the block. Note that this process can be iterative: if the re-constructed version of the block is still not sufficiently old, more undo information is applied to go further back in time.
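
You can see this point-in-time behaviour directly with a flashback query (a small sketch; it assumes EXECUTE on DBMS_FLASHBACK and a table T): the SCN named explicitly below plays exactly the role that the SCN assigned at statement start plays for an ordinary query.

-- capture the current SCN
select dbms_flashback.get_system_change_number as current_scn from dual;

-- the result must be consistent with the named SCN;
-- 1234567 is a made-up value for illustration only
select count(*) from t as of scn 1234567;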

Since the undo information of committed transactions is marked as re-usable, Oracle is free to overwrite the corresponding undo data under certain circumstances (e.g. no more free space left in the UNDO tablespace). If an older version of a block now needs to be created but the undo information required to do so has already been overwritten, the infamous "ORA-01555 snapshot too old" error is raised, since the required read-consistent view of the data can no longer be generated.

In order to avoid this error, from 10g onwards you only need a sufficiently large UNDO tablespace in automatic undo management mode, so that the undo information required to create old versions of the blocks doesn't get overwritten prematurely. In 9i you need to set the UNDO_RETENTION parameter according to the longest expected runtime of your queries, and of course have sufficient space in the UNDO tablespace to allow Oracle to honour this setting.
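
V$UNDOSTAT is useful for checking how close you are to the edge (a quick sketch): it shows, in 10-minute buckets, the undo generated, the longest running query and the number of ORA-01555 errors raised.

-- undo blocks generated, longest query (seconds) and ORA-01555 count
-- per 10-minute interval
select begin_time, end_time, undoblks, maxquerylen, ssolderrcnt
from v$undostat
order by begin_time;

-- 9i-style manual setting: match the longest expected query runtime (seconds)
alter system set undo_retention = 3600;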

So far, then, Oracle would either provide a consistent view of the data according to its read consistency model, or raise an error if the required undo data was no longer available.

Enter the SCN_ASCENDING hint: as already mentioned by Martin Berger and Chandra Pabba, Oracle has officially documented the SCN_ASCENDING hint for Oracle 11.1.0.7 in Metalink Note 6688108.8 ("Enhancement: Allow ORA-1555 to be ignored during table scan").

I've run some tests using this hint on 9.2.0.8, 10.2.0.4 and 11.1.0.7.

In order to increase the probability of running into the dreaded ORA-01555 error, perform the following preparation steps (note that this applies to all examples provided here):

-- create a small undo tablespace
-- (no datafile path is given here; supply one suitable for your system)
create undo tablespace undo_small datafile '' size 2M;

-- activate small UNDO tablespace
alter system set undo_tablespace = 'UNDO_SMALL' scope = memory;

-- small cache so that old copies of the blocks won't survive in the buffer cache
-- and delayed block cleanout probability increases
alter system set db_cache_size = 2M scope = memory;

Note that all examples here use DBMS_JOB to simulate simultaneous modification and reading of data, so you need to have the JOB_QUEUE_PROCESSES parameter set accordingly, otherwise the job won't get executed.
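
A quick check (the value 10 is just an illustration):

-- DBMS_JOB jobs only run when the job queue is enabled (value > 0)
show parameter job_queue_processes

alter system set job_queue_processes = 10 scope = memory;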

I've started with a variation of Tom Kyte's example of how to deliberately force an ORA-01555 error, which looks like this:

drop table t purge;

create table t
as
select
a.*, 1 as my_id
from
all_objects a
order by
dbms_random.random;

alter table t add constraint t_pk primary key (object_id);

exec dbms_stats.gather_table_stats(null, 'T', cascade=>true)

set serveroutput on timing on

alter session set nls_language = 'AMERICAN';

spool tomkyte_ora1555_error_demo_modified2.log

declare
cursor c
is
select
*
from
(
select /*+ first_rows */
my_id
from
t
order by
object_id
)
where rownum <= 3000;

l_my_id t.my_id%type;
l_rowcnt number := 0;
l_job_id binary_integer;

function submit_job(what in varchar2)
return binary_integer
is
pragma autonomous_transaction;
job_id binary_integer;
begin
dbms_job.submit(job_id, what);
commit;
return job_id;
end;
begin
select
distinct
my_id
into
l_my_id
from
t;

dbms_output.put_line('The MY_ID as of now: ' || l_my_id);

-- result of this query
-- is as of opening the cursor
-- so it needs to return the same MY_ID
-- for all rows as above query demonstrates
open c;

-- now start to wreck the undo
l_job_id := submit_job('
begin
for x in (
select
rowid as rid
from
t
where
rownum <= 10000
) loop
update
t
set
object_name = reverse(object_name),
my_id = 1 - my_id
where
rowid = x.rid;
commit;
end loop;
end;
');

-- start fetching from result set
loop
fetch c into l_my_id;
exit when c%notfound;
l_rowcnt := l_rowcnt + 1;
dbms_output.put_line('Row: ' || l_rowcnt || ' ID: ' || l_my_id);
dbms_lock.sleep(0.01);
end loop;
close c;
exception
when others then
dbms_output.put_line('Rows fetched: ' || l_rowcnt);
raise;
end;
/

spool off

What this code snippet basically does is the following:

1. It creates a table as a copy of the ALL_OBJECTS view, ordered randomly, and adds a primary key index on OBJECT_ID

2. It issues a query that uses the FIRST_ROWS hint to force an index access to the table, thanks to the available primary key index and the corresponding ORDER BY. It's one of the built-in heuristic rules of the (deprecated) FIRST_ROWS cost-based optimizer mode that an ORDER BY will use an index if possible to avoid a sort operation. This deliberately inefficient approach ensures that each block of the table is accessed multiple times, due to the random row access driven by the ordered index.

3. It then spawns a job simulating a separate session that starts to overwrite, row by row, the data the query is supposed to read. Specifically, the MY_ID column, which was generated with the value 1, is set to 0. Because each single-row update is committed, the small undo tablespace eventually fills up, and old undo data can be, and needs to be, overwritten due to insufficient space.

4. While the update loop is running, the data from the query is slowly fetched. Since each block is visited many times by the index access, it's almost guaranteed that the undo information required to re-construct the old version of a block has been overwritten (due to the artificially small undo tablespace), and therefore the ORA-01555 error will occur.

And sure enough, when running this in 11.1.0.7 with the prerequisites met, the output looks similar to the following. Note that the first line shows what we expect the cursor query to return for every row: only one distinct value, namely 1.

The MY_ID as of now: 1
Row: 1 ID: 1
Row: 2 ID: 1
Row: 3 ID: 1
Row: 4 ID: 1
Row: 5 ID: 1
Row: 6 ID: 1
Row: 7 ID: 1
Row: 8 ID: 1
Row: 9 ID: 1
Row: 10 ID: 1
.
.
.
Row: 1768 ID: 1
Row: 1769 ID: 1
Row: 1770 ID: 1
Row: 1771 ID: 1
Row: 1772 ID: 1
Row: 1773 ID: 1
Rows fetched: 1773
declare
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 15 with name "_SYSSMU15$"
too small
ORA-06512: at line 83

Elapsed: 00:00:52.21

So you can see that the expected error occurred.

Now I've modified the example to use the SCN_ASCENDING hint for the query that fails:

.
.
.
declare
cursor c
is
select /*+ scn_ascending */
*
from
(
select /*+ first_rows */
my_id
from
t
order by
object_id
)
where rownum <= 3000;
.
.
.

Re-running the test case shows that you still get the same error; evidently the hint doesn't help to avoid it in this case.

Now if you read the Metalink note subject again, you might notice that it says: "Allow ORA-1555 to be ignored during table scan". Since our example doesn't use a full table scan but a table access by ROWID, presumably the hint may not work as expected here.
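
This is easy to verify with DBMS_XPLAN (a quick sketch): the plan should show an INDEX FULL SCAN plus TABLE ACCESS BY INDEX ROWID rather than a TABLE ACCESS FULL.

explain plan for
select
*
from
(
select /*+ first_rows */
my_id
from
t
order by
object_id
)
where rownum <= 3000;

select * from table(dbms_xplan.display);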

Let's modify our test case a little bit to use a full table scan instead of the index access path:

drop table t purge;

create table t
as
select
a.*, 1 as my_id
from
all_objects a
order by
dbms_random.random;

alter table t add constraint t_pk primary key (object_id);

exec dbms_stats.gather_table_stats(null, 'T', cascade=>true)

set serveroutput on timing on

alter session set nls_language = 'AMERICAN';

spool tomkyte_ora1555_error_demo_modified3_no_index_usage.log

declare
cursor c
is
select
*
from
(
select /*+ all_rows */
my_id
from
t
--order by
-- object_id

)
where rownum <= 5000;

l_my_id t.my_id%type;
l_rowcnt number := 0;
l_job_id binary_integer;

function submit_job(what in varchar2)
return binary_integer
is
pragma autonomous_transaction;
job_id binary_integer;
begin
dbms_job.submit(job_id, what);
commit;
return job_id;
end;
begin
select
distinct
my_id
into
l_my_id
from
t;

dbms_output.put_line('The MY_ID as of now: ' || l_my_id);

-- result of this query
-- is as of opening the cursor
-- so it needs to return the same MY_ID
-- for all rows as above query demonstrates
open c;

-- now start to wreck the undo
l_job_id := submit_job('
begin
for x in (
select
rowid as rid
from
t
where
rownum <= 10000
) loop
update
t
set
object_name = reverse(object_name),
my_id = 1 - my_id
where
rowid = x.rid;
commit;
end loop;
end;
');

-- start fetching from result set
loop
fetch c into l_my_id;
exit when c%notfound;
l_rowcnt := l_rowcnt + 1;
dbms_output.put_line('Row: ' || l_rowcnt || ' ID: ' || l_my_id);
dbms_lock.sleep(0.01);
end loop;
close c;
exception
when others then
dbms_output.put_line('Rows fetched: ' || l_rowcnt);
raise;
end;
/

spool off

So now we're simply reading the table row by row without the index usage, which results in a full table scan operation. Let's check the result without the SCN_ASCENDING hint:

The MY_ID as of now: 1
Row: 1 ID: 1
Row: 2 ID: 1
Row: 3 ID: 1
Row: 4 ID: 1
Row: 5 ID: 1
Row: 6 ID: 1
Row: 7 ID: 1
Row: 8 ID: 1
Row: 9 ID: 1
Row: 10 ID: 1
.
.
.
Row: 4662 ID: 1
Row: 4663 ID: 1
Row: 4664 ID: 1
Row: 4665 ID: 1
Row: 4666 ID: 1
Row: 4667 ID: 1
Row: 4668 ID: 1
Row: 4669 ID: 1
Rows fetched: 4669
declare
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 12 with name "_SYSSMU12$"
too small
ORA-06512: at line 83

Elapsed: 00:01:03.41

OK, great. It takes a bit longer, and you may need to increase the ROWNUM limits accordingly to encounter the error, but it's still reproducible.

Let's try again with the hint:

.
.
.
declare
cursor c
is
select /*+ scn_ascending */
*
from
(
select /*+ all_rows */
my_id
from
t
--order by
-- object_id
)
where rownum <= 5000;
.
.
.

And here's the (scary) result:

The MY_ID as of now: 1
Row: 1 ID: 1
Row: 2 ID: 1
Row: 3 ID: 1
Row: 4 ID: 1
Row: 5 ID: 1
Row: 6 ID: 1
Row: 7 ID: 1
Row: 8 ID: 1
Row: 9 ID: 1
Row: 10 ID: 1
.
.
.
Row: 530 ID: 1
Row: 531 ID: 0
Row: 532 ID: 1
Row: 533 ID: 1
Row: 534 ID: 0
Row: 535 ID: 1
Row: 536 ID: 1
Row: 537 ID: 1
Row: 538 ID: 1
Row: 539 ID: 1
Row: 540 ID: 1
Row: 541 ID: 1
Row: 542 ID: 1
Row: 543 ID: 1
Row: 544 ID: 0
Row: 545 ID: 1
Row: 546 ID: 1
.
.
.
Row: 4973 ID: 1
Row: 4974 ID: 0
Row: 4975 ID: 1
Row: 4976 ID: 1
Row: 4977 ID: 1
Row: 4978 ID: 1
Row: 4979 ID: 1
Row: 4980 ID: 1
Row: 4981 ID: 1
Row: 4982 ID: 1
Row: 4983 ID: 1
Row: 4984 ID: 1
Row: 4985 ID: 1
Row: 4986 ID: 1
Row: 4987 ID: 1
Row: 4988 ID: 1
Row: 4989 ID: 1
Row: 4990 ID: 1
Row: 4991 ID: 1
Row: 4992 ID: 1
Row: 4993 ID: 1
Row: 4994 ID: 1
Row: 4995 ID: 1
Row: 4996 ID: 1
Row: 4997 ID: 1
Row: 4998 ID: 1
Row: 4999 ID: 1
Row: 5000 ID: 1

PL/SQL procedure successfully completed.

Elapsed: 00:00:54.26

It can clearly be seen that the 0 values returned by the query shouldn't be there, according to the first line of the output, so this scary feature seems to have worked in this case.

Interestingly, you get the same result and behaviour when running the test case against 10.2.0.4: although the hint is not documented for that version, it seems to work there, too.

I couldn't reproduce this on 9.2.0.8, so obviously it wasn't backported there.

Here's another, slightly more complex but even more impressive test case, which basically does the same but introduces a few more ingredients. Note that it might require the following grants, executed as user SYS, for the user running the test case. The grant on DBMS_LOCK is actually also required for Tom Kyte's demonstration code above:

grant execute on sys.dbms_pipe to cbo_test;

grant execute on sys.dbms_lock to cbo_test;

Here is the code:

drop table scn_ascending_demo purge;

create table scn_ascending_demo
as
select
1 as col1
, rpad('x', 100, 'x') as filler
from dual
connect by level <= 2000;

create index scn_ascending_demo_idx1 on scn_ascending_demo(filler);

drop table scn_ascending_demo_wreck_undo purge;

create table scn_ascending_demo_wreck_undo
as
select * from all_objects
where rownum <= 10000;

create or replace function slow_fetch (the_cursor sys_refcursor)
return sys.ku$_objnumset
-- use this in 9i
-- return mdsys.sdo_numtab authid current_user
pipelined
is
n_num number;
begin
loop
fetch the_cursor into n_num;
if the_cursor%notfound then
close the_cursor;
exit;
end if;
pipe row(n_num);
dbms_lock.sleep(0.01);
end loop;
return;
end slow_fetch;
/

set serveroutput on timing on

alter session set nls_language = 'AMERICAN';

spool scn_ascending_demo.log

declare
job_id binary_integer;
msg_buffer varchar2(2000);
pipe_id integer;
pipe_name constant varchar2(20) := 'scn_ascending_demo';
pipe_status integer;
c sys_refcursor;
l sys_refcursor;
n_result number;
n_row number;
function local_submit_job(what in varchar2)
return binary_integer
is
pragma autonomous_transaction;
job_id binary_integer;
begin
dbms_job.submit(job_id, what);
commit;
return job_id;
end;
begin
pipe_id := dbms_pipe.create_pipe(pipe_name);
open c for
select
col1
from
scn_ascending_demo;

job_id := local_submit_job('
declare
n_status integer;
begin
update
scn_ascending_demo
set
col1 = 1 - col1
, filler = rpad(''y'', 100, ''y'')
;
commit;
dbms_pipe.pack_message(''DONE'');
n_status := dbms_pipe.send_message(''' || pipe_name || ''');
exception
when others then
dbms_pipe.pack_message(''ERROR: '' || sqlerrm);
n_status := dbms_pipe.send_message(''' || pipe_name || ''');
end;
');

pipe_status := dbms_pipe.receive_message(pipe_name);
dbms_pipe.unpack_message(msg_buffer);
if msg_buffer != 'DONE' then
raise_application_error(-20001, 'Error in updating scn_ascending_demo: ' || msg_buffer);
end if;

job_id := local_submit_job('
declare
n_status integer;
snapshot_too_old exception;
pragma exception_init(snapshot_too_old, -1555);
no_space_left exception;
pragma exception_init(no_space_left, -30036);
begin
loop
begin
update
scn_ascending_demo_wreck_undo
set
owner = dbms_random.string(''a'', 30)
, object_name = dbms_random.string(''a'', 30)
, subobject_name = dbms_random.string(''a'', 30)
, object_type = dbms_random.string(''a'', 18)
;
commit;
exception
when snapshot_too_old or no_space_left then
commit;
end;
n_status := dbms_pipe.receive_message(''' || pipe_name || ''', 0);
exit when n_status != 1;
dbms_lock.sleep(0.5);
end loop;
commit;
dbms_pipe.pack_message(''DONE'');
n_status := dbms_pipe.send_message(''' || pipe_name || ''');
end;
');

begin
open l for
select
rownum as r_no
, value(d) as result
from
table(slow_fetch(c)) d;
loop
fetch l into n_row, n_result;
exit when l%notfound;
dbms_output.put_line('Row ' || n_row || ':' || n_result);
end loop;
close l;
/*
n_row := 0;
loop
fetch c into n_result;
exit when c%notfound;
n_row := n_row + 1;
dbms_output.put_line('Row ' || n_row || ':' || n_result);
dbms_lock.sleep(0.01);
end loop;
close c;
*/
exception
when others then
dbms_output.put_line('Error: ' || sqlerrm);
end;

dbms_pipe.pack_message('DONE');
pipe_status := dbms_pipe.send_message(pipe_name);
dbms_lock.sleep(5);
pipe_status := dbms_pipe.receive_message(pipe_name, 5);
pipe_id := dbms_pipe.remove_pipe(pipe_name);
declare
job_does_not_exist exception;
pragma exception_init(job_does_not_exist, -23421);
begin
dbms_job.remove(job_id);
commit;
exception
when job_does_not_exist then
dbms_output.put_line('Job: ' || job_id || ' does not exist any longer.');
end;
exception
when others then
pipe_id := dbms_pipe.remove_pipe(pipe_name);
declare
job_does_not_exist exception;
pragma exception_init(job_does_not_exist, -23421);
begin
dbms_job.remove(job_id);
commit;
exception
when job_does_not_exist then
dbms_output.put_line('Job: ' || job_id || ' does not exist any longer.');
end;
raise;
end;
/

spool off

This code does the following:

1. Creates two tables, one that will be modified and read, and another one whose sole purpose is to ensure that the undo will be overwritten

2. Uses a pipelined table function to fetch data slowly from a passed ref cursor. Note that this is purely optional, to demonstrate a pipelined table function; as you can see, the commented-out part simply fetches from the initial cursor directly to achieve the same result.

3. Uses DBMS_PIPE to perform very rudimentary synchronisation between the spawned jobs and the main session (a stripped-down sketch of this handshake follows after the list below).

4. The basic principle is similar, but somewhat different to the previous test case:
- We open a cursor. At that moment the result is pre-ordained.
- Then we spawn a separate job that modifies the complete table that the query is based on.
- Once this is successfully done we spawn another job that attempts to fill up and overwrite our small undo tablespace.
- While this job is running we start to fetch from the initially opened cursor.
- As soon as the fetch is complete, either due to errors or successfully completed, we tell the job to stop the update operation and finally clean up if the job for whatever reason is still running (which should not happen).
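
Stripped of the test case specifics, the DBMS_PIPE handshake used above boils down to the following (a minimal sketch with a made-up pipe name; it requires EXECUTE on SYS.DBMS_PIPE, and sender and receiver would normally be different sessions):

declare
n_status integer;
msg varchar2(100);
begin
-- sender: create the pipe, pack a message and send it
n_status := dbms_pipe.create_pipe('demo_pipe');
dbms_pipe.pack_message('DONE');
n_status := dbms_pipe.send_message('demo_pipe');
-- receiver: wait up to 5 seconds for a message, then unpack it
n_status := dbms_pipe.receive_message('demo_pipe', 5);
dbms_pipe.unpack_message(msg);
-- clean up
n_status := dbms_pipe.remove_pipe('demo_pipe');
end;
/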

Here is the result from 11.1.0.7 (and 10.2.0.4 which behaves the same) without the SCN_ASCENDING hint:

Row 1:1
Row 2:1
Row 3:1
Row 4:1
Row 5:1
Row 6:1
Row 7:1
Row 8:1
Row 9:1
Row 10:1
.
.
.
Row 584:1
Row 585:1
Row 586:1
Row 587:1
Row 588:1
Row 589:1
Row 590:1
Row 591:1
Row 592:1
Row 593:1
Row 594:1
Error: ORA-01555: snapshot too old: rollback segment number 11 with name
"_SYSSMU11_1238392578$" too small
Job: 170 does not exist any longer.

PL/SQL procedure successfully completed.

Elapsed: 00:00:17.06

So we get the expected error. Now let's try with the SCN_ASCENDING hint:

.
.
.
open c for
select /*+ scn_ascending */
col1
from
scn_ascending_demo;
.
.
.

Here's the (even more obvious) result:

Row 1:1
Row 2:1
Row 3:1
Row 4:1
Row 5:1
Row 6:1
Row 7:1
Row 8:1
Row 9:1
Row 10:1
.
.
.
Row 456:1
Row 457:1
Row 458:1
Row 459:1
Row 460:1
Row 461:1
Row 462:1
Row 463:0
Row 464:0
Row 465:0
Row 466:0
Row 467:0
Row 468:0
Row 469:0
Row 470:0
.
.
.
Row 1993:0
Row 1994:0
Row 1995:0
Row 1996:0
Row 1997:0
Row 1998:0
Row 1999:0
Row 2000:0
Job: 181 does not exist any longer.

PL/SQL procedure successfully completed.

Elapsed: 00:10:38.18

So again the hint has worked and we can see the inconsistent reads that should have been ORA-01555 errors.

What if we change this test case slightly so that an index access is used?

.
.
.
open c for
select /*+ scn_ascending first_rows */
col1
from
scn_ascending_demo
order by
filler;

.
.
.

In this case I again couldn't prevent the ORA-01555 error, so this seems to corroborate the theory that only full table scans are able to honour the SCN_ASCENDING request.

So in summary I have to say that this feature seems quite questionable, maybe even buggy, and even when it works it looks quite scary, given the otherwise very robust multi-versioning capabilities of Oracle, which represent one of the cornerstones of its fundamental architecture.

I haven't checked yet whether the hint also modifies the behaviour of DML statements, but since these already employ "write" consistency, as Tom Kyte calls it, it's quite unlikely that the SCN_ASCENDING hint is applicable there. Write consistency means that a DML statement (or SELECT FOR UPDATE) that, while processing, encounters data modified in the meantime by others is going to "restart": any changes already applied are rolled back, and the statement starts from scratch based on the latest data. Note that this restart can happen multiple times, and yes, the amount of undo and redo generated increases if it does, although Oracle seems to roll back the changes only once and from then on switch to a SELECT FOR UPDATE mode first. This is somewhat similar to the SCN_ASCENDING behaviour, but the crucial difference is that the DML statement is able to restart its work, whereas a query might have already fetched numerous rows that have been processed by the client. So whereas the DML statement is still consistent because it starts all over again, the query results are potentially inconsistent, since no restart is possible once the client has processed part of the result set.

As a side note: the "restart" effect of write consistency can actually lead to triggers firing multiple times for the same row, and is one of the reasons why you should never perform non-transactional operations (typically sending an email) from within a trigger. The non-transactional operation cannot be rolled back and will therefore potentially be repeated, e.g. sending emails to the same recipient multiple times. One possible solution is to encapsulate the non-transactional operation in something that is transactional, e.g. a job submitted via DBMS_JOB: because DBMS_JOB is transactional, the job creation will be rolled back as part of the DML restart.
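
A sketch of that workaround (table, column and procedure names are made up): the trigger submits a job instead of sending the email itself, so if the triggering statement restarts, the job submission is rolled back along with everything else.

create or replace trigger orders_email_trg
after update on orders
for each row
declare
job_id binary_integer;
begin
-- dbms_job.submit is transactional: a DML restart rolls this back too,
-- so the email job only becomes visible with the final successful commit
dbms_job.submit(job_id, 'begin send_order_email(' || :new.order_id || '); end;');
end;
/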

The "restart" behaviour of the SELECT FOR UPDATE statement is somewhat documented in the Advanced Application Developer's Guide.

Oracle Discoverer - help people write ugly code :)

There's been a discussion going on among some of my friends about all this horrible-looking (and often badly performing) auto-generated SQL coming out of Discoverer and other tools. Here are some of the comments made during the discussion, and some of my memories of how I got started with Oracle with the help of my good friend Mogens Egan...

=======================

Me:

"Oracle Discoverer - helping developers write ugly code for more than a decade."

=======================

NN:

"no no no!

the real beauty of discoverer (and similar tools) is not that it lets developers write ugly code, but that it lets people who don't know what code is (business users) write code and share it with other users who also don't know what code is. its entire purpose in life is to let people who don't know what they are doing, do it. developers do what they do with some understanding and can, sometimes, be educated. accountants and hr people can't."

=======================

Me:

"This brings me back. From 1987 to 1990 I was in a bank, sharing an office with Mogens Egan (the father of Morten Egan) and basically creating a datawarehouse (although we didn't know it) for internal users in the bank.

Our strategy was this:

1. Every night (or once a week or whatever) we would transfer data from the bank's mainframe system via an SNA gateway to our VAX. The data came from IMS databases and was delivered as flat ASCII files (one physical record = one logical record), which often resulted in very, very long records, of course, since IMS is hierarchical. We would then load it into tables and let the users access it.

2. I would hold one- or two-day courses where I'd teach the attendees (who had probably only used a PC for a very short time) how to log onto the VAX using Smarterm, how to use basic VMS commands (including the editor), how to use SQL and SQL*Plus, how to create default forms in Forms 2.3 - and some other stuff.

3. Mogens Egan's idea was that it was better to turn users/experts (SMEs in today's jargon) into "programmers" than vice versa. And then it would be our job to fix run-away jobs (read: SQL that performed badly or messed things up for others).

A rather anarchistic approach, you could say. But man, it worked. In three years we had 1000 users, some of whom turned out to be natural super users, who started creating systems that helped their co-workers.

Since they were not officially named super users they couldn't demand to be given time to develop something they thought could be useful - they were by natural selection only allowed to spend time on something their co-workers thought useful.

Mogens and I are still in contact with many of those users. The machine is now an Alpha cluster, the data it manages runs a rather large bank's trading operations, and all that - but its name is still Samson. And the super user we created back then is still called Supermule, which is the Danish name for Super Goof. With the introduction of English-speaking consultants over the last 10 years it has proved a minor mistake - they all ask "What's a super mule?"

So yes, we had many incidents of run-away jobs where the poor user had issued a SQL statement without the proper where-clause, etc. But then we would discover it, kill it, help the user - and all of the victims of this bad SQL knew it could be their turn one day, so they didn't get mad or upset.

That playground which we created back then generated a lot of Oracle-lovers who are still around in various higher positions, and perhaps it would have been even easier for them back then if we had had Discoverer.

So I think you're absolutely right: Discoverer will help computer-illiterates write really bad code even faster. But at least it gets them to use Oracle, and it creates wonderful problems that finance our fantastic lifestyles.

Mogens

PS: In the World as a whole, I think Discoverer had a presence (penetration) of about 2% of customers. In Denmark it was 20% due to my ex-wife Laila (Nathalie's mother), then product sales rep for Discoverer, who insisted that every single customer should have this product, like it or not. And notice how well Miracle is doing here. Perhaps there's a relationship."

===================================

What would you do with 8 disks?

Yesterday, David Best posted this question at Oracle-L:

If you had 8 disks in a server, what would you do? From watching this list I can see a lot of people using RAID 5, but I'm wary of the performance implications. (http://www.miracleas.com/BAARF/)

I was thinking maybe RAID 5 (3 disks) for the OS, software and
backups. RAID 10 (4 disks + 1 hot spare) for the database files.

Any thoughts?

I do have some thoughts about it.

There are four dimensions in which I have to make considerations as I answer this question:

  1. Volume
  2. Flow
  3. Availability
  4. Change

Just about everybody understands at least a little bit about #1: the reason you bought 8 disks instead of 4 or 16 has something to do with how many bytes of data you're going to store. Most people are clever enough to figure out that if you need to store N bytes of data, then you need to buy N + M bytes of capacity, for some M > 0 (grin).

#2 is where a lot of people fall off the trail. You can't know how many disks you really need to buy unless you know how many I/O calls per second (IOPS) your application is going to generate. You need to ensure that the sustained utilization of each disk will not exceed 50% (see Table 9.3 in Optimizing Oracle Performance for why .5 is special). So, if a disk drive is capable of serving N 8KB IOPS (your disk's capacity for serving I/O calls at your Oracle block size), then you had better make sure that the data you put on that disk is so interesting that it motivates your application to execute no more than .5N IOPS to that disk. Otherwise, you're guaranteeing yourself a performance problem.
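
To put illustrative numbers on it: if a drive can serve 120 random 8KB IOPS, plan for no more than 60 sustained IOPS on it; an application generating 420 IOPS would then need at least 420 / 60 = 7 data drives, before any RAID overhead multiplies the physical write calls.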

Your IOPS requirement gets a little trickier, depending on which arrangement you choose for configuring your disks. For example, if you're going to mirror (RAID level 1), then you need to account for the fact that each write call your application makes will motivate two physical writes to disk (one for each copy). Of course, those write calls are going to separate disks, and you better make sure they're going through separate controllers, too. If you're going to do striping with distributed parity (RAID level 5), then you need to realize that each "small" write call is going to generate four physical I/O calls (two reads, and two writes to two different disks).
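
Again with illustrative numbers: a workload of 500 application writes per second turns into roughly 1,000 physical writes on RAID level 1, but about 2,000 physical I/O calls on RAID level 5 (500 × 4, counting the parity reads and writes), which is exactly why the drive count has to grow.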

Of course, RAID level 5 caching complicates the analysis at low loads, but for high enough loads, you can assume away the benefits of cache, and then you're left with an analysis that tells you that for write-intensive data, RAID level 5 is fine as long as you're willing to buy 4× more drives than you thought you needed. ...Which is ironic, because the whole reason you considered RAID level 5 to begin with is that it costs less than buying 2× more drives than you thought you needed, which is why you didn't buy RAID level 1 to begin with.

If you're interested in RAID levels, you should peek at a paper I wrote a long while back, called Configuring Oracle Server for VLDB. It's an old paper, but a lot of what's in there still holds up, and it points you to deeper information if you want it.

You have to think about dimension #3 (availability) so that you can meet your business's requirements for your application to be ready when its users need it. ...Which is why RAID levels 1 and 5 came into the conversation to begin with: because you want a system that keeps running when you lose a disk. Well, different RAID levels have different MTBF and MTTR characteristics, with the bottom line being that RAID level 5 doesn't perform quite as well (or as simply) as RAID level 1 (or, say 1+0 or 0+1), but RAID level 5 has the up-front gratification advantage of being more economical (unless you get a whole bunch of cache, which you pretty much have to, because you want decent performance).

The whole analysis—once you actually go through it—generally funnels you into becoming a BAARF Party member.

Finally, dimension #4 is change. No matter how good your analysis is, it's going to start degrading the moment you put your system together, because from the moment you turn it on, it begins changing. All of your volumes and flows will change. So you need to factor into your analysis how sensitive to change your configuration will be. For example, what % increase in IOPS will require you to add another disk (or pair, or group, etc.)? You need to know in advance, unless you just like surprises. (And you're sure your boss does, too.)

Now, after all this, what would I do with 8 disks? I'd probably stripe and mirror everything, like Juan Loaiza said. Unless I was really, really (I mean really, really) sure I had a low write-rate requirement (think "web page that gets 100 lightweight hits a day"), in which case I would consider RAID level 5. I would make sure that my sustained utilization for each drive is less than 50%. In cases where it's not, I would have a performance problem on my hands. In that case, I'd try to balance my workload better across drives, and I would work persistently to find any applications out there that are wasting I/O capacity (naughty users, naughty SQL, etc.). If neither of those actions reduced the load by enough, then I'd put together a justification/requisition for more capacity, and I would brace myself to explain why I thought 8 disks was the right number to begin with.

Statistics Management for Partitioned Objects in PeopleSoft

I have implemented partitioned objects in a number of PeopleSoft systems on Oracle. Recently, I was working on a system where a table was partitioned into weekly range partitions, and I encountered a performance problem when Oracle's automatic maintenance window job to collect statistics did not run between the new partition being populated for the first time and a batch process referencing that partition. Oracle understandably produced an execution plan that assumed the partition was empty, but as the partition actually contained quite a lot of data, the statement ran for a long time.

The solution was to tell Oracle the truth by gathering statistics for that partition. However, I didn't want to refresh the statistics for the whole table. There were many partitions with historical data that had not changed, so I didn't need to refresh those partitions. I only needed to refresh the stale partitions, and here is the problem: unfortunately, the dbms_stats package will let you gather stale and missing statistics for all tables in a given schema, or for the whole database, but not for a named table. That is not completely unreasonable: if you are targeting a single table, you ought to know what needs to be refreshed.

I have written a PL/SQL procedure to flush the table monitoring statistics to the data dictionary, determine whether the statistics on the table, any of its partitions or sub-partitions are stale or missing, and if so gather statistics on those segments. It uses (I believe) the same criteria as dbms_stats to determine stale objects: a 10% change relative to the last gathered statistics, or the segment having been truncated. I have incorporated the new refresh_stats procedure into my PL/SQL packaged procedure wrapper, which can be called by the %UpdateStats PeopleCode macro via a customised DDL model. The new procedure is only called for partitioned tables.
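
The staleness check itself can be approximated with the monitoring views (a simplified sketch of the wrapper's logic; the table name is illustrative):

-- flush the in-memory table monitoring data to the dictionary first
execute dbms_stats.flush_database_monitoring_info

-- a segment is considered stale when the changes exceed 10% of the rows
-- at the last analyze, or when it has been truncated
select table_name, partition_name,
       inserts + updates + deletes as changes, truncated
from   user_tab_modifications
where  table_name = 'PS_MY_PART_TABLE';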

All that is necessary is to use the %UpdateStats macro in an Application Engine program.

This is all still work-in-progress, but so far, the results are encouraging.


Statistics Management for PeopleSoft Temporary Records in Application Engine Programs

Last year, I wrote about Oracle Optimizer Statistics and Optimizer Dynamic Sampling with PeopleSoft Temporary Records. Earlier this year, I encountered a situation where Optimizer Dynamic Sampling was not sufficient, and I did need properly gathered statistics on an object. So I modified my PL/SQL packaged procedure wrapper, which can be called by the %UpdateStats PeopleCode macro via a customised DDL model, to collect statistics.

I still recommend locking statistics on PeopleSoft Temporary Records, so that these tables are omitted from schema- or database-wide operations to refresh statistics. If the statistics on a table are locked, and it is not a Global Temporary Table, the wrapper package now forces collection and update of statistics on the table (previously it suppressed gathering of statistics on tables with locked statistics).
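
Locking the statistics is a one-liner per table (the record name below is just the example used later in this post):

execute dbms_stats.lock_table_stats(ownname=>'SYSADM',tabname=>'PS_TL_EXCEPT_WRK24');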

However, when the Application Engine program completes, any statistics collected on those temporary tables are no longer needed. Worse, the statistics refer to data that will be deleted and replaced by some future program; if the table were not re-analysed, the statistics would be misleading and could cause the database to produce an inappropriate execution plan. Some temporary records are shared by multiple programs, so you cannot guarantee that statistics will always be refreshed when the table is next used.

When an Application Engine program completes successfully, or when the process request is cancelled, specific instances of temporary records that were allocated when the program began are deallocated by deleting the row from PS_AETEMPTBLMGR. Therefore, I propose the following trigger that will delete the statistics for that record when that row is deleted.
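
A minimal sketch of such a trigger (my reconstruction rather than a definitive listing; it assumes the SYSADM schema and the PS_<RECNAME><CURTEMPINSTANCE> naming convention shown in the test below):

CREATE OR REPLACE TRIGGER sysadm.aetemptblmgr_delete_stats
BEFORE DELETE ON sysadm.ps_aetemptblmgr
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- remove the statistics for the deallocated instance of the temporary
  -- record; force=>TRUE also removes statistics that are locked
  dbms_stats.delete_table_stats
  (ownname=>'SYSADM'
  ,tabname=>'PS_'||:old.recname||LTRIM(TO_CHAR(:old.curtempinstance))
  ,force=>TRUE);
END;
/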


NB: The trigger must use an autonomous transaction because dbms_stats also commits its updates.

You can test the trigger like this: first, populate the control table with a dummy record and collect statistics:

INSERT INTO ps_aetemptblmgr
(PROCESS_INSTANCE, RECNAME, CURTEMPINSTANCE, OPRID, RUN_CNTL_ID, AE_APPLID
,RUN_DTTM, AE_DISABLE_RESTART, AE_DEDICATED, AE_TRUNCATED)
VALUES
(0,'TL_EXCEPT_WRK',24,'PS','Wibble','TL_TIMEADMIN',sysdate,' ', 1,0)
/
execute dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'PS_TL_EXCEPT_WRK24',force=>TRUE);

column table_name format a18
SELECT table_name, num_rows, last_analyzed
FROM user_tables where table_name = 'PS_TL_EXCEPT_WRK24'
/

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------ ---------- -------------------
PS_TL_EXCEPT_WRK24 0 14:36:12 06/04/2009

Now I will delete the row, and the trigger will delete the statistics for me.

DELETE FROM ps_aetemptblmgr
WHERE process_instance = 0
and curtempinstance = 24
and recname = 'TL_EXCEPT_WRK'
/

SELECT table_name, num_rows, last_analyzed
FROM user_tables where table_name = 'PS_TL_EXCEPT_WRK24'
/

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------ ---------- -------------------
PS_TL_EXCEPT_WRK24


OS Thread Startup

Recently I encountered a performance problem scenario where a simple sqlplus “/ as sysdba” took about 2 minutes to finish. This is critical to the client's business, because they have a local C program that loads Call Detail Reports into the database, using local authentication for most of its operations and SQL*Loader to load the data. These "2 minutes of waiting", accumulated over many connections, consume significant time in their operations and greatly impact the business.

When I arrived at the client site, I first checked the alert logs of both ASM (they have a separate home for ASM) and the RDBMS; there were no errors…

Then I checked the server for any CPU, I/O, memory, swap, or network bottlenecks:

The CPU run queue was zero and the CPUs were about 90% idle most of the time.

The disks were also idle most of the time.

There was still about 430MB of free memory:

Real memory Usage
-----------------------------
Total Installed : 6.00000 GB
Total Used : 5.57032 GB
Total Free : 439.97264 MB
REAL MEMORY USAGE : % : 92.800
-----------------------------

Paging on the server was not significant, with 1.1GB used out of 16GB:

Paging Space Usage
-----------------------------
Total Pg Space : 16640.00000 MB
Pg Space Used : 1187.77340 MB
Percent Used PAGING : 7.100
-----------------------------

Then I did sqlplus “/ as sysdba” to run some SELECTs against some views (…yeah, it really took about 2 minutes…)

I looked at the free memory of the SGA memory structures… well, I thought the SGA was not the issue, but I still had to find out…

select pool, round(bytes/1024/1024,2) size_mb from v$sgastat where name like '%free memory%';

POOL            SIZE_MB
------------ ----------
shared pool      655.57
large pool        302.6
java pool            16
streams pool      15.99

At that time, there were only 4 users connected to the database (including me) and the C program was not running. I suspected some contention (latches) or concurrency problems that I would see in the wait interface… here's a portion of the AWR report:

   Elapsed:               60.11 (mins)
   DB Time:              121.50 (mins)

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
os thread startup                     3,278       3,115    950   42.7 Concurrenc
latch free                            6,298       1,822    289   25.0      Other
flashback buf free by RVWR            1,771       1,721    972   23.6      Other
SQL*Net more data to client          60,087       1,574     26   21.6    Network
CPU time                                          1,088          14.9
          -------------------------------------------------------------

The total DB Time was 7,290 seconds (121.50 * 60), and the top event, “os thread startup”, consumed 3,115 seconds of it, about 43% of the total DB Time (matching the 42.7% of call time reported above).

And just for comparison, below is their normal workload, generated over the same period (but on a different day) as the above:

   Elapsed:               59.52 (mins)
   DB Time:               72.42 (mins)

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                          2,009          46.2
db file scattered read              151,781         423      3    9.7   User I/O
db file sequential read              66,212         301      5    6.9   User I/O
latch: shared pool                      384          67    175    1.5 Concurrenc
log file sequential read                404          19     46    0.4 System I/O
          -------------------------------------------------------------

Hmmm, so what is “os thread startup”? The ADDM report also shows this event as a top finding:

FINDING 1: 79% impact (3113 seconds)
------------------------------------
Wait event "os thread startup" in wait class "Concurrency" was consuming
significant database time.

   RECOMMENDATION 1: Application Analysis, 79% benefit (3113 seconds)
      ACTION: Investigate the cause for high "os thread startup" waits. Refer
         to Oracle's "Database Reference" for the description of this wait
         event. Use given SQL for further investigation.
      RATIONALE: The SQL statement with SQL_ID "4gd6b1r53yt88" was found
         waiting for "os thread startup" wait event.
         RELEVANT OBJECT: SQL statement with SQL_ID 4gd6b1r53yt88

   RECOMMENDATION 2: Application Analysis, 79% benefit (3113 seconds)
      ACTION: Investigate the cause for high "os thread startup" waits in
         Service "SYS$BACKGROUND".

   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Wait class "Concurrency" was consuming significant database
               time. (80% impact [3156 seconds])

The event is not even described in the official 8i, 9i, 10g, or 11g documentation… I found some articles when I Googled, but they were not related to my issue… and Metalink does not have any helpful documents on this event for my situation…

There was already a Service Request open for the issue, and the Support Engineer described the event as:

“This wait event might be seen if the database server is executing on a platform that supports multi-threading. We enter this waiting state while a thread is starting up and leave the wait state when the thread has started or the startup request is cancelled.
This indicates some high contention at OS level avoiding even new process startup.
Issue is related to OS please involve system admin to solve same.”

The manager decided that we should do a hard shutdown of the LPAR (we're on a P570 running AIX 5.3) from the management console, to see what that would do…

The DBA told me that once users are connected, all their operations are okay (DML, DDL, SQL*Loader).

So after the restart we ran the program for a while just to observe its behavior… apparently local authentication was still slow!

For me, the challenge was to get a response time profile of a session, from the moment it starts to connect with local authentication up to the moment the session is connected, so I would know which part of the operation was taking most of the time.

What I remember is that, when I ran some queries to drill down into a session connecting via sqlplus “/ as sysdba”, I saw the background processes being spawned (QMNC, ARCn, etc.) spending a lot of time in the WAITING state on the “os thread startup” wait event, with SECONDS_IN_WAIT reaching up to 20 seconds. After a while the user got connected and could then work without problems… meaning that once connected, there were no more complaints…

SQL> select w.sid, s.sql_hash_value, s.program, w.event, w.wait_time/100 t, w.seconds_in_wait seconds_in_wait, w.state, w.p1, w.p2, w.p3
  2  from v$session s, v$session_wait w
  3  where s.sid = w.sid
  4  and w.state = 'WAITING'
  5  and w.event like '%os thread startup%'
  6  order by 6 asc;

   SID SQL_HASH_VALUE PROGRAM                 EVENT                       T SECONDS_IN_WAIT STATE                     P1         P2         P3
------ -------------- ----------------------- ------------------ ---------- --------------- ----------------- ---------- ---------- ----------
   121              0 oracle@ps570roc01 (QMNC) os thread startup           0              18 WAITING                    0          0          0

To know what's going on when a user connects, and to attribute the time to the operations consuming it, I created a database user TRACE_USER and a logon trigger that invokes Extended SQL Trace at level 12. This gives me the timing information I need while the user is connecting to the database.

To create the user:

create user trace_user identified by trace_user account unlock;
grant create session,dba to trace_user;

To create the logon trigger:

create or replace trigger sys.set_trace
after logon on database
-- USER returns the user name in upper case, so the comparison must be
-- against 'TRACE_USER' (a lower case literal would never match)
when (user = 'TRACE_USER')
begin
execute immediate 'alter session set statistics_level=all';
execute immediate 'alter session set max_dump_file_size=unlimited';
execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
end set_trace;
/

After the user and the trigger are created, logon as the TRACE_USER by doing the following command:

sqlplus "trace_user/trace_user" << eof
exit;
eof

This will generate an Extended SQL Trace (10046) trace file under the $ORACLE_BASE/admin/<SID>/udump directory

Then check whether significant time is spent in database calls (PARSE, BIND, EXEC, FETCH) by looking at the “e=” entries, which give the elapsed duration consumed by each database call.

And check whether significant time is spent on wait events by looking at the “ela=” entries, which give the elapsed duration consumed by each wait event (probably by doing a system call).

Below is a small portion of the generated trace file; we are interested in the “e=” and “ela=” values…

PARSING IN CURSOR #5 len=131 dep=1 uid=0 oct=3 lid=0 tim=15687899202263 hv=1389591971 ad='ca9a7948'
select privilege#,nvl(col#,0),max(mod(nvl(option$,0),2))from objauth$ where obj#=:1 and grantee#=:2 group by privilege#,nvl(col#,0)
END OF STMT
EXEC #5:c=0,e=145,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=15687899202260
WAIT #5: nam='db file sequential read' ela= 30005 file#=1 block#=24208 blocks=1 obj#=-1 tim=15687899232346
FETCH #5:c=0,e=30075,p=1,cr=2,cu=0,mis=0,r=0,dep=1,og=4,tim=15687899232396

There were no database calls that consumed significant time (the numbers below are in microseconds; that applies to 9i and above):

less oracle_ora_1118270.trc | grep -i ",e=" | cut -d , -f2 > karlarao.txt ; sed -n 's/e=/ /p' karlarao.txt | sort -nr | less

 358746
 183162
 61293
 44661
 32580
 30075
 28695
 26950
 25837
 24244
 23519
 20543
 20132
 19449
 19290
 18809
 18250
 17230
 16507
 16349
 13438
 11716
 11690
 11284
 10137

There were no wait events that consumed significant time (again, the numbers are in microseconds):

less oracle_ora_1118270.trc | grep -i "ela=" | cut -d " " -f8 | sort -nr | less

30005
28624
13253
11592
9650

There was no “os thread startup” event!

less oracle_ora_1118270.trc | grep -i "ela=" | cut -d "=" -f2 | uniq

'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'db file sequential read' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'db file sequential read' ela
'SQL*Net message to client' ela
'db file sequential read' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'db file sequential read' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela
'SQL*Net message to client' ela
'SQL*Net message from client' ela

I needed some instrumentation of the system calls, so I turned to a UNIX tool,

presenting… TRUSS…

This tool lets you do application tracing (for us, the interesting target is SQL*Plus) and displays the calls that an application makes to external libraries and the kernel… sounds cool

Issues with current trend

In my last post I introduced the code classification used by the Helsinki declaration (as opposed to MVC used by JEE):

- User Interface (UI) code: all code that creates UI and responds to events in the UI, the same as JEE's View and Control
- Data Logic (DL) code: all code that maintains data integrity constraints, a well defined subset of JEE's Model
- Business Logic (BL) code: all other code, the...