Oakies Blog Aggregator

phpBB 3.1 Ascraeus Released

Just a quick heads-up for those that use it, phpBB 3.1 Ascraeus has been released. It’s a feature release, so the upgrade is a bit messy. I did the “automatic” upgrade. There was so much manual work involved that I would recommend you take the approach of deleting the old files, replacing them with the new ones, then running the database upgrade from there. I’ve not tried that approach, but the docs say it is OK to do it that way…

I figured I might as well upgrade, even though the forum is locked. :)

Cheers

Tim…


phpBB 3.1 Ascraeus Released was first posted on October 28, 2014 at 8:41 pm.

Getting started with XQuery Update Facility 1.0

DeleteXML, InsertXML, UpdateXML, appendChildXML, insertChildXML, insertChildXMLafter, insertChildXMLbefore, insertXMLafter and insertXMLbefore are dead (& deprecated) from…

First Rows

Following on from the short note I published yesterday about the first_rows optimizer mode, here’s a note that I wrote on the topic more than 2 years ago but somehow forgot to publish.

I can get quite gloomy when I read some of the material that gets published about Oracle; not so much because it’s misleading or wrong, but because it’s clearly been written without any real effort being made to check whether it’s true. For example, a couple of days ago [ed: actually some time around May 2012] I came across an article about optimisation in 11g that seemed to be claiming that first_rows optimisation somehow “defaulted” to first_rows(1), or first_rows_1, optimisation if you didn’t supply a final integer value.

For at least 10 years the manuals have described first_rows (whether as a hint or as a parameter value) as being available for backwards compatibility; so if it’s really just a synonym for first_rows_1 (or first_rows(1)) you might think that the manuals would actually mention this. Even if the manuals didn’t mention it you might just consider a very simple little test before making such a contrary claim, and if you did make such a test and found that your claim was correct you might actually demonstrate (or describe) the test so that other people could check your results.

It’s rather important, of course, that people realise (should it ever happen) that first_rows has silently changed into first_rows_1 because any code that’s using it for backwards compatibility might suddenly change execution path when you did the critical upgrade where the optimizer changed from “backwards compatibility” mode to “completely different optimisation strategy” mode. So here’s a simple check (run from 11.2.0.4 – to make sure I haven’t missed the switch):

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

create table t2 as
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    all_objects
where rownum <= 3000
;

begin
        dbms_stats.gather_table_stats(
                user,
                't2',
                method_opt => 'for all columns size 1'
        );
end;
/

create index t2_i1 on t2(n1);

SQL> select /*+ all_rows */ n2 from t2 where n1 = 15;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   120 |    12   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=15)

You’ll notice that I’ve created my data in a way that means I’ll have 15 rows with the value 15, scattered evenly through the table. As a result of the scattering the clustering_factor on my index is going to be similar to the number of rows in the table, and the cost of fetching all the rows by index is going to be relatively high. Using all_rows optimization Oracle has chosen a tablescan.
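
As a side note, you can check the scattering for yourself with a minimal sketch against the data dictionary (the statistics were gathered above, so the columns are populated):

select  i.index_name, i.clustering_factor, t.num_rows, t.blocks
from    user_indexes i, user_tables t
where   i.table_name = t.table_name
and     t.table_name = 'T2';

A clustering_factor close to num_rows (rather than close to blocks) confirms that the indexed values are scattered through the table, which is what makes the indexed access path look expensive under all_rows.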

So what happens if I use the first_rows(1) hint, and how does this compare with using the first_rows hint?

SQL> select /*+ first_rows(1) */ n2 from t2 where n1 = 15;

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |    16 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |     2 |    16 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |       |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=15)

SQL> select /*+ first_rows */ n2 from t2 where n1 = 15;

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   120 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |    15 |   120 |    16   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |    15 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=15)

You might not find it too surprising to see that Oracle used the indexed access path in both cases. Does this mean that first_rows really means (or defaults to) first_rows(1)?

Of course it doesn’t - you need only look at the estimated cost and cardinality to see this. The two mechanisms are clearly implemented through different code paths. The first_rows method uses some heuristics to restrict the options it examines, but still gives us the estimated cost and cardinality of fetching ALL the rows using the path it has chosen. The first_rows(1) method uses arithmetic to decide on the best path for getting the first row, and adjusts the cost accordingly to show how much work it thinks it will have to do to fetch just that one row.

Of course, no matter how inane a comment may seem to be, there’s always a chance that it might be based on some (unstated) insight. Is there any way in which first_rows(n) and first_rows are related? If so, could you possibly manage to claim that this establishes a “default value” link?

Funnily enough there is a special case: if you try hinting with first_rows(0) – that’s the number zero – Oracle will use the old first_rows optimisation method – you can infer this from the cost and cardinality figures, or you can check the 10053 trace file, or use a call to dbms_xplan() to report the outline.  It’s an interesting exercise (left to the reader) to decide whether this is the lexical analyzer deciding to treat the “(0)” as a new – and meaningless – token following the token “first_rows”, or whether it is the optimizer recognising the lexical analyzer allowing “first_rows(0)” as a token which the optimizer is then required to treat as first_rows.
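
If you want to check the special case for yourself, here’s a minimal sketch of the outline check (dbms_xplan.display_cursor reports the plan and outline of the last statement executed in the session):

SQL> select /*+ first_rows(0) */ n2 from t2 where n1 = 15;

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'OUTLINE'));

If the claim above holds, the Outline Data section should report FIRST_ROWS (the old optimisation method) rather than a first_rows(n) variant, and the cost and cardinality figures should match the first_rows plan shown earlier.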

Mind you, if you only want the first zero rows of the result set there’s a much better trick you can use to optimise the query – don’t run the query.

 

SIOUG and HROUG Fall Conferences, 2014!

I’ve just returned from my honeymoon in Venice with the wonderful Tim Gorman, but before we enjoyed our week in this picturesque city, we spent a week doing joint keynotes and two sessions each on our own tech topics at these two incredible conferences in neighboring countries.

SIOUG, (Slovenia Oracle User Group) in Ljubljana, Slovenia held their 2014 conference Oct. 13th-14th at the Plaza Hotel.  Tim and I did a joint keynote, “Assumptions, the Death of First Dates and Tech Projects Around the World” which was great fun, as it’s loaded with a number of Pulp Fiction stills that transcend any language barrier and get everyone finding humor in the obvious, but also understanding how important it is to never assume anything in the tech world.

As soon as we arrived via car, driving in from Venice, we were hosted to a wonderful, traditional Slovenian dinner by Joze, @joc1954 and Lily Senegacnik and started hanging out with Milena Gerova, @milenagerova and Nikolay Kovachev, (both from the Bulgarian Oracle User Group).  A great time was had by all and the conference was wonderful!  Throughout both conferences, both Milena and Nikolay were in the middle of a number of technical and social events, always having a great time!


Once finished with the two day conference in Slovenia, we then took a wonderful drive down to Croatia to present the same sessions and keynote at the HROUG, (Croatian Oracle User Group) fall conference in Rovinj, Croatia.  This is a beautiful small town on the coast and we were not disappointed.

I’d met the HROUG president at the IOUC summit back at the end of September and seeing him in his “element” was well worth the time!  Davor is bigger than life and treated us as if we were all family.  He sang with the wonderful band that played at many of the events and ensured we had both a tour of the wonderful town of Rovinj, along with an incredible meal!


The actual 2014 conference was held at the Hotel Istra on the island near Rovinj.  It was an absolutely beautiful location to do a keynote and tech sessions.  We were humbled by the incredibly positive feedback Tim and I received from both conferences’ keynote attendees and thrilled that it was received so well.  We also appreciated the support of Zoran Pavlovic, @ChallengeZoran and Maja Vaselica, @orapassion, (Serbian Oracle User Group) as we pointed them out as a fellow and rare Oracle couple.  It was wonderful to finally meet them and get to know these two wonderfully warm and technically bright folks!  I was also impressed by how many folks attended the advanced EM12c concepts session, and of course, the AWR Warehouse had a great turnout, as expected.


Both conferences are well worth the time and effort, my only recommendation is, if you decide to drive there with a rental car, just make sure that you purchase a Vignette BEFORE coming into the country, (or Switzerland, Austria or Bulgaria, too!) I’ll leave you folks to ask Tim what happens when you don’t! :)

 

 



Copyright © DBA Kevlar [SIOUG and HROUG Fall Conferences, 2014!], All Right Reserved. 2014.

First Rows

I received an email earlier on this year asking me my opinion of the first_rows option for the optimizer mode. My correspondent was looking at a database with the following settings:

optimizer_mode=first_rows
_sort_elimination_cost_ratio=4

He felt that first_rows was a very old optimizer instruction that might cause suboptimal execution plans in its attempt to avoid blocking operations. As for the cost ratio, no-one seemed to be able to explain why it was there.

He was correct; I’ve written about the first_rows option a few times in the past – it was left in for backwards compatibility, and reported as such from 9i onwards!

As for the _sort_elimination_cost_ratio – it’s (probably) there to work around the problems caused by first_rows optimisation when you have an ORDER BY clause that could be met by walking an index in competition with a WHERE clause that could be met from another index. Under first_rows the optimizer is highly likely to choose the index for the order by to avoid sorting; but the _sort_elimination_cost_ratio says:  “if the cost of using the index for the ORDER BY is more than N times the cost of using the other index for the WHERE clause then use an index on the WHERE clause and sort the result.”
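
To make that concrete, here’s a hedged sketch of the competing-index scenario (the orders table, its indexes and the bind variable are hypothetical, not from my correspondent’s system):

-- Assume an index on (order_date) and another on (customer_id).
-- Walking the order_date index returns rows in ORDER BY order with no sort,
-- but may visit and discard a lot of data; the customer_id index finds the
-- right rows cheaply but the result must then be sorted.
-- Under plain first_rows the optimizer leans heavily towards the no-sort path;
-- with _sort_elimination_cost_ratio = 4 it abandons that path as soon as it
-- costs more than 4 times the "find rows then sort" path.
select  *
from    orders
where   customer_id = :b1
order by
        order_date;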

The fact that the parameter has been set so low in this case suggests that the end-user:
  1. set first_rows because “it’s an OLTP system” - a myth that even the manuals promoted
  2. found lots of queries taking silly index-driven execution plans because they’d use (say) a primary key index to access and discard vast amounts of data in the required order, instead of picking up a small amount of data using a better choice of index and then sorting it.

I’ve said it many times in the past: you probably don’t need any first_rows(n) or first_rows_N optimisation, but if you’re using first_rows (i.e. the old option) you really ought to get away from it. Depending on the time you have for testing and your aversion to risk, you might go straight to all_rows, or switch to first_rows_10.  (first_rows_1 can be over-aggressive and introduce some of the same side effects as first_rows).
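
If you do decide to move away from first_rows, a low-risk way to start testing is at session level, before you touch the system-wide parameter; a minimal sketch:

-- Try the candidate modes against representative queries in one session:
alter session set optimizer_mode = first_rows_10;
-- ... run and capture the plans for the critical statements ...
alter session set optimizer_mode = all_rows;
-- ... then compare plans and response times before changing the init parameter.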

Heuristic TEMP Table Transformation

There are at least three different ways the Oracle optimizer can come up with a so-called TEMP table transformation, that is, materializing an intermediate result set:

- As part of a star transformation, the repeated access to dimensions can be materialized
- As part of evaluating GROUPING SETs, intermediate result sets can be materialized
- Common Subquery/Table Expressions (CTE, WITH clause)

Probably the most common usage of the materialization is in conjunction with the WITH clause. This is nothing new, but since I came across this issue several times recently, here's a short demonstration and a reminder that this so-called "TEMP Table Transformation" - at least in the context of the WITH clause - isn't really cost-based, in contrast to most other optimizer transformations nowadays - although the unnest transformation of subqueries also has a "no-brainer" variant where costing isn't considered.

The logic simply seems to be: if the CTE expression is referenced more than once AND the CTE expression contains at least some (filter or join) predicate, then it will be materialized.

While in most cases this makes sense to avoid the otherwise repeated evaluation of the CTE expression, there are cases where additional predicates that could be pushed inside the CTE would lead to different, significantly more efficient access paths than materializing the full CTE expression without applying the filters and filtering on the TEMP table afterwards.

Here are just two very simple examples that demonstrate the point, both based on this sample table setup:


create table t1
as
select
        rownum          as id
      , rpad('x', 100)  as filler
from
        dual
connect by
        level <= 1e5;

exec dbms_stats.gather_table_stats(null, 't1')

create index t1_idx on t1 (id);

The index on T1.ID potentially opens up a very precise access path to rows. Here is example number one:


with a as (
        select  /* inline */
                id
              , filler
        from
                t1
        where
                filler != 'x'
)
select
        t1.*
      , a1.filler
      , a2.filler
from
        t1
      , a a1
      , a a2
where
        a1.id = t1.id
and     a2.id = t1.id
and     t1.id = 1
and     a1.id = 1
and     a2.id = 1
;

-- 11.2.0.3 Plan without INLINE hint
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 236 | 1207 (1)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D6619_229329 | | | | |
|* 3 | TABLE ACCESS FULL | T1 | 99999 | 10M| 420 (1)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 236 | 787 (1)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 171 | 394 (1)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 8 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6619_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
|* 10 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6619_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

-- 11.2.0.4 Plan without INLINE hint
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9999M| 2197G| | 28468 (92)| 00:00:02 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D661A_229329 | | | | | |
|* 3 | TABLE ACCESS FULL | T1 | 99999 | 10M| | 420 (1)| 00:00:01 |
|* 4 | HASH JOIN | | 9999M| 2197G| 7520K| 28048 (93)| 00:00:02 |
|* 5 | VIEW | | 99999 | 6347K| | 392 (1)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D661A_229329 | 99999 | 10M| | 392 (1)| 00:00:01 |
|* 7 | HASH JOIN | | 99999 | 16M| | 394 (1)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | | 2 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | T1_IDX | 1 | | | 1 (0)| 00:00:01 |
|* 10 | VIEW | | 99999 | 6347K| | 392 (1)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D661A_229329 | 99999 | 10M| | 392 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------------

-- 11.2.0.3/11.2.0.4 Plan with INLINE hint
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 318 | 6 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 318 | 6 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 212 | 4 (0)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID | T1 | 1 | 106 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

The filter in the CTE expression is just there to fulfill the rules I've stated above; without it the TEMP table transformation wouldn't be considered at all. It could also be a (non-filtering) join condition, for example.

Notice the big difference in cost estimates between the plans with and without materialization. Clearly a cost-based evaluation should have rejected the TEMP table transformation, simply because it is a bad idea to materialize 100K rows and afterwards access this TEMP table twice to filter out exactly a single row, instead of accessing the original, untransformed row source twice via precise index access.

This is, by the way, an example of another anomaly that was only recently introduced (apparently in the 11.2.0.4 patch set / 12.1 release): notice the bad cardinality estimate in the 11.2.0.4 plan with the TEMP table transformation. The filter on the TEMP table isn't evaluated properly (that was already there in previous releases) and in addition the join cardinality is way off - 10G rows instead of a single row is not really a good estimate - and as a side effect the HASH JOIN uses a bad choice for the build row sources.

Another possible, perhaps less common variant is this example:


with a as (
        select  /* inline */
                id
              , filler
        from
                t1
        where
                filler != 'x'
)
select
        id
      , (select filler from a where id = x.id) as scal_val1
      , (select filler from a where id = x.id) as scal_val2
from
        t1 x
;

-- 12.1.0.2 Plan without INLINE hint
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 488K| 77M (1)| 00:50:26 |
|* 1 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 2 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660F_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
|* 3 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660F_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
| 5 | TEMP TABLE TRANSFORMATION | | | | | |
| 6 | LOAD AS SELECT | SYS_TEMP_0FD9D660F_229329 | | | | |
|* 7 | TABLE ACCESS FULL | T1 | 99999 | 10M| 420 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | T1 | 100K| 488K| 420 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

-- 12.1.0.2 Plan with INLINE hint
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 488K| 398K (1)| 00:00:16 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T1 | 100K| 488K| 420 (1)| 00:00:01 |
----------------------------------------------------------------------------------------------

This time I've shown plans from 12.1.0.2 - the latest available release as I write this - to demonstrate that this hasn't changed yet.

What has changed in 12c is that the scalar subqueries are now actually represented in the final cost - in pre-12c these costs weren't part of the total cost. So although the cost difference between the two plans is much more significant in 12c than in pre-12c, the optimizer still opts for materializing the CTE expression and running full table scans in the scalar subqueries on that temp table, instead of taking advantage of the precise access path available - again very likely a pretty bad idea at runtime.

So whenever you make use of the WITH clause, make sure you've considered the access paths that might be available when not materializing the result set.

Footnote

As of Oracle 12.1 the MATERIALIZE and INLINE hints are still not officially documented.
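
For reference, here's a minimal sketch of how the two hints are placed (they are undocumented, so treat this as use-at-your-own-risk; note that the examples above deliberately used the neutral comment /* inline */, which only becomes effective once the + is added):

-- Force the CTE to stay inline (no TEMP table), keeping the precise index access:
with a as (
        select  /*+ inline */
                id
              , filler
        from    t1
        where   filler != 'x'
)
select  t1.id, a.filler
from    t1, a
where   a.id = t1.id
and     t1.id = 1;

-- Conversely, force materialization even where the heuristic wouldn't apply it:
with a as (
        select  /*+ materialize */
                id
              , filler
        from    t1
        where   filler != 'x'
)
select  id, filler
from    a
where   id = 1;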

Website Outage Warning : 26 Oct 2014 20:00-00:00 GMT

Just a quick note to say the website will be out of action this evening for 3-4 hours.

There have been a couple of random failures recently. With nothing in the logs to work with, I figured I’d try testing the hardware. Yesterday I tested the disks and they came back OK. Tonight it’s the turn of the memory. The plan is for the site to go down about 20:00 UK Time (GMT) and be up by midnight.

Sorry if this annoys anyone, but I’ve been looking through the site statistics trying to find the best time to do this and Sunday night seems to be the quietest time.

I’ll let you know how it goes. :)

Cheers

Tim…

PS. You can read the stuff from Google’s cache in the short term. Search for the item on Google. When you get it, click the down-arrow next to the URL and select “Cached”. No need to miss anything… :)


Update: It didn’t happen as the data centre people got back to me too late to start it (this morning). I’ll pencil this in for next week…


Website Outage Warning : 26 Oct 2014 20:00-00:00 GMT was first posted on October 26, 2014 at 9:38 am.

Short and sweet: What do you think of when you hear “Republicans?”

Oddly, this was a discussion topic on Amazon. Most of it was vitriol from folks who have been schooled to hate Republicans. As a Republican, I know we get a bad rap, and often our predominant touted feature is the confusion that we are against stated goals, rather than against the stated means of attempting to achieve those goals.

So here is my answer: What do you think of when you hear “Republicans?”

Folks who want to efficiently create a safe and prosperous society who end up spending all their time trying to stop the Democrats from pursuing stupid, hateful, and wasteful policies that would never work to achieve the laudable goals they claim the policies pursue. Unfortunately the Republicans are then cast as being against those goals, most of which they also support.

Minimising Parse Time in Application Engine with ReUseStatement

This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.

ReUse Statement Flag

I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible to simply set the attribute across all Application Engine programs because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.

Application Engine Batch Timings

If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit Type of tracing
; --- ---------------
...
; 128 - Timings Report to AET file
...
; 1024 - Timings Report to tables
...
TraceAE=1152
;------------------------------------------------------------------------

Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command line options.  If you specify the -TRACE parameter in the Process Scheduler definitions, remember to also set these flags.

Batch timings are written to the AE Trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program.
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database.  As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them).  This is a useful source of performance metrics.
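
As a quick sanity check that timings are being captured, here is a minimal sketch of a query against the log table (using only columns that also appear in the profile queries below):

-- Longest-running Application Engine processes recorded in the last day
SELECT process_instance, process_name
,      time_elapsed/1000 elapsed_secs
FROM   ps_bat_timings_log
WHERE  enddttm > SYSDATE-1
ORDER BY time_elapsed DESC;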

Compilations, Execution and ReUse

In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                          PeopleSoft Application Engine Timings
                               (All timings in seconds)

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                 8453      2.8    8453    685.6       0      0.0    688.4
...

With ReUse Statement enabled, there is now only a single compilation, and most of the time is saved in execution not compilation.

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                    1      0.0    8453    342.3       0      0.0    342.3
...

So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.

Profile Compilations

This query produces a simple profile of batch timings for statements. 

  • Sub-query x extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
  • There is a long-standing bug in batch timings where negative timings can be returned when the clock that returns milliseconds recycles back to zero every 2^16 milliseconds.  Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
  • Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.


REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN detail_id FORMAT a32
COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
SPOOL ReUseCand
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_Timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm > SYSDATE-7
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
FROM y
), a as (
SELECT process_name, detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
FROM z
GROUP BY process_name, detail_id)
SELECT * FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. That will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from ASH data.

Process                                          Step    Compile    Process
Name         DETAIL_ID                           Secs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19

This query is based on the previous one, but includes scalar queries on the ASH data for each step.

  • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
  • This time in sub-query x I am looking for a rolling 7-day period up to the last hour, because ASH data will have been flushed to the ASH repository.
  • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
  • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.


REM ReUseCandASH.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
COLUMN detail_id FORMAT a32
COLUMN step_time HEADING 'AE|Step|Secs' FORMAT 999990
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
SPOOL ReUseCandASH
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.begindttm, l.enddttm
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
AND l.enddttm < TRUNC(SYSDATE,'HH24')
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END AS time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = x.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
AND h.in_hard_parse = 'Y'
) hard_parse_secs
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = X.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
) ash_secs
FROM y
), a AS (
SELECT /*process_name ,*/ detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
, SUM(hard_parse_secs) hard_parse_secs
, SUM(ash_secs) ash_secs
FROM z
GROUP BY /*process_name,*/ detail_id)
SELECT a.*
FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
spool off

Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                          Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can't set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can't set ReUseStatement
  • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
  • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.

Conclusion

Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.
