Oakies Blog Aggregator

I’ve Got A Ram

Just in case anyone thinks I might lack the passion to make happen what I say I will make happen (ending domestic poverty and the war on drugs, together with all its bad effects, for starters), here is a link to my YouTube channel and a song I sang for the Bel Canto benefit concert (apologies in advance for the poor video focus; the sound should be okay).

https://www.youtube.com/watch?v=5_H5Bq3Kd5E

You can also see me directing the University Chorus of the Upper Valley and plenty of other things on my channel pudge1954.

Enjoy.

The Next Big Thing

Oracle’s 12.1.0.2 was released a few weeks ago (you can download it from OTN here: Oracle 12.1.0.2 Download). While technically a minor point release, it contains a couple of major features that would normally be rolled out in a more substantial version change like 12cR2 or perhaps V13. The most highly anticipated feature, of course, is a new option (the Oracle In-Memory Option) that provides a column-oriented, in-memory store. Enkitec was in the beta program, so we’ve been testing it out for quite a while now, and we are impressed. Here’s a link to a video of a conversation between Tanel Poder, Cary Millsap and me about the In-Memory Option, published prior to the general release. Note: the three of us are also scheduled to do a webcast on the topic on Sep. 17th at 9:00 AM CDT. You can sign up here if you are interested: In-Memory Webcast

But back to the topic: What this new option provides is a radical departure from the way Oracle has traditionally managed data access. In the past, all data access was done using row-major format, which is a foundation of the Oracle RDBMS architecture (I’m of course leaving out some esoteric formats such as the hybrid columnar compressed (HCC) format that is available on Exadata). At any rate, this columnar format is a major change in the way data is accessed for Oracle, and while the name of the option indicates that the secret sauce is the fact that the data is accessed from memory, I’m going to argue that the “memory” part is not the most important factor. In my opinion, the column-oriented format is why it’s “The Next Big Thing”.

While accessing data from RAM is definitely faster than reading it off disk, it’s important to note that Oracle has been serving data from memory for decades via the standard buffer cache. In fact, you could describe the Oracle RDBMS as a very sophisticated disk caching mechanism. That’s certainly a vast oversimplification, but it’s really not too far from reality. Many Oracle systems spend most of their time accessing data from the buffer cache. Back in the day, DBAs even invented a metric to describe the effectiveness of the caching. The much maligned “buffer cache hit ratio” was used for that purpose and is still present in modern-day AWR reports. While tuning artificial ratios like this one has long since gone out of fashion, it is not uncommon to see this ratio in the upper 90s (i.e. 99% of blocks being accessed from RAM is common). And in fact, we can pin tables in the buffer cache so that all rows are accessed from memory.
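As an aside, here’s a minimal sketch of how that classic ratio is computed from v$sysstat (these are the standard statistic names, but treat it as illustrative rather than as a tuning tool):

-- Buffer cache hit ratio: fraction of logical reads satisfied
-- without a physical read into the cache (illustrative only)
SELECT ROUND(100 * (1 - phy.value / NULLIF(db.value + con.value, 0)), 2) AS bchr_pct
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads cache'
   AND db.name  = 'db block gets from cache'
   AND con.name = 'consistent gets from cache';

So if that’s the case, then we should be able to compare the speed of queries executed on in-memory data using both the standard row-major format and the new columnar format. Let’s give it a quick try.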

SYS@INMEM> -- note that to enable the In-Memory Option we had to set the INMEMORY_SIZE parameter and bounce the database
SYS@INMEM>
SYS@INMEM> @parms
Enter value for parameter: inmemory_size
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
inmemory_size                                      107374182400                                                           FALSE    FALSE      TRUE
 
Elapsed: 00:00:00.08
SYS@INMEM> -- then we had to tell Oracle that we wanted our table to take advantage of the column store
SYS@INMEM>
SYS@INMEM> alter table kso.skew3 inmemory;
 
Table altered.
 
Elapsed: 00:00:00.57
SYS@INMEM> -- we also told Oracle that we wanted our table to be cached in the normal buffer cache
SYS@INMEM>
SYS@INMEM> alter table kso.skew3 cache;
 
Table altered.
 
Elapsed: 00:00:00.01
SYS@INMEM> @inmem_tables
Enter value for owner: KSO
Enter value for table_name: 
 
OWNER                          TABLE_NAME                     CACHE PRIORITY INMEMORY_DISTRI INMEMORY_COMPRESS
------------------------------ ------------------------------ ----- -------- --------------- -----------------
KSO                            SKEW3                              Y NONE     AUTO            FOR QUERY LOW
 
Elapsed: 00:00:00.09
SYS@INMEM> -- note that the table was accessed in both modes to get it loaded in both the standard buffer cache and the column store
SYS@INMEM> -- (I cut the output to keep from clouding the issue)
SYS@INMEM> -- but you can see the blocks in the buffer cache below
SYS@INMEM> 
SYS@INMEM> @buffers_in_use
Enter value for owner: KSO
Enter value for object_name: SKEW3
 
OWNER                          OBJECT_NAME                    OBJECT_TYPE             BUFFERS
------------------------------ ------------------------------ -------------------- ----------
KSO                            SKEW3                          TABLE                   1305951
 
Elapsed: 00:00:02.37
SYS@INMEM> -- and you can see the table is in the column store as well
SYS@INMEM> 
SYS@INMEM> @inmem_segs
Enter value for owner: 
Enter value for segment_name: 
 
OWNER                          SEGMENT_NAME                   ORIG_SIZE_MEGS IN_MEM_SIZE_MEGS COMP_RATIO MEGS_NOT_POPULATED
------------------------------ ------------------------------ -------------- ---------------- ---------- ------------------
KSO                            SKEW3                                10,210.7          7,278.8        1.4                 .0
                                                                             ----------------
sum                                                                                   7,278.8
 
Elapsed: 00:00:00.00
SYS@INMEM> -- now let's test performance
SYS@INMEM> -- First let's try it from the standard buffer cache
SYS@INMEM>
SYS@INMEM> ALTER SESSION set inmemory_query = disable;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@INMEM> @avgskew3x
 
AVG(PK_COL)
-----------
 16487500.8
 
Elapsed: 00:00:12.45
SYS@INMEM> /
 
AVG(PK_COL)
-----------
 16487500.8
 
Elapsed: 00:00:12.41
SYS@INMEM>
SYS@INMEM> -- So about 12.5 seconds from the buffer cache
SYS@INMEM> @x
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6vy9zstcnq02k, child number 5
-------------------------------------
select avg(pk_col) from kso.skew3 where col1 between 0 and 1000
 
Plan hash value: 2684249835                           <--- Same as TABLE ACCESS INMEMORY FULL version (see below)
 
-----------------------------------------------------
| Id  | Operation                  | Name  | E-Rows |
-----------------------------------------------------
|   0 | SELECT STATEMENT           |       |        |
|   1 |  SORT AGGREGATE            |       |      1 |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW3 |  64007 | <--- Exadata version of full scan (not necessarily off disk)
-----------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage(("COL1"<=1000 AND "COL1">=0))
       filter(("COL1"<=1000 AND "COL1">=0))
 
 
21 rows selected.
 
Elapsed: 00:00:00.10
SYS@INMEM> -- Now let's try it from the column store
SYS@INMEM>
SYS@INMEM> ALTER SESSION set inmemory_query = enable;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@INMEM> @avgskew3x
 
AVG(PK_COL)
-----------
 16487500.8
 
Elapsed: 00:00:03.03
SYS@INMEM> /
 
AVG(PK_COL)
-----------
 16487500.8
 
Elapsed: 00:00:03.11
SYS@INMEM>
SYS@INMEM> -- So a little over 3 seconds from the column store
SYS@INMEM> @x
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6vy9zstcnq02k, child number 4
-------------------------------------
select avg(pk_col) from kso.skew3 where col1 between 0 and 1000
 
Plan hash value: 2684249835                           <--- Same as the TABLE ACCESS STORAGE FULL version
 
------------------------------------------------------
| Id  | Operation                   | Name  | E-Rows |
------------------------------------------------------
|   0 | SELECT STATEMENT            |       |        |
|   1 |  SORT AGGREGATE             |       |      1 |
|*  2 |   TABLE ACCESS INMEMORY FULL| SKEW3 |  64007 | <--- new access option
------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - inmemory(("COL1"<=1000 AND "COL1">=0))
       filter(("COL1"<=1000 AND "COL1">=0))
 
21 rows selected.
 
Elapsed: 00:00:00.02

So even though the data was accessed from memory in both tests, the column store was about 4X faster. In other words, the speedup in this admittedly very simple example was not due to the data being accessed from RAM, because both tests accessed data directly from memory.

A few points about this example:

1. I hope you’ll take my word for it that no trickery was performed. The data was stored in standard 8K blocks and I didn’t do anything other than what is in the terminal output.

2. Oracle generally tries to prevent full scans of objects from wiping out large portions of the buffer cache. This is a wise choice in most systems, since the sizes of many databases far exceed the amount of RAM available for caching. However, it is possible to override that behavior by declaring that a table or partition should be cached, which is what the CACHE table attribute does.

3. Note that the plan was actually different, even though the plan_hash_value didn’t reflect the difference. The first statement has TABLE ACCESS STORAGE FULL but the second version has TABLE ACCESS INMEMORY FULL. The reason the plan_hash_value doesn’t change is that only specific parts of the plan are included in the hash value calculation (see this post for more info: SQL Gone Bad – But Plan Not Changed?). In this case, the operation (TABLE ACCESS) is included but the options (INMEMORY FULL or STORAGE FULL) are not. This could be problematic, as it makes it more difficult to identify a rather major decision by the optimizer (i.e. whether to use the In-Memory column store or not); one way to spot the difference is sketched after this list. It is analogous to the situation on Exadata where the plan doesn’t change, but one execution of a full table scan results in a smart scan while another execution doesn’t. Performance can be vastly different even though the plan hash value hasn’t changed. I personally think this is a flaw and the behavior should be changed.

4. Of course the main point is that there is a significant difference in the execution time of the query when using the column store vs. the buffer cache. In this simple case, the main factor was that the query was very selective, and the in-memory column store has a built-in form of storage indexes (similar to Exadata storage indexes or the new zone maps) which allows Oracle to skip processing of regions of data via min/max pruning.
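Since the plan hash value won’t tell you which access method was used, one way to spot the difference is to look at the OPTIONS column of the cached plan, or at the session-level in-memory statistics. Here’s a minimal sketch; the views are standard, but the exact statistic names vary by version, so the LIKE filter is deliberately loose:

-- Distinguish TABLE ACCESS STORAGE FULL from TABLE ACCESS INMEMORY FULL
SELECT sql_id, child_number, id, operation, options, object_name
  FROM v$sql_plan
 WHERE sql_id = '6vy9zstcnq02k'
   AND operation = 'TABLE ACCESS';

-- Non-zero in-memory scan statistics for the current session indicate
-- column store scans (and, where present, min/max pruning of CUs)
SELECT sn.name, ms.value
  FROM v$mystat ms, v$statname sn
 WHERE sn.statistic# = ms.statistic#
   AND sn.name LIKE 'IM scan CUs%'
   AND ms.value > 0;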

Note that I did generate 10046 traces, and the output clearly shows that neither test did any disk I/O.
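For reference, a 10046 trace can be generated for your own session in the standard way (level 8 includes wait events), and the resulting trace file formatted with tkprof:

ALTER SESSION SET tracefile_identifier = 'inmem_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the test query here ...
ALTER SESSION SET EVENTS '10046 trace name context off';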

So clearly the column orientation (and the optimizations it enables) makes a huge difference. What is the secret sauce, then? It’s the new format, and the improved processing that is possible with data arranged in columnar form, that provides the dramatic speedup. It is the fundamental change in the way the data is accessed, rather than the fact that the data is accessed from memory, that yields the vast improvement in query execution times.

For those that are not bored out of their skulls at this point, here’s a bit of the tkprof output from the buffer cache test.

 
********************************************************************************
 
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          6          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     12.79      12.82          0    1304305          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     12.80      12.83          0    1304311          0           1
 
Misses in library cache during parse: 1
 
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3       11.12         14.78
  Disk file operations I/O                        2        0.00          0.00
 
 
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          6          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          6          0           0
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
 
    1  user  SQL statements in session.
    2  internal SQL statements in session.
    3  SQL statements in session.
********************************************************************************

So as you can see, there was no time spent waiting on disk I/O.

A couple of final comments and then I’ll shut up. I expect this new feature to drive adoption of 12c. Generally we don’t see mass adoption of new Oracle versions until the second major point release (i.e. 11gR2 or 12cR2). But in this case, I believe the new In-Memory Option provides so much benefit that it will drive some customers to early adoption and will provide enough potential payback to make it worth being one of the pioneers. We already have Exadata customers planning to move to 12.1.0.2 at their next patch cycle. So I expect we’ll have several systems in production by the end of the year. I’m looking forward to seeing how it behaves in the real world.

Well that’s about it for now. I hope to do several posts on the In-Memory Option over the next few weeks. So let me know if you have anything specific you’d like to hear about.

OTN APAC Tour 2014

As well as losing the ACED OpenWorld confirmation email, it turns out my website/mailbox move also caused me to lose the email about being accepted on the OTN APAC Tour 2014. I saw a tweet this morning saying that I was on the agenda for the NZOUG event and checked with Francisco to see what was going on. That’s when I found out that yet another important email had gone missing… :)

The good news is I had already agreed the time off work, so everything is good for the tour.

  • Perth, Australia : November 6-7
  • Shanghai, China : November 9
  • Tokyo, Japan : November 11-13
  • Beijing, China : November 14-15
  • Bangkok, Thailand : November 17
  • Auckland, New Zealand : November 19-21

After the OTN Yathra Tour 2014, which was 7 cities in 14 days, I told myself I was too old to do all this tour stuff… So much for that idea… :)

Although it’s a 6 event tour, the first and last events are pretty much like being at home for me. I’ve been to the AUSOUG and NZOUG events a few times, so it will be good to meet up with everyone again… It will be my second time in Beijing, and I’ve seen the traffic in India now, so I’m hoping I will cope better with the roads in China this time. :) Shanghai, Tokyo and Bangkok are all new to me, so it will be interesting to see how I cope with them…

As always before one of these tours, I am a complete mix of excitement, fear and general panic! :)

I look forward to seeing you all there!

Cheers

Tim…



12.1.0.2 Introduction to Attribute Clustering (The Division Bell)

One of the really cool new features introduced in 12.1.0.2 is Attribute Clustering. This new table based attribute allows you to very easily cluster data in close physical proximity based on the content of specific columns. As I’ve discussed many times, indexes love table data that is physically clustered in a similar manner to the index […]

ASH presentations from John Beresniewicz

John Beresniewicz and I worked hand in hand on developing the performance pages in Oracle Enterprise Manager. John is one of the foremost experts on active session history (ASH) queries, and he has recently posted some of his presentations on ASH to SlideShare.

Renaming an Oracle Apache Target in EM12c

When installing Enterprise Manager 12c, the host value can come from a number of places for different applications/tiers. For most, it comes from the environment variable $ORACLE_HOSTNAME (%ORACLE_HOSTNAME% on Windows servers).

The OHS1 target, aka Oracle Apache, in the middle tier of the EM12c environment pulls its value from the hosts file (on Unix as well as Windows), and so it is vulnerable when a virtual host name is used or a host name change occurs. It can, however, be updated post-installation when the OHS1 target fails to return an active status in the EM12c console.

Update the Configuration File

The file that controls the configuration of the OHS1 target is topology.xml, located at $OMS_HOME/user_projects/domains/GCDomain/opmn/topology.xml.

Edit the topology.xml file and update the following attribute values, replacing Virtual_Cluster_name with the name of the cluster:

- port="6701"
- host="Virtual_Cluster_name"

Save the file with the new changes.

Remove the OHS1 Target

Log into your EM12c console as the SYSMAN user, (or another user with appropriate privileges) and click on All Targets.  Either do a search for the OHS1 target or just scan down and double-click on it.  The target will show as down and display the incorrect associated targets with the HTTP Server:

[Screenshot: OHS1 target down, showing the incorrect associated targets]

You will need to remove and re-add the target so that EM12c picks up the new host name from the updated topology.xml configuration.

To do this, click on Oracle HTTP Server –> Target Setup –> Remove Target. The target for the Oracle Apache server/HTTP Server, along with its dependents, has now been removed.

Refresh the Weblogic Domain

To re-add the OHS1 target, we are going to use a job already built into EM12c. Go back to All Targets via the Targets drop-down. At the very top you will commonly see EMGC_GCDomain (Grid Control Domain; yes, it’s still referred to as that… :)). Drill into this target. There are two “levels” to this target, the parent and the farm. Either one will offer a Refresh Weblogic Domain option in its drop-down.

[Screenshot: Refresh Weblogic Domain menu option]

Once you click on this option, it will ask you whether to remove or add targets. You can simply choose Add Targets, and the job will first search for any missing targets that need to be re-added. Commonly it will locate 12 and display a list of the targets it wishes to add. You will note that the OHS1 target now displays the correct host name.

Close the window and choose to complete through the wizard steps to add these targets to the Weblogic domain.

Return to All Targets and access the OHS1 target to verify that it now displays an active status; it may take up to one collection interval for the target status to update.

 

 




Dealing with Wi-Fi Problems the DBA Kevlar Way

As much as KSCOPE 14 fixed any and all Low-T levels, it couldn’t do anything for the WiFi problems we were experiencing back in May. I’ve been a telecommuter for almost 4 straight years now and was quite frustrated when our WiFi service became dismal back then.

We called Comcast and worked through troubleshooting with their support multiple times, only to find ourselves, into the first three weeks of June, still unable to download files or watch Netflix. Tim purchased a new modem after Comcast confirmed our model was at “end of support”, and yet there was still no improvement. After more research and further attempts at fixing the performance yielded nothing, I was getting tired of having to commute to telecommute.

In the middle of June, I switched service from Comcast to CenturyLink, hoping to solve the problem, but just continued to experience dismal WiFi from another provider. The situation had become unbearable for me (and also for Tim, who was working from home full-time, too…) and sent me right into one of my “take no prisoners” moods. I’ve never liked guessing my way to a solution and wanted some real information, so I thought I would share what I ended up doing to truly solve the WiFi problem, as it may save others the headache and time of trying to figure out what is really going on.

After a bit of research, I located a program called inSSIDer. This application, downloaded and purchased for $20, gives you a clear, graphic display of the WiFi networks in your vicinity, including unnamed (hidden) ones. It shows you what ranges, channels and bands (2.4GHz or 5GHz) they are in. The app can also show you what hardware is being used and whether other networks overlap or interfere with your own WiFi signal, and then make recommendations about channel changes that could improve performance.

Keep in mind- You’ll now be viewing our “resolved” WiFi configuration, but the app is cool and it will make a lot of sense once you are able to visualize how this application works.

[Screenshot: inSSIDer overview of the WiFi networks in range]

If you’ll notice, I’m getting quite a bit of speed now that I’ve corrected the problem we were experiencing, but note how the application displays information about all WiFi networks in range. It shows the channel, security info and speed of each. With my new configuration, you can see that my 5GHz network is very fast, followed by the 2.4GHz network and the kids’ extended one.

What’s really nice when you are viewing all of this data during a troubleshooting scenario is the information, shown both visually via graphs and in tables, about overlap and about the channels used by you and your neighboring networks.

Below are the 2.4GHz networks in just my area. If you are on a small cul-de-sac like I am, it can be very frustrating to see how many are running not just on the same channel, but overlapping in frequency:

[Screenshot: inSSIDer view of the neighboring 2.4GHz networks]

You are able to then click on one of the networks in the list and view detailed data about that WiFi network:

[Screenshot: inSSIDer detail view for a single WiFi network]

Above you can see that there is distinct overlap with other networks: four overlap mine, but one in particular was completely impacting the quality of the WiFi network. The graph also displays the conflict caused by the strongest overlap, the 5th neighbor’s WiFi interfering with the 3rd neighbor’s.

One of the things that became quickly apparent was that there wasn’t ONE 5GHz WiFi network in our area, so this was our cue to purchase a dual-band modem and set it up. Since the range of 5GHz is more limited than that of 2.4GHz, we added a repeater to the configuration, and now we can see via the inSSIDer app that we have the whole 5GHz range to ourselves in our neighborhood:

[Screenshot: inSSIDer showing our 5GHz network alone in its range]

If we look at the detailed information, the only overlap is coming from the repeater (expected…), and the Max Data Rate is to die for! :)

[Screenshot: inSSIDer detail view of the new 5GHz network]

Choosing a New Modem

As you can see above, we ended up with a dual-band modem, capable of both 5GHz and 2.4GHz. This may seem overkill for most, but consider how much uses WiFi these days. Most people purchase the cheapest modem from a local retailer or the default modem recommended by their provider. None that I have seen were dual-band (both 2.4GHz and 5GHz), as having a wide range of coverage is often more attractive than having a powerful connection. This might sound like the same thing, but it isn’t: 2.4GHz provides a less powerful signal than 5GHz, but it will cover more area. Your provider will, of course, sell you a modem that will cover a wider range in your household.

This has a catch-22 involved with it:

1. The range will cover a wider area, resulting in more overlap with your neighbors’ WiFi.

2. The channel range for a 2.4GHz modem is 1-11, and most are set to channel 1, 6, 10 or 11, which means there is a high chance that you will be set to the same range and channel as your neighbor’s service.

3. These overlaps and channel conflicts are not avoided by having a different WiFi provider from your neighbors; all providers are restricted to the same channels and ranges.

With this valuable information in hand, we purchased a dual-band modem that provided both 2.4GHz and 5GHz WiFi networks. We set up the 2.4GHz network on the channel that had the least traffic and only used it for lighter traffic.

Since setting up the 5GHz network, with 2.4GHz for the kids, we’ve had no issues with WiFi, and I can happily report that Tim and I work from home without problems, downloading, uploading and just clogging the internet as often as we like without impacting anyone else in our household… :)




To Hint or not to hint (Application Engine), that is the question

Over the years, Oracle has provided a number of plan stability technologies to control how SQL statements are executed. At the risk of oversimplification, Outlines (deprecated in 11g), Profiles, Baselines and Patches all work by injecting a set of hints into a SQL statement at parse time. There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hinting the application. I think it is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR. Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra.
The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING so that similar statements with different literal values still match.
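For reference, the generator script is run from SQL*Plus with the SQL_ID and plan hash value of the plan you want to capture (it prompts for them if they are not supplied), and it writes out a profile-creation script like the one reproduced later in this post:

SQL> START coe_xfr_sql_profile.sql 5gtxdqbx0d0c3 3552771247

You would then edit the generated file to set force_match => TRUE before running it.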

The Slings and Arrows of outrageous execution plans

Let's take an example of a statement (from the vanilla Financials product, which has not been customised) that performed poorly because it didn't generate a good execution plan (I have cut out most of the statement for readability). Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO, i.e. table PS_CA_SUM_TAO5.

INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 2039212279
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 14424 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 430 | 14424 (1)| 00:02:54 |
| 4 | NESTED LOOPS | | 1 | 318 | 14421 (1)| 00:02:54 |
| 5 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14420 (1)| 00:02:54 |
| 6 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO5 | 1 | 233 | 1 (0)| 00:00:01 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO5 | 1 | | 0 (0)| |
| 8 | INDEX RANGE SCAN | PSACA_PR_SUMM | 1 | | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_CA_PR_SUMM | 1 | 112 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

However, below is the plan we get on instance 4. We get this plan because a profile was applied in the past, but now we are on a different non-shared instance of the temporary table, so the profile cannot match because we are on different objects, and we get the same problem on a different non-shared instance of the temporary record. Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but not different tables. That is a totally different SQL statement.

SQL_ID 5gtxdqbx0d0c3
--------------------
INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 3552771247

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 36361 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | HASH JOIN | | 1 | 430 | 36361 (3)| 00:07:17 |
| 3 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14347 (1)| 00:02:53 |
| 4 | NESTED LOOPS | | | | | |
| 5 | NESTED LOOPS | | 1 | 345 | 22014 (3)| 00:04:25 |
| 6 | TABLE ACCESS FULL | PS_CA_PR_SUMM | 5268K| 562M| 21539 (1)| 00:04:19 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO4 | 1 | | 0 (0)| |
| 8 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO4 | 1 | 233 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Note
-----
- SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement

Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long term. It worked fine until a process error left the non-shared instance of the temporary record allocated to that process instance, so PeopleSoft allocated instance 5 on the next execution.
So we could just create another profile using the coe_xfr_sql_profile.sql script:

SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
REM
REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
REM
REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM carlos.sierra@oracle.com
REM
REM SCRIPT
REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
REM
REM DESCRIPTION
REM This script is generated by coe_xfr_sql_profile.sql
REM It contains the SQL*Plus commands to create a custom
REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
REM value 3552771247.
REM The custom SQL Profile to be created by this script
REM will affect plans for SQL commands with signature
REM matching the one for SQL Text below.
REM Review SQL Text and adjust accordingly.
REM
REM PARAMETERS
REM None.
REM
REM EXAMPLE
REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
REM
REM NOTES
REM 1. Should be run as SYSTEM or SYSDBA.
REM 2. User must have CREATE ANY SQL PROFILE privilege.
REM 3. SOURCE and TARGET systems can be the same or similar.
REM 4. To drop this custom SQL Profile after it has been created:
REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
REM 5. Be aware that using DBMS_SQLTUNE requires a license
REM for the Oracle Tuning Pack.
REM
WHENEVER SQLERROR EXIT SQL.SQLCODE;
REM
VAR signature NUMBER;
REM
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
WHENEVER SQLERROR CONTINUE
SET ECHO OFF;
PRINT signature
PRO
PRO ... manual custom SQL Profile has been created
PRO
SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
SPO OFF;
PRO
PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed

But then we must manually change the table and index names from 4 to 5.

DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/

Or to take Arms against a Sea of statements,

The profile has the advantage that it can be applied quickly without a code change. It is the perfect tool for the DBA with a production performance problem. However:
  • If applying it to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance).
  • We were lucky that the statement referenced instance 5 of both temporary records. However, you could get a situation where a statement references different instances of different temporary records: perhaps instance 5 of one table and instance 6 of another, and on another run instance 6 of the first table and instance 5 of the other. A SQL profile could be needed for each permutation.
  • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL, so you get similar SQL statements that are sufficiently different for the profile not to match.
  • Any changes to the expansion of Application Engine and PeopleCode MetaSQL will also prevent matching.
  • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised. A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems!
However, profiles are effective where no PeopleSoft temporary records are present, so you can generally use them in COBOL and SQR processes and in the on-line application (other than in on-line Application Engine processes).
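Managing a growing set of per-instance profiles is easier if you can see what is already in place. Here is a minimal sketch for listing existing profiles and checking whether they were created with force matching (dba_sql_profiles is the standard view; the LIKE filter on the coe naming convention is just a convenience):

-- List SQL profiles created by the coe_xfr_sql_profile.sql script,
-- showing status and whether force matching was enabled
SELECT name, status, force_matching, created
  FROM dba_sql_profiles
 WHERE name LIKE 'coe%'
 ORDER BY created;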

Aye, there's the rub,

I would use a profile (or a set of profiles) as a short-term temporary fix that is easy to introduce into production, and then add hints to the source code, thereby fixing all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change and everything that goes with it. One strong argument against making a code change is that you have to change the code again to remove the hint if it becomes unnecessary at some point in the future after a significant change, such as an Oracle upgrade. However, on balance, I think it is better than the scenario where the profile stops working one day without warning.
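To make that concrete, here is a sketch of what the source-code hint might look like for the statement above, derived from the join order and join methods in the outline hints of the profile (LEADING, USE_NL, USE_HASH, SWAP_JOIN_INPUTS); because the hint is written against the table aliases, it survives PeopleSoft switching between temporary table instances (the elided column and predicate lists are abbreviations, not part of the hint):

INSERT INTO PS_CA_SUM_RC_TAO5 (…)
SELECT /*+ LEADING(A B C) USE_NL(B) USE_HASH(C) SWAP_JOIN_INPUTS(C) */
       B.PROCESS_INSTANCE, A.BUSINESS_UNIT, …
FROM   PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B, PS_CA_SUM_IN_USE C
WHERE  …
AND    B.PROCESS_INSTANCE = 51381955
AND    C.IN_USE_FLAG = 'Y'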

The rest is silence.

Unless you add a comment.


The Expendables 3

It’s a holiday weekend and I just got back from watching The Expendables 3.

OK. You don’t go to a film like this expecting an Oscar-winning script, but come on! During the end credits, when it said “Story by Sylvester Stallone”, I thought of a scene from Boogie Nights where they discuss the plot of an adult movie about 30 seconds before they start to shoot, which goes something like this…

“Enter Reed to audition for Amber. They go at it.

Dirk enters. Meets with Becky. They go at it.”

In the case of The Expendables 3, the same sort of thing must have happened, but something like this…

“Sylvester enters the room. He shoots sh*t up.

Terry Crews enters. Sees the bad guys. Shoots sh*t up.”

It’s also worth remembering that physics works completely differently in The Expendables 3 world… :)

That aside, it’s a good laugh. I’m not really sure how they can sustain the series, because really this is just a rehash of the previous two. Actually, you could probably just rearrange the fight scenes from these three films to create “new” films every year for a while. That way you wouldn’t even have to waste time filming…

Antonio Banderas, who was described by a lady behind me as “… so damn sexy!”, played the only character that had any sort of back-story, but I always think of that cat from Shrek when I hear his voice… :)

Cheers

Tim…

