Oakies Blog Aggregator

PeopleTools 8.54 for the Oracle DBA

The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  I will discuss how the feature has been implemented, and what I think are the benefits and drawbacks of the feature:

This post is not about a new feature in PeopleTools 8.54, but it is something that I discovered while investigating the new version.

    Links have been added to the above list as I have also blogged about each.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

    PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, ask questions.

    PeopleTools 8.54: Oracle Resource Manager

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
    I am not going to attempt to further explain here how the Oracle feature works; I want to concentrate on how PeopleSoft interfaces with it.

    PeopleTools Feature

    This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

    • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, which can be listed with the following query:

    SELECT DISTINCT group_or_subplan, type
    FROM dba_rsrc_plan_directives
    WHERE plan = 'MIXED_WORKLOAD_PLAN'
    ORDER BY 2 DESC,1
    /

    GROUP_OR_SUBPLAN               TYPE
    ------------------------------ --------------
    ORA$AUTOTASK_SUB_PLAN          PLAN
    BATCH_GROUP                    CONSUMER_GROUP
    INTERACTIVE_GROUP              CONSUMER_GROUP
    ORA$DIAGNOSTICS                CONSUMER_GROUP
    OTHER_GROUPS                   CONSUMER_GROUP
    SYS_GROUP                      CONSUMER_GROUP

    If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query.  It returns the name of the Oracle consumer group that the session should use.  The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.

    • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable names to resource names.

    SELECT PT_ORA_CONSUMR_GRP
    FROM PS_PT_ORA_RESOURCE
    , PS_PTEXEC2RESOURCE
    WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
    AND PT_ORA_CONSUMR_GRP <> ' '
    AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

    PT_ORA_CONSUMR_GRP
    ------------------------
    INTERACTIVE_GROUP

    And then the PeopleSoft process explicitly switches its group, thus:

    DECLARE
    old_group varchar2(30);
    BEGIN
    DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
    END;

    Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.

    Setup

    The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.

    BEGIN
    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
    ('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
    END;
    /

    BEGIN
    FOR i IN(
    SELECT DISTINCT r.pt_ora_consumr_grp
    FROM sysadm.ps_pt_ora_resource r
    WHERE r.pt_ora_consumr_grp != ' '
    AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
    ) LOOP
    dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
    (GRANTEE_NAME => 'SYSADM'
    ,CONSUMER_GROUP => i.pt_ora_consumr_grp
    ,GRANT_OPTION => FALSE);
    END LOOP;
    END;
    /

    The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------------------
    resource_manager_plan                string      MIXED_WORKLOAD_PLAN
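    If the parameter is not already set, it can be changed dynamically.  A minimal sketch, using the delivered plan name from above (choose the SCOPE appropriate to your environment):

```sql
-- Activate the resource plan without a restart (sketch; lab-tested syntax only).
ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;
```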

    I question one or two of the mappings on PS_PTEXEC2RESOURCE.

    SELECT * FROM PS_PTEXEC2RESOURCE …

    PT_EXECUTABLE_NAME               PT_RESOURCE_NAME
    -------------------------------- ------------------
    PSAPPSRV                         APPLICATION SERVER
    PSQED                            MISCELLANEOUS
    PSQRYSRV                         QUERY SERVER

    • PSNVS is the nVision Windows executable, running in 2-tier mode.  It is in PeopleTools resource MISCELLANEOUS, but I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables.  nVision would be a candidate for a separate group.
      • For example, one might want to take a different approach to parallelism in GL reporting, having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD.
    • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
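    The kind of remapping discussed above could be sketched as a simple update of the delivered mapping table.  This is an untested illustration only, reusing the delivered QUERY SERVER resource name shown earlier; verify against your own PS_PTEXEC2RESOURCE contents first:

```sql
-- Hypothetical sketch: move nVision (PSNVS) and PSQED out of MISCELLANEOUS
-- so that they pick up the consumer group mapped to QUERY SERVER.
UPDATE ps_ptexec2resource
SET    pt_resource_name = 'QUERY SERVER'
WHERE  pt_executable_name IN ('PSNVS', 'PSQED')
AND    pt_resource_name = 'MISCELLANEOUS';

COMMIT;
```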

    Cannot Mix PeopleSoft Consumer Groups Settings with Oracle Consumer Group Mappings

    I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.

    BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'MODULE_NAME'
    ,value => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    However, it doesn't work because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.

    exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
    SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
    , module, action, resource_consumer_group
    FROM v$session
    WHERE module IN('PROCESSMONITOR','WIBBLE')
    ORDER BY program, module, action
    /

    So I have created a new SQL*Plus session and set the module/action, and it has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA and the module and action of the PSAPPSRV sessions have been set, but they remain in the interactive group.

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          PROCESSMONITOR   PMN_PRCSLIST     SYS_GROUP

    If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS, which is the default.

    exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          WIBBLE           PMN_PRCSLIST     OTHER_GROUPS

    Now, if I explicitly set the consumer group, exactly as PeopleSoft does, my session moves into the INTERACTIVE_GROUP.

    DECLARE
    old_group varchar2(30);
    BEGIN
    DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
    END;
    /

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          WIBBLE           PMN_PRCSLIST     INTERACTIVE_GROUP

    Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP

    You can rearrange the priority of the other rule settings, but explicit must have the highest priority (if you try otherwise you will get ORA-56704).  So, continuing with this example, I cannot assign a specific component to a different resource group unless I don't use the PeopleSoft configuration for PSAPPSRV.
    Instead, I could create a rule to assign a resource group to PSAPPSRV via the program name, and have a higher-priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by updating the corresponding row in PS_PTEXEC2RESOURCE.

    UPDATE ps_ptexec2resource
    SET pt_resource_name = 'DO_NOT_USE'
    WHERE pt_executable_name = 'PSAPPSRV'
    AND pt_resource_name = 'APPLICATION SERVER'
    /
    COMMIT
    /
    BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
    END;
    /
    BEGIN
    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'CLIENT_PROGRAM'
    ,value => 'PSAPPSRV'
    ,consumer_group => 'INTERACTIVE_GROUP');

    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'MODULE_NAME'
    ,value => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');

    DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
    explicit => 1,
    oracle_user => 2,
    service_name => 3,
    module_name_action => 4, --note higher than just module
    module_name => 5, --note higher than program
    service_module => 6,
    service_module_action => 7,
    client_os_user => 8,
    client_program => 9, --note lower than module
    client_machine => 10
    );
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    So, you would have to choose between using the PeopleSoft configuration and the Oracle Resource Manager mapping rules; it depends on your requirements.  This is a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach, as in versions of PeopleTools prior to 8.54.

    Conclusion

    I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial, and then it is difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.

    • I would like to be able to put specific processes run on the Process Scheduler into specific consumer groups by name.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion).
    • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
      • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains and PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
      • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
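    The trigger idea above might be sketched roughly as follows.  This is untested illustration only: the trigger name, the target process name (NVSRUN), the chosen consumer group, and the assumption that RUNSTATUS '7' marks the start of processing are all hypothetical and must be verified against your PeopleTools version before use.

```sql
-- Rough sketch: when a named process flags itself as processing in PSPRCSRQST,
-- switch the session that runs it into a specific consumer group, just as
-- PeopleTools 8.54 does explicitly for its server processes.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND new.prcsname = 'NVSRUN')  -- hypothetical values
DECLARE
  old_group VARCHAR2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BATCH_GROUP', old_group, FALSE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never let the switch prevent the process starting
END;
/
```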

    Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.

    RETURNING BULK COLLECT and database links

    It looks like the nice PL/SQL facility for returning a set of updated rows is restricted when it comes to database links.

    (This was tested on 12.1.0.1.)

    SQL> declare
      2    type int_list is table of number(12) index by pls_integer;
      3    l_results int_list;
      4
      5  begin
      6    update MY_TABLE b
      7    set b.my_col = ( select max(last_ddl_time) from user_objects@dblink where object_id = b.key_col)
      8    where b.my_col is null
      9    returning b.other_col bulk collect into l_results;
     10  end;
     11  /
    declare
    *
    ERROR at line 1:
    ORA-22816: unsupported feature with RETURNING clause
    ORA-06512: at line 6

    When we remove the database link, things revert to what we would expect

     
     
    SQL> declare
      2    type int_list is table of number(12) index by pls_integer;
      3    l_results int_list;
      4
      5  begin
      6    update MY_TABLE b
      7    set b.my_col = ( select max(last_ddl_time) from user_objects where object_id = b.key_col)
      8    where b.my_col is null
      9    returning b.other_col bulk collect into l_results;
     10  end;
     11  /
     
    PL/SQL procedure successfully completed.
     

    A workaround is to perform an equivalent SELECT to fetch the required data from the remote source (for example, into a temporary table), and then update locally.
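    A minimal sketch of that workaround, reusing the hypothetical table and column names from the example above and assuming a local staging table (here called MY_TABLE_STAGE) exists:

```sql
declare
  type int_list is table of number(12) index by pls_integer;
  l_results int_list;
begin
  -- 1. Fetch the remote values into a local (e.g. global temporary) table.
  insert into my_table_stage (key_col, max_ddl_time)
  select object_id, max(last_ddl_time)
  from   user_objects@dblink
  group  by object_id;

  -- 2. Update locally; with no database link in the DML,
  --    RETURNING BULK COLLECT is supported again.
  update my_table b
  set    b.my_col = (select s.max_ddl_time
                     from   my_table_stage s
                     where  s.key_col = b.key_col)
  where  b.my_col is null
  returning b.other_col bulk collect into l_results;
end;
/
```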

    Database Provisioning in Minutes: Using Enterprise Manager 12c DBaaS Snap Clone and EMC Storage

    A little while back I posted a blog on the official Oracle Enterprise Manager blog on using Enterprise Manager 12c Snap Clone with EMC storage. Last week, I also presented on exactly the same topic at the RMOUG Training Days in Denver, Colorado, and there was quite a bit of interest in the subject.

    Given that level of interest, I thought I’d let you know about another opportunity to hear all about it even if you weren’t able to attend the RMOUG conference. You may know that product management does a lot of webcasts around different areas of functionality with EM12c. Well, the very next one is on Snap Clone with EMC storage, in this case being co-presented by Oracle and EMC. You can click on the “Register” button on this link to register your attendance and get all the details as far as dial in numbers and so on. It’s on this Wednesday, February 25 at 8:00 A.M. PT ( 11:00 A.M. ET) – those are in US time zones BTW. Take advantage of this opportunity, and while you’re looking at the webcast page, check out some of the other fantastic opportunities that are coming up to hear about some of the really cool things you can do with EM12c!

    Understanding enhancements to block cleanouts in Exadata part 1

    Travel time is writing time and I have the perfect setting for a techie post. Actually I got quite excited about the subject causing the article to get a bit longer than initially anticipated. In this part you can read about block cleanouts when using buffered I/O. The next part will show how this works using direct path reads and Smart Scans.

    The article ultimately aims at describing the enhancements Exadata brings to the table for direct path reads and delayed block cleanouts. Delayed block cleanouts are described in Jonathan Lewis’s “Oracle Core”, and in one of his blog posts, so here’s just a summary.

    The delayed block cleanout

    In a nutshell, Oracle's database writer (the process persisting your transactions to disk) is free to write blocks to disk in batches when it has to be done.  A commit on its own won't trigger a write of dirty (or modified) blocks to disk.  If it were to trigger the write, the commit time would be proportional to the number of blocks affected by the last transaction.  The commit command however completes very quickly regardless of how much data has been affected (unlike a rollback).  It is entirely possible that a block modified by a transaction is written to disk before the transaction has completed.  A little later, once the commit has been acknowledged by Oracle, there is no process that would read the blocks back into the buffer cache and clean them out - this happens later.  It would also be quite inefficient to do so.

    Defining “clean”

    Now, when the database writer writes to disk, it is possible that the block just written has an active transaction recorded in its header.  You can see this by dumping a block with an active transaction - the ITL in the header will reference the XID, UBA and the number of rows affected, plus other information.  The individual rows that are part of the transaction have their lock byte set, as you can see in the row directory.  The number in the lb field refers back to an ITL you see further up in the block dump (don't worry, I'll show you an example shortly).

    What happened in Oracle before direct path reads became a lot more common is this:

    • A user starts a transaction, for example by updating a large portion of the table
    • In the meantime the database writer flushes some blocks to disk, including some of the ones affected by the transaction.
    • The user commits a transaction
    • The next user queries the block (let’s assume it is not in the buffer cache)
    • The second session’s foreground process reads the block
      • Realises that it has an active transaction recorded in it
      • Checks if that transaction is still active
      • And clears the block out if not
    • The block with a valid CR image is now in the buffer cache

    Huh?

    Sounds too abstract?  I agree, so let's have an example.

    SQL> select table_name,num_rows,compression,partitioned from tabs
      2  where table_name = 'T1_100K';

    TABLE_NAME                       NUM_ROWS COMPRESS PAR
    ------------------------------ ---------- -------- ---
    T1_100K                            500000 DISABLED NO

    SQL> update T1_100k set state = 'MODIFIED';

    500000 rows updated.
    

    An easy way to simulate the flushing of blocks to disk is the slightly brute-force approach of flushing the buffer cache.  Only recommended in the lab, really.  Let's have a look at what the block looks like:
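    For reference, the flush itself is a single statement (again, lab environments only):

```sql
-- Write dirty buffers to disk and invalidate the buffer cache (lab use only).
alter system flush buffer_cache;
```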

    SQL> select dbms_rowid.rowid_block_number(rowid), dbms_rowid.rowid_relative_fno(rowid)
      2  from t1_100k where rownum < 11;
    
    DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
    ------------------------------------ ------------------------------------
                                 3940811                                    5
                                 3940811                                    5
                                 3940811                                    5
                                 3940811                                    5
                                 3940811                                    5
                                 3940811                                    5
                                 3940812                                    5
                                 3940812                                    5
                                 3940812                                    5
                                 3940812                                    5
    
    10 rows selected.
    
    Elapsed: 00:00:00.01
    SQL> alter system dump datafile 5 block 3940811;
    
    System altered.
    
    Elapsed: 00:00:00.01
    SQL> select value from v$diag_info where name like 'Default%';
    
    VALUE
    ---------------------------------------------------------------------------
    /u01/app/oracle/diag/rdbms/dbm01/dbm011/trace/dbm011_ora_94351.trc
    
    Elapsed: 00:00:00.02
    

    The block dump for block 3940811 on datafile 5 (users tablespace) is now in that trace file:

    Block header dump:  0x017c21cb
     Object id on Block? Y
     seg/obj: 0x12a13  csc: 0x00.2191ff7  itc: 3  flg: E  typ: 1 - DATA
         brn: 0  bdba: 0x17c21c8 ver: 0x01 opc: 0
         inc: 0  exflg: 0
    
     Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    0x01   0xffff.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.02191d10
    0x02   0x0009.005.00002575  0x0003688a.0ef1.1f  ----    6  fsc 0x0000.00000000
    0x03   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
    bdba: 0x017c21cb
    data_block_dump,data header at 0x7f026361707c
    ===============
    tsiz: 0x1f80
    hsiz: 0x1e
    pbl: 0x7f026361707c
         76543210
    flag=--------
    ntab=1
    nrow=6
    ...
    0xe:pti[0]      nrow=6  offs=0
    0x12:pri[0]     offs=0x1b75
    0x14:pri[1]     offs=0x176a
    0x16:pri[2]     offs=0x135f
    0x18:pri[3]     offs=0xf54
    0x1a:pri[4]     offs=0xb49
    0x1c:pri[5]     offs=0x73e
    block_row_dump:
    tab 0, row 0, @0x1b75
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    col  0: [ 2]  c1 02
    col  1: [999]
     31 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
     20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
    ...
    

    The significant pieces of information here are in the ITL (interested transaction list) and the lock byte set, pointing to the second ITL entry.  The absence of flags for ITL #2 indicates the transaction is ongoing (you can see this in v$transaction - this is a lab environment; in real databases you'd see more than one active transaction):

    SQL> select xidusn, xidslot, xidsqn, status, start_scn from v$transaction;
    
        XIDUSN    XIDSLOT     XIDSQN STATUS            START_SCN
    ---------- ---------- ---------- ---------------- ----------
             9          5       9589 ACTIVE             35200741
    
    

    This is indeed the transaction that is referenced in v$transaction and the ITL 0x02 above (9589 is decimal for 2575 in hexadecimal notation, 9 and 5 don’t need converting to decimal numbers). If you are curious now about some more internals then you might find this post by Arup Nanda interesting.
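    As an aside, the hex-to-decimal conversion is easy to check from SQL*Plus using the hexadecimal format mask:

```sql
-- 0x2575 in hexadecimal is 9589 in decimal, matching XIDSQN above.
select to_number('2575', 'xxxx') as xidsqn from dual;
```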

    Now what happens if another session queries the table?  Since the transaction in session 1 has not yet committed, its changes must not be seen by any other session (that would otherwise be a dirty read, and we do not use dirty reads in the Oracle RDBMS).  So, to give the user a consistent view of the data, the block must be rolled back.  Before the test I made sure there wasn't a block in the buffer cache for the object in question:

    SQL> select count(*), inst_id, status from gv$bh where objd =
      2   (select data_object_id from dba_objects where object_name = 'T1_100K')
      3  group by inst_id, status;
    
      COUNT(*)    INST_ID STATUS
    ---------- ---------- ----------
        524327          1 free
    
    

    Let’s test:

    SQL> @mystats start
    
    SQL> select /* test004 */ count(*) from t1_100k;
    
      COUNT(*)
    ----------
        500000
    
    Elapsed: 00:00:13.96
    
    @mystats stop t=1
    
    ==========================================================================================
    MyStats report : 22-FEB-2015 08:42:09
    ==========================================================================================
    
    
    ------------------------------------------------------------------------------------------
    1. Summary Timings
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    TIMER   snapshot interval (seconds)                                                  35.57
    TIMER   CPU time used (seconds)                                                       3.21
    
    
    ------------------------------------------------------------------------------------------
    2. Statistics Report
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    STAT    CPU used by this session                                                       327
    STAT    CPU used when call started                                                     327
    STAT    CR blocks created                                                           83,334
    STAT    DB time                                                                      1,401
    STAT    Requests to/from client                                                         16
    STAT    SQL*Net roundtrips to/from client                                               16
    STAT    active txn count during cleanout                                            83,334
    STAT    buffer is not pinned count                                                       2
    STAT    bytes received via SQL*Net from client                                      13,279
    STAT    bytes sent via SQL*Net to client                                             5,106
    STAT    calls to get snapshot scn: kcmgss                                              758
    STAT    calls to kcmgas                                                             83,332
    STAT    calls to kcmgcs                                                                 30
    STAT    cell flash cache read hits                                                   8,045
    STAT    cell physical IO interconnect bytes                                    748,576,768
    STAT    change write time                                                               48
    STAT    cleanout - number of ktugct calls                                           83,334
    STAT    cleanouts and rollbacks - consistent read gets                              83,334
    STAT    consistent changes                                                         591,717
    STAT    consistent gets                                                            757,980
    STAT    consistent gets examination                                                674,254
    STAT    consistent gets examination (fastpath)                                      83,337
    STAT    consistent gets from cache                                                 757,980
    STAT    consistent gets pin                                                         83,726
    STAT    consistent gets pin (fastpath)                                              83,055
    STAT    data blocks consistent reads - undo records applied                        590,917
    STAT    db block changes                                                            84,134
    STAT    db block gets                                                                2,301
    STAT    db block gets from cache                                                     2,301
    STAT    enqueue releases                                                                 4
    STAT    enqueue requests                                                                 4
    STAT    execute count                                                                   13
    STAT    file io wait time                                                       11,403,214
    STAT    free buffer requested                                                      174,766
    STAT    gc local grants                                                            174,713
    STAT    global enqueue gets sync                                                        43
    STAT    global enqueue releases                                                         41
    STAT    heap block compress                                                          4,549
    STAT    hot buffers moved to head of LRU                                                 4
    STAT    immediate (CR) block cleanout applications                                  83,334
    STAT    index fetch by key                                                               1
    STAT    lob writes                                                                     375
    STAT    lob writes unaligned                                                           375
    STAT    logical read bytes from cache                                        6,228,221,952
    STAT    messages sent                                                                    9
    STAT    non-idle wait count                                                         17,446
    STAT    non-idle wait time                                                           1,141
    STAT    opened cursors cumulative                                                       13
    STAT    parse count (hard)                                                               1
    STAT    parse count (total)                                                             13
    STAT    physical read IO requests                                                    8,715
    STAT    physical read bytes                                                    748,576,768
    STAT    physical read requests optimized                                             8,045
    STAT    physical read total IO requests                                              8,715
    STAT    physical read total bytes                                              748,576,768
    STAT    physical read total bytes optimized                                     65,904,640
    STAT    physical read total multi block requests                                       654
    STAT    physical reads                                                              91,379
    STAT    physical reads cache                                                        91,379
    STAT    physical reads cache prefetch                                               82,664
    STAT    recursive calls                                                              1,985
    STAT    recursive cpu usage                                                              6
    STAT    redo entries                                                                84,005
    STAT    redo entries for lost write detection                                          671
    STAT    redo size                                                                9,016,740
    STAT    redo size for lost write detection                                       3,016,296
    STAT    redo subscn max counts                                                      83,334
    STAT    rows fetched via callback                                                        1
    STAT    session cursor cache count                                                       4
    STAT    session cursor cache hits                                                        9
    STAT    session logical reads                                                      760,281
    STAT    session pga memory                                                         -65,536
    STAT    session uga memory                                                          65,488
    STAT    session uga memory max                                                     148,312
    STAT    shared hash latch upgrades - no wait                                         8,218
    STAT    table fetch by rowid                                                             1
    STAT    table scan blocks gotten                                                    83,334
    STAT    table scan disk non-IMC rows gotten                                        500,000
    STAT    table scan rows gotten                                                     500,000
    STAT    table scans (short tables)                                                       1
    STAT    temp space allocated (bytes)                                             2,097,152
    STAT    user I/O wait time                                                           1,141
    STAT    user calls                                                                      22
    STAT    workarea executions - optimal                                                    1
    STAT    workarea memory allocated                                                      -43
    
    
    ------------------------------------------------------------------------------------------
    3. About
    ------------------------------------------------------------------------------------------
    - MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
    - Based on the SNAP_MY_STATS utility by Jonathan Lewis
    
    ==========================================================================================
    End of report
    ==========================================================================================
    

    I am using Adrian Billington’s mystats script here, which I can highly recommend. In the example above it took two snapshots and calculated the change in the session counters in v$mystat during the query execution. Have a look at the cleanout% statistics: these are the block cleanouts. Since cleanouts generate redo, you can see that recorded here too. This makes for a tricky interview question: can a select generate redo? Yes, it certainly can! There is also a fair amount of physical I/O going on; after all, the buffer cache was empty.
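
    The delta mechanism is easy to approximate without the full script: snapshot a few session counters from v$mystat before and after the query and subtract. This is only a hand-rolled sketch, not Adrian's utility; the statistic names are taken from the report above.

```sql
-- Snapshot the interesting counters for the current session
-- (run once before and once after the query; the deltas are
-- what mystats reports).
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('redo size',
                   'cleanout - number of ktugct calls',
                   'active txn count during cleanout',
                   'data blocks consistent reads - undo records applied');

SELECT COUNT(*) FROM t1_100k;

-- ...then re-run the v$mystat query and diff against the first snapshot.
```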

    The second execution of the query completes a lot faster due to the absence of physical I/O. Nevertheless, the session has to clean out all these blocks and perform consistent read processing again (remember, the transaction is still uncommitted, as seen in active txn%):

    SQL> select count(*) from t1_100k;
    
      COUNT(*)
    ----------
        500000
    
    Elapsed: 00:00:01.76
    
    SQL> @mystats stop t=1
    
    ==========================================================================================
    MyStats report : 22-FEB-2015 08:42:59
    ==========================================================================================
    
    
    ------------------------------------------------------------------------------------------
    1. Summary Timings
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    TIMER   snapshot interval (seconds)                                                   8.55
    TIMER   CPU time used (seconds)                                                       1.75
    
    
    ------------------------------------------------------------------------------------------
    2. Statistics Report
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    STAT    CPU used by this session                                                       181
    STAT    CPU used when call started                                                     181
    STAT    CR blocks created                                                           83,334
    STAT    DB time                                                                        183
    STAT    Requests to/from client                                                         16
    STAT    SQL*Net roundtrips to/from client                                               16
    STAT    active txn count during cleanout                                            83,334
    STAT    buffer is not pinned count                                                       2
    STAT    bytes received via SQL*Net from client                                      13,280
    STAT    bytes sent via SQL*Net to client                                             5,177
    STAT    calls to get snapshot scn: kcmgss                                              757
    STAT    calls to kcmgas                                                             83,331
    STAT    calls to kcmgcs                                                                 30
    STAT    change write time                                                               28
    STAT    cleanout - number of ktugct calls                                           83,334
    STAT    cleanouts and rollbacks - consistent read gets                              83,334
    STAT    consistent changes                                                         591,708
    STAT    consistent gets                                                            757,980
    STAT    consistent gets examination                                                674,254
    STAT    consistent gets examination (fastpath)                                      83,337
    STAT    consistent gets from cache                                                 757,980
    STAT    consistent gets pin                                                         83,726
    STAT    consistent gets pin (fastpath)                                              83,726
    STAT    cursor authentications                                                           1
    STAT    data blocks consistent reads - undo records applied                        590,917
    STAT    db block changes                                                            84,125
    STAT    db block gets                                                                2,895
    STAT    db block gets from cache                                                     2,895
    STAT    execute count                                                                   13
    STAT    free buffer requested                                                       83,382
    STAT    global enqueue gets sync                                                        38
    STAT    global enqueue releases                                                         36
    STAT    heap block compress                                                          4,549
    STAT    immediate (CR) block cleanout applications                                  83,334
    STAT    index fetch by key                                                               1
    STAT    lob writes                                                                     375
    STAT    lob writes unaligned                                                           375
    STAT    logical read bytes from cache                                        6,233,088,000
    STAT    messages sent                                                                    7
    STAT    non-idle wait count                                                             16
    STAT    opened cursors cumulative                                                       13
    STAT    parse count (total)                                                             13
    STAT    recursive calls                                                              1,982
    STAT    recursive cpu usage                                                              6
    STAT    redo entries                                                                83,334
    STAT    redo size                                                                6,000,400
    STAT    rows fetched via callback                                                        1
    STAT    session cursor cache hits                                                       12
    STAT    session logical reads                                                      760,875
    STAT    session pga memory                                                      -1,769,472
    STAT    session uga memory                                                      -1,858,472
    STAT    shared hash latch upgrades - no wait                                           524
    STAT    table fetch by rowid                                                             1
    STAT    table scan blocks gotten                                                    83,334
    STAT    table scan disk non-IMC rows gotten                                        500,000
    STAT    table scan rows gotten                                                     500,000
    STAT    table scans (short tables)                                                       1
    STAT    user calls                                                                      22
    STAT    workarea executions - optimal                                                    1
    STAT    workarea memory allocated                                                      -83
    
    
    ------------------------------------------------------------------------------------------
    3. About
    ------------------------------------------------------------------------------------------
    - MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
    - Based on the SNAP_MY_STATS utility by Jonathan Lewis
    
    ==========================================================================================
    End of report
    ==========================================================================================
    

    I should have mentioned that these two examples were executed on an Exadata X2-2 quarter rack. To prevent Exadata from applying some of its clever optimisations I specifically disabled direct path reads and turned off Exadata features. Here is the view from the buffer cache again:

    SQL> select count(*), inst_id, status from gv$bh where objd =
      2   (select data_object_id from dba_objects where object_name = 'T1_100K')
      3  group by inst_id, status;
    
      COUNT(*)    INST_ID STATUS
    ---------- ---------- ----------
        351704          1 free
             1          1 scur
         83334          1 xcur
         83334          1 cr
    
    4 rows selected.
    

    For each subsequent execution of the select statement, consistent read processing has to be performed again, so the number of CR blocks in the buffer cache increases. For the third execution I can see the following:

    SQL> r
      1  select count(*), inst_id, status from gv$bh where objd =
      2   (select data_object_id from dba_objects where object_name = 'T1_100K')
      3* group by inst_id, status
    
      COUNT(*)    INST_ID STATUS
    ---------- ---------- ----------
        267526          1 free
             1          1 scur
         83334          1 xcur
        166668          1 cr
    
    4 rows selected.
    

    Things improve when the transaction commits.

    The impact of a commit

    What about a commit now? Committing in session 1 and running the query again in session 2 shows a very good response time. Please remember that I forced buffered reads; the next part deals with the same case using direct path reads instead (those are not buffered). The segment size would otherwise have qualified for direct path reads.
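
    For reference, buffered reads can be forced in a lab along these lines. The exact commands used are not shown in the post, so treat these as assumptions; in particular, _serial_direct_read is an undocumented parameter and should not be set lightly outside a test system.

```sql
-- Force buffered (non-direct-path) serial reads for this session only
ALTER SESSION SET "_serial_direct_read" = never;

-- Disable Exadata smart scan offload for this session
ALTER SESSION SET cell_offload_processing = false;
```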

    SQL> select /* test004 */ count(*) from t1_100k;
    
      COUNT(*)
    ----------
        500000
    
    Elapsed: 00:00:00.18
    
    ==========================================================================================
    MyStats report : 22-FEB-2015 09:40:08
    ==========================================================================================
    
    
    ------------------------------------------------------------------------------------------
    1. Summary Timings
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    TIMER   snapshot interval (seconds)                                                  19.73
    TIMER   CPU time used (seconds)                                                       0.18
    
    
    ------------------------------------------------------------------------------------------
    2. Statistics Report
    ------------------------------------------------------------------------------------------
    
    Type    Statistic Name                                                               Value
    ------  ----------------------------------------------------------------  ----------------
    STAT    CPU used by this session                                                        25
    STAT    CPU used when call started                                                      25
    STAT    DB time                                                                         25
    STAT    Heatmap SegLevel - Full Table Scan                                               1
    STAT    Requests to/from client                                                         16
    STAT    SQL*Net roundtrips to/from client                                               16
    STAT    buffer is not pinned count                                                       2
    STAT    bytes received via SQL*Net from client                                      13,280
    STAT    bytes sent via SQL*Net to client                                             5,107
    STAT    calls to get snapshot scn: kcmgss                                              757
    STAT    calls to kcmgcs                                                                 30
    STAT    consistent changes                                                             800
    STAT    consistent gets                                                             83,729
    STAT    consistent gets examination                                                      3
    STAT    consistent gets examination (fastpath)                                           3
    STAT    consistent gets from cache                                                  83,729
    STAT    consistent gets pin                                                         83,726
    STAT    consistent gets pin (fastpath)                                              83,726
    STAT    db block changes                                                               800
    STAT    db block gets                                                                2,301
    STAT    db block gets from cache                                                     2,301
    STAT    enqueue releases                                                                 3
    STAT    enqueue requests                                                                 3
    STAT    execute count                                                                   13
    STAT    free buffer requested                                                           53
    STAT    global enqueue gets sync                                                        39
    STAT    global enqueue releases                                                         37
    STAT    index fetch by key                                                               1
    STAT    lob writes                                                                     375
    STAT    lob writes unaligned                                                           375
    STAT    logical read bytes from cache                                          704,757,760
    STAT    no work - consistent read gets                                              83,334
    STAT    non-idle wait count                                                             16
    STAT    opened cursors cumulative                                                       13
    STAT    parse count (total)                                                             13
    STAT    recursive calls                                                              1,984
    STAT    recursive cpu usage                                                              7
    STAT    rows fetched via callback                                                        1
    STAT    session cursor cache count                                                       4
    STAT    session cursor cache hits                                                       10
    STAT    session logical reads                                                       86,030
    STAT    session pga memory                                                          65,536
    STAT    session pga memory max                                                      65,536
    STAT    session uga memory                                                          65,488
    STAT    session uga memory max                                                     148,312
    STAT    shared hash latch upgrades - no wait                                           525
    STAT    table fetch by rowid                                                             1
    STAT    table scan blocks gotten                                                    83,334
    STAT    table scan disk non-IMC rows gotten                                        500,000
    STAT    table scan rows gotten                                                     500,000
    STAT    table scans (short tables)                                                       1
    STAT    temp space allocated (bytes)                                             2,097,152
    STAT    user calls                                                                      22
    STAT    workarea executions - optimal                                                    1
    STAT    workarea memory allocated                                                      -40
    
    
    ------------------------------------------------------------------------------------------
    3. About
    ------------------------------------------------------------------------------------------
    - MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
    - Based on the SNAP_MY_STATS utility by Jonathan Lewis
    
    ==========================================================================================
    End of report
    ==========================================================================================
    

    Access to blocks in the buffer cache is quick! There was no need to clean out any blocks this time: the fastest way to do something is indeed not having to do it at all. Mind you, the block on disk still has the active transaction recorded!

    SQL> alter system dump datafile 5 block 3940811;
    
    System altered.
    
    Elapsed: 00:00:00.01
    SQL> select value from v$diag_info where name like 'Default%';
    
    VALUE
    ------------------------------------------------------------------------------------------------------------------------
    /u01/app/oracle/diag/rdbms/dbm01/dbm011/trace/dbm011_ora_100297.trc
    
    SQL> exit
    
    $ vi /u01/app/oracle/diag/rdbms/dbm01/dbm011/trace/dbm011_ora_100297.trc
    
    Block header dump:  0x017c21cb
     Object id on Block? Y
     seg/obj: 0x12a13  csc: 0x00.2317620  itc: 3  flg: E  typ: 1 - DATA
         brn: 0  bdba: 0x17c21c8 ver: 0x01 opc: 0
         inc: 0  exflg: 0
    
     Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    0x01   0xffff.000.00000000  0x00000000.0000.00  C---    0  scn 0x0000.02191d10
    0x02   0x0009.005.00002575  0x0003688a.0ef1.1f  --U-    6  fsc 0x0000.0232cfca
    0x03   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
    bdba: 0x017c21cb
    data_block_dump,data header at 0x7fc9f3cb407c
    ...
    
    $ grep lb -B1 /u01/app/oracle/diag/rdbms/dbm01/dbm011/trace/dbm011_ora_100297.trc
    tab 0, row 0, @0x1b75
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    tab 0, row 1, @0x176a
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    tab 0, row 2, @0x135f
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    tab 0, row 3, @0xf54
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    tab 0, row 4, @0xb49
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    tab 0, row 5, @0x73e
    tl: 1035 fb: --H-FL-- lb: 0x2  cc: 6
    --
    

    The transaction recorded in ITL slot 2 is gone, though:

    SQL> select xidusn, xidslot, xidsqn, status, start_scn from v$transaction
      2  where xidusn = 9 and xidslot = 5 and xidsqn = 9589;
    
    no rows selected
    

    Did you notice the --U- flag in the second ITL entry? That wasn’t there before. It indicates a fast commit, according to Oracle Core by Jonathan Lewis.

    Summary

    Block cleanouts are simple with buffered I/O. Dirty blocks can be rolled back or cleaned out in the buffer cache, and subsequent reads can then access them without having to undergo block cleanout again.
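
    Condensed into a sketch, the whole experiment looks like this (the table DDL is not part of this excerpt, so the insert below is only indicative):

```sql
-- Session 1: a large insert that stays uncommitted
INSERT INTO t1_100k SELECT * FROM t1_100k_source;  -- hypothetical source
-- (no commit yet)

-- Session 2: every execution must apply undo to build CR copies,
-- generating redo for the cleanout attempts along the way
SELECT COUNT(*) FROM t1_100k;

-- Session 1:
COMMIT;

-- Session 2: fast now - the cached blocks need no further cleanout work
SELECT COUNT(*) FROM t1_100k;
```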

    Return of the Disasters – OUG Ireland 2015

    In just under a month I’ll be off to Dublin for the 2015 OUG Ireland conference, which takes place on Thursday the 19th of March. I’m giving my favourite presentation, on I.T. disasters I have witnessed and what you can learn from them (so now the title of this blog post makes sense, and maybe is not as exciting as it suggested). It is also the talk I get most nervous about giving. I tend to put a bit of humour into any presentation I do, but if it is predominantly a technical talk it’s fine if the humour falls flat. When I am intending to be entertaining, as I am with this one, there is more at stake!


    Back in the mid-2000’s I used to do this talk once or twice a year but these days it tends not to get accepted for conferences. I suspect it is partly as I had done it a good few times and partly as it is hard to place it under the categories Oracle Technical conferences have. Is it technical? Is it project management? Is it entertainment? It is actually a bit of all of them. They are all true stories and each one highlights how we should not do things, be it some of the pitfalls of project management or where disaster tolerant hardware turned out not to be.

    I’ve mentioned this presentation a couple of times in my blog. Once early on in 2009 when no one came by here very often, where I go into why I toned down the talk {concern over impact on my career/perceived professionalism} for a while and the impact of that decision {a bland and rather poor talk}. It crops up again in a post in 2013, which I think is the last time I gave this talk. I am not sure I did a very good job of it then either, as I was not well during that trip (not helped by rough seas but I was ill for other reasons). Thus I am looking forward to giving it another airing and, as I no longer worry too much about the career, I might just let rip a little more. I have a few more disasters under my belt since I originally wrote the talk, so I might include one or two of them…

    The OUG Ireland conference itself is a fair-sized event, running from 09:00 to 17:30 or so, with 7 concurrent tracks covering Applications, Cloud, BI, Database and Development. I’m astounded by the number of Oracle ACEs, OakTable members and other top knowledge sharers who will be presenting {Update – Brendan Tierney has put together a list of all ACEs presenting}. I’ll have several hard decisions about which talk to go to at any given time. I’ll certainly be at Maria Colgan’s Tech keynote at the end of the day though; I’m hoping for another offer of a date* ;-).

    To my shame, I have never been to Ireland before and it’s only just over there {points West}, about 90 minutes by plane. So I am turning up Wednesday lunch time and staying to late Friday afternoon so that I can look around and spend some time with fellow presenting friends (and anyone else who I bump into).

    All in all, it is a trip I am greatly looking forward to for various reasons. If you can get along I encourage you to do so. And, if you are there and see me around, come and say “hi”.

    {* Note to lawyers, this is an in-joke}.

    PeopleTools 8.54: Multiple Query Security Records

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    This post is not about a database feature newly supported in PeopleTools, but PeopleTools is capable of doing something new that could negatively impact the database.  When I saw the following warning in the PeopleTools 8.54 release notes, I thought I should look into it.
    "PeopleTools has added an Advanced Query Security option to Application Designer. This feature allows up to five Query Security Records to be associated with a single record, including the ability to associate security with non-key fields. While powerful, this feature should be used sparingly because multiple additional joins will affect query performance."

    The PeopleTools documentation shows how to add multiple query security records in Application Designer, but doesn't really explain what effect it will have on queries on that record.
    PeopleTools has always allowed a query security record to be defined on a record.  This is the record properties for JOB.

    I am going to create a simple example query that joins PS_JOB and PS_NAMES.  These records have different query security records, so both query security records appear in the PS/Query.

    SELECT B.EMPLID, B.DEPTID
    FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_NAMES A, PS_PERALL_SEC_QRY A1
    WHERE ( B.EMPLID = B1.EMPLID
    AND B.EMPL_RCD = B1.EMPL_RCD
    AND B1.OPRID = 'PS'
    AND A.EMPLID = A1.EMPLID
    AND A1.OPRID = 'PS'
    AND ( B.EFFDT =
    (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
    WHERE B.EMPLID = B_ED.EMPLID
    AND B.EMPL_RCD = B_ED.EMPL_RCD
    AND B_ED.EFFDT <= SYSDATE)
    AND B.EFFSEQ =
    (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
    WHERE B.EMPLID = B_ES.EMPLID
    AND B.EMPL_RCD = B_ES.EMPL_RCD
    AND B.EFFDT = B_ES.EFFDT)
    AND B.EMPLID = A.EMPLID
    AND A.EFFDT =
    (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
    WHERE A.EMPLID = A_ED.EMPLID
    AND A.NAME_TYPE = A_ED.NAME_TYPE
    AND A_ED.EFFDT <= SYSDATE) ))

    The new version of the same query, but this time with multiple query security records in force, is below.  Note that:

    • A1, B1 are the query security records defined on the record properties that have always been present in PeopleTools.
    • B4, B5, B6 are the advanced query security records.  Note that EMPLMT_SRCH_QRY has join criteria on the columns specified in the Advanced Query Security Mapping dialogue.
    • EMPLMT_SRCH_QRY gets joined twice to JOB because it is the query security record B1 and an advanced query security record B4, so the advanced settings are in addition to the standard setting.  Be careful not to duplicate records.  The PS/Query Security Join Optimization setting (introduced in PeopleTools 8.52) does not help with this.
    • The standard query security record is stored in PSRECDEFN.QRYSRCRECNAME, while the advanced query security record definitions are stored in a new PeopleTools table PSRECSECFLDMAPS.
    SELECT B.EMPLID, B.DEPTID
    FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_EMPLMT_SRCH_QRY B4,
    PS_PERALL_SEC_QRY B5, PS_PERS_SRCH_QRY B6,
    PS_NAMES A, PS_PERALL_SEC_QRY A1
    WHERE ( B.EMPLID = B1.EMPLID
    AND B.EMPL_RCD = B1.EMPL_RCD
    AND B1.OPRID = 'PS'
    AND B.EMPLID = B4.EMPLID
    AND B.EMPL_RCD = B4.EMPL_RCD
    AND B4.OPRID = 'PS'
    AND B.EMPLID = B5.EMPLID
    AND B5.OPRID = 'PS'
    AND B.EMPLID = B6.EMPLID
    AND B6.OPRID = 'PS'

    AND A.EMPLID = A1.EMPLID
    AND A1.OPRID = 'PS'
    AND ( B.EFFDT =
    (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
    WHERE B.EMPLID = B_ED.EMPLID
    AND B.EMPL_RCD = B_ED.EMPL_RCD
    AND B_ED.EFFDT <= SYSDATE)
    AND B.EFFSEQ =
    (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
    WHERE B.EMPLID = B_ES.EMPLID
    AND B.EMPL_RCD = B_ES.EMPL_RCD
    AND B.EFFDT = B_ES.EFFDT)
    AND B.EMPLID = A.EMPLID
    AND A.EFFDT =
    (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
    WHERE A.EMPLID = A_ED.EMPLID
    AND A.NAME_TYPE = A_ED.NAME_TYPE
    AND A_ED.EFFDT <= SYSDATE) ))

    Conclusion

    I know from previous experience that having just two different query security records on different base records, as in the first example above, can lead to significant performance problems.  This new feature has the potential to add up to five more per record.
    I can see that this feature could occasionally be useful where the additional security is not joined by a key field.  However, I would generally echo the sentiment in the release notes and use it sparingly.  Instead of two query security records, could you merge them into a single security record?
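
    As a starting point for an impact assessment, something like the following could list the records that have advanced query security defined. This is only a sketch: the column names of PSRECSECFLDMAPS are assumptions on my part and should be checked against your PeopleTools tables.

```sql
-- Hypothetical audit query: how many advanced security records are
-- attached to each record definition? (Column names are assumed.)
SELECT recname, COUNT(*) AS advanced_sec_records
FROM   psrecsecfldmaps
GROUP  BY recname
ORDER  BY 2 DESC;
```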

    PeopleTools 8.54: Materialized Views

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Materialized Views in the Database

    Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

    Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

    Materialized views can generally be put into two categories:

    • A simple, single-table materialized view. 
      • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
      • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
        •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
    • Multi-table materialized view
      • Usually done within a single database rather than across a database link.
      • Can only be refreshed by being completely rebuilt. 
      • If the same query as is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain pre-requisites being met.
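The two categories above can be illustrated with a minimal sketch of the single-table case.  The table and column names here are purely illustrative (EMP is not a PeopleSoft record): the materialized view log records the changes, and the Oracle-supplied DBMS_MVIEW package performs the incremental refresh.

```sql
-- Illustrative names; EMP is a hypothetical source table with a primary key.
CREATE MATERIALIZED VIEW LOG ON emp WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW emp_mv
REFRESH FAST ON DEMAND
AS SELECT empno, ename, deptno FROM emp;

-- Incremental (fast) refresh, driven from the materialized view log
EXECUTE DBMS_MVIEW.REFRESH('EMP_MV', method => 'F');
```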

    Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.
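For example, two hypothetical materialized views could be placed in one refresh group so that they are always refreshed in the same transaction (the group and view names are illustrative):

```sql
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'HR_GROUP',
    list      => 'EMP_MV, DEPT_MV',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24'); -- refresh every hour
END;
/
-- Or refresh the whole group on demand
EXECUTE DBMS_REFRESH.REFRESH('HR_GROUP');
```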

    Materialized views can be updatable and used for bidirectional replication, but I am not going to talk about that here.

    When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.

    Materialized Views in PeopleTools 8.54

    Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
    PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

    There are two new PeopleTools tables:

    • PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, stats.  It doesn't contain the query; that is in PSSQLTEXTDEFN, as it is for all other views.
    • PSPTMATVWDEP - lists the tables upon which each materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.
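Assuming the column names that appear in the build scripts later in this article, the metadata in these tables can be queried directly, for example to find stale materialized views:

```sql
-- Materialized view records and their refresh/staleness status
SELECT recname, ptmat_refreshstat, ptmat_lastrefresh, ptmat_staleness
FROM   psptmatvwdefn
ORDER BY ptmat_lastrefresh;

-- Tables upon which a given materialized view record depends
SELECT recname, ptmat_basetbl
FROM   psptmatvwdep
WHERE  recname = 'DMK';
```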

    I am going to demonstrate some aspects of the feature by running through some examples.

    Example 1: Replicate part of PS_JOB across a database link

    In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.

    If I select SQL View, the Materialized View check box appears; if I check it, the Materialized View Options appear.

    This is the build script generated by Application Designer:

    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /

    However, if the materialized view already exists, the script will drop it, recreate and drop the view, and then recreate the materialized view.

    DROP MATERIALIZED VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
    • The Application Designer build script creates the materialized view using primary key based replication.  There is no WITH PRIMARY KEY clause specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table without a unique key.  You might question whether that is useful, but it is possible in Oracle.
    • If there is no primary key on the source table, you will need to add one.  If the source is on another system, or a pre-8.54 PeopleSoft system, you will need to do this manually.  Otherwise you will get this error message:
    ERROR at line 4:
    ORA-12014: table 'PS_JOB' does not contain a primary key constraint
    • If you specify any key columns on the materialized view, it does not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table.
      • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
    • Nor is it possible to specify partitioning in Application Designer on a materialized view.
    • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing because you don't need to provide free space for updates in a materialized view that is completely refreshed each time, but you might if you refresh it incrementally.
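On a remote or pre-8.54 source system, the primary key constraint needed for fast refresh would have to be added manually, along the lines of the constraint generated by Application Designer.  The column order here follows the build script; verify it against your own key definition before running it.

```sql
-- Run on the source system; fast refresh by primary key requires this.
ALTER TABLE ps_job ADD CONSTRAINT ps_job_pk
  PRIMARY KEY (effdt, effseq, emplid, empl_rcd);
```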

    Example 2: Replicate part of PS_JOB locally

    In this example, I am again only replicating 6 named columns into my materialized view.

    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /

    I don't know why it rebuilds the non-materialized view as a normal view and drops the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
    If the materialized view log has been built, the next time you generate the build script it creates and drops the view and then builds the materialized view.

    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
    • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus, remember to commit to release the row-level lock on PSPTMATVWDEFN.
    • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
    • The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  It is not possible to set the tablespace to something else, as it would be for a normal table, because the record is defined as a view.  This is unfortunate because I might not want all my materialized views in the same tablespace.
    • Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of the materialized view log, which is maintained as other processes update the source table.
    SQL> desc mlog$_ps_job
    Name                          Null?    Type
    ----------------------------- -------- -------------------
    EFFDT                                  DATE
    EFFSEQ                                 NUMBER
    EMPLID                                 VARCHAR2(11 CHAR)
    EMPL_RCD                               NUMBER
    DEPTID                                 VARCHAR2(10 CHAR)
    SETID_DEPT                             VARCHAR2(5 CHAR)
    M_ROW$$                                VARCHAR2(255 CHAR)
    SEQUENCE$$                             NUMBER
    SNAPTIME$$                             DATE
    DMLTYPE$$                              VARCHAR2(1 CHAR)
    OLD_NEW$$                              VARCHAR2(1 CHAR)
    CHANGE_VECTOR$$                        RAW(255)
    XID$$                                  NUMBER
      • If I just created the materialized view log as follows
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW
    WITH PRIMARY KEY
    --, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    /
      • then the materialized view log contains fewer columns
    Name                          Null?    Type
    ----------------------------- -------- --------------------
    EFFDT                                  DATE
    EFFSEQ                                 NUMBER
    EMPLID                                 VARCHAR2(11 CHAR)
    EMPL_RCD                               NUMBER
    SNAPTIME$$                             DATE
    DMLTYPE$$                              VARCHAR2(1 CHAR)
    OLD_NEW$$                              VARCHAR2(1 CHAR)
    CHANGE_VECTOR$$                        RAW(255)
    XID$$                                  NUMBER
    • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialized view.
    SELECT constraint_name, constraint_type, table_name, index_name
    FROM user_constraints
    WHERE table_name = 'PS_DMK'
    AND constraint_type != 'C'
    /

    CONSTRAINT_NAME      C TABLE_NAME INDEX_NAME
    -------------------- - ---------- ----------
    PS_JOB_PK1           P PS_DMK     PS_JOB_PK1

    SELECT index_name, index_type, uniqueness
    FROM user_indexes
    WHERE table_name = 'PS_DMK'
    /

    INDEX_NAME INDEX_TYPE UNIQUENES
    ---------- ---------- ---------
    PS_JOB_PK1 NORMAL     UNIQUE

    SELECT index_name, column_position, column_name, descend
    FROM user_ind_columns
    WHERE table_name = 'PS_DMK'
    /

    INDEX_NAME COLUMN_POSITION COLUMN_NAME          DESC
    ---------- --------------- -------------------- ----
    PS_JOB_PK1               1 EFFDT                ASC
    PS_JOB_PK1               2 EFFSEQ               ASC
    PS_JOB_PK1               3 EMPLID               ASC
    PS_JOB_PK1               4 EMPL_RCD             ASC
  • The build script clears out the MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output of DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved.
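The same check can be run manually.  This is a sketch of invoking EXPLAIN_MVIEW against an existing materialized view and reading back its capabilities, assuming MV_CAPABILITIES_TABLE has already been created (with utlxmv.sql in $ORACLE_HOME/rdbms/admin):

```sql
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('PS_DMK');

SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  mvname = 'PS_DMK'
ORDER BY seq;
```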
    Example 3: DMK_DPT_SEC_MVW is a materialized view cloned from security view DEPT_SEC_SRCH

    This view references various tables (I have edited out column lists and predicates for readability)

    SELECT …
    FROM PS_DEPT_TBL DEPT
    , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)

    However, only 4 tables appear in PSPTMATVWDEP.  PS_SJT_DEPT was not added.

    SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
    /

    RECNAME PTMAT_BASETBL
    --------------- ------------------
    DMK_DPT_SEC_MVW PSOPRDEFN
    DMK_DPT_SEC_MVW PS_DEPT_TBL
    DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
    DMK_DPT_SEC_MVW PS_SJT_OPR_CLS

    I think this is because it tried and failed to add a primary key constraint and materialized view log to PS_SJT_DEPT, because that table has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log even if the build script is not executed.

    ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY
    (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

    CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
    WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

    Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check whether the table has a unique key defined in PeopleTools.
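A simple check before attempting the build would be to look for duplicate key values on the candidate table, using the key columns from the failed constraint above:

```sql
-- Any rows returned mean the primary key constraint cannot be validated.
SELECT scrty_key1, scrty_key2, scrty_key3, scrty_type_cd, setid, COUNT(*)
FROM   ps_sjt_dept
GROUP BY scrty_key1, scrty_key2, scrty_key3, scrty_type_cd, setid
HAVING COUNT(*) > 1;
```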

    We didn't get as far as creating the materialized view.  However, Application Designer passed the CREATE MATERIALIZED VIEW command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE.

    EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
    , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
    TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
    SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
    , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
    FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
    WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
    AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
    AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
    AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID

    Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW

    In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE which was populated by an attempt to describe the query.

    SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011.
    Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

    SQL Build process ended on 16/02/2015 at 21:05:30.
    1 records processed, 1 errors, 15 warnings.
    SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
    SQL executed online.
    SQL Build log file written to C:\Temp\PSBUILD.LOG.
    • So, Application Designer does try to prevent you from creating materialized views that Oracle won't manage, but the messages are a little obscure.
    • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
    CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
    ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
    LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
    COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
    , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
    SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
    COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
    ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
    ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
    ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
    ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
    ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
    ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
    PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
    ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
    (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
    TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
    MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
    J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
    FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
    AND D.EFFDT = A.EFFDT)
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
    WHERE RECNAME = 'DMK_JOB_CUR_MVW'
    /
    • Also, it doesn't create a primary key constraint on either the underlying table or the materialized view, so this materialized view doesn't have any indexes.
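Any secondary index would therefore have to be created manually outside PeopleTools, for example as below.  The index name, columns and tablespace are illustrative, and note that the index will be lost whenever the build script drops and recreates the materialized view.

```sql
CREATE INDEX ps0dmk_job_cur_mvw ON ps_dmk_job_cur_mvw (emplid, empl_rcd)
TABLESPACE psindex;
```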

    Query ReWrite

    One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the ENABLE QUERY REWRITE clause.

    DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
    /
    CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
    , DESCR) TABLESPACE PSMATVW
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
    enable query rewrite
    AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
    WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    ,'YYYY-MM-DD'),'YYYY-MM-DD'))
    /

    However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.

    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    *
    ERROR at line 7:
    ORA-30353: expression not supported for query rewrite

    This could make it very difficult to use the feature in PeopleSoft.  If you want to use the materialized view, you are likely to have to reference it explicitly in the code.
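DBMS_MVIEW.EXPLAIN_REWRITE can be used to ask the optimizer why a particular query was not rewritten.  A sketch, assuming REWRITE_TABLE has been created with utlxrw.sql and substituting your own query text:

```sql
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT a.setid, a.deptid FROM ps_dept_tbl a', -- your query here
    mv           => 'PS_DMK_PER_DEP_MVW',
    statement_id => 'demo1');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'demo1';
```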

    Refreshing Materialized Views

    There is a new component to manage the refresh frequency of materialized views.

    PeopleTools-> Utilities-> Administration -> Oracle Materialized Views -> Materialized View Maintenance
    This component will schedule an Application Engine process called PTMATREFVW.

    What this Application Engine actually does is specify the refresh frequency for the materialized view:

    &AlterSQL = "alter materialized view " | &mview_name
    | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";

    So the command issued is just

    alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)

    Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.

    SELECT rname, next_date, interval FROM user_refresh
    /

    RNAME      NEXT_DATE INTERVAL
    ---------- --------- -------------------------
    PS_DMK     24-JAN-15 SYSDATE + (4242/86400)

    SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
    /

    NAME       TYPE       RNAME             JOB NEXT_DATE INTERVAL
    ---------- ---------- ---------- ---------- --------- -------------------------
    PS_DMK     SNAPSHOT   PS_DMK             21 24-JAN-15 SYSDATE + (4242/86400)

    SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
    /

           JOB NEXT_DATE NEXT_SEC INTERVAL
    ---------- --------- -------- -------------------------
    WHAT
    --------------------------------------------------
            21 24-JAN-15 11:48:52 SYSDATE + (4242/86400)
    dbms_refresh.refresh('"SYSADM"."PS_DMK"');
    • But I might want to group related materialized views together into a single refresh group.
    • I might want to refresh the job at a particular time, which can be done with a more sophisticated function in the interval. 
    • I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine, or have the Application Engine submit a job that fires only once and does not resubmit (depending on whether I want to wait for the refresh).
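The last option is straightforward to do manually.  The refresh can be invoked on demand at the appropriate point in the batch, either directly or via the refresh group that PeopleTools created:

```sql
-- Complete refresh of a single materialized view ('F' would request fast refresh)
EXECUTE DBMS_MVIEW.REFRESH('PS_DMK', method => 'C');

-- Or refresh via the refresh group, as the scheduled database job does
EXECUTE DBMS_REFRESH.REFRESH('"SYSADM"."PS_DMK"');
```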

    My Recommendations

    • Good Things
      • The removal of the descending indexes and the creation of the primary key is a good thing.
      • It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
      • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
    • Bad Things
      • There is no checking that a source table in the local database has a unique key that can support a primary key constraint.
      • I can't build indexes on a materialized view, although the primary key will be inherited automatically on single-table materialized views.  So you will have to handle indexing manually outside PeopleTools.
      • There is no support for rowid-based materialized views.
      • The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if primary key is not possible), but not both.  I cannot see the point of the additional columns; I think they are a waste of resource.
      • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database.
      • The script drops the materialized view unnecessarily, which will also drop any indexes that you have created manually!
    • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
    • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
    • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.

     When you design materialized views into your application you need to make careful choices about

    • Refresh method
      • Complete - pretty much mandatory for multi-table views, but there are some exceptions to this rule described in the Oracle database documentation.
      • Fast - only for single table - rarely used within a single database - more commonly used for moving data between databases.  In which case, you need database links and the materialized view log goes onto the source database and the materialized view is created on the target.
    • Refresh mode
      • On commit - this is potentially dangerous because it could happen too often and there won't be a saving.  Possible exception being when the underlying table is only ever updated by a batch process
      • On demand - manually issue refresh at a specific point in a batch
      • On schedule by a database job.
    • Build options
      • Immediate - the materialized view is populated when it is created
      • Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.
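The choices above map directly onto clauses of the CREATE MATERIALIZED VIEW statement.  A sketch with an illustrative name and query:

```sql
CREATE MATERIALIZED VIEW dept_summary_mv
BUILD IMMEDIATE        -- or BUILD DEFERRED: create empty, populate on first refresh
REFRESH COMPLETE       -- or REFRESH FAST: incremental, needs a materialized view log
ON DEMAND              -- or ON COMMIT, or START WITH ... NEXT ... for a schedule
AS SELECT deptno, COUNT(*) n_emps
FROM   emp             -- illustrative source table
GROUP BY deptno;
```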

    Choice of refresh method and mode is often a functional question.  How stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh. 
    PeopleTools support of materialized views certainly has some good points, but as it stands it is of limited use because it still leaves you with a lot of manual administration to do. 

    In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.

    PeopleTools 8.54: Multiple Query Security Records

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    This post is not about a database feature newly supported in PeopleTools, but PeopleTools is capable of doing something new that could negatively impact the database.  When I saw the following warning in the PeopleTools 8.54 release notes, I thought I should look into it.
    "PeopleTools has added an Advanced Query Security option to Application Designer. This feature allows up to five Query Security Records to be associated with a single record, including the ability to associate security with non-key fields. While powerful, this feature should be used sparingly because multiple additional joins will affect query performance."

    The PeopleTools documentation shows how to add multiple query security records in Application Designer, but doesn't really explain what effect it will have on queries on that record.
    PeopleTools has always allowed a query security record to be defined on a record.  This is the record properties for JOB.

    I am going to create a simple example query that joins PS_JOB and PS_NAMES.  These records have different query security records, so both query security records appear in the PS/Query.

    SELECT B.EMPLID, B.DEPTID
    FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_NAMES A, PS_PERALL_SEC_QRY A1
    WHERE ( B.EMPLID = B1.EMPLID
    AND B.EMPL_RCD = B1.EMPL_RCD
    AND B1.OPRID = 'PS'
    AND A.EMPLID = A1.EMPLID
    AND A1.OPRID = 'PS'
    AND ( B.EFFDT =
    (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
    WHERE B.EMPLID = B_ED.EMPLID
    AND B.EMPL_RCD = B_ED.EMPL_RCD
    AND B_ED.EFFDT <= SYSDATE)
    AND B.EFFSEQ =
    (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
    WHERE B.EMPLID = B_ES.EMPLID
    AND B.EMPL_RCD = B_ES.EMPL_RCD
    AND B.EFFDT = B_ES.EFFDT)
    AND B.EMPLID = A.EMPLID
    AND A.EFFDT =
    (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
    WHERE A.EMPLID = A_ED.EMPLID
    AND A.NAME_TYPE = A_ED.NAME_TYPE
    AND A_ED.EFFDT <= SYSDATE) ))

    The new version of the same query, but this time with multiple query security records in force, is below.  Note that:

    • A1, B1 are the query security records defined on the record properties that have always been present in PeopleTools.
    • B4, B5, B6 are the advanced query security records.  Note that EMPLMT_SRCH_QRY has join criteria on the columns specified in the Advanced Query Security Mapping dialogue.
    • EMPLMT_SRCH_QRY gets joined twice to JOB because it is the query security record B1 and an advanced query security record B4, so the advanced settings are in addition to the standard setting.  Be careful not to duplicate records.  The PS/Query Security Join Optimization setting (introduced in PeopleTools 8.52) does not help with this.
    • The standard query security record is stored in PSRECDEFN.QRYSRCRECNAME, while the advanced query security record definitions are stored in a new PeopleTools table PSRECSECFLDMAPS.
    SELECT B.EMPLID, B.DEPTID
    FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_EMPLMT_SRCH_QRY B4,
    PS_PERALL_SEC_QRY B5, PS_PERS_SRCH_QRY B6,
    PS_NAMES A, PS_PERALL_SEC_QRY A1
    WHERE ( B.EMPLID = B1.EMPLID
    AND B.EMPL_RCD = B1.EMPL_RCD
    AND B1.OPRID = 'PS'
    AND B.EMPLID = B4.EMPLID
    AND B.EMPL_RCD = B4.EMPL_RCD
    AND B4.OPRID = 'PS'
    AND B.EMPLID = B5.EMPLID
    AND B5.OPRID = 'PS'
    AND B.EMPLID = B6.EMPLID
    AND B6.OPRID = 'PS'

    AND A.EMPLID = A1.EMPLID
    AND A1.OPRID = 'PS'
    AND ( B.EFFDT =
    (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
    WHERE B.EMPLID = B_ED.EMPLID
    AND B.EMPL_RCD = B_ED.EMPL_RCD
    AND B_ED.EFFDT <= SYSDATE)
    AND B.EFFSEQ =
    (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
    WHERE B.EMPLID = B_ES.EMPLID
    AND B.EMPL_RCD = B_ES.EMPL_RCD
    AND B.EFFDT = B_ES.EFFDT)
    AND B.EMPLID = A.EMPLID
    AND A.EFFDT =
    (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
    WHERE A.EMPLID = A_ED.EMPLID
    AND A.NAME_TYPE = A_ED.NAME_TYPE
    AND A_ED.EFFDT <= SYSDATE) ))

    Conclusion

    I know from previous experience that having just two different query security records on different base records, as in the first example above, can lead to significant performance problems.  This new feature has the potential to add up to five more per record.
    I can see that this feature could have occasional application where the additional security is not joined by a key field.  However, I would generally echo the sentiment in the release notes, and use it sparingly.  Instead of two query security records, could you merge them into one security record?
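
    If you want to review where this feature has been used in a system, the mappings can be queried from the PeopleTools tables.  This is a sketch; PSRECDEFN.QRYSRCRECNAME holds the standard query security record as noted above, but the exact column list of PSRECSECFLDMAPS may vary by patch level, so I have not assumed it:

```sql
-- standard query security record on a record definition
SELECT recname, qrysrcrecname
FROM   psrecdefn
WHERE  recname = 'JOB';

-- advanced query security record mappings (new in PeopleTools 8.54)
SELECT *
FROM   psrecsecfldmaps
WHERE  recname = 'JOB';
```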

    PeopleTools 8.54: Global Temporary Tables

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Database Feature Overview

    Global Temporary tables were introduced in Oracle 8i.  They can be used where an application temporarily needs a working storage table.  They are named

    • Global because the content is private
    • Temporary because the definition is permanent

    Or if you prefer

    • Global because the definition is available to everyone
    • Temporary because 
      • physical instantiation of the table is temporary, in the temporary segment (so it isn't redo logged and so isn't recoverable),
      • but it does generate undo in the undo segment, and there is redo on the undo.
      • Each session gets its own private copy of the table in the temp segment.  So you cannot see what is in another session's temporary table, which can make application debugging difficult.
      • The physical instantiation of the table is removed either 
        • when the session disconnects - on commit preserve
        • or when the transaction is terminated with a commit or rollback - on commit delete

    This is a very useful database feature (I have been using it in PeopleSoft applications ever since it was introduced). 

    • Can be used for temporary records in Application Engines where restart is disabled.
    • Can be implemented without any application code change.
    • Only Application Designer temporary records can be built as global temporary tables.  You cannot make a SQL Table record global temporary.
    • The reduction in redo generation during intensive batch processes, such as payroll processing, can bring significant performance benefits.  There is no point logging redo information for temporary working storage tables that you do not ever need to restore.
    • Shared temporary tables, such as in the GP calculation process GPPDPRUN, which is written in COBOL.  If payroll streaming is used (multiple concurrent processes working in parallel), concurrent delete/update can cause read-consistency problems on a normal table; with global temporary tables, each session has its own physical table, so there is never any need for read-consistent recovery when reading a global temporary table.
    • Global temporary tables are also an effective way to resolve table high water mark issues that can occur on non-shared temporary tables in on-line application engine.  The PeopleTools %TruncateTable macro still resolves to delete.  You never get high water mark problems with global temporary tables because they are physically created afresh for each new session.  
    • There is often a reduction in database size because the tables are not retained after the session terminates, although there will be an increased demand for temporary tablespace while the global temporary tables are in use.
    • I have occasionally seen performance problems when PeopleSoft systems very frequently truncate tables and experience contention on the RO enqueue.  This problem does not occur with global temporary tables.
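
    The redo saving is easy to measure for yourself.  A minimal sketch: capture the session's 'redo size' statistic before and after the same batch insert into a permanent table and then into a global temporary table, and compare the differences:

```sql
-- redo generated so far by the current session
SELECT n.name, s.value
FROM   v$mystat s, v$statname n
WHERE  n.statistic# = s.statistic#
AND    n.name = 'redo size';
-- run the insert, then repeat this query; the difference is the redo generated
```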

    Global temporary tables are not a separately licensed database feature and are also available in Standard Edition.

    Global Temporary Tables in PeopleTools

    This is the CREATE TABLE DDL generated by Application Designer:

    DROP TABLE PS_ST_RM2_TAO
    /
    CREATE GLOBAL TEMPORARY TABLE PS_ST_RM2_TAO (PROCESS_INSTANCE
    DECIMAL(10) NOT NULL,
    EMPLID VARCHAR2(11) NOT NULL,
    GRANT_NBR VARCHAR2(10) NOT NULL,
    VEST_DT DATE,
    SHARES_REMAINDER DECIMAL(21, 9) NOT NULL,
    DEC_PLACES SMALLINT NOT NULL) ON COMMIT PRESERVE ROWS TABLESPACE PSGTT01
    /
    CREATE UNIQUE INDEX PS_ST_RM2_TAO ON PS_ST_RM2_TAO (PROCESS_INSTANCE,
    EMPLID,
    GRANT_NBR,
    VEST_DT)
    /

    The first thing to point out is the specification of a tablespace.  This is a new feature in Oracle 11g.  It is not mandatory in Oracle, but it is coded into the PeopleSoft DDL model, so you must specify a temporary tablespace on the record otherwise it will fail to build.  A new temporary tablespace, PSGTT01, is delivered by Oracle when you upgrade to 8.54, or you could just use an existing temporary tablespace.
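
    You can confirm that the delivered tablespace exists and is of the right type with a simple query against the data dictionary:

```sql
SELECT tablespace_name, contents
FROM   dba_tablespaces
WHERE  tablespace_name = 'PSGTT01';
-- CONTENTS should be TEMPORARY
```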

    This new feature has been implemented using 2 new DDL models (statement types 6 and 7).

    SELECT * FROM psddlmodel WHERE statement_type IN(6,7);

    STATEMENT_TYPE PLATFORMID SIZING_SET PARMCOUNT
    -------------- ---------- ---------- ----------
    MODEL_STATEMENT
    ------------------------------------------------------------------------
    6 2 0 0
    CREATE GLOBAL TEMPORARY TABLE [TBNAME] ([TBCOLLIST]) ON COMMIT PRESERVE
    ROWS TABLESPACE [TBSPCNAME];

    7 2 0 0
    CREATE [UNIQUE] INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST]);
    • All tables created ON COMMIT PRESERVE, but on-line instances could be ON COMMIT DELETE (theory subject to testing) and for ALL application engine programs even if restart is enabled because commits suppressed in on-line application engines.  Instead, commit is done by the component.

    If you try adding a global temporary table to an Application Engine program that is not restart-disabled, you quite rightly get the following error message.  The table will be added, but the program will not execute correctly.

    "Global Temporary Tables allocated to this restart enabled AE program will not retain any data when program exits."

    Problems:

    • There has always been a 13-character limit on temporary record names, because there used to be a maximum of 99 non-shared instances and 2 characters were reserved for the instance number.  If you try to set the number of instances to greater than 99 in an Application Engine program (I tried GP_GL_PREP) you now get the warning message
    "Do not support more than 99 instances when select the Temp Table which are not attributed as GTT"
    • There is now a maximum length of 11 characters for the name of a record built as a global temporary table, because from PeopleTools 8.54 there can be up to 9999 non-shared instances of the record.  The restriction applies irrespective of how many instances you are actually using. 
      • I have yet to encounter a system where I need more than 99 instances of a temporary table.  I can just about imagine needing 100 non-shared instances, but not 1000.  
      • This means that I cannot retrofit global temporary tables into existing Application Engine processes without changing record names.  There are existing delivered Application Engine programs with 12 and 13 character temporary record names that cannot now be switched to use global temporary tables managed by Application Designer.  I don't need to support more instances just because the table is global temporary.
        • For example, GP_GL_SEGTMP in GP_GL_PREP is a candidate to be made global temporary because that is a streamed Global Payroll process.  When I tried, I got a record name too long error!
    "Record Name is too long. (47,67)"
        • Really, if the table is global temporary you don't need lots of instances.  Everyone could use the shared instance, because Oracle gives each session a private physical copy of the table anyway. 
          • You could do this by removing the record name from the list of temporary records in the application engine, then the %Table() macro will generate the table name without an instance number.
          • There would be a question of how to handle optimizer statistics.  Optimizer statistics collected on a global temporary table in one session could end up being used in another because there is only one place to store them in the data dictionary.
          • The answer is not to collect statistics at all and to use Optimizer Dynamic Sampling.  There is a further enhancement in Oracle 12c where the dynamically sampled stats from different sessions are kept separate.
      • When Application Designer builds an alter script, it can't tell whether it is global temporary or a normal table, so doesn't rebuild the table if you change it from one to the other.
      • The only real run-time downside of global temporary tables is that if you want to debug a process, the data is not left behind after the process terminates.  Even while the process is running, you cannot query from your session the contents of a global temporary table in use by another.
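
      On Oracle 12c the statistics question raised above has a cleaner answer: optimizer statistics on a global temporary table can be kept private to the gathering session.  A sketch, using the PS_ST_RM2_TAO table built above and assuming the usual SYSADM PeopleSoft owning schema:

```sql
-- remove any previously gathered shared statistics on the GTT
EXECUTE DBMS_STATS.DELETE_TABLE_STATS(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO');

-- Oracle 12c: keep GTT statistics private to the session that gathers them
EXECUTE DBMS_STATS.SET_TABLE_PREFS('SYSADM', 'PS_ST_RM2_TAO',
  'GLOBAL_TEMP_TABLE_STATS', 'SESSION');
```

      On 11g, where this preference is not available, deleting the statistics and relying on optimizer dynamic sampling remains the safer option.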

      My Recommendation

      Support for global temporary tables is welcome and long overdue.  It can bring significant run-time performance and system benefits due to the reduction in redo generation and read-consistency overhead.  It can be implemented without any code change. 

      We just need to sort out the 11 character record name length restriction.