Oakies Blog Aggregator

Jenkins Plugin for Delphix

In my last blog I talked about trying out Jenkins. In this blog post I want to talk about the new Jenkins plugin for Delphix.

Delphix plugin

The Delphix plugin is easy to add. Just navigate to “Manage Jenkins” in the top left.


Then click on “Manage Plugins”


Then type “delphix” in the search bar and click on the “Delphix Jenkins plugin” to install it.


Now we can access Delphix APIs through the Jenkins interface. If you access a job (pick a job and click “configure” on the top left, or create a new job) and click “add build step” under Build, you will now see Delphix command options:


Here is a zoomed-in view of the new “Add build step” options:

Before we can use these build steps, we have to add the Delphix engine(s) to Jenkins.

To add Delphix engine(s) click on “Manage Jenkins” on the top left


Then you should see a long page; in the middle of it there is a “Delphix” section.


Add the Delphix engine(s) and hit “save” at the bottom of the screen.

Now when I add a build step I can choose a Delphix action like “add environment”.

Now I can click “run now” and it adds the environment.

Getting Your Transaction SCN – USERENV(COMMITSCN)

A few days ago I was introduced (or re-introduced) to USERENV(‘COMMITSCN’) by Jonathan Lewis. This is an internal function that allows limited access to the SCN of your transaction.

I was trying to find a way to get the actual commit SCN easily as it struck me that Oracle would have it to hand somewhere and it would be unique to the change and generated very efficiently. I could not find anything to do it so I asked Jonathan and he pointed me straight to this post he did about it a while back. What a nice chap. However, the post is from 1999 (last CENTURY!) so I thought I should just check it out first…

I’ll say up front that this is an undocumented feature of a now-deprecated function. Well, almost undocumented – older SQL reference manuals mention that ‘commitscn’ returns a NUMBER as opposed to the VARCHAR2 returned by most parameters you can use with USERENV, but it does not list it as a valid parameter for that function.
USERENV has been deprecated since Oracle 10g (see the old 10g documentation link here about USERENV) and you have been instructed to use SYS_CONTEXT(‘userenv’,’parameter‘) as described in the 12c database SQL reference manual here. However, there is no way to get the commit SCN from SYS_CONTEXT that I can find, so I thought I’d check out if USERENV(‘COMMITSCN’) still works. It does, on my version of Oracle 12.1.0.2!
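For comparison, the documented SYS_CONTEXT route looks like the sketch below. This is just the general form; as the text above says, there is no ‘commitscn’ (or similar) parameter available:

-- the documented replacement for most USERENV parameters
-- (but no equivalent for 'commitscn')
select sys_context('userenv', 'current_schema') from dual;
select sys_context('userenv', 'sessionid') from dual;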

There are some strict limits to this function. To begin with, you can’t select it; you can only use it in an insert or update:


-- attempt a simple select of the SCN
mdw> select userenv('commitscn') from dual;
select userenv('commitscn') from dual
               *
ERROR at line 1:
ORA-01725: USERENV('COMMITSCN')  not allowed here

-- But I can use it in an insert,
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (100,'abcd',userenv('commitscn'))

1 row created.

mdw> select * from test_scn where seq_no=100

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144739

-- Now commit my new record
mdw> commit;
Commit complete.

mdw> select * from test_scn where seq_no=100
  2  /

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144753

--LOOK at the value for SCN_NO now! Compare to before the commit!

If you look at the above you will see a couple of things. The first is that, as I said, you cannot SELECT the function USERENV(‘COMMITSCN’).
The other is that, though a value is put into the column when I insert a row using it (and I see that value when I query the information back), it changes when I commit. This is because Oracle is recording something at the point of commit, not at the point the SQL statement runs – and the new SCN is only generated when you commit. A lot could have happened since I did the INSERT: I might have gone for a cup of tea while a batch job kicked off doing 1 million transactions, each with its own SCN. So though a placeholder of the current SCN is put into your view of the table row, the value put in the actual table is generated at the time of the commit.

Another limiting rule is that you can only reference USERENV(‘COMMITSCN’) once in a transaction, for one row. If I try to create two rows using the function in the same transaction I get an error, and if I try to update more than one row I also get an error:

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (101,'abcd',userenv('commitscn'))

1 row created.

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (102,'abcd',userenv('commitscn'))

insert into test_scn (seq_no,vc1,scn_no)
            *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- now test updating several records
mdw> commit;
Commit complete.

mdw> select * from test_scn;

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
         1 AAAAAAA      11143743
         2 AAAAAAA      11143746
         3 AAAAAAA      11143749
         4 AAAAAAA      11143774
         5 AAAAAAA      11143777
       100 abcd         11144753
       101 abcd         11145543

mdw> update test_scn set scn_no = userenv('commitscn');
update test_scn set scn_no = userenv('commitscn')
       *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- but one row works
mdw> update test_scn set scn_no = userenv('commitscn') where rownum =1;
1 row updated.

USERENV(‘COMMITSCN’) is old, undocumented and limited in use. So why am I looking at it, let alone telling you all about it? {Apart from the fact that a lot of you *love* this sort of obscure, tid-bitty stuff.} Well, because there is no replacement for it that I am aware of. You can get the current SCN in a couple of ways, the easiest probably being to get it from V$DATABASE:

mdw> select current_scn from v$database;

CURRENT_SCN
-----------
   11146718

However, that is the last SCN used at the time you check it, not the SCN at the time you commit – i.e. it is a different thing. I always find it irksome on those odd occasions when something is deprecated in Oracle with nothing really to replace it.
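Another documented way of getting the current SCN, assuming you have execute permission on the package, is DBMS_FLASHBACK – but again this gives you the current SCN, not your commit SCN:

-- should return the same sort of value as V$DATABASE.CURRENT_SCN
select dbms_flashback.get_system_change_number from dual;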

Below I demonstrate again that USERENV(‘COMMITSCN’) is a little special, and not the same as just selecting the SCN from V$DATABASE. Before I go any further, I think the value USERENV(‘COMMITSCN’) puts into the table is the actual COMMIT SCN minus 1. I mostly think this because Jonathan said so :-). I do see each time I run this test that the first select from V$DATABASE followed by my insert and an immediate commit results in a value in the table 2 higher than the select.

Further iterations (2nd and 3rd in this case) show the value selected from V$DATABASE and the value inserted into TEST_SCN immediately after are the same, and are 3 higher than the previous iteration. I anticipated an increase of two, once for the change to the UNDO tablespace for the insert and once for the insert. I am not sure where the third one comes in.

However, in the fourth iteration I have a PAUSE in my SQL*Plus script between checking V$DATABASE and doing my insert and, in a second session, I do some simple inserts and commits {it does not matter what, so I don’t show it}. Thus there is a bigger difference between the SCN I collected from V$DATABASE and the value inserted into the table.
Finally, in the fifth iteration, I check the value in V$DATABASE, do the insert, query it back and see the two are the same. And THEN I pause so I can do some changes and commit them in my second session. After I’ve done that I continue my first session which commits my latest insert into TEST_SCN. I check the values actually stored in the table and, just as at the top of this post, you see that the value actually preserved in the table is a later SCN than the placeholder one. It is doing something special.

(The script below includes the noddy code to create my test table and sequence, as well as the test itself.)

-- test_scn1
--create table test_scn (seq_no number,vc1 varchar2(10),scn_no number)
-- create sequence scn_seq;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
pause 'commit some stuff in second session and then press any key'
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
pause 'commit some stuff in second session again and then press any key'
select * from test_scn;
commit;
select * from test_scn;
select current_scn from v$database;

-- the output of the test
mdw> @test_scn1

--check V$DATABASE SCN
CURRENT_SCN
-----------
   11147809

-- Create and commit 1 row
1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
-- note that the inserted SCN is 2 higher than the current SCN.

--Same steps, 2nd iteration
CURRENT_SCN
-----------
   11147814

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
-- select SCN and inserted SCN are the same, 3 higher than first iteration

-- same steps, 3rd iteration
CURRENT_SCN
-----------
   11147817

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
-- select SCN and inserted SCN are the same, 3 higher than the previous iteration

-- 4th iteration, a pause
CURRENT_SCN
-----------
   11147820

'commit some stuff in second session and then press any key'
-- I did indeed change and commit some stuff in second session, before I create my record in test_scn

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
-- larger gap in SCN (11147817 to 11147831)

-- 5th iteration, pause now after insert and before commit
CURRENT_SCN
-----------
   11147834

1 row created.
'commit some stuff in second session again and then press any key'
-- I did indeed change and commit some stuff in second session 

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147834

-- Notice the current_scn from V$DATABASE and the last row in the table match, 11147834..

Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147842

-- But after the commit the SCN in row "15" has increased to 11147842.

CURRENT_SCN
-----------
   11147851

-- and the next check of V$DATABASE SCN shows the usual increase of 3 by the commit.

As you can see from the above, USERENV(‘COMMITSCN’) is doing something a little special and, despite all the limitations, I might actually have a use for it…

A little known ORDER BY extension

Within a CONNECT BY query you can order siblings: under a particular branch of the hierarchy the child entries are sorted, but the hierarchy itself is still preserved.


SQL> select lpad('*', level, '*' ) || ename ename
  2  from emp
  3  start with mgr is null
  4  connect by prior empno = mgr
  5  order SIBLINGS by ename
  6  /

ENAME
------------------------------
*KING
**BLAKE
***ALLEN      <==| 
***JAMES      <==|
***MARTIN     <==| ordered within the "BLAKE" branch
***TURNER     <==|
***WARD       <==|
**CLARK
***MILLER
**JONES
***FORD
****SMITH
***SCOTT
****ADAMS

14 rows selected.

SQL> select lpad('*', level, '*' ) || ename ename
  2  from emp
  3  start with mgr is null
  4  connect by prior empno = mgr
  5  order SIBLINGS by ename DESC
  6  /

ENAME
------------------------------
*KING
**JONES
***SCOTT
****ADAMS
***FORD
****SMITH
**CLARK
***MILLER
**BLAKE
***WARD
***TURNER
***MARTIN
***JAMES
***ALLEN

14 rows selected.

Multiple partitions

In 12c, one of the nice changes to come along is that partition maintenance operations can now be done on multiple partitions, for example


ALTER TABLE t1 MERGE PARTITIONS p01, p02, p03, p04 INTO PARTITION p0;

ALTER TABLE t1 SPLIT PARTITION p0 INTO 
  (PARTITION p01 VALUES LESS THAN (25),
   PARTITION p02 VALUES LESS THAN (50), 
   PARTITION p03 VALUES LESS THAN (75),
   PARTITION p04);

and so on. However, one of the things that you still cannot do is the same thing with SELECT:



SQL> create table T ( x int, y int )
  2  partition by hash ( x )
  3  --partitions 8
  4  (
  5  partition  p1,
  6  partition  p2,
  7  partition  p3,
  8  partition  p4,
  9  partition  p5,
 10  partition  p6,
 11  partition  p7,
 12  partition  p8
 13  )
 14  /

Table created.

SQL>
SQL> insert /*+ APPEND */ into T
  2  select rownum, rownum
  3  from dual
  4  connect by rownum <= 1000000
  5  /

1000000 rows created.

SQL> select * from T partition ( p1,p2);
select * from T partition ( p1,p2)
                          *
ERROR at line 1:
ORA-00933: SQL command not properly ended


SQL>
SQL> select * from T partition2 ( p1,p2);
select * from T partition2 ( p1,p2)
                           *
ERROR at line 1:
ORA-00933: SQL command not properly ended


So if you’re looking for a way to get access to multiple partitions easily, here’s a little trick you could employ:

  • have a partition-view style SQL statement that accesses ALL of the partitions
  • with each SQL, include a bind variable and some bitwise logic

and it will look something like this



SQL> set autotrace off
SQL>
SQL> variable b number
SQL> exec :b := 17

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on stat
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

  COUNT(*)
----------
    250498

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        524  consistent gets
          0  physical reads
          0  redo size
        544  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL> exec :b := 255

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

  COUNT(*)
----------
   1000000

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       2091  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL>

You can see from the consistent gets that we appear to be accessing just the partitions of note. A look at the execution plan shows why – a filter is applied to each SELECT in the UNION ALL, based on the bind variable, to decide whether that partition needs to be accessed:


SQL> set autotrace traceonly explain
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

Execution Plan
----------------------------------------------------------
Plan hash value: 726641459

------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |   621   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |      |     1 |            |          |       |       |
|   2 |   VIEW                    |      |  1000K|   621   (1)| 00:00:01 |       |       |
|   3 |    UNION-ALL              |      |       |            |          |       |       |
|*  4 |     FILTER                |      |       |            |          |       |       |
|   5 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     1 |     1 |
|   6 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     1 |     1 |
|*  7 |     FILTER                |      |       |            |          |       |       |
|   8 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     2 |     2 |
|   9 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     2 |     2 |
|* 10 |     FILTER                |      |       |            |          |       |       |
|  11 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     3 |     3 |
|  12 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     3 |     3 |
|* 13 |     FILTER                |      |       |            |          |       |       |
|  14 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     4 |     4 |
|  15 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     4 |     4 |
|* 16 |     FILTER                |      |       |            |          |       |       |
|  17 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     5 |     5 |
|  18 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     5 |     5 |
|* 19 |     FILTER                |      |       |            |          |       |       |
|  20 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     6 |     6 |
|  21 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     6 |     6 |
|* 22 |     FILTER                |      |       |            |          |       |       |
|  23 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     7 |     7 |
|  24 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     7 |     7 |
|* 25 |     FILTER                |      |       |            |          |       |       |
|  26 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     8 |     8 |
|  27 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(BITAND(TO_NUMBER(:B),1)=1)
   7 - filter(BITAND(TO_NUMBER(:B),2)=2)
  10 - filter(BITAND(TO_NUMBER(:B),4)=4)
  13 - filter(BITAND(TO_NUMBER(:B),8)=8)
  16 - filter(BITAND(TO_NUMBER(:B),16)=16)
  19 - filter(BITAND(TO_NUMBER(:B),32)=32)
  22 - filter(BITAND(TO_NUMBER(:B),64)=64)
  25 - filter(BITAND(TO_NUMBER(:B),128)=128)
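If you don’t want to work out the bitmask by hand, a quick sketch like the following could derive the bind value from a list of partition positions (the positions 1, 2 and 5 here are just an example, not part of the original demo):

-- hypothetical helper: sum 2^(n-1) for each partition position n
select sum(power(2, column_value - 1)) as bind_value
from   table(sys.odcinumberlist(1, 2, 5));

-- 1 + 2 + 16 = 19, so exec :b := 19 would pick up partitions p1, p2 and p5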

Video: Oracle Linux Virtual Machine (VM) on Microsoft Azure

The interface for Microsoft Azure has been re-jigged since I last did screenshots, so I did a run-through of creating an Oracle Linux VM and recorded it for my channel.

I also updated the associated article.

Cheers

Tim…



Drop Column

I published a note on AllthingsOracle a few days ago discussing the options for dropping a column from an existing table. In a little teaser for a future article I pointed out that dropping columns DOESN’T reclaim space; or rather, it probably doesn’t, and even if it did you probably wouldn’t like the way it does it.

I will be writing about “massive deletes” for AllthingsOracle in the near future, but I thought I’d expand on the comment about not reclaiming space straight away. The key point is this – when you drop a column you are probably dropping a small fraction of each row. (Obviously there are some extreme variants on the idea – for example, you might have decided to move a large varchar2() to a separate table with a shared primary key.)

If you’ve dropped a small fraction of each row you’ve freed up a small fraction of each block, which probably means the block hasn’t been identified as having available free space for inserts. In many cases this is probably a good thing – because it’s quite likely that if every block in your table were suddenly labelled as having sufficient free space for new rows you could end up with a difficult and ongoing performance problem.

Many large tables have a “time-based” component to their usage – as time passes the most recently entered rows are the ones that get most usage, and older rows are no longer accessed; this means you get a performance benefit from caching because the most useful fractions of such tables are often well cached and the “interesting” data is fairly well clustered.

In a case like this, imagine what will happen if EVERY block in your table suddenly acquires enough free space to accept a couple of new rows – over the next few days the incoming data will be spread across the entire length of the table, and for the next couple of months, or years, you will have to keep the entire table cached in memory if performance is to stay constant; moreover the clustering_factor of the most useful indexes is likely to jump from “quite small” to “absolutely massive”, and the optimizer will start changing lots of plans because it will decide that your favourite indexes are probably much too expensive to use.

I am, of course, painting a very grim picture – but it is a possible scenario that should be considered before you drop a column from a table. Combined with my observations about the locking and overheads of dropping a column you might (and probably ought to) decide that you should never drop a column: you should only mark it as unused, or (better still, if you’re on 12c) mark it invisible for a while before marking it unused. You can worry about space reclamation at a later date, when you have considered all the ramifications of how it might impact performance.
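A minimal sketch of that staged approach (the table t1 and column col1 are made up for illustration):

-- 12c onwards: hide the column first, so it can be made visible again quickly if something breaks
alter table t1 modify (col1 invisible);

-- later, once you are confident nothing needs it
alter table t1 set unused (col1);

-- much later, in a quiet maintenance window, if you really must reclaim the space
-- alter table t1 drop unused columns;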

Footnote: If you’re still using freelist management then dropping a column won’t put a block on the freelist until the total used space in the block falls below the value dictated by pctused (default 40%); if you’re using ASSM then the block doesn’t become available for reuse until (by default) the free space exceeds 25% of the block’s usable space.


DML Operations On Partitioned Tables Can Restart On Invalidation

It's probably not that well known that Oracle can actually rollback / re-start the execution of a DML statement should the cursor become invalidated. By rollback / re-start I mean that Oracle performs a statement-level rollback (so any modification already performed by that statement up to that point gets rolled back), performs another optimization phase of the statement on re-start (due to the invalidation) and begins the execution of the statement from scratch. Note that this can happen multiple times - in fact it's possible to end up in a kind of infinite loop, leading to statements that run for a very, very long time (I've seen statements on Production environments executing for several days although a single execution would only take minutes).

The pre-requisites to meet for this to happen are not that complex or exotic:

  • The target table to manipulate needs to be partitioned
  • The cursor currently executing gets invalidated - either by running DDL (typically think of partition-related operations) or simply by gathering statistics on one of the objects involved in the statement
  • The DML statement hasn't yet touched one of the partitions of the target table but attempts to do so after the cursor got invalidated

When the last condition is met, the statement performs a rollback, and since it got invalidated - which is one of the conditions to be met - another optimization phase happens, meaning that it's also possible to get different execution plans for the different execution attempts. When the execution plan is ready the execution begins from scratch.

According to my tests the issue described here applies to both conventional and direct-path inserts, and to merge statements (insert / update / delete), with both serial and parallel execution. I haven't explicitly tested UPDATE and DELETE statements, but the assumption is that they are affected, too.

The behaviour is documented in the following note on MOS: "Insert Statement On Partitioned Tables Is RE-Started After Invalidation (Doc ID 1462003.1)", which links to Bug "14102209 : INSERT STATEMENT' IS RESTARTING BY ITSELF AFTER INVALIDATION", where you can also find some more comments on this behaviour.

The issue seems to be that at that point Oracle is no longer sure whether the partition information compiled into the cursor for the partitioned target table is still correct (internally it raises and catches a corresponding error, like "ORA-14403: Cursor invalidation detected after getting DML partition lock", leading to the re-try), so it needs to refresh that information - hence the re-optimization and re-start of the cursor.

Note that this also means the DML statement might already have performed modifications to other partitions when, after being invalidated, it attempts to modify a partition it hasn't touched yet - all it takes is one attempt to modify a partition not yet touched by that statement. It's also kind of nasty that the statement keeps running the potentially lengthy query part after being invalidated, only to find out that it needs to re-start when the first row is applied to a target table partition not touched yet.

Applications typically run into this problem when they behave like the following:

  • There are longer-running DML statements that typically take several seconds / minutes until they actually attempt to modify the partitioned target table
  • They use DBMS_STATS to gather stats on one of the involved tables, typically using NO_INVALIDATE=>FALSE, which leads to an immediate invalidation of all affected cursors
  • And/or they perform partition-related operations on one of the tables involved, like truncating, creating or exchanging partitions. It is important to point out that it doesn't matter which object gets DDL / stats applied - it's not limited to activity on the partitioned target table being modified; any object involved in the query can cause the cursor invalidation

In principle this is another variation of the general theme "Don't mix concurrent DDL with DML/queries on the same objects". Doing so leads to all kinds of side effects, and the way the Oracle engine is designed means that it doesn't cope very well with it.

Here is a simple test case for reproducing the issue, using INSERTs in this case (either via INSERT or MERGE statement):


create table t_target (
id number(*, 0) not null,
pkey number(*, 0) not null,
filler varchar2(500)
)
--segment creation immediate
partition by range (pkey) --interval (1)
(
partition pkey_0 values less than (1)
, partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
);

create table t_source
compress
as
select 1 as id, rpad('x', 100) as filler
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3),
(select /*+ cardinality(1e0) */ null from dual connect by level <= 1e0)
union all
select 1 as id, rpad('y', 100) as filler from dual;

-- Run this again once the DML statement below got started
exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

exec dbms_stats.gather_table_stats(null, 't_target', no_invalidate=>false)

----------------------------------------------------------------------------------------------------------------------------------
-- INSERT example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

-- alter session set tracefile_identifier = 'insert_restart';

-- alter session set events '10046 trace name context forever, level 12';

-- exec sys.dbms_monitor.session_trace_enable(waits => true, binds => true/*, plan_stat => 'all_executions'*/)

insert /* append */ into t_target (id, pkey, filler)
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
;

-- exec sys.dbms_monitor.session_trace_disable

----------------------------------------------------------------------------------------------------------------------------------
-- MERGE example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

merge /* append */ into t_target t
using (
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
) s
on (s.id = t.id)
when not matched then
insert (id, pkey, filler) values (s.id, s.pkey, s.filler)
;

The idea of the test case is to maximise the time until each UNION ALL branch produces data to insert, by performing an inefficient HASH JOIN (which in fact generates a Cartesian product and needs to apply a costly REGEXP filter to that huge intermediate result) and by forcing a sort on the join result, so rows are only handed over to the parent operations once all rows have been processed in the join operation - and each branch generates data for a different partition of the target table. Typically it should take several seconds per branch to execute (if you need more time just un-comment the additional REGEXP_REPLACE filters), so you should have plenty of time to cause the invalidation from another session.

This means that invalidating the cursor during the execution of any branch (for example by executing either of the two DBMS_STATS calls on the source or target table using NO_INVALIDATE=>FALSE) will lead to a re-start of the statement at the next attempt to write into a new target partition, possibly rolling back rows already inserted into other partitions.

Diagnostics

If you run the provided INSERT or MERGE statement on newer versions of Oracle that include SQL_EXEC_START and SQL_EXEC_ID in V$ACTIVE_SESSION_HISTORY (or V$SESSION for that matter), and you invalidate the cursor during execution and before a partition of the target table gets inserted into for the first time, then you can see these entries change as the statement restarts.

In such cases the INVALIDATIONS and LOADS increase in V$SQL accordingly, and the OBJECT_STATUS changes from INVALID_UNAUTH back to VALID with each re-start attempt. In newer versions, where you can configure the "plan_stat" information for SQL trace to "all_executions", you'll find STAT lines for each execution attempt dumped to the trace file, but only a single final EXEC line, where the elapsed time covers all execution attempts.

The oldest version I've tested was 10.2.0.4, and that one already showed the re-start behaviour, although I would be inclined to think that this wasn't the case with older versions. So if anybody still runs a version older than 10.2.0.4 I would be interested to hear whether the behaviour reproduces or not.
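As a rough way to watch this from a second session, something along the following lines should show those counters moving with each re-start (the SQL_TEXT filter simply matches the INSERT test case above):

-- invalidations / re-loads of the affected cursor
select sql_id, invalidations, loads, executions, object_status
from   v$sql
where  sql_text like 'insert /* append */ into t_target%';

-- SQL_EXEC_START / SQL_EXEC_ID change with each re-start of the same statement
select sid, sql_id, sql_exec_start, sql_exec_id
from   v$session
where  status = 'ACTIVE'
and    sql_id is not null;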

Oracle 12c – PL/SQL in SELECT (WITH & PRAGMA UDF)


It is often useful to execute PL/SQL functions as part of an SQL SELECT statement (or other DML). When SQL calls PL/SQL, a “context switch” occurs as execution moves between the SQL and PL/SQL engines. In Oracle Database 12c Oracle has given us two tools to help reduce the impact of this context switching: the first is the ability to add a local function definition to the SELECT via the WITH clause; the second is to flag a function using PRAGMA UDF so that the compiler and optimizer know it is to be used from SQL. Both options can measurably improve performance if used correctly.

To achieve maximum performance gains, the PL/SQL function involved should not call other PL/SQL procedures or functions, nor should PRAGMA UDF functions be called from “normal” PL/SQL (i.e. from outside SQL).
 

Normal Function Use in SQL (for comparison)

The following function definition and subsequent tests provide a baseline for comparison of WITH and PRAGMA UDF performance.

create or replace function times_42(inval number)
  return number 
as
begin
  return inval * 42;
end;
/
select channel_id,count(*) nbr_rows,sum(quantity_sold) qtysold, sum(times_42(cust_id)) cust42
  from sh.sales
  group by channel_id
  order by channel_id;

Elapsed (Wall-Clock) Execution Times

Running the above code three times (using SQL*Plus SET TIMING ON) resulted in elapsed times as follows:

1 – 2.018

2 – 1.945

3 – 1.928

 

WITH

Oracle 12c allows definition of PL/SQL Functions and Procedures using SQL’s Common Table Expression (WITH).

Defining PL/SQL locally reduces SQL-PL/SQL context-switching costs perhaps significantly improving performance.

Some points of caution:

  • Local PL/SQL overrides stored PL/SQL with the same name
  • Local PL/SQL is not stored in the database
  • Local PL/SQL is part of the same source code as the SQL that uses it
    (and would have to be copied if reused, yow!)
  • PL/SQL Result Cache does not cache Local PL/SQL

To use the feature in a sub-query SELECT, the main query must use the “/*+ WITH_PLSQL */” hint to avoid a syntax error.
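For example, a fragment along these lines would need the hint, because the PL/SQL-bearing WITH clause is not in a top-level SELECT (the table t2 and function double_it are made up for illustration):

insert /*+ WITH_PLSQL */ into t2
with function double_it(n number)
      return number
     as
     begin
        return n * 2;
     end;
select double_it(rownum) from dual connect by rownum <= 3
/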
 

Example Using WITH

with function times_42(inval number)
      return number
     as
     begin
        return inval * 42;
     end;
select channel_id,count(*) nbr_rows,
             sum(quantity_sold) qtysold,
             sum(times_42(cust_id)) cust42
  from sh.sales
  group by channel_id
  order by channel_id
/

 

Elapsed (Wall-Clock) Execution Times

Running the above code three times (using SQL*Plus SET TIMING ON) resulted in elapsed times as follows:

1 – .854

2 – .825

3 – .929

 

PRAGMA UDF

A related tool to reduce context switching in Oracle 12c allows functions to be defined using “PRAGMA UDF”, specifying that the function will be used from SQL (behaving similarly to a function defined in a WITH clause). This optimizes the code for use within a SELECT or other SQL. Using PRAGMA UDF is probably not a good option for functions also called directly from PL/SQL (it’s not nice to lie to the database).
 

Example Using PRAGMA UDF

create or replace function times_42(inval number)
  return number
as
  pragma udf;
begin
  return inval * 42;
end;
/
select channel_id,count(*) nbr_rows,
           sum(quantity_sold) qtysold, 
           sum(times_42(cust_id)) cust42
  from sh.sales
  group by channel_id
  order by channel_id;

 

Elapsed (Wall-Clock) Execution Times

Running the above code three times (using SQL*Plus SET TIMING ON) resulted in elapsed times as follows:

1 – .667

2 – .602

3 – .664

 

Performance Implications

All examples in this article use the Oracle-supplied SH.SALES table (>900k rows) running on a single-CPU laptop with Oracle 12c Standard Edition, to allow you to recreate them. I was unable to find a tool to accurately measure context switching (if you know of one please send me a note), so elapsed time is used in the comparisons.

Both the WITH and PRAGMA UDF options provide a dramatic improvement in runtimes, and in my testing (with other data sources and functions too) PRAGMA UDF seemed to consistently out-perform a local function in WITH. Here is a table comparing the elapsed times shown above:

                                    1st Run   2nd Run   3rd Run
Compiled Function in Database         2.018     1.945     1.928
Function in WITH                      0.854     0.825     0.929
Compiled UDF Function in Database     0.667     0.602     0.664

 

Conclusion

In this case my first choice is to use PRAGMA UDF because having a single copy of the function code seems so much more maintainable than having code repeated; the consistently better performance provides an added incentive.

Clearly, there are savings to be had when PL/SQL is called from SQL with a couple of caveats:

  • If the PL/SQL in question calls other PL/SQL, then WITH and UDF might not be the best choice.
  • If a PL/SQL function will also be called from PL/SQL, UDF may hurt performance there, since the optimization will be inappropriate for that usage.

As always when performance is involved; don’t take my word for it (or anyone else’s). Test, test, and test again in your own environment with your own data!

 

Trying out Jenkins

Jenkins seems to be the top code build, integration and QA automation tool to support continuous delivery.

Jenkins allows me to kick off jobs on a schedule or after certain events, and it will keep track of the status of the jobs and the results.

To get Jenkins installed, I read wiki.jenkins-ci.org and then read biouno.org to try my first Jenkins example.

I installed Jenkins on one of the VMs from the “landshark” demo environment for Delphix Express. Landshark consists of two Linux CentOS VMs that provide source and target examples for Delphix Express. I used the “target” VM but could have just as easily used the “source” machine.

Installing Jenkins

To install Jenkins I did:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

sudo yum install jenkins

Then

service jenkins start

#999999;">#000000;">This gives the message “OK” but actually Jenkins dies. The workaround was to edit /etc/sysconfig/jenkins and change

JENKINS_AJP_PORT="8009"

to

JENKINS_AJP_PORT="8010"

Starting Jenkins by hand also worked, even without the workaround:

#000000;">cd /usr/lib/jenkins 
java -jar jenkins.war

I then accessed the Jenkins console at http://my_linux_vm:8080

Running Jenkins

Click on “create new jobs”


Name the job

Select “Freestyle project”


Schedule the “date” command to be run every 5 minutes by doing:

  • Fill in “Build Triggers” by checking “Build periodically” and type in “H/5 * * * *”
  • Under build choose “Execute Shell” and type in “echo `date`”


After hitting “Save” and letting 5 minutes pass, you’ll see a new entry in “Build History” at the bottom left. Click on “#1”.


Then click “Console Output”


and you will see the output of the shell script we scheduled to be run every 5 minutes.


 

See Sharp Objects, Run with Sharp Objects, Crash with Sharp Objects – Obscure and Incorrect Error Messages in Commercial Products

January 15, 2016 Since October 2015 I have been fighting with an application programming interface (API) that works with an enterprise resource planning (ERP) system. The API, which uses Microsoft .Net 4.0 or higher, is intended to simplify the process of automating the creation of various types of documents within the ERP system. For example, the API […]