Oakies Blog Aggregator

Parallel First_rows()

A recent posting on OTN raised the question of whether or not the “parallel” hint and the “first_rows(n)” hint were mutually incompatible. This reminded me that from time to time other posters on OTN (copying information from various websites, perhaps) have claimed that “parallel doesn’t work with first_rows” or, conversely, “first_rows doesn’t work with parallel”. This is one of those funny little myths that is so old that the script I’ve got to demonstrate the misconception is dated 2003, with 8.1.7.4 as the first version tested.

Since I haven’t run the test on any version of Oracle newer than 9.2.0.4 I thought it was time to dust it down, modernise it slightly, and run it again. So here’s the bit that creates a sample data set:


create table t1 (
        id      number,
        v1      varchar2(10),
        padding varchar2(100),
        constraint      t_pk primary key(id) using index local
)
partition by range(id) (
        partition p1000 values less than (1000),
        partition p2000 values less than (2000),
        partition p3000 values less than (3000),
        partition p4000 values less than (4000),
        partition p5000 values less than (5000)
)
;

insert into t1
select
        rownum - 1,
        rpad(rownum-1,10),
        rpad('x',100)
from
        all_objects
where
        rownum <= 5000 -- > hint to avoid WordPress formatting issue
order by 
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1', 
                method_opt       => 'for all columns size 1'
        );
end;
/

Now I’m going to run a simple query, hinted in 4 different ways:

  • no hints
  • parallel hint only: /*+ parallel */
  • first_rows(1) hint only: /*+ first_rows(1) */
  • parallel and first_rows(1): /*+ parallel first_rows(1) */

Here’s the version of the query that has both hints in place:


set serveroutput off
set linesize 156
set pagesize 60
set trimspool on

select
        /*+ parallel first_rows(1) */
        v1
from
        t1
where
        id between 1500 and 2000
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline'));

I’ve actually run the query and used the display_cursor() option to pull the plan from memory – in the original (8i) script I used autotrace and the old (deprecated, backwards compatibility only) first_rows hint. To do any other tests just clone and edit. Here are the 4 outputs from the call to display_cursor() – with a little cosmetic editing:


SQL_ID  63qnzam9b8m9g, child number 0
=====================================
select  /*+ */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 277861402

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR|      |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
|*  2 |   TABLE ACCESS FULL      | T1   |   502 |  7530 |    15   (0)| 00:00:01 |     2 |     3 |
-------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("ID"<=2000 AND "ID">=1500))


SQL_ID  ahary3u8q88mq, child number 1
=====================================
select  /*+ parallel */  v1 from  t1 where  id between 1500 and 2000

Plan hash value: 9959369

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       |     8 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   502 |  7530 |     8   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|*  4 |     TABLE ACCESS FULL| T1       |   502 |  7530 |     8   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(:Z>=:Z AND :Z<=:Z)
       filter(("ID"<=2000 AND "ID">=1500))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2


SQL_ID  3m6mnk9b337dd, child number 0
=====================================
select  /*+ first_rows(1) */  v1 from  t1 where  id between 1500 and
2000

Plan hash value: 1044541683

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |      |       |       |     6 (100)|          |       |       |
|   1 |  PARTITION RANGE ITERATOR          |      |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| T1   |     4 |    60 |     6   (0)| 00:00:01 |     2 |     3 |
|*  3 |    INDEX RANGE SCAN                | T_PK |       |       |     2   (0)| 00:00:01 |     2 |     3 |
-----------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1500 AND "ID"<=2000) -- > needs edit to avoid WordPress formatting issue


SQL_ID  9asm7t1zbv4q8, child number 1
=====================================
select  /*+ parallel first_rows(1) */  v1 from  t1 where  id between
1500 and 2000

Plan hash value: 4229065483

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |          |       |       |     3 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                | :TQ10000 |     4 |    60 |     3   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION RANGE ITERATOR       |          |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| T1       |     4 |    60 |     3   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
|*  5 |      INDEX RANGE SCAN                | T_PK     |       |       |     1   (0)| 00:00:01 |     2 |     3 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
      DB_VERSION('11.2.0.4')
      FIRST_ROWS(1)
      SHARED(2)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."ID"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("ID">=1500 AND "ID"<=2000)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Critically, we get four different execution plans from the four different strategies, so clearly the optimizer is perfectly happy to accept the parallel and first_rows() hints simultaneously. Note, particularly, how adding the first_rows(1) hint to the parallel hint moved us from a parallel full tablescan to a parallel index range scan.

Whether or not it’s sensible to use the hint combination in this way is a matter for careful consideration, of course, but there could be circumstances where the combination really is the best way to get the starting row(s) from a query that otherwise has to return a large amount of data.

Completing JWT_NINJA for full JWT support

In the single blog post I wrote on the JWT_NINJA package back in June last year, I ended the post by saying that I needed to implement both a verify and a parse function for the package.

Well, it took me a year, but I finally had a reason to go and implement it. As I am getting into Oracle ORDS and trying to get some experience in building REST services, I want to be able to use JWT as the token generator for the application that I am building.

So a quick refresh on how to generate a token using the package:


select jwt_ninja.jwt_generate(p_signature_key => 'my secret password', p_reg_claim_issuer => 'My Application', p_reg_claim_subject => '147654') as token from dual;

TOKEN 
-------------------------------------------------------------------------------------------------------------------------------------
eyAiYWxnIjogIkhTMjU2IiwgInR5cCI6ICJKV1QiIH0.eyAiaWF0IjogMTQ5MTcxNzIxODAwMCwgImlzcyI6ICJNeSBBcHBsaWNhdGlvbiIs
ICJzdWIiOiAiMTQ3NjU0In0.Uy6KeKeO1/52+URTbhk1+spOIO3OYamLhViPJ1ZmjUU 

Now that we have a token, we would store it on the client side, and every request from the client would include it for us to validate. To validate the token there is now a procedure jwt_ninja.jwt_verify_and_decode that does just that. The third argument (p_do_parse) indicates whether you only want the token verified or also want the fields inside the payload parsed. The default is to verify only the validity of the signature and the message, without parsing. If it is set to true, the values of the different JWT claims will be set in the output parameters of the procedure.


declare
  mysecret varchar2(4000) := 'my secret password';
  mytoken varchar2(4000) := jwt_ninja.jwt_generate(p_signature_key => 'my secret password', p_reg_claim_issuer => 'My Application', p_reg_claim_subject => '147654');
  doparse boolean := false;
  myresult boolean;
  l_reg_claim_issuer      varchar2(4000) := null;
  l_reg_claim_subject     varchar2(4000) := null;
  l_reg_claim_audience    varchar2(4000) := null;
  l_reg_claim_expiration  date := null;
  l_reg_claim_notbefore   date := null;
  l_reg_claim_issuedat    date := null;
  l_reg_claim_jwtid       varchar2(4000) := null;
begin
  jwt_ninja.jwt_verify_and_decode(mytoken, mysecret, doparse, myresult, l_reg_claim_issuer, l_reg_claim_subject, l_reg_claim_audience, l_reg_claim_expiration, l_reg_claim_notbefore, l_reg_claim_issuedat, l_reg_claim_jwtid);
  if myresult then
    dbms_output.put_line('Verified');
    dbms_output.put_line('Iss: ' || l_reg_claim_issuer);
    dbms_output.put_line('Sub: ' || l_reg_claim_subject);
  else
    dbms_output.put_line('Not verified');
  end if;
end;
/

So now I can "close" this codemonth for now, since I can start to use it in my other project.

Service “696c6f76656d756c746974656e616e74” has 1 instance(s).

Weird title, isn’t it? That was my reaction when I did my first ‘lsnrctl status’ in 12.2: weird service name… If you have installed 12.2 multitenant, then you have probably seen this strange service name registered in your listener. One per PDB. It is not a bug. It is an internal service used to connect to the remote PDB for features like Proxy PDB. This name is the GUID of the PDB which makes this service independent of the name or the physical location of the PDB. You can use it to connect to the PDB, but should not. It is an internal service name. But on a lab, let’s play with it.

CDB

I have two Container Databases on my system:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO

CDB2 has been created without any pluggable databases (except PDB$SEED of course).

18:01:33 SQL> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
     4 PDB1      READ WRITE NO

CDB1 has one pluggable database PDB1.

PDB1 has its system files in /u01/oradata/CDB1/PDB1/ and I’ve a user tablespace datafiles elsewhere:

18:01:33 SQL> select con_id,file_name from cdb_data_files;
CON_ID FILE_NAME
------ -------------------------------------
     1 /u01/oradata/CDB1/users01.dbf
     1 /u01/oradata/CDB1/undotbs01.dbf
     1 /u01/oradata/CDB1/system01.dbf
     1 /u01/oradata/CDB1/sysaux01.dbf
     4 /u01/oradata/CDB1/PDB1/undotbs01.dbf
     4 /u01/oradata/CDB1/PDB1/sysaux01.dbf
     4 /u01/oradata/CDB1/PDB1/system01.dbf
     4 /u01/oradata/CDB1/PDB1/USERS.dbf
     4 /var/tmp/PDB1USERS2.dbf

Both are registered to the same local listener:

SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 07-APR-2017 18:01:33
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 07-APR-2017 07:53:06
Uptime 0 days 10 hr. 8 min. 27 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Each container database registers its db_unique_name as a service (CDB1 and CDB2), with an XDB service for each (CDB1XDB and CDB2XDB), and each pluggable database also has its own service: PDB1 here. This is what we had in 12.1, but in 12.2 there is one more service with a strange hexadecimal name: 4aa269fa927779f0e053684ea8c0c27f

Connect to PDB without a service name?

Want to know more about it? Let’s try to connect to it:

SQL> connect sys/oracle@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=1521))) as sysdba
Connected.
SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME') SYS_CONTEXT('USERENV','CON_NAME') SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------- --------------------------------- -------------------------------------
CDB1                              PDB1                              SYS$USERS

With this service, I can connect to the PDB1 but the service name I used in the connection string is not a real service:

SQL> select name from v$services;
 
NAME
----------------------------------------------------------------
pdb1
 
SQL> show parameter service
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      CDB1

The documentation says that SYS$USERS is the default database service for user sessions that are not associated with services so I’m connected to a PDB here without a service.

GUID

The internal service name is the GUID of the PDB, which identifies the container even after unplug/plug.

SQL> select pdb_id,pdb_name,con_uid,guid from dba_pdbs;
 
PDB_ID PDB_NAME CON_UID    GUID
------ -------- ---------- --------------------------------
     4 PDB1     2763763322 4AA269FA927779F0E053684EA8C0C27F

Proxy PDB

This internal service was introduced in 12cR2 for the Proxy PDB feature: access to a PDB through another one, so that you don’t have to change the connection string when you migrate the PDB to another server.

I’ll create a Proxy PDB in CDB2 to connect to PDB1, which is in CDB1. This is simple: create a database link, then use it for the creation of the Proxy PDB, which I call PDB1PX1:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
 
18:01:33 SQL> create database link CDB1 connect to system identified by oracle using '//localhost/CDB1';
Database link CDB1 created.
 
18:01:38 SQL> create pluggable database PDB1PX1 as proxy from PDB1@CDB1
file_name_convert=('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB1/PDB1PX1');
 
Pluggable database PDB1PX1 created.
 
18:02:14 SQL> drop database link CDB1;
Database link CDB1 dropped.

The Proxy PDB clones the system tablespaces, which is why I had to give a file_name_convert. Note that the user tablespace datafile is not cloned, so I don’t need to convert ‘/var/tmp/PDB1USERS2.dbf’. The dblink is not needed once the Proxy PDB is created, as it is used only for the clone of the system tablespaces. The PDB is currently mounted.

18:02:14 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:02:14 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
     3 PDB1PX1   MOUNTED

The system tablespaces are there (I’m in 12.2 with local undo, which is required for the Proxy PDB feature):

18:02:14 SQL> select con_id,file_name from cdb_data_files;
 
CON_ID FILE_NAME
------ -------------------------------
     1 /u01/oradata/CDB2/system01.dbf
     1 /u01/oradata/CDB2/sysaux01.dbf
     1 /u01/oradata/CDB2/users01.dbf
     1 /u01/oradata/CDB2/undotbs01.dbf

I open the PDB

18:02:19 SQL> alter pluggable database PDB1PX1 open;
Pluggable database PDB1PX1 altered.

connect

I now have 3 ways to connect to PDB1: with the PDB1 service, with the internal service, and through the Proxy PDB service.
I’ve tested all 3:


18:02:45 SQL> connect demo/demo@//localhost/PDB1
18:02:56 SQL> connect demo/demo@//localhost/PDB1PX1
18:03:06 SQL> connect demo/demo@//localhost/4aa269fa927779f0e053684ea8c0c27f

and each time I’ve inserted the information about my connection into a DEMO table:
SQL> insert into DEMO select '&_connect_identifier' "connect identifier", current_timestamp "timestamp", sys_context('userenv','cdb_name') "CDB name", sys_context('userenv','con_name') "con name" from dual;

Here is the result:

connect identifier                           timestamp                       CDB name container name
-------------------------------------------- ------------------------------- -------- --------------
//localhost/PDB1                             07-APR-17 06.02.50.977839000 PM CDB1     PDB1
//localhost/PDB1PX1                          07-APR-17 06.03.01.492946000 PM CDB1     PDB1
//localhost/4aa269fa927779f0e053684ea8c0c27f 07-APR-17 06.03.11.814039000 PM CDB1     PDB1

We are connected to the same database each time. Since for this test I’m on the same server with the same listener, I can check what is logged in the listener log.

Here are the $ORACLE_BASE/diag/tnslsnr/$(hostname)/listener/alert/log.xml entries related to my connections.

//localhost/PDB1

When connecting directly to PDB1 the connection is simple:


type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
07-APR-2017 18:02:45 * (CONNECT_DATA=(SERVICE_NAME=PDB1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27523)) * establish * PDB1 * 0


I am connecting with SQLcl which is java: (PROGRAM=java)

//localhost/PDB1PX1

When connecting through the Proxy PDB I see the connection to the proxy PDB1PX1:


type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
07-APR-2017 18:02:56 * (CONNECT_DATA=(SERVICE_NAME=PDB1PX1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27524)) * establish * PDB1PX1 * 0


This is the java connection. But I can also see the connection made to the remote PDB1 by the Proxy PDB:


type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
07-APR-2017 18:03:01 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=oracle)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.78.104)(PORT=16787)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0


Here the program is (PROGRAM=oracle), which is a CDB2 instance process connecting to the remote CDB1 through the internal service.

//localhost/4aa269fa927779f0e053684ea8c0c27f

When I connect to the internal service, I see the same connection to PDB1’s GUID service, but made from (PROGRAM=java) directly:


type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
07-APR-2017 18:03:06 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27526)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0


One more…

So each user PDB, in addition to the PDB name and any additional services you have defined, registers an extra internal service, whether the PDB is open or closed. And the fun part is that a Proxy PDB also registers this additional service. Here is my listener status:


Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "4c96bda23b8e41fae053684ea8c0918b" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1px1" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

This “4c96bda23b8e41fae053684ea8c0918b” is the GUID of the Proxy PDB.

SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------------------------------------------------------
CDB1
PDB1
SYS$USERS

So that’s a fourth way to connect to PDB1: through the internal service of the Proxy PDB.
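Since the internal service name is simply the PDB GUID in lowercase, mapping the hexadecimal listener services back to their PDBs is a one-liner against DBA_PDBS (a sketch; run it as SYSDBA in each CDB root):

```sql
-- Map the hexadecimal listener services back to their PDBs:
-- the internal service name is the GUID column in lowercase.
select pdb_name,
       lower(guid) as internal_service
from   dba_pdbs;
```

On my system this returns 4aa269fa927779f0e053684ea8c0c27f for PDB1 in CDB1 and 4c96bda23b8e41fae053684ea8c0918b for PDB1PX1 in CDB2, matching the listener output above.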

Then you can immediately imagine what I tried…

ORA-65280

Because the internal service name is used to connect through a Proxy PDB, can I create a proxy for the proxy?

18:03:32 SQL> create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
2 file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2');
 
Error starting at line : 76 File @ /media/sf_share/122/blogs/proxypdb.sql
In command -
create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2')
Error report -
ORA-65280: The referenced pluggable database is a proxy pluggable database.

The answer is no: you cannot nest Proxy PDBs.

So what?

Don’t panic when looking at the services registered in the listener. Those hexadecimal service names are expected in 12.2, one per user PDB. You see them, but have no reason to use them directly. You will use them indirectly when creating a Proxy PDB, which makes the location where users connect independent of the physical location of the PDB. Very interesting for migrations, because the client configuration is unaffected by the move (think hybrid cloud). You can use this feature even without the multitenant option. Want to see all the multitenant architecture options available without the option? Look at the ITOUG Tech Day agenda.

 

The article Service “696c6f76656d756c746974656e616e74” has 1 instance(s). appeared first on Blog dbi services.

Take a random REST

I needed to test ORDS for some web-service-related projects at work, so I decided to build a REST interface for all the different random data functions in the RANDOM_NINJA package to test out just how easy it is.

The first order of business is to set up ORDS. As always, Tim over at oracle-base has a couple of really good articles on installing and setting up ORDS itself, so there is no need for me to duplicate that.

After setting up ORDS, and enabling my schema for rest services like in this oracle-base article, it is time to enable the functions in the different data domain packages. I decided to keep the URL scheme simple, so that the path would be /[package_name]/[function_name]/ for any of the functions that are part of the RANDOM_NINJA package.
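As a sketch of what that enabling can look like with the ORDS PL/SQL API (the module, template and function names below are my own placeholders; random_string stands in for one of the RANDOM_NINJA functions):

```sql
-- Hypothetical example: expose a random_ninja function as
-- GET /<schema>/random_ninja/string/
-- following the /[package_name]/[function_name]/ scheme.
begin
  ords.define_module(
    p_module_name => 'random_ninja',
    p_base_path   => '/random_ninja/'
  );
  ords.define_template(
    p_module_name => 'random_ninja',
    p_pattern     => 'string/'
  );
  ords.define_handler(
    p_module_name => 'random_ninja',
    p_pattern     => 'string/',
    p_method      => 'GET',
    p_source_type => ords.source_type_collection_feed,
    p_source      => 'select random_ninja.random_string as value from dual'
  );
  commit;
end;
/
```

One module per package with one template/handler pair per function keeps the URL scheme regular, so new functions can be exposed by generating these calls rather than hand-writing them.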

Apex Interactive Grid and IOT’s

I love the interactive grid in Application Express.  And here’s why… (Warning: ranting mode is now on.)

You can tell people

  • here’s an application built with almost no code, in fact, you probably could have built it yourself,
  • it’s multi-user, with optimistic locking built in for you,
  • it’s secure,
  • it’s backed up and recoverable,
  • it scales ridiculously well,
  • it doesn’t need any complicated middle tier or software libraries,
  • it can be accessed anywhere you have a browser… which is… anywhere!
  • it has a responsive look and feel,
  • it was built with software that doesn’t cost a single dollar,
  • it centralises the data so you have a single source of truth

and after you have told them all of that….do you know what they’ll say ?

“Yeah…but I like to double-click on a field to edit it…So I’ll just use Excel and store it on my hard drive”

AGGGGHHHHHHHH!!!!!!!!!!!!!  Somebody…..shoot….me……

Interactive grids blow that flaccid argument out of the water!  So get on board to Application Express 5.1+ for some interactive grid awesome-ness.

One discovery I did make with interactive grids is that if you base the grid on an index-organized table, you will erroneously get a ROWID column in your grid:

[Screenshot: interactive grid on an index-organized table, showing the spurious ROWID column]

 

This is not a major drama – just delete the item from the designer and it will still work just fine, and this minor detail is fixed in an upcoming release.
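For anyone wanting to reproduce this, an index-organized table is simply a table whose rows are stored in its primary key index rather than in a heap (a minimal sketch; the table and column names are mine):

```sql
-- Minimal index-organized table: the rows live inside the
-- primary key index segment, so there is no heap segment and
-- no physical ROWID, only logical (UROWID) row identifiers.
create table iot_demo (
        id     number primary key,
        descr  varchar2(50)
)
organization index;
```

Because an IOT has no physical ROWIDs, a ROWID column in a grid based on it can never be meaningful, which is why the column shown in the screenshot is spurious.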

12cR2 DML monitoring and Statistics Advisor

Monitoring DML to get an idea of the activity on our tables is not new. The number of inserts/updates/deletes/truncates since the last stats gathering is tracked automatically, and the statistics gathering job uses it to list and prioritize tables that need fresh statistics. That is fine for slowly changing tables. In 12.2 the Statistics Advisor goes further, with a rule that detects volatile tables:

SQL> select * from V$STATS_ADVISOR_RULES where rule_id=14;
 
RULE_ID NAME              RULE_TYPE DESCRIPTION                                                 CON_ID
------- ----------------- --------- ----------------------------------------------------------- ------
     14 LockVolatileTable OBJECT    Statistics for objects with volatile data should be locked       0

But to detect volatile tables, you need to track DML frequency with finer grain. Let’s investigate what is new here in 12.2

Statistics Advisor tracing

DBMS_STATS has a trace mode, enabled as a global preference. It is not documented, but it works with powers of two. 12.1.0.2 introduced 262144 to trace system statistics gathering, so let’s try the next one: 524288.

SQL> exec dbms_stats.set_global_prefs('TRACE',0+524288)
PL/SQL procedure successfully completed.

After a while, I grepped my trace directory for DBMS_STATS and found the MMON slave trace (ORCLA_m001_30694.trc here):

*** 2017-04-06T14:10:11.979283+02:00
*** SESSION ID:(81.2340) 2017-04-06T14:10:11.979302+02:00
*** CLIENT ID:() 2017-04-06T14:10:11.979306+02:00
*** SERVICE NAME:(SYS$BACKGROUND) 2017-04-06T14:10:11.979309+02:00
*** MODULE NAME:(MMON_SLAVE) 2017-04-06T14:10:11.979313+02:00
*** ACTION NAME:(Flush KSXM hash table action) 2017-04-06T14:10:11.979317+02:00
*** CLIENT DRIVER:() 2017-04-06T14:10:11.979320+02:00
 
...
 
DBMS_STATS: compute_volatile_flag: objn=74843, flag=0, new_flag=0, inserts_new=619, updates_new=0, deletes_new=0, inserts_old=619, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74862, flag=0, new_flag=0, inserts_new=4393, updates_new=0, deletes_new=0, inserts_old=4393, updates_old=0, deletes_old=0, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=0
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

Those entries appear every hour. Obviously they are looking at individual tables (by object_id) and compute a new flag from the existing flag and statistics about new and old DML (inserts, updates, deletes). There’s also a mention of row count and stale percentage. So the volatility of tables is computed every hour (gather=NO_GATHER) or when we gather statistics (gather=GATHER). This goes beyond the DML monitoring of previous releases, but is probably based on it.

Testing some DML

SQL> delete from DEMO;
10000 rows deleted.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
10000 rows created.
 
SQL> commit;
Commit complete.
 
SQL> select count(*) numrows from DEMO;
NUMROWS
----------
10000
 
SQL> update demo set n=n+1 where rownum <= 2000;
 
2000 rows updated.
 
SQL> insert into DEMO select rownum from xmltable('1 to 10000');
 
10000 rows created.

I deleted 10000 rows and inserted 10000, with a commit at the end. Then I updated 2000 rows and inserted another 10000, without committing.

x$ksxmme

DML monitoring is done in memory; in order to see the changes in DBA_TAB_MODIFICATIONS, we need to flush it. But this in-memory information is visible in an X$ fixed view:

SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526E0B81F0 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Here are my 10000 deletes + 10000 inserts + 2000 updates + 10000 inserts. Of course the uncommitted ones are there, because DML tracking does not keep the numbers per transaction in order to adjust later what is committed or not.

The proof is that when I rollback, the numbers do not change:

SQL> rollback;
Rollback complete.
 
SQL> select * from X$KSXMME where objn=&object_id;
old 1: select * from X$KSXMME where objn=&object_id
new 1: select * from X$KSXMME where objn= 74867
 
ADDR INDX INST_ID CON_ID CHUNKN SLOTN OBJN INS UPD DEL DROPSEG CURROWS PAROBJN LASTUSED FLAGS
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
00007F526DDF47F8 0 1 0 64 256 74867 20000 2000 10000 0 2350000 0 1491467123 128

Yes, there is an estimation of the current number of rows here, in real-time. This is used to compare the changes with the total number, but you can use it to see the progress of a big transaction, giving a view of uncommitted changes.
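The behaviour is easy to model. Here is a tiny sketch (plain Python of my own, not Oracle code) of how the monitoring counters behave: they only ever accumulate as statements execute, and a rollback subtracts nothing, because no per-transaction detail is kept:

```python
# Hypothetical model of the in-memory DML monitoring (X$KSXMME).
# Counters accumulate as statements execute; transactions are not tracked,
# so a rollback does not subtract anything.

class DmlMonitor:
    def __init__(self):
        self.ins = self.upd = self.dele = 0

    def insert(self, n):
        self.ins += n

    def update(self, n):
        self.upd += n

    def delete(self, n):
        self.dele += n

    def rollback(self):
        # Intentionally a no-op: the monitor keeps no per-transaction detail.
        pass

m = DmlMonitor()
m.delete(10000)   # delete from DEMO (committed)
m.insert(10000)   # first insert (committed)
m.update(2000)    # update ... where rownum <= 2000 (not committed)
m.insert(10000)   # second insert (not committed)
m.rollback()
print(m.ins, m.upd, m.dele)   # 20000 2000 10000, matching X$KSXMME
```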

sys.mon_mods_all$

The table sys.mon_mods_all$ is what is behind DBA_TAB_MODIFICATIONS (not exactly, but that will be for another blog post) and you have to flush what’s in memory to see the latest changes there:

SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
 
SQL> select * from sys.mon_mods_all$ where obj#=&object_id;
old 1: select * from sys.mon_mods_all$ where obj#=&object_id
new 1: select * from sys.mon_mods_all$ where obj#= 74867
 
OBJ# INSERTS UPDATES DELETES TIMESTAMP FLAGS DROP_SEGMENTS
---------- ---------- ---------- ---------- ------------------ ---------- -------------
74867 5581477 656000 13835192 06-APR 15:10:53 1 0

The flag 1 means that the table has been truncated since the latest stats gathering.

This is what we already knew from previous releases. Nothing to do with the trace we see every hour in the MMON slave.

sys.optstat_snapshot$

What happens every hour is that a snapshot of sys.mon_mods_all$ is stored in sys.optstat_snapshot$:

SQL> select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp;
old 1: select * from sys.optstat_snapshot$ where obj#=&object_id order by timestamp
new 1: select * from sys.optstat_snapshot$ where obj#= 74867 order by timestamp
 
OBJ# INSERTS UPDATES DELETES FLAGS TIMESTAMP
---------- ---------- ---------- ---------- ---------- ------------------
74867 999 0 0 32 05-APR-17 17:27:01
74867 1997 0 0 32 05-APR-17 17:33:25
74867 1997 0 0 32 05-APR-17 17:33:31
74867 1997 0 0 32 05-APR-17 17:33:32
74867 80878 0 160 0 05-APR-17 18:59:37
74867 90863 0 210 0 05-APR-17 20:53:07
74867 10597135 0 410 0 05-APR-17 21:53:13
74867 10598134 0 410 32 05-APR-17 22:02:38
74867 38861 0 10603745 1 06-APR-17 08:17:58
74867 38861 0 10603745 1 06-APR-17 09:18:04
74867 581477 124000 11175192 1 06-APR-17 10:11:27
74867 1321477 230000 11705192 1 06-APR-17 11:09:50
74867 2481477 346000 12285192 1 06-APR-17 12:09:56
74867 3681477 466000 12885192 1 06-APR-17 01:10:04
74867 4861477 584000 13475192 1 06-APR-17 02:10:11
74867 5561477 654000 13825192 1 06-APR-17 03:10:19

You see snapshots every hour, the latest being 03:10, 02:10, 01:10, 12:09, 11:09, …
You see additional snapshots at each statistics gathering. I’ve run dbms_stats.gather_table_stats several times the day before, at 17:27 and 17:33. Those snapshots are flagged 32.
The statistics were gathered again at 20:02 (the auto job) and I’ve truncated the table after that, which is why the flag is 1.

dbms_stats_advisor.compute_volatile_flag

My guess is that there should be a flag for volatile tables here, because I’ve seen compute_volatile_flag in the MMON trace, so I’ve enabled sql_trace for the MMON slave, and here is the query that takes the snapshot:

insert /* KSXM:TAKE_SNPSHOT */ into sys.optstat_snapshot$
  (obj#, inserts, updates, deletes, timestamp, flags)
(select m.obj#, m.inserts, m.updates, m.deletes, systimestamp,
        dbms_stats_advisor.compute_volatile_flag(
          m.obj#, m.flags, :flags,
          m.inserts, m.updates, m.deletes,
          s.inserts, s.updates, s.deletes,
          null, nvl(to_number(p.valchar), :global_stale_pcnt),
          s.gather) flags
 from sys.mon_mods_all$ m,
      (select si.obj#, max(si.inserts) inserts, max(si.updates) updates,
              max(si.deletes) deletes,
              decode(bitand(max(si.flags), :gather_flag),
                     0, 'NO_GATHER', 'GATHER') gather,
              max(si.timestamp) timestamp
       from sys.optstat_snapshot$ si,
            (select obj#, max(timestamp) ts
             from sys.optstat_snapshot$
             group by obj#) sm
       where si.obj# = sm.obj#
         and si.timestamp = sm.ts
       group by si.obj#) s,
      sys.optstat_user_prefs$ p
 where m.obj# = s.obj#(+)
   and m.obj# = p.obj#(+)
   and pname(+) = 'STALE_PERCENT'
   and dbms_stats_advisor.check_mmon_policy_violation(rownum, 6, 2) = 0)

It reads the current values (from sys.mon_mods_all$) and the previous values (from sys.optstat_snapshot$), reads the stale percentage preference, and calls the dbms_stats_advisor.compute_volatile_flag function, which combines the existing flag with the one passed as :flags, probably adding the value 64 (see below) when the table is volatile (presumably when the sum of DML exceeds the stale percentage of the row count). The function probably behaves differently when the snapshot comes from statistics gathering (‘GATHER’) rather than from DML monitoring (‘NO_GATHER’), because the numbers are then absolute instead of relative to the previous snapshot.
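To make my reading of this explicit, here is a hypothetical reconstruction in Python. Everything here is guessed from the trace only; the names, the threshold and the bit arithmetic are my assumptions, not Oracle's actual implementation:

```python
# Hypothetical reconstruction of dbms_stats_advisor.compute_volatile_flag,
# inferred from the MMON trace only (the real implementation is internal).
# Guess: the volatile bit (64) requested via new_flag is added to the
# existing flag when the DML recorded since the previous snapshot exceeds
# stale_pcnt percent of the table's row count.

VOLATILE_BIT = 64   # assumed value, deduced from check_volatile's bind :B2

def compute_volatile_flag(flag, new_flag,
                          inserts_new, updates_new, deletes_new,
                          inserts_old, updates_old, deletes_old,
                          rowcnt, stale_pcnt):
    delta = ((inserts_new - inserts_old)
             + (updates_new - updates_old)
             + (deletes_new - deletes_old))
    result = flag
    if (new_flag & VOLATILE_BIT) and rowcnt and delta > rowcnt * stale_pcnt / 100:
        result |= VOLATILE_BIT
    return result

# With new_flag=0, as in every trace line above, the input flag passes
# through unchanged regardless of the DML volume:
print(compute_volatile_flag(1, 0, 4861477, 584000, 13475192,
                            3681477, 466000, 12885192, 10000, 10))   # 1
```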

From the trace of bind variables, or simply from the dbms_stats trace, I can see all values:
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5701477, updates_new=668000, deletes_new=13895192, inserts_old=5701477, updates_old=668000, deletes_old=13895192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=4861477, updates_new=584000, deletes_new=13475192, inserts_old=3681477, updates_old=466000, deletes_old=12885192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1
DBMS_STATS: compute_volatile_flag: objn=74867, flag=1, new_flag=0, inserts_new=5561477, updates_new=654000, deletes_new=13825192, inserts_old=4861477, updates_old=584000, deletes_old=13475192, rowcnt=, rowcnt_loc=, stale_pcnt=10, gather=NO_GATHER, flag_result=1

The input flag is 1 and the output flag is 1. And I think that, whatever the volume of DML, this is because new_flag=0.

This explains why I was not able to get snapshots flagged as volatile even when changing a lot of rows. Then how can the Statistics Advisor detect my volatile table?

Statistics Advisor

I’ve traced the Statistics Advisor:

set long 100000 longc 10000
variable t varchar2(30)
variable e varchar2(30)
variable r clob
exec :t:= DBMS_STATS.CREATE_ADVISOR_TASK('my_task');
exec :e:= DBMS_STATS.EXECUTE_ADVISOR_TASK('my_task');
exec :r:= DBMS_STATS.REPORT_ADVISOR_TASK('my_task');
print r

No ‘LockVolatileTable’ rule has raised a recommendation, but I’ve seen a call to the DBMS_STATS.CHECK_VOLATILE function with an object_id as parameter.

dbms_stats_internal.check_volatile

In order to understand what the criteria are, I’ve run (with sql_trace) the function on my table:

SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
------------------------------------------
F

I suppose ‘F’ is false, which explains why my table was not considered as volatile.

Here is the trace with binds:

PARSING IN CURSOR #140478915921360 len=191 dep=1 uid=0 oct=3 lid=0 tim=99947151021 hv=976524548 ad='739cb468' sqlid='1r3ujfwx39584'
SELECT SUM(CASE WHEN ISVOLATILE > 0 THEN 1 ELSE 0 END) FROM (SELECT OBJ#, BITAND(FLAGS, :B2 ) ISVOLATILE FROM OPTSTAT_SNAPSHOT$ WHERE OBJ# = :B1 ORDER BY TIMESTAMP DESC) O WHERE ROWNUM < :B3
END OF STMT
...
BINDS #140478915921360:
 
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=72 off=0
kxsbbbfp=7fc3cbe1c158 bln=22 avl=02 flg=05
value=64
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=7fc3cbe1c170 bln=22 avl=04 flg=01
value=74867
Bind#2
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=7fc3cbe1c188 bln=22 avl=02 flg=01
value=24

So, here is what the algorithm looks like:

  1. sys.optstat_snapshot$ is read for the latest 24 snapshots (remember that we have snapshots every hour + at each statistics gathering)
  2. ‘ISVOLATILE’ is 1 when the flags of a snapshot include the bit 64. This is how I guessed that snapshots should be flagged with 64 by compute_volatile_flag.
  3. And finally, the number of ‘ISVOLATILE’ ones is summed.

So, it seems that the Statistics Advisor will raise a recommendation when the table has been flagged as volatile multiple times over the last 24 hours. How many? Let's guess:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 12')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 12')
 
12 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
F
 
SQL> rollback;
 
Rollback complete.

I’ve called the function after inserting various numbers of rows with flag=64 into sys.optstat_snapshot$, and up to 12 snapshots it is still not considered volatile.
Please remember that this is a lab; we are not supposed to update the internal dictionary tables ourselves.

Now inserting one more:

SQL> insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13');
old 1: insert into sys.optstat_snapshot$ select &object_id,0,0,0,64,sysdate from xmltable('1 to 13')
new 1: insert into sys.optstat_snapshot$ select 74867,0,0,0,64,sysdate from xmltable('1 to 13')
 
13 rows created.
 
SQL> select dbms_stats_internal.check_volatile(&object_id) from dual;
old 1: select dbms_stats_internal.check_volatile(&object_id) from dual
new 1: select dbms_stats_internal.check_volatile( 74867) from dual
 
DBMS_STATS_INTERNAL.CHECK_VOLATILE(74867)
-----------------------------------------
T
 
SQL> rollback;
 
Rollback complete.

Good, I have a ‘T’ here for ‘true’. I conclude that the Statistics Advisor recommends locking the stats on a table when more than half of the last 24 hourly snapshots have encountered more than STALE_PERCENT modifications.
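Putting the traced query and the two experiments together, the logic can be sketched like this (hypothetical Python of my own; the real function is internal and undocumented, and the threshold is deduced only from the 12-versus-13 experiment above):

```python
# Hypothetical sketch of dbms_stats_internal.check_volatile, matching the
# traced query: look at the most recent snapshots for the object (the bind
# values suggest a window of 24) and count those carrying the volatile
# bit (64); more than half must be flagged before the table is reported
# volatile ('T'), otherwise 'F'.

VOLATILE_BIT = 64

def check_volatile(snapshot_flags):
    """snapshot_flags: flags of the object's snapshots, most recent first."""
    last_24 = snapshot_flags[:24]
    volatile = sum(1 for f in last_24 if f & VOLATILE_BIT)
    return 'T' if volatile > 12 else 'F'

print(check_volatile([VOLATILE_BIT] * 12))   # F - 12 flagged snapshots are not enough
print(check_volatile([VOLATILE_BIT] * 13))   # T - 13 flagged snapshots trigger it
```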

So what?

My table was not considered as volatile. None of the snapshots have been flagged as volatile. I'm quite sure that the amount of DML is sufficient, so I suppose this is disabled by default and I don't know how to enable it. What I want to see is compute_volatile_flag called with new_flag=64, so that snapshots are flagged when a large percentage of rows has been modified, and enough snapshots get flagged to be considered by the check_volatile function.
Even if it is enabled, I think there are more cases where tables should have their statistics locked. Even if a table is empty for only 5 minutes per day, we must be sure that statistics are not gathered at that time. And looking at the Statistics Advisor thresholds, this case is far from being detected.
Final thought: do you realize that you buy expensive software to detect the changes happening on your tables, guess how the tables are updated, and recommend (and even implement) a general best practice? Does it mean that, today, we put applications in production with no idea of what they do? Aren't we supposed to design the application, document which tables are volatile and when they are bulk-loaded, and decide when to gather stats and when to lock them?

 

This article, 12cR2 DML monitoring and Statistics Advisor, appeared first on Blog dbi services.

MOS plugin

A few days back I was (again) annoyed by My Oracle Support's “The page has expired” message appearing in a tab I had left open for some time. I tried to find out how to avoid it, and it was relatively easy to do even though I don't know JavaScript. Then I tried to make a plugin out of it so it can be on by default and won't need my attention.
Here it is: a plugin for Chrome. It took me 2 seconds to come up with a name – MOS-cow – and about 3 hours to prepare the plugin.
Then I realized that having such a plugin on its own is probably not a good idea (it's only for Chrome, and there are other browsers), and it may be enough to just add a custom JavaScript to Tampermonkey so it can be used in almost any browser. So here it is. I don't plan to add any new features at the moment, but maybe later. I know what else is needed, but have no idea how much time it would take to implement: probably too much, which is not an option.
Anyway, I hope this little stupid piece of JavaScript helps you!

Filed under: Oracle Tagged: MOS

Indexing in Oracle, Fragmentation

I just uploaded my slides from Collaborate 2017 to Slideshare, along with the findings missing from the slide deck I used for Oak Table World, “War of the Indices: Oracle and SQL Server”. Feel free to download them here.

As I tested Oracle vs. SQL Server index performance, I ran across this great script from Franck Pachot to check for fragmentation. You'll simply need to update the script to declare the table and index name, or edit the script as is, adding those two values before running it.

The outcome when run against my ugly index in the testing of the good, the bad and the ugly, was fun, to say the least:

SQL> @index_frag.sql;

        C1 ->         C1 rows/block bytes/block %free space     blocks free
---------- -- ---------- ---------- ----------- ----------- ---------- -----
     30041 ->     130221         35         560          93       2851 oooo
    130256 ->     230433         35         560          93       2851 oooo
    230468 ->     330654         35         560          93       2851 oooo
    330689 ->     430874         35         560          93       2851 oooo
    430909 ->     531073         35         560          93       2851 oooo
    531108 ->     631264         35         560          93       2850 oooo
    631299 ->     731429         35         560          93       2850 oooo
    731464 ->     831547         35         560          93       2850 oooo
    831583 ->     931711         35         560          93       2850 oooo
    931746 ->    1030047         34         560          93       2850 oooo

If you have just one or a couple of indexes on which you want to perform some deep fragmentation analysis, Franck's script does an awesome job.
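As a sanity check, the %free space column is consistent with the bytes/block column if we assume the default 8k block size (an assumption on my part; the output doesn't state the block size):

```python
# Quick check of the index_frag.sql output: with 560 bytes used per block
# out of an assumed 8192-byte block, roughly 93% of each block is free,
# which matches the "%free space" column above.

def pct_free(bytes_per_block, block_size=8192):
    return round(100 * (block_size - bytes_per_block) / block_size)

print(pct_free(560))   # 93
```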

Just my share for the day….

Oracle and SQL Server Index Comparison

This post has a lot of the support code and data for my Oak Table Talk that I’ll be giving at IOUG Collaborate 2017 in Las Vegas on April 5th, 2017.  

One of the Iceland 2017 SQL Saturday sessions got me thinking about indexing and how similar and different it all is in Oracle vs. SQL Server. There were some really fun, (well, at least what I call fun…) test cases built out and referenced by Paul Randal. After looking through some of it, I decided it might be interesting to try to replicate it in Oracle, (as close as possible) and compare how the two database platforms deal with index storage, specifically SQL Server's FillFactor vs. Oracle's PCTFREE.

B-tree indexing is the cornerstone of physically optimizing searches on data. No consensus exists on what the “B” stands for (some think it's from Bayer, one of the main researchers behind it, and many more believe it's for Boeing, after the research center where the work was done).

The choice in how the data is organized, leaves and such, is pretty standard, but database platforms have created some unique indexing that enhances queries on an RDBMS vs. just having heap tables.

Using Oracle and SQL Server as our choice for a comparison today, there are a few translations I need for readers of this blog:

Oracle | SQL Server | Description
Index Organized Table (IOT) | Clustered Index | Physical index storing data by key values. In SQL Server, there can be only one clustered index per table.
Pctfree of block | FillFactor of page | Percent of storage that is allowed to be filled. Each platform uses this at different times.
Sequence | TOP | Ability to populate data with a sequential number
dbms_random.string | Replicate | Ability to populate data with string values
block | page | Unit of storage
Automatic Workload Repository (AWR) | Dynamic Management Views (DMV) | Performance data collection

Now that we have that out of the way, you can use this trusty little table for common terms that require a “translation” from one database platform to the other.

The next thing to remember is that PCTFREE and FillFactor aren't adhered to at all times. Appending a row to an index is different from updating a row in an index, and each platform has its own criteria to decide whether it follows the rule of how full a block or page may be.

The Test

The steps of this test:

  1. Create a table with three columns and two indexes, SQL Server having its trusted clustered index.
  2. Populate data of different sizes to each of the tables.
  3. Check the storage of our index “health”.
  4. Remove data.
  5. Repeat step 2 and also remove data.
  6. Check the storage again to see how it has changed: page splits in SQL Server, leaf block splits in Oracle.

Goal is:

  1. Inspect the differences and similarities of indexing in both platforms
  2. Weigh the pros and cons of how index data is stored and used in both platforms

Oracle Code for Creation of Objects and Support

  • Table and PK with constraint
  • Alter index statement to decrease pctfree
  • Sequence to populate c1 column
  • Trigger to do this in simple way
  • Rinse, repeat, more “shampoo”, and do it all again…

A quick look at Oracle 12.2 performance instrumentation

Thanks to the prep-work done by Andy Colvin, who continuously and boldly goes where no man has gone before, one of our Exadata systems in the lab is now fully upgraded to 12.2.0.1. It comes fully equipped with the matching cell software to support all the cool new features. Exciting times!

The reason for this post is simple: I have started working on our talk for @Enkitec’s E4 conference in June but thought I’d share this little article with you as a teaser :) There might be one or two more of these posts but if you want the full story make sure you catch us (online) during the conference.

More diagnostic information in 12.2

The Oracle database truly champions diagnosability in a way I have not seen with any other system, and it does so out of the box. Granted, some of that requires an investment in extra-cost options, but the value one gets in the form of Active Session History (ASH) and Automatic Workload Repository (AWR) is real. After I read the chapter on instrumentation in “Oracle Insights: Tales of the Oak Table” (Apress) a long time ago, I started to realise the inherent beauty of having insight into code. This code can be your code if you instrumented it properly, or the Oracle code path externalised as wait events. Sadly most application developers do not adhere to the principle of instrumenting code (or maybe don't even know about the possibility?) and therefore complicate troubleshooting unnecessarily. The latter is not so much an issue on platforms where you don't have an equivalent of the Oracle Wait Interface and session counters anyway, but on Oracle it's a real wasted opportunity, as others have pointed out before me.

I’ll now take my idealist hat off and focus on the real world :) In the far more common case where the application isn’t instrumented by the developer, you can still get to some conclusions by using the Wait Interface and session counters. In most scenarios I am involved in, the first step is to use Tanel Poder’s session snapper script, which gives me insight into both.

So what has changed in this respect with 12.2? The test environment I am using is an Exadata quarter rack as mentioned before. The findings should be comparable with other Oracle software offerings, in the cloud and on premises.

Wait Interface

The Wait Interface is one of the most useful features for the performance engineer, and one of the building blocks for Statspack, ASH and AWR. I was curious if new events have been introduced in 12.2, hence this investigation. The sheer number of events tracked in the database prevents them from being listed verbally in this post. If you want to, you can use Tanel’s “sed.sql” to find out more, or simply query v$event_name.

An interesting tidbit for 12.2 has been covered by @FranckPachot: some of the more misleading event names, such as db file scattered read and db file sequential read, have been clarified in 12.2. Search for events where the name is not equal to the display_name, or read Franck’s post on the DBI blog.

Wait events in 11.2.0.3

I like to start comparisons with a benchmark, and 11.2.0.3 seems to be a good candidate. Just looking at the wait_classes and counting events per class should be a good starting point:

SQL> select count(*), wait_class 
  2  from v$event_name 
  3  group by rollup(wait_class) 
  4  order by wait_class;

   COUNT(*) WAIT_CLASS
----------- ----------------------------------------------------------------
         55 Administrative
         17 Application
         50 Cluster
          2 Commit
         33 Concurrency
         24 Configuration
         95 Idle
         35 Network
        745 Other
          9 Queueing
          8 Scheduler
         31 System I/O
         48 User I/O
       1152

14 rows selected.

So there are 1152 events total in 11.2.0.3, keep that number in mind.

Wait events in 12.1.0.2

In my opinion 12.1 is a major step ahead, and I said it many times: I haven’t seen so many fundamental changes to the Oracle database for a long time. For various reasons though 12.1 hasn’t seen the traction in the field it probably deserved. Note how the diagnosability has been greatly enhanced:

SQL> select count(*), wait_class 
  2  from v$event_name 
  3  group by rollup(wait_class) 
  4  order by wait_class;

   COUNT(*) WAIT_CLASS
----------- ----------------------------------------------------------------
         57 Administrative
         17 Application
         64 Cluster
          4 Commit
         38 Concurrency
         26 Configuration
        121 Idle
         28 Network
       1186 Other
          9 Queueing
          9 Scheduler
         35 System I/O
         56 User I/O
       1650

14 rows selected.

A quick calculation reveals that 12.1 features 498 additional events, a lot more than 11.2.0.3. Not too bad in my opinion. Unfortunately most of these additional events ended up in the “Other” wait class. This is a real shame, and I have remarked this before when blogging about the new Data Guard Wait events: they probably should have ended up in the Network class instead. I am sure there are other examples like this.

Wait events in 12.2.0.1

Now what about 12.2? First let’s start with the list:

SQL> select count(*), wait_class
  2  from v$event_name
  3  group by rollup(wait_class)
  4  order by wait_class;

  COUNT(*) WAIT_CLASS
---------- ----------------------------------------------------------------
        57 Administrative
        17 Application
        68 Cluster
         4 Commit
        49 Concurrency
        27 Configuration
       135 Idle
        29 Network
      1314 Other
         9 Queueing
        10 Scheduler
        35 System I/O
        57 User I/O
      1811

14 rows selected.

There are indeed some new events; most of them can be found in the Other wait class. Again, this is quite unfortunate as it prevents the performance architect from identifying unknown wait events quickly.

I have decided to keep this post short-ish and will spend more time some other day to investigate the exact difference between 12.1.0.2 and 12.2.0.1. Most likely after E4 this year.

Session Counters

With the wait interface covered, it’s time to move on to the session counters. Continuing the approach I took with wait events I will group all session counters by class. Instead of re-inventing the wheel I am using a slightly adapted version of Tanel Poder’s “mys.sql” script to group counters by class. Most of them fall in just one, but there are others where more than one class is applicable. The 12.2 Reference Guide explains v$statname.class in more detail.

Session counters in 11.2.0.3

Before investigating 12.1 and 12.2 I’ll look at 11.2.0.3 first, as in the first section of the article. The SQL statement I used is this:

with classification as (
select name, TRIM(
  CASE WHEN BITAND(class,  1) =   1 THEN 'USER  ' END ||
  CASE WHEN BITAND(class,  2) =   2 THEN 'REDO  ' END ||
  CASE WHEN BITAND(class,  4) =   4 THEN 'ENQ   ' END ||
  CASE WHEN BITAND(class,  8) =   8 THEN 'CACHE ' END ||
  CASE WHEN BITAND(class, 16) =  16 THEN 'OS    ' END ||
  CASE WHEN BITAND(class, 32) =  32 THEN 'RAC   ' END ||
  CASE WHEN BITAND(class, 64) =  64 THEN 'SQL   ' END ||
  CASE WHEN BITAND(class,128) = 128 THEN 'DEBUG ' END
) class_name 
from v$statname
) select count(*), class_name from classification 
group by rollup(class_name)
order by class_name;

Executed on an 11.2.0.3 database this reveals the following numbers:

   COUNT(*) CLASS_NAME
----------- ------------------------------------------------
        121 CACHE
         27 CACHE RAC
         15 CACHE SQL
        188 DEBUG
          9 ENQ
         16 OS
         25 RAC
         32 REDO
         93 SQL
          2 SQL   DEBUG
        107 USER
          3 USER  RAC
        638

13 rows selected.

638 of them altogether. Keep the number in mind when moving to 12.1.
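The BITAND decoding in the query above can be sketched outside the database as well (plain Python of my own; the bit values are the ones used in the CASE expressions): the class column is a bitmask, so a single counter can belong to several classes at once.

```python
# Decoding v$statname.class: the column is a bitmask, so a counter can
# belong to several classes at once (e.g. class 40 = CACHE + RAC).

CLASSES = [(1, 'USER'), (2, 'REDO'), (4, 'ENQ'), (8, 'CACHE'),
           (16, 'OS'), (32, 'RAC'), (64, 'SQL'), (128, 'DEBUG')]

def decode_class(value):
    return ' '.join(name for bit, name in CLASSES if value & bit)

print(decode_class(8))     # CACHE
print(decode_class(40))    # CACHE RAC
print(decode_class(130))   # REDO DEBUG
```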

Session counters in 12.1.0.2

There was a major increase in the number of counters between 11.2.0.3 and 12.1.0.2. Consider the numbers:

  COUNT(*) CLASS_NAME
---------- ------------------------------------------------
       151 CACHE
        53 CACHE RAC
        15 CACHE SQL
       565 DEBUG
         9 ENQ
        16 OS
        35 RAC
        68 REDO
         1 REDO  RAC
       130 SQL
         2 SQL   DEBUG
       130 USER
         3 USER  RAC
      1178

14 rows selected.

That nearly doubles the number of counters available. Note that quite a few of the new counters fall into the DEBUG section. Looking a bit more closely you can see they seem to be used by the In Memory (IM) Option:

SQL>  SELECT
  2      regexp_substr(name,'\w+ \w+') AS short_name,
  3      COUNT(*)
  4  FROM
  5      v$statname
  6  WHERE
  7      class = 128
  8  GROUP BY
  9      regexp_substr(name,'\w+ \w+')
 10  ORDER BY
 11      2 DESC
 12  FETCH FIRST 5 ROWS ONLY;

SHORT_NAME                        COUNT(*)
------------------------------ -----------
IM repopulate                           49
IM scan                                 47
IM populate                             37
spare statistic                         35
IM space                                26

5 rows selected.

There are 198 session counters beginning with ‘IM %’. I can feel another post about DB In Memory coming …

Session counters in 12.2.0.1

Finally, here is the list of statistics in 12.2.0.1:

  COUNT(*) CLASS_NAME
---------- ------------------------------------------------
       174 CACHE
        73 CACHE RAC
        15 CACHE SQL
      1067 DEBUG
         9 ENQ
        16 OS
        35 RAC
        75 REDO
         1 REDO  RAC
       190 SQL
         2 SQL   DEBUG
       144 USER
         3 USER  RAC
      1804

Another 626 additional counters, that’s almost the number of counters available in total on the 11.2.0.3 system! Running my previous query again it seems that IM-related statistics dominate, but there are lots of others of interest.

As with the 12.2 wait events I don’t want to give too much away at this point (and the post is long enough anyway) so stay tuned for an update at a later time.

Summary

Oracle has been one of the best database engines around, and with 12.2.0.1 instrumentation is further improved. This post has again become too long, so I’ll stop here and defer the write-up of my investigation into those stats relevant for Exadata to a later point.

Happy troubleshooting!