Oakies Blog Aggregator

My Performance & Troubleshooting scripts (TPT) for Oracle are now in GitHub and open sourced

I have uploaded my TPT-oracle scripts to GitHub and have formally open sourced them under the Apache 2.0 license as well. This allows companies to embed these scripts in their toolsets and processes and distribute them without any worries from their legal departments.

The repository is here:

Now you can “git clone” this repository once and just “git pull” every now and then to see what updates & fixes I have made.
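For example (the URL below is, to my knowledge, the repository's current location; adjust it if the repository moves):

$ git clone https://github.com/tanelpoder/tpt-oracle.git
$ cd tpt-oracle
$ git pull     # run this periodically to pick up new scripts and fixes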

Also, if you like my scripts, make sure you “Star” this repository on GitHub too – the more stars it gets, the more updates I will commit! ;-)

[Screenshot: the TPT-oracle repository on GitHub]

While “git clone” is the recommended method for getting your own workstation copy of the repository, your servers might not have git installed (or direct internet access), so you can also download a zipfile of everything in this repo:
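For example, something like this fetches and unpacks a snapshot on a server without git (the URL follows GitHub's standard archive-link pattern for the repository above):

$ curl -L -o tpt-oracle.zip https://github.com/tanelpoder/tpt-oracle/archive/master.zip
$ unzip tpt-oracle.zip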

You can still directly access individual scripts too using links like the ones below. For example, if you want to run fish.sql to display an awesome SQL fish in sqlplus, you can download this:

[Screenshot: fish.sql output in sqlplus]

Or if you want to run something from a subdirectory, like ash/dashtop.sql for showing ASH top from the historical ASH data in DBA_HIST views, you can download this script from the ASH subdirectory:

Example output below:

SQL> @ash/dashtop sql_opname,event2 username='SYS' DATE'2018-04-19' DATE'2018-04-20'

    Total
  Seconds     AAS %This   SQL_OPNAME           EVENT2                                     FIRST_SEEN          LAST_SEEN
--------- ------- ------- -------------------- ------------------------------------------ ------------------- -------------------
     4930      .1   83%                        ON CPU                                     2018-04-19 18:00:04 2018-04-19 23:48:08
      430      .0    7%   SELECT               ON CPU                                     2018-04-19 18:01:04 2018-04-19 23:49:48
      290      .0    5%   SELECT               acknowledge over PGA limit                 2018-04-19 18:00:34 2018-04-19 23:23:50
       60      .0    1%   UPSERT               ON CPU                                     2018-04-19 18:00:04 2018-04-19 22:00:15
       50      .0    1%   UPSERT               acknowledge over PGA limit                 2018-04-19 18:00:04 2018-04-19 23:13:47
       30      .0    1%   CALL METHOD          ON CPU                                     2018-04-19 18:00:24 2018-04-19 21:03:19
       30      .0    1%                        control file sequential read               2018-04-19 18:56:42 2018-04-19 21:47:21
       30      .0    1%                        log file parallel write                    2018-04-19 21:03:19 2018-04-19 22:13:39
       20      .0    0%   CALL METHOD          acknowledge over PGA limit                 2018-04-19 18:00:24 2018-04-19 22:01:55
       20      .0    0%   DELETE               db file sequential read                    2018-04-19 20:46:54 2018-04-19 22:00:35
       20      .0    0%   SELECT               db file sequential read                    2018-04-19 22:01:05 2018-04-19 22:01:35
       10      .0    0%   INSERT               ON CPU                                     2018-04-19 19:50:28 2018-04-19 19:50:28
       10      .0    0%   INSERT               acknowledge over PGA limit                 2018-04-19 20:43:12 2018-04-19 20:43:12
       10      .0    0%   SELECT               db file scattered read                     2018-04-19 23:03:55 2018-04-19 23:03:55
       10      .0    0%                        LGWR any worker group                      2018-04-19 21:03:19 2018-04-19 21:03:19
       10      .0    0%                        control file parallel write                2018-04-19 21:05:59 2018-04-19 21:05:59

16 rows selected.

Now that I have this stuff on GitHub, I plan to update my scripts a bit more regularly – and you can follow the repository to get real-time updates whenever I push something new.

As a next step I’ll convert my blog from WordPress to static hosting (Hugo) hopefully over this weekend, so you might see a few blog template/webserver glitches in the next few days.

AMM vs security

Most of us already know that AMM sucks. But usually we think about the disadvantages of AMM in terms of performance. Let’s see why it also sucks in terms of security.

Bitmap Join Indexes

I’ve been prompted by a recent question on the ODC database forum to revisit a note I wrote nearly five years ago about bitmap join indexes and their failure to help with join cardinalities. At the time I made a couple of unsupported claims and suggestions without supplying any justification or proof. Today’s article finally fills that gap.

The problem is this – I have a column which exhibits an extreme skew in its data distribution, but it’s in a “fact” table where most columns are meaningless ids and I have to join to a dimension table on its primary key to translate an id into a name. While there is a histogram on the column in the fact table the information in the histogram ceases to help if I do the join to the dimension and query by name, and the presence of a bitmap join index doesn’t make any difference. Let’s see this in action – some of the code follows a different pattern and format from my usual style because I started by copying and editing the example supplied in the database forum:


rem
rem     Script:         bitmap_join_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem
rem     Notes:
rem     Bitmap join indexes generate virtual columns on the fact table
rem     but you can't get stats on those columns - which means if the
rem     data is skewed you can have a histogram on the raw column but
rem     you don't have a histogram on the bitmap virtual column.
rem

drop table t1;
drop table dim_table;

create table dim_table (type_code number, object_type varchar2(10));

insert into dim_table values (1,'TABLE');
insert into dim_table values (2,'INDEX');
insert into dim_table values (3,'VIEW');
insert into dim_table values (4,'SYNONYM');
insert into dim_table values (5,'OTHER');

alter table dim_table add constraint dim_table_pk primary key (type_code) using index;

exec dbms_stats.gather_table_stats(user,'dim_table',cascade=>true);

create table t1 
nologging
as 
select 
        object_id, object_name, 
        decode(object_type, 'TABLE',1,'INDEX',2,'VIEW',3,'SYNONYM',4,5) type_code 
from 
        all_objects
where
        rownum <= 50000 -- > comment to bypass wordpress format issue
;

insert into t1 select * from t1;
insert into t1 select * from t1;
insert into t1 select * from t1;


create  bitmap index t1_b1 on t1(dt.object_type)
from    t1, dim_table dt
where   t1.type_code = dt.type_code
;

exec dbms_stats.gather_table_stats(null, 't1', cascade=>true, method_opt=>'for all columns size 254');


select
        dt.object_type, count(*)
from
        t1, dim_table  dt
where
        t1.type_code   = dt.type_code
group by
        dt.object_type
order by
        dt.object_type
;

I’ve started with a dimension table that lists 5 type codes and has a primary key on that type code; then I’ve used all_objects to generate a table of 400,000 rows using those type codes, and I’ve created a bitmap join index on the fact (t1) table based on the dimension (dim_table) table column. By choice the distribution of the five codes is massively skewed so after gathering stats (including histograms on all columns) for the table I’ve produced a simple aggregate report of the data showing how many rows there are of each type – by name. Here are the results – with the execution plan from 12.1.0.2 showing the benefit of the “group by placement” transformation:


OBJECT_TYP   COUNT(*)
---------- ----------
INDEX           12960
OTHER          150376
SYNONYM        177368
TABLE           12592
VIEW            46704

5 rows selected.

-------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost  |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |       |       |   735 |
|   1 |  SORT GROUP BY        |           |     5 |   125 |   735 |
|*  2 |   HASH JOIN           |           |     5 |   125 |   720 |
|   3 |    VIEW               | VW_GBF_7  |     5 |    80 |   717 |
|   4 |     HASH GROUP BY     |           |     5 |    15 |   717 |
|   5 |      TABLE ACCESS FULL| T1        |   400K|  1171K|   315 |
|   6 |    TABLE ACCESS FULL  | DIM_TABLE |     5 |    45 |     2 |
-------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ITEM_1"="DT"."TYPE_CODE")

Having established the basic result we can now examine some execution plans to see how well the optimizer is estimating cardinality for queries relating to that skewed distribution. I’m going to generate the execution plans for a simple select of all the rows of type ‘TABLE’ – first by code, then by name, showing the execution plan of each query:


explain plan for
select  t1.object_id
from
        t1
where
        t1.type_code = 1
;

select * from table(dbms_xplan.display(null,null,'outline'));


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12592 |    98K|   281   (8)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 12592 |    98K|   281   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("T1"."TYPE_CODE"=1)

Thanks to the histogram I generated on the type_code column, the optimizer’s estimate of the number of rows is very accurate. So how well does the optimizer handle the join statistics:


prompt  =============
prompt  Unhinted join
prompt  =============

explain plan for
select  t1.object_id
from
        t1, dim_table  dt
where
        t1.type_code   = dt.type_code 
and     dt.object_type = 'TABLE'
;

select * from table(dbms_xplan.display(null,null,'outline'));

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           | 80000 |  1328K|   287  (10)| 00:00:01 |
|*  1 |  HASH JOIN         |           | 80000 |  1328K|   287  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DIM_TABLE |     1 |     9 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1        |   400K|  3125K|   277   (7)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."TYPE_CODE"="DT"."TYPE_CODE")
   2 - filter("DT"."OBJECT_TYPE"='TABLE')

Taking the default execution path the optimizer’s estimate of rows identified by type name is 80,000 – which is one fifth of the total number of rows. Oracle knows that the type_code is skewed in t1, but at compile time doesn’t have any idea which type_code corresponds to type ‘TABLE’, so it’s basically using the number of distinct values to dictate the estimate.

We could try hinting the query to make sure it uses the bitmap join index – just in case this somehow helps the optimizer (and we’ll see in a moment why we might have this hope, and why it is forlorn):


prompt  ===================
prompt  Hinted index access
prompt  ===================

explain plan for
select 
        /*+ index(t1 t1_b1) */
        t1.object_id
from
        t1, dim_table dt
where
        t1.type_code   = dt.type_code 
and     dt.object_type = 'TABLE'
;

select * from table(dbms_xplan.display(null,null,'outline'));

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       | 80000 |   625K|   687   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    | 80000 |   625K|   687   (1)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE        | T1_B1 |       |       |            |          |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."SYS_NC00004$"='TABLE')

The plan tells us that the optimizer now realises that it doesn’t need to reference the dimension table at all – all the information it needs is in the t1 table and its bitmap join index – but it still comes up with an estimate of 80,000 for the number of rows. The predicate section tells us what to do next – it identifies a system-generated column, which is the virtual column underlying the bitmap join index: let’s see what the stats on that column look like:


select
        column_name, histogram, num_buckets, num_distinct, num_nulls, sample_size
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_id
;


COLUMN_NAME          HISTOGRAM       NUM_BUCKETS NUM_DISTINCT  NUM_NULLS SAMPLE_SIZE
-------------------- --------------- ----------- ------------ ---------- -----------
OBJECT_ID            HYBRID                  254        50388          0        5559
OBJECT_NAME          HYBRID                  254        29224          0        5560
TYPE_CODE            FREQUENCY                 5            5          0      400000
SYS_NC00004$         NONE

4 rows selected.

There are no stats on the virtual column – and Oracle won’t try to collect any; even if you write some in (using dbms_stats.set_column_stats) it won’t use them for the query. The optimizer seems to be coded to use the number of distinct keys from the index in this case.
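For the record, here is a sketch of what writing stats onto that virtual column might look like (the distcnt, density and nullcnt values are purely illustrative; as noted above, the optimizer won’t use them for this query anyway):

begin
        dbms_stats.set_column_stats(
                ownname  => user,
                tabname  => 'T1',
                colname  => 'SYS_NC00004$',
                distcnt  => 5,          -- pretend NDV
                density  => 0.2,        -- 1/NDV
                nullcnt  => 0
        );
end;
/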

Workaround

It’s very disappointing that there seems to be no official way to work around this problem – but Oracle has its own (undocumented) solution to the problem, which comes into play with OLAP: the hint /*+ precompute_subquery() */. It makes it possible to tell the optimizer to execute certain types of subquery as the first stage of optimising a query, then rewrite the query to take advantage of the resulting data:


explain plan for
select
        /*+
                qb_name(main)
                precompute_subquery(@subq)
        */
        t1.object_id
from
        t1
where
        t1.type_code in (
                select
                        /*+
                                qb_name(subq)
                        */
                        dt.type_code
                from    dim_table dt
                where   dt.object_type = 'TABLE'
        )
;

select * from table(dbms_xplan.display(null,null,'outline'));

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12592 |    98K|   281   (8)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 12592 |    98K|   281   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("T1"."TYPE_CODE"=1)

Oracle hasn’t optimized the query I wrote; instead it has executed the subquery, derived a (very short, in this case) list of values, then optimized and executed the query I first wrote using the constant(s) returned by the subquery. And you can’t see the original subquery in the execution plan. Of course, with the literal values in place, the cardinality estimate is now correct.

It’s a pity that this hint is undocumented; being unsupported, it’s not one you should use in production.

 

VMWare Experts Program, Sydney

A few weeks back, I received an invitation from Don Sullivan to attend the Sydney version of the VMWare Experts Program. I worked with Don during our time at Oracle, and caught up with him again a couple of years ago at one of the Collaborate conferences. He had moved on to VMWare, and is still working for them today.

As Don describes the event, “the VMWare Experts Program was started in 2013 as a collaborative effort between the Cloud Platform Business Unit, then referred to as vSphere Product Marketing team and the VMware Global Center of Excellence. Members are initiated into the program when they attend a 3-day workshop delivered by VMware. Everyone invited meets with VMware executives and technical specialists, VMware partner personnel and alumni of the program itself. Attendees of the event often include the highest-level executives, architects and veteran engineers who build the platform and write code. A variety of open discussions and exercises facilitate an intense learning environment leading to the expansion of a broader community of Database and Applications experts who have extensive knowledge of VMware virtualized infrastructure.” There have been multiple runs of the event, in Palo Alto USA, Cork Ireland, Sofia Bulgaria, and now in Sydney.

Branches of the VMware Experts program presently include Oracle, MS SQL Server and “Big Data and Scientific and Engineering Workloads”. For the Sydney event, we had the Oracle and SQL Server branches held at the same time, with some shared sessions where attendees from both backgrounds were together, and some sessions where we split off into separate presentations for SQL Server and Oracle. This is the first time I can recall having experts from both backgrounds in the same room, and I must say it was an interesting way to do things. It was great to get a slightly different perspective on things.

It was certainly a fun-filled three days, with the material ranging from high level marketing stuff from the sponsors right down to some VERY detailed technical material on things to look for from a performance perspective, and it also included some hands-on labs. Don even managed to organize a fire alarm both at the hotel in the middle of a presentation AND at Jamie’s Italian restaurant where we went for dinner one night.

Partition-Wise Operations – New Features in 12c and 18c

Partition-wise operations are not something new. I do not remember when they were introduced, but at that time the release number was still a single digit. Anyway, the aim of this post is not to describe the basics, but only to describe what is new in that area in 12c and 18c.

The new features can be grouped into three categories:

  • Partition-wise GROUP BY enhancements available as of version 12.2
  • Partition-wise DISTINCT enhancements available as of version 12.2
  • Partition-wise windowing functions enhancements available as of version 18.1

Before looking at the new features, here are the SQL statements I executed to create the partitioned table that I use throughout the examples. You can download the script here.

CREATE TABLE t (
  id NUMBER,
  d1 DATE,
  n1 NUMBER,
  n2 NUMBER,
  n3 NUMBER,
  pad VARCHAR2(4000),
  CONSTRAINT t_pk PRIMARY KEY (id)
)
PARTITION BY RANGE (d1)
SUBPARTITION BY LIST (n1)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp_1 VALUES (1),
  SUBPARTITION sp_2 VALUES (2),
  SUBPARTITION sp_3 VALUES (3),
  SUBPARTITION sp_4 VALUES (4)
)(
  PARTITION t_q1_2018 VALUES LESS THAN (to_date('2018-04-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
  PARTITION t_q2_2018 VALUES LESS THAN (to_date('2018-07-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
  PARTITION t_q3_2018 VALUES LESS THAN (to_date('2018-10-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
  PARTITION t_q4_2018 VALUES LESS THAN (to_date('2019-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'))
);

INSERT INTO t
SELECT rownum AS id,
       trunc(to_date('2018-01-01','YYYY-MM-DD')+rownum/27.4) AS d1,
       1+mod(rownum,4) AS n1,
       rownum AS n2,
       rownum AS n3,
       rpad('*',100,'*') AS pad
FROM dual
CONNECT BY level <= 10000;

BEGIN
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'T'
  );
END;
/

Partition-wise GROUP BY (12.2)

The enhancements in this section are minimal. In fact, there are only two new hints: USE_PARTITION_WISE_GBY and NO_USE_PARTITION_WISE_GBY. You can use the former to enable the feature in case the query optimizer does not select it, and with the latter, you can do the opposite.
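For example, the test query used below could be hinted like this (just a sketch; the optimizer can only honor the hint when, as here, the GROUP BY columns cover the partition keys):

SELECT /*+ USE_PARTITION_WISE_GBY */ n1, d1, sum(n2)
FROM t
GROUP BY n1, d1;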

The following examples show, for the query SELECT n1, d1, sum(n2) FROM t GROUP BY n1, d1, serial/parallel execution plans with/without the partition-wise optimization. In the serial execution plans notice the placement of the HASH GROUP BY operation. In the parallel execution plans, notice the reduction of involved processes and data distribution.

serial

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  HASH GROUP BY       |      |
|   2 |   PARTITION RANGE ALL|      |
|   3 |    PARTITION LIST ALL|      |
|   4 |     TABLE ACCESS FULL| T    |
-------------------------------------

serial+pwise

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  PARTITION RANGE ALL |      |
|   2 |   PARTITION LIST ALL |      |
|   3 |    HASH GROUP BY     |      |
|   4 |     TABLE ACCESS FULL| T    |
-------------------------------------

parallel

--------------------------------------------------------------------------
| Id  | Operation                | Name     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |        |      |            |
|   1 |  PX COORDINATOR          |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY         |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY      |          |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |          |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| T        |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------

parallel+pwise

-------------------------------------------------------------------------
| Id  | Operation               | Name     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |        |      |            |
|   1 |  PX COORDINATOR         |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|          |  Q1,00 | PCWC |            |
|   4 |     HASH GROUP BY       |          |  Q1,00 | PCWP |            |
|   5 |      TABLE ACCESS FULL  | T        |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------

Partition-wise DISTINCT (12.2)

The key enhancement of this section is the ability to execute a DISTINCT as a parallel partition-wise operation. In addition, there are two new hints: USE_PARTITION_WISE_DISTINCT and NO_USE_PARTITION_WISE_DISTINCT. You can use the former to enable the feature in case the query optimizer does not select it, and with the latter, you can do the opposite.
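As before, the test query can be hinted like this (sketch):

SELECT /*+ USE_PARTITION_WISE_DISTINCT */ DISTINCT n1, d1
FROM t;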

The following examples show, for the query SELECT DISTINCT n1, d1 FROM t, serial/parallel execution plans with/without the partition-wise optimization. In the serial execution plans notice the placement of the HASH UNIQUE operation. In the parallel execution plans, notice the reduction of involved processes and data distribution.

serial

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  HASH UNIQUE         |      |
|   2 |   PARTITION RANGE ALL|      |
|   3 |    PARTITION LIST ALL|      |
|   4 |     TABLE ACCESS FULL| T    |
-------------------------------------

serial+pwise

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  PARTITION RANGE ALL |      |
|   2 |   PARTITION LIST ALL |      |
|   3 |    HASH UNIQUE       |      |
|   4 |     TABLE ACCESS FULL| T    |
-------------------------------------

parallel

--------------------------------------------------------------------------
| Id  | Operation                | Name     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |        |      |            |
|   1 |  PX COORDINATOR          |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH UNIQUE           |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000 |  Q1,00 | P->P | HASH       |
|   6 |       HASH UNIQUE        |          |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |          |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| T        |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------

parallel+pwise

-------------------------------------------------------------------------
| Id  | Operation               | Name     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |        |      |            |
|   1 |  PX COORDINATOR         |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|          |  Q1,00 | PCWC |            |
|   4 |     HASH UNIQUE         |          |  Q1,00 | PCWP |            |
|   5 |      TABLE ACCESS FULL  | T        |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------

Partition-wise Windowing Functions (18.1)

The key enhancement of this section is the ability to execute a windowing function as a parallel partition-wise operation. In addition, there are two new hints: USE_PARTITION_WISE_WIF and NO_USE_PARTITION_WISE_WIF. You can use the former to enable the feature in case the query optimizer does not select it, and with the latter, you can do the opposite.
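Again, the test query can be hinted like this (sketch):

SELECT /*+ USE_PARTITION_WISE_WIF */
       n1, d1, avg(n2) OVER (PARTITION BY n1, d1) AS average
FROM t;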

The following examples show, for the query SELECT n1, d1, avg(n2) OVER (PARTITION BY n1, d1) AS average FROM t, serial/parallel execution plans with/without the partition-wise optimization. Note that the serial execution plan without the optimization is missing: either I did not correctly understand how to use the NO_USE_PARTITION_WISE_WIF hint (as too often happens, no documentation is provided for it), or it does not work (bug?) for serial execution plans. In the parallel execution plans, notice the reduction of involved processes and data distribution.

serial+pwise

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  PARTITION RANGE ALL |      |
|   2 |   PARTITION LIST ALL |      |
|   3 |    WINDOW SORT       |      |
|   4 |     TABLE ACCESS FULL| T    |
-------------------------------------

parallel

-------------------------------------------------------------------------
| Id  | Operation               | Name     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |        |      |            |
|   1 |  PX COORDINATOR         |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    WINDOW SORT          |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE          |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH       | :TQ10000 |  Q1,00 | P->P | HASH       |
|   6 |       PX BLOCK ITERATOR |          |  Q1,00 | PCWC |            |
|   7 |        TABLE ACCESS FULL| T        |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------

parallel+pwise

-------------------------------------------------------------------------
| Id  | Operation               | Name     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |        |      |            |
|   1 |  PX COORDINATOR         |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|          |  Q1,00 | PCWC |            |
|   4 |     WINDOW SORT         |          |  Q1,00 | PCWP |            |
|   5 |      TABLE ACCESS FULL  | T        |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------

All in all, these are good features that can not only make some operations faster, but also reduce the number of processes involved when the database engine uses parallel execution.

12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users

Mike Dietrich has blogged recently about upuserxt.lst and upobjxt.lst and how to query them with external tables. The first time I saw those ‘.lst’ files – the default extension for sqlplus spool files – I wondered whether they were provided in ?/rdbms/admin on purpose, or whether they were just leftovers from some tests Oracle did before packaging the Oracle Home. Finally, I realized that they are there on purpose and that those ‘.lst’ files are important when upgrading to 12c.

I’ll look at an 18c Oracle Home (?/rdbms/admin) in the Oracle Cloud, but this applies to all 12c versions (and 18c is a 12cR2 patchset). One of the most important little features of 12c is the tagging of Oracle-supplied objects and users. Before 12c it was a nightmare to distinguish system users from application ones. I detailed that in a previous post.

At database creation: _oracle_script

In a newly created 12c database, all the objects and users belonging to the system are flagged with ORACLE_MAINTAINED=Y.

Here is an example listing system users and roles:

SQL> select listagg(username,',' on overflow truncate) within group (order by username) from dba_users where oracle_maintained='Y';
LISTAGG(USERNAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYUSERNAME)
 
ANONYMOUS,APPQOSSYS,AUDSYS,CTXSYS,DBSFWUSER,DBSNMP,DIP,DVF,DVSYS,GGSYS,GSMADMIN_INTERNAL,GSMCATUSER,GSMUSER,LBACSYS,MDDATA,MDSYS,OJVMSYS,OLAPSYS,ORACLE_OCM,ORDDATA,ORDPLUGINS,ORDSYS,OUTLN,REMOTE_SCHEDULER_AGENT,SI_INFORMTN_SCHEMA,SYS,SYS$UMF,SYSBACKUP,SYSDG,SYSKM,SYSRAC,SYSTEM,WMSYS,XDB,XS$NULL
 
SQL> select listagg(role,',' on overflow truncate) within group (order by role) from dba_roles where oracle_maintained='Y';
LISTAGG(ROLE,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYROLE)
 
ADM_PARALLEL_EXECUTE_TASK,APPLICATION_TRACE_VIEWER,AQ_ADMINISTRATOR_ROLE,AQ_USER_ROLE,AUDIT_ADMIN,AUDIT_VIEWER,AUTHENTICATEDUSER,CAPTURE_ADMIN,CDB_DBA,CONNECT,CTXAPP,DATAPATCH_ROLE,DATAPUMP_EXP_FULL_DATABASE,DATAPUMP_IMP_FULL_DATABASE,DBA,DBFS_ROLE,DBJAVASCRIPT,DBMS_MDX_INTERNAL,DV_ACCTMGR,DV_ADMIN,DV_AUDIT_CLEANUP,DV_DATAPUMP_NETWORK_LINK,DV_GOLDENGATE_ADMIN,DV_GOLDENGATE_REDO_ACCESS,DV_MONITOR,DV_OWNER,DV_PATCH_ADMIN,DV_POLICY_OWNER,DV_PUBLIC,DV_REALM_OWNER,DV_REALM_RESOURCE,DV_SECANALYST,DV_STREAMS_ADMIN,DV_XSTREAM_ADMIN,EJBCLIENT,EM_EXPRESS_ALL,EM_EXPRESS_BASIC,EXECUTE_CATALOG_ROLE,EXP_FULL_DATABASE,GATHER_SYSTEM_STATISTICS,GDS_CATALOG_SELECT,GGSYS_ROLE,GLOBAL_AQ_USER_ROLE,GSMADMIN_ROLE,GSMUSER_ROLE,GSM_POOLADMIN_ROLE,HS_ADMIN_EXECUTE_ROLE,HS_ADMIN_ROLE,HS_ADMIN_SELECT_ROLE,IMP_FULL_DATABASE,JAVADEBUGPRIV,JAVAIDPRIV,JAVASYSPRIV,JAVAUSERPRIV,JAVA_ADMIN,JMXSERVER,LBAC_DBA,LOGSTDBY_ADMINISTRATOR,OEM_ADVISOR,OEM_MONITOR,OLAP_DBA,OLAP_USER,OLAP_XS_ADMIN,OPTIMIZER_PROCESSING_RATE,ORDADMIN,PDB_DBA,PROVISIONER,RDFCTX_ADMIN,RECOVERY_CATALOG_OWNER,RECOVERY_CATALOG_OWNER_VPD,RECOVERY_CATALOG_USER,RESOURCE,SCHEDULER_ADMIN,SELECT_CATALOG_ROLE,SODA_APP,SYSUMF_ROLE,WM_ADMIN_ROLE,XDBADMIN,XDB_SET_INVOKER,XDB_WEBSERVICES,XDB_WEBSERVICES_OVER_HTTP,XDB_WEBSERVICES_WITH_PUBLIC,XS_CACHE_ADMIN,XS_CONNECT,XS_NAMESPACE_ADMIN,XS_SESSION_ADMIN

And here is an example listing the owners of system objects flagged with ORACLE_MAINTAINED=Y:

SQL> select listagg(num||' '||owner,',' on overflow truncate) within group (order by num) from (select owner,count(*) num from dba_objects where oracle_maintained='Y' group by owner);
LISTAGG(NUM||''||OWNER,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYNUM)
 
6 APPQOSSYS,8 DBSFWUSER,8 ORACLE_OCM,8 SI_INFORMTN_SCHEMA,10 ORDPLUGINS,10 OUTLN,13 REMOTE_SCHEDULER_AGENT,22 DVF,24 OJVMSYS,25 OLAPSYS,35 AUDSYS,55 DBSNMP,209 GSMADMIN_INTERNAL,239 LBACSYS,292 ORDDATA,398 DVSYS,399 WMSYS,412 CTXSYS,466 SYSTEM,1029 XDB,2574 MDSYS,3171 ORDSYS,12173 PUBLIC,51069 SYS

How is this done? That’s easy. All system objects are created by Oracle scripts, such as those called by catalog.sql and catproc.sql during database creation. Those scripts set “_oracle_script”=true before running the DDL, and all objects created while “_oracle_script”=true are flagged as Oracle Maintained.

If, in a lab (not in prod), you create your own objects in the same way, they will also be flagged as Oracle Maintained:

SQL> connect / as sysdba
Connected.
SQL> alter session set "_oracle_script"=true;
Session altered.
 
SQL> create user FRANCK identified by myself;
User FRANCK created.
SQL> alter user FRANCK quota unlimited on users;
User FRANCK altered.
 
SQL> create table FRANCK.DEMO(n primary key) as select 1 from dual;
Table FRANCK.DEMO created.
SQL> alter session set "_oracle_script"=false;
Session altered.
 
SQL> select username,oracle_maintained from dba_users where username='FRANCK';
 
USERNAME ORACLE_MAINTAINED
-------- -----------------
FRANCK   Y
SQL> select owner,object_name,object_type,oracle_maintained from dba_objects where owner='FRANCK';
 
OWNER  OBJECT_NAME OBJECT_TYPE ORACLE_MAINTAINED
------ ----------- ----------- -----------------
FRANCK DEMO        TABLE       Y
FRANCK SYS_C007409 INDEX       Y

So, this one is easy. Database creation runs with “_oracle_script”=true and objects and users created when this parameter is set to true are flagged as Oracle Supplied objects.

And during upgrades?

When you upgrade from 11g to 12c you don’t have this Oracle Maintained information. The catupgrd.sql script runs with “_oracle_script”=true, but it does not create all objects. However, the upgraded database ends up with all system objects flagged as Oracle Maintained. This is where upuserxt.lst and upobjxt.lst are used.

When Oracle developers build a new Oracle Home to be shipped, they create a database (including all options, I suppose) and then run the utlupox.sql script. This script lists all Oracle Maintained users and objects, simply relying on the flag that was set during creation, and spools them to the upuserxt.lst and upobjxt.lst files. Those files are then shipped in the Oracle Home (all of this under ?/rdbms/admin).

These .lst files are used when upgrading from pre-12c in order to set the flags for Oracle Maintained objects. The external tables SYS.USERXT (on upuserxt.lst) and SYS.OBJXT (on upobjxt.lst) are created by catupcox.sql and, finally, those tables are read by catuposb.sql to set the Oracle Maintained flag in USER$ and OBJ$. In reality catuposb.sql is a bit more complex than that, because some objects can have different names depending on how the database was created.

Note that this information about Oracle Maintained objects, in addition to being very useful for us, is crucial when you later convert a non-CDB to a PDB, because those objects will become metadata links.

 

Cet article 12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users est apparu en premier sur Blog dbi services.

Time for #GLOC, #SQLSatDallas, #DataSummit18

The next nine days, I’m traveling to three cities for four events. We’ll just call this the 9-3-4 gauntlet of speaker life. I booked this travel as four one-way flights to get the itinerary I needed to make the most of my schedule, and I will have breaks between each event to make sure I don’t kill myself during my last two weeks at Delphix.

GLOC

Today I’m heading to the Great Lakes Oracle Conference, (https://gloc.neooug.org/) where I get to present on DevOps and hang out with all my Oracle peeps. GLOC is the second largest regional Oracle user group event and closing fast on my region’s event, RMOUG Training Days. They’ve done a great job embracing APEX into their program in the last year and I’m looking forward to being in Cleveland, a city that’s really starting to come into its own since the olden days, when the river caught on fire….(yeah, I went there.)

SQL Saturday Dallas

I’ll depart early on Thursday morning to arrive in Dallas, TX in time for the second event, the Dallas SQL User Group’s Speaker Idol, (https://www.meetup.com/North-Texas-SQL-Server-User-Group/events/250612710/). I’ll be a judge, scoring and offering constructive, (as well as insightful) feedback to speakers at this event. I’ve promised to be kind, so don’t be afraid: come and speak! I attended my first Speaker Idol at Summit 2017 and really enjoyed the presentations, the quality of constructive advice to the speakers and the opportunity to receive that type of feedback.

Dallas happens to be home to a number of great people in the MSSQL community and I’ll get to hang out with Mindy Cornett, Amy Herald, Jen and Sean McGown, who reside in the area, along with staying at an AirBnB with Tracy Borrgiano. I also have a pair of gorgeous Dr. Martens oxfords for Ms. Angela Tidwell, too. Yes, they fit in my 20in suitcase with 9 days’ worth of my own belongings.

I’ll be staying the weekend, as I’m then speaking at SQL Saturday Dallas, (http://www.sqlsaturday.com/734/eventhome.aspx) talking on DevOps, but this time to the MSSQL community. I’ll have Sunday to relax, (could have gone home, but too late now!) maybe hang out with friends, but definitely check out the area. Jessica Sharp and Mary Elizabeth McNeeley will be in attendance, so it will be great to see everyone while I’m in Dallas.

Data Summit 2018

Monday I head back up north to Boston for DBTA’s Data Summit, ( http://www.dbta.com/DataSummit/2018/Default.aspx) conference. This time I’m speaking on DevOps with Big Data, focused on DataOps or in other words, “You can’t do anything if you can’t get your data along with it.” I’m kind of bummed I didn’t plan more time to visit in Boston, as it’s one of my favorite historical spots to invest time in. As some know, I’m a history buff and it just doesn’t seem right to not visit some of the awesome historical sites in Boston, but hell, after 9 days on the road, I’m going to be more than ready to head home!

I’ll be one of the first on the plane next Thursday so I can get home and spend the next number of days working on my latest “hobby”- the trailer. I have some trim to put down for the new floors I installed this last weekend and painting to do. After that, it’s move in time! Stay tuned…




Getting started with #Exasol

One nice and easy way to make yourself familiar with Exasol – the leading In-Memory Analytic Database – is the Community Edition. It’s free and can be downloaded here as a virtual machine running on VirtualBox.

A good description how to install the Community Edition can be found here.

There’s an Exasol SQL client called EXAplus. You can use it as a GUI, in which case it looks like this:

[Screenshot: EXAplus GUI]

A command-line version of EXAplus is also available. I will use it in my subsequent articles because it works better for that purpose than pictures of the GUI – you will be able to copy & paste commands from the articles, for example. If you install the Community Edition on a Windows host like I did, you get to the command-line EXAplus this way: open a cmd shell, then

C:\>cd \Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus

C:\Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus>exaplusx64 -help

That gives you a basic help for the EXAplus CLI. I connect now as superuser to my Community Edition:

C:\Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus>exaplusx64 -c 192.168.56.101:8563 -u sys -p exasol -lang EN
EXAplus 6.0.8 (c) EXASOL AG

Tuesday, May 15, 2018 2:13:50 PM CEST
Connected to database EXAone as user sys.
EXASolution 6.0.8 (c) EXASOL AG

SQL_EXA>

For those of you who are used to SQL*Plus, EXAplus will appear quite familiar:

SQL_EXA> col column_name for a40;
COLUMN   column_name ON
FORMAT   a40
SQL_EXA> desc exa_user_users;
EXA: desc exa_user_users;

COLUMN_NAME                              SQL_TYPE                                 NULLABLE DISTRIBUTION_KEY
---------------------------------------- ---------------------------------------- -------- ----------------
USER_NAME                                VARCHAR(128) UTF8
CREATED                                  TIMESTAMP
USER_PRIORITY                            VARCHAR(128) UTF8
USER_COMMENT                             VARCHAR(2000) UTF8

4 rows in resultset.

SQL_EXA> @c:/blogpostings/whoami;
EXA: select user_name from exa_user_users;

USER_NAME
--------------------------------------------------------------------------------------------------------------------------------
SYS

1 row in resultset.

I placed a file named whoami.sql in c:/blogpostings containing the SELECT command. Notice the mandatory ; at the end of each command.
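For reference, the whoami.sql file contains just the single statement echoed above:

select user_name from exa_user_users;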

Oh, and CURSOR UP and CURSOR DOWN scroll you through the history of commands out of the box.

ADWC – connect from your premises

In the previous post about the Autonomous Data Warehouse Service, I ran queries through the Machine Learning Notebooks. But you obviously want to connect to it from your premises, with SQL*Net.

Of course the connection, going through the public internet, must be secured. If you already use a managed service like the Oracle Exadata Express Cloud Service, you already know what to do: download a .zip containing the connection string, plus the wallet and certificate for SQL*Net encryption.

You get it from the Service Console, logged in as the ADMIN user, under the Administration tab. The Download Client Credentials button asks you for a wallet password. However, this is not a password that protects the .zip file, and the .zip file contains an auto-login wallet, so keep it secured.

SQL Developer

The simplest use of this file is with SQL Developer, because you don’t even have to unzip it. Just choose the ‘Cloud PDB’ connection type, enter the path of the .zip file as the Configuration File, the password as the Keystore Password, and the ADMIN user (or any user you have created through the Oracle ML Users administration).

In the tnsnames.ora provided in the .zip file there are 3 network service names connecting to 3 different services: _low, _medium and _high. They map to the resource manager plan so that you can run your queries with different priorities.

SQLcl thin

With SQLcl you do not need to unzip the credentials file, at least when you are using thin JDBC (the default).
You just register it with:

18:53:12 SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
Using temp directory:/tmp/oracle_cloud_config4174171941677611695

and you are ready to connect to the _low, _medium and _high services.

As you can see, it unzips the file into a temporary directory, so you have to do this each time you run SQLcl. You can add the command to login.sql, and you may want to add some housekeeping because the temporary directory may remain afterwards. Or run all this in a docker container.
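A minimal login.sql entry for this could look as follows (the .zip path is the one from my example; adjust it to where you keep your wallet):

-- register the ADWC credentials at each SQLcl startup
set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip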

This is simple, at least if you are running the latest Java 8 which includes the Java Cryptography Extension (JCE). If it is not the case, as when you use the Java Home provided with 18c (1.8.0_152), you have to add the jars yourself. But don’t worry, all is explained:

SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
***** JCE NOT INSTALLED ****
***** CAN NOT CONNECT TO PDB Service without it ****
Current Java: /u01/app/oracle/product/18.0.0/dbhome_1/jdk/jre
Follow instructions on http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-21...
SQL>

Of course the alternative is to install the latest Java

$ sudo rpm -i jre-8u171-linux-x64.rpm
Unpacking JAR files...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

and set JAVA_HOME to it before starting SQLcl

$ export JAVA_HOME=/usr/java/jre1.8.0_171-amd64
$ SQLPATH=~/sql bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql /nolog

Anyway, in all cases, once the credential .zip is provided you can connect with Thin JDBC (the default) with a user/password that has been created in the ADWC:

SQL> connect admin@adwc_high
Password? (**********?) ****************
AArray = [B@24959ca4
AArray = [B@10289886
AArray = [B@32115b28
AArray = [B@2ad48653
Connected.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
6 esj1pod6 12.2.0.1.0 13-MAY-18 OPEN YES 6 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE RAC

What’s in the .zip

The tnsnames.ora has entries for the low, medium, high services.

adwc_high = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_low = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_low.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_medium = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_medium.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )

The sqlnet.ora mentions the wallet used for SQL*Net encryption:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

Note that the directory is an absolute path, and you probably want to change it to your TNS_ADMIN directory, where you unzipped the file.
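For example, if you unzipped the credentials into the directory used in the examples below, the edited sqlnet.ora could look like this (the path is from my environment; use your own):

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/media/sf_share/ADWC/wallet_ADWC")))
SSL_SERVER_DN_MATCH=yes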

In the wallet location, you find the ewallet.p12, which contains the certificate and private keys and is protected by the password, and the cwallet.sso auto-login wallet, which does not need the password to be opened – so protect both with file permissions.

You also find keystore.jks, which contains the self-signed certificate in JKS truststore format and is referenced from the ojdbc.properties file:

oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))

Once you have unzipped the credentials, you can use them to connect with OCI.

SQL*Plus or SQLcl -oci

If you want to connect with an OCI client, you have to unzip this file to your TNS_ADMIN directory. That can be $ORACLE_HOME/network/admin (but be careful, you may overwrite existing files), or it can be a new directory that you point to by setting the TNS_ADMIN environment variable (or registry entry).

Here are some examples where I set TNS_ADMIN to the directory where I unzipped the credentials:

TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlcl -oci /nolog
TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlplus /nolog

Any application using OCI (the oracle client, which can be the InstantClient or a full database installation) can use this without providing any password.

Database Link

With the unzipped credentials you can connect through OCI, which means that you can also have a database link to the ADWC database. The credentials must be unzipped (or merged) into the TNS_ADMIN (or default ?/network/admin) directory of the instance:

SQL> create database link ADWC connect to ADMIN identified by "Ach1z0#dAch1z0#d" using 'adwc_high';
Database link created.
 
SQL> select banner from v$version@ADWC;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Client Credential password

When downloading the .zip you are asked for a password to ‘protect this file to prevent unauthorized database access’. But that is misleading. The .zip is not password protected: anyone can open it. And it contains an auto-login wallet, so anybody can use it to reach the database. Of course, you cannot connect without a user/password that has the create session privilege, but the wallet gives access for user credential verification attempts.

So what is this password used for? We have seen that SQL Developer needs it (or you will get a failure such as java.io.IOException: Keystore was tampered with, or password was incorrect). You may therefore remove the .sso auto-login wallet from the .zip file when it is used only by SQL Developer. But then you have to think about where the password is stored in SQL Developer. Is it more secure than the .sso?
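If you go that route, something like this would strip the auto-login wallet from a copy of the file (cwallet.sso is the auto-login wallet named above; check the entry names in your own .zip first):

$ zip -d wallet_ADWC.zip cwallet.sso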

As long as the auto-login wallet is there, you do not need to store the wallet password. But of course, you must protect the credential files.

 

Cet article ADWC – connect from your premises est apparu en premier sur Blog dbi services.

From file names to directory hierarchy

I had a fun request come in from a colleague the other day.  They had a simple list of fully qualified file names and they needed to present that data in the familiar hierarchical tree layout. 

To demonstrate, I took a little trip down memory lane and grabbed a subset of presentations I’ve done over the years.


SQL> create table t ( fname varchar2(1000));

Table created.

SQL>
SQL> insert into t values ('C:\Users\Connor\Presentations\2002\scene_200205.pdf');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\scene_200205a.pdf');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\254old.ppt');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\254_full.ppt');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\257.ppt');

1 row created.
...
...
...

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\london\optimizer.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\london\optimizer_full.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\odtug\common_disasters_short.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\odtug\common_disasters_short_comp.pptx');

1 row created.

SQL> select count(*) from t;

  COUNT(*)
----------
       634

1 row selected.

So the challenge was to present this list of files in a nice tree format which mirrored my directory structure.

(Truth be told, my preference would be that the query did a better job than I do of organizing my files, but that is a blog post for another day.)

Step 1 was to use a little LATERAL trickery to parse out all of the elements of the file names into one row per “element”, where an “element” is an unqualified file name or a single-level folder name.


SQL> select id, seq, token
  2  from
  3    ( select rownum id, fname from t ) t_base,
  4    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  5            from dual
  6            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  7            );

        ID        SEQ TOKEN
---------- ---------- ------------------------------------------------------------------------------------
         1          1 C:
         1          2 Users
         1          3 Connor
         1          4 Presentations
         1          5 2002
         1          6 scene_200205.pdf
         2          1 C:
         2          2 Users
         2          3 Connor
         2          4 Presentations
         2          5 2002
         2          6 scene_200205a.pdf
         3          1 C:
         3          2 Users
         3          3 Connor
         3          4 Presentations
         3          5 2002
         3          6 254old.ppt
         4          1 C:
         4          2 Users
         4          3 Connor
         4          4 Presentations
         4          5 2002
         4          6 254_full.ppt
         ...
         ...

We can use that as input to a standard LAG function to associate each file/folder with its parent, using the SEQ column to provide sequencing within each ID


SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  )
 11  select
 12      id,
 13      seq,
 14      token,
 15      lag(token) over ( partition by id order by seq ) as parent
 16  from data
 17  order by id, seq desc;

        ID        SEQ TOKEN                                              PARENT
---------- ---------- -------------------------------------------------- -----------------------------------
         1          6 scene_200205.pdf                                   2002
         1          5 2002                                               Presentations
         1          4 Presentations                                      Connor
         1          3 Connor                                             Users
         1          2 Users                                              C:
         1          1 C:
         2          6 scene_200205a.pdf                                  2002
         2          5 2002                                               Presentations
         2          4 Presentations                                      Connor
         2          3 Connor                                             Users
         2          2 Users                                              C:
         2          1 C:
         3          6 254old.ppt                                         2002
         3          5 2002                                               Presentations
         3          4 Presentations                                      Connor
         3          3 Connor                                             Users
         3          2 Users                                              C:
        ...
        ...

That’s looking good, but we have duplicates, in the sense that multiple files roll up to a single folder (e.g. the “C:” root is repeated for every file). So we DISTINCT it out to leave just the unique child/parent relationships we need.
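To see why the duplicates arise, consider a hypothetical pair of files that live under the same folders: each file contributes an identical (TOKEN, PARENT) pair, and DISTINCT keeps each such edge exactly once.

SQL> with pairs as (
  2    select 'Users' token, 'C:' parent from dual union all  -- from file 1
  3    select 'Users', 'C:' from dual                         -- from file 2
  4  )
  5  select distinct token, parent from pairs;

TOKEN PARENT
----- ------
Users C: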


SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  )
 11  select
 12    distinct
 13      token,
 14      lag(token) over ( partition by id order by seq ) as parent
 15  from data
 16  /

TOKEN                                              PARENT
-------------------------------------------------- --------------------------------------------------
C:
254old.ppt                                         2002
257.ppt                                            2002
plus2.ppt                                          2002
254.ppt                                            UKOUG2002_final
error_handling.pdf                                 2003
2004                                               Presentations
mcdonald1.pdf                                      2004
sun_storage_brief.ppt                              sun_storage_presn
ausoug05_memory_x.ppt                              2005
jul05_read_consistency.ppt                         2005
saoug_oracle_home.ppt                              2005
plsql_80_a58236.pdf                                sql_plsql_evo
jun07_tas_moving_data_short.ppt                    misc
ausoug_adel                                        2008
successful_development.pdf                         ausoug_adel
apr08_11g_developers.ppt                           ausoug_apr
apr08_11g_developers_print.ppt                     ausoug_apr
mcdonald_oct08_flashback.pdf                       perth
oct08_11g_developers.ppt                           template
nov08_read_consistency.pdf                         rwwa
perf101b_rwwa.pptx                                 rwwa
sqlectric_melb                                     2009
sql_electric2.pdf                                  sqlectric_melb
sql_electric1.pptx                                 sqlectric_melb
ukoug                                              2009
...
...
...

And we’re pretty much done. At this point we have the raw data we need in child/parent form, which we can traverse with a standard hierarchical CONNECT BY query. (Note the join is purely on names, so this assumes no two folders share the same name.) I’ll use LPAD to indent each entry and show the hierarchy relationships.
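For anyone new to hierarchical queries, here is a minimal sketch on a hypothetical three-row child/parent table showing what the START WITH / CONNECT BY PRIOR traversal and the LPAD indenting do:

SQL> with hier as (                    -- hypothetical tree
  2    select 'C:' token, null parent from dual union all
  3    select 'tmp', 'C:' from dual union all
  4    select 'demo.txt', 'tmp' from dual
  5  )
  6  select lpad(' ',level*2)||token hier_list
  7  from hier
  8  start with parent is null
  9  connect by prior token = parent
 10  order siblings by token;

HIER_LIST
--------------------
  C:
    tmp
      demo.txt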


SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  ),
 11  hier as(
 12    select
 13      distinct
 14        token,
 15        lag(token) over ( partition by id order by seq ) as parent
 16    from data
 17  )
 18  select lpad(' ',level*2)||token hier_list
 19  from hier
 20  start with parent is null
 21  connect by prior token = parent
 22  order siblings by token;

HIER_LIST
--------------------------------------------------------------------------------------------------------
  C:
    Users
      Connor
        Presentations
          2002
            254_full.ppt
            254old.ppt
            257.ppt
            UKOUG2002_final
              254.ppt
              257.ppt
              prelim.ppt
            dec_9i_features2.ppt
            dec_legends.ppt
            dec_legends_prelim.ppt
            jul_sig.ppt
            jul_sig_printout.ppt
            june_sig.ppt
            logo.ppt
            march_sig.ppt
            march_sig2.ppt
            plaus.ppt
            plus2.ppt
            scene_200205.pdf
            scene_200205a.pdf
            sep_unix_sig.ppt
            sep_unix_sig2.ppt
          2003
            254.ppt
            abm_succesful_development.ppt
            apr03_perth.ppt
            dec03_perth_9i_nf.ppt
            dec03_perth_9i_nf_popquiz.pdf
            dec03_perth_9i_nf_popquiz.ppt
            error_handling.pdf
            error_handling.ppt
            ims_performance_examples.pdf
            ims_performance_examples.ppt
            sep03_perth_9i_hidden.pdf
            sep03_perth_9i_hidden.ppt
            sep03_perth_9i_nf.ppt
            sep03_perth_legends.pdf
            sep03_perth_legends.ppt
            sep03_perth_prelim.ppt
            sf_oow_03.pdf
            sf_oow_03.ppt
            sf_oow_03a.ppt
            slide.ppt
            succesful_development.pdf
            succesful_development.ppt
            tools_for_performance.pdf
            tools_for_performance.ppt
          2004
            10046_scene.pdf
            bind_var.pdf
            bind_var.ppt
            dec03_perth_book.ppt
            dec03_perth_book2.ppt
            generating_test_data.pdf
            generating_test_data.ppt
            mar04_hotsos_legends.ppt
            mar04_hotsos_nf_w2k.ppt
            mar04_hotsos_nf_w2k_edit.ppt
            mcdonald1.pdf
            mcdonald2.pdf
            nov04_reorgs_rebuild.ppt
            nov04_succesful_development.ppt
            
            ...
            ...
            ...
            
          2018
            apex_connect
              apex_connect_sql_plsql.pdf
              apex_connect_sql_plsql.pptx
              apex_connect_sql_plsql_v2.pptm
              apex_connect_sql_plsql_v2.pptx
              apex_connect_sql_plsql_v3.pptm
              apex_connect_sql_plsql_v3_subset.pptm
              extensions1.pptx
              extensions2.pptx
            code_china
              better_sql_oracle_code.pptx
              better_sql_oracle_code_v2.pdf
              better_sql_oracle_code_v2.pptx
            code_hyderabad
              better_sql_oracle_code.pptx
            hong_kong
              hong_kong_122.pdf
              hong_kong_122.pptx
              hong_kong_122_no_multi.pptx
              hong_kong_partitioning_ora_template.pptx
              hong_kong_partitioning_ora_template2.pdf
              hong_kong_partitioning_ora_template2.pptx
            london
              london.pdf
              london.pptx
              optimizer.pdf
              optimizer.pptx
              optimizer_full.pptx
            odtug
              common_disasters_short.pptx
              common_disasters_short_comp.pptx      

And there we have it: from a flat file listing to a nice hierarchical layout, just by tackling the problem piece by piece with the trusty WITH clause.