Oakies Blog Aggregator

How to do EXPORT / IMPORT between #Exasol and #Oracle

The commands IMPORT and EXPORT provide an easy way to transfer data between Exasol and other data sources like an Oracle database. You may at first get this error message: Oracle instant client not available. Please ask your administrator to install it via EXAoperation.

Here’s how to do that yourself without having to ask your admin.
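
Once the instant client is installed, the transfer itself is short. Here is a minimal sketch (the connection string, credentials, schema and table names are invented for illustration):

-- create a named connection to the Oracle database
CREATE CONNECTION ora_db
  TO '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orahost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))'
  USER 'scott' IDENTIFIED BY 'tiger';

-- pull an Oracle table into an Exasol table
IMPORT INTO exa_schema.emp FROM ORA AT ora_db TABLE scott.emp;

-- push an Exasol table back into Oracle
EXPORT exa_schema.emp INTO ORA AT ora_db TABLE scott.emp_copy;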

Managing Cost-Based Optimizer Statistics for PeopleSoft

I gave this presentation at the UKOUG PeopleSoft Roadshow 2018.

PeopleSoft presents some special challenges when it comes to collecting and maintaining the object statistics used by the cost-based optimizer.

I have previously written and blogged on this subject.  This presentation focuses exclusively on the Oracle database and draws together the various concepts into a single consistent picture.  It makes clear recommendations for Oracle 12c that will help you work with the cost-based optimizer, rather than continually fight against it.

It looks at collecting statistics for permanent and temporary working storage tables and considers some other factors that can affect optimizer statistics.

This presentation also discusses PSCBO_STATS, which is going to be shipped with PeopleTools, and compares and contrasts it with GFCPSSTATS11.

Truncate upgrade

Connor McDonald produced a tweet yesterday linking to a short video he’d created about an enhancement to the truncate command in 12c. If you have referential integrity declared between a parent and child table then in 12c you can truncate the parent table and Oracle will truncate the child table for you – rather than raising an error. The feature requires the foreign key constraint to be declared “on delete cascade” – which is an option that I don’t see used very often. Unfortunately if you try to change an existing foreign key constraint to meet this requirement you’ll find that you can’t (yet) use the “alter table modify constraint” to make the necessary change. As Connor pointed out, you’ll have to drop and recreate the constraint – which leaves you open to bad data getting into the system or an outage while you do the drop and recreate.

If you don’t want to stop the system but don’t want to leave even a tiny window for bad data to arrive here’s a way to do it. In summary:

  1. Add a virtual column to the child table “cloning” the original foreign key column
  2. Create an index on the  virtual column (if you have concerns about “foreign key locking”)
  3. Add a foreign key constraint based on the virtual column
  4. Drop the old foreign key constraint
  5. Recreate the old foreign key constraint “on delete cascade”
  6. Drop the virtual column

Here’s some sample SQL:


rem
rem	Script:		122_truncate_workaround.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Jun 2018
rem	Purpose:	
rem
rem	Last tested 
rem		18.1.0.0	via LiveSQL
rem		12.2.0.1
rem		12.1.0.2

drop table child;
drop table parent;

create table parent (
	p number,
	constraint p_pk primary key(p)
);

create table child (
	c	number,
	p	number,
	constraint c_pk primary key(c),
	constraint c_fk_p foreign key (p) references parent
);

create index c_fk_p on child(p);

insert into parent values(1);
insert into child values(1,1);

commit;

prompt	==========================================================================
prompt	Truncate  should fail with
prompt	ORA-02266: unique/primary keys in table referenced by enabled foreign keys
prompt	==========================================================================

truncate table parent;

alter table child add (
	pv generated always as (p+0) virtual
)
;

create index c_ipv on child(pv) online;

alter table child add constraint c_fk_pv
	foreign key (pv)
	references parent(p)
	on delete cascade
	enable novalidate
;
alter table child modify constraint c_fk_pv validate;

alter table child drop constraint c_fk_p;

prompt	===================================================================================
prompt	Illegal insert (first 1) should fail with
prompt	ORA-02291: integrity constraint (TEST_USER.C_FK_PV) violated - parent key not found
prompt	===================================================================================

insert into child (c,p) values(2,2);
insert into child (c,p) values(2,1);
commit;

alter table child add constraint c_fk_p
	foreign key (p)
	references parent(p)
	on delete cascade
	enable novalidate
;

alter table child modify constraint c_fk_p validate;

prompt	===================================================
prompt	Dropping the virtual column results in Oracle
prompt	dropping the index and constraint at the same time
prompt	===================================================

alter table child drop column pv;

The overhead of this strategy is significant – I’ve created an index (which you may not need, or want, to do) in anticipation of a possible “foreign key locking” issue – and I’ve used the online option to avoid locking the table while the index is created which means Oracle has to use a tablescan to acquire the data. I’ve enabled a new constraint without validation (which takes a brief lock on the table) then validated it (which doesn’t lock the table but could do a lot of work). Then I’ve dropped the old constraint and recreated it using the same novalidate/validate method to minimise locking time. If I were prepared simply to drop and recreate the original foreign key I wouldn’t need to create that index and I’d only do one validation pass rather than two.

 

PGIO, PG_STAT_ACTIVITY and PG_STAT_STATEMENTS

For an Oracle DBA, we are used to joining active sessions (from V$SESSION where status='ACTIVE') with active statements (from V$SQL where users_executing>0) on the SQL_ID. V$ACTIVE_SESSION_HISTORY also displays the TOP_LEVEL_SQL_ID to get the entry point of the user call if we need it. With Postgres it is a bit more difficult because it seems that PG_STAT_ACTIVITY does not show the active statement but only the top-level one, while pg_stat_statements collects statistics for the final statements.
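
For reference, here is a rough sketch of that Oracle-side join (trimmed to a few columns):

select s.sid, s.sql_id, s.event, q.sql_text
from   v$session s
join   v$sql q
  on   q.sql_id = s.sql_id
 and   q.child_number = s.sql_child_number
where  s.status = 'ACTIVE'
  and  q.users_executing > 0;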

Here is an example where I’ve run pgio (the SLOB method for Postgres, by Kevin Closson) on a Bitnami Postgres Compute service on the Oracle Cloud. pgio runs all statements from a PL/pgSQL function, MYPGIO, which runs the SELECT statements and some UPDATE statements depending on UPDATE_PCT. Most of the time is spent in those statements and very little in the PL/pgSQL itself, which is the goal of pgio – measuring logical and physical I/O without the noise of other application components.

I have added the pg_stat_statements extension by installing postgresql10-contrib and setting the following in postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

I’ve reset the statistics before running pgio:

select pg_stat_reset();
select pg_stat_statements_reset();

Then I’ve run runit.sh with 4 schemas and 20 threads.

pg_stat_activity

Here is what I can see in PG_STAT_ACTIVITY when it is running:

postgres=# select substr(query,1,100) query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat_activity where datid=17021;
 
query | backend_type | application_name | query_start | wait_event_type | wait_event | state
------------------------------------------------------------------------------------------------------+-------------------+------------------------+-------------------------------+-----------------+---------------+--------
select query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat | client backend | PostgreSQL JDBC Driver | 2018-06-27 20:15:11.578023+00 | Client | ClientRead | idle
autovacuum: VACUUM ANALYZE public.pgio3 | autovacuum worker | | 2018-06-27 20:01:19.785971+00 | | | active
autovacuum: VACUUM ANALYZE public.pgio4 | autovacuum worker | | 2018-06-27 20:02:19.811009+00 | | | active
autovacuum: VACUUM ANALYZE public.pgio1 | autovacuum worker | | 2018-06-27 20:08:19.864763+00 | | | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.915761+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.925117+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.934903+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.931038+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.9269+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.923288+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.95252+00 | LWLock | WALWriteLock | active
SELECT * FROM mypgio('pgio3', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.96237+00 | | | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.951347+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.962725+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.985567+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.98943+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.976483+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:56.032111+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.998013+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:56.043566+00 | Lock | transactionid | active
(20 rows)

My client sessions are active on the MYPGIO function. This is the top-level statement, which runs some SELECT and UPDATE statements, and there is only a very small chance of getting samples where the session is active on the controller procedural code rather than on one of those statements. However, that’s the only thing I can see here.

pg_stat_statements

Now looking at PG_STAT_STATEMENTS:

postgres=# create extension pg_stat_statements;
CREATE EXTENSION
 
postgres=# select substr(query,1,100) query,calls,total_time/1e6 seconds,total_time/calls/1e6 "sec./call",rows from pg_stat_statements where dbid=17021 order by total_time desc;
query | calls | seconds | sec./call | rows
------------------------------------------------------------------------------------------------------+-------+---------------------+----------------------+-------
UPDATE pgio1 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2081 | 0.583796520366999 | 0.000280536530690533 | 18729
UPDATE pgio4 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2076 | 0.565067778457 | 0.000272190644728805 | 18684
UPDATE pgio2 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2229 | 0.412313896512 | 0.000184977073356662 | 20061
SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN $1 AND $2 | 19199 | 0.317597650247998 | 1.65424058673888e-05 | 19199
UPDATE pgio3 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2202 | 0.304893281095001 | 0.000138461980515441 | 19818
SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN $1 AND $2 | 19500 | 0.29686205481 | 1.52236951184615e-05 | 19500
SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN $1 AND $2 | 17873 | 0.285481468151 | 1.59727783892464e-05 | 17873
SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN $1 AND $2 | 17889 | 0.273855678533 | 1.53086074421711e-05 | 17889
SELECT pgio_get_random_number($6, v_scale - v_select_batch_size) | 83129 | 0.00134782461999997 | 1.62136513130192e-08 | 83129
select blks_read from pg_stat_database where datname = $1 | 1 | 1.0456142e-05 | 1.0456142e-05 | 1
create table pgio_audit_table ( c1 numeric , c2 numeric, c3 numeric , c4 numeric, c5 numeric, c6 | 1 | 7.636071e-06 | 7.636071e-06 | 0
CREATE OR REPLACE FUNCTION mypgio( +| 1 | 6.425742e-06 | 6.425742e-06 | 0
v_mytab varchar, +| | | |
v_pctupd int, +| | | |
v_runtime_secs bigint, +| | | |
v_scal | | | |
select substr(query,$1,$2) query,calls,total_time/$3 seconds,total_time/calls/$4 "sec./call",rows fr | 9 | 5.017752e-06 | 5.57528e-07 | 139
CREATE OR REPLACE FUNCTION pgio_get_random_number(BIGINT, BIGINT) RETURNS BIGINT AS $$ +| 1 | 4.217947e-06 | 4.217947e-06 | 0
DECLARE +| | | |
v | | | |
drop table pgio_audit_table | 1 | 3.337471e-06 | 3.337471e-06 | 0
DROP TYPE pgio_return CASCADE | 1 | 2.451694e-06 | 2.451694e-06 | 0
CREATE TYPE pgio_return AS ( +| 1 | 1.746512e-06 | 1.746512e-06 | 0
mypid int, +| | | |
loop_iterations bigint , +| | | |
sql_selects bigint, +| | | |
sql_updates | | | |
select query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat | 3 | 1.312499e-06 | 4.37499666666667e-07 | 172
SELECT pg_backend_pid() | 80 | 6.10539e-07 | 7.6317375e-09 | 80
select pg_stat_statements_reset() | 1 | 1.8114e-07 | 1.8114e-07 | 1
select datname, blks_hit, blks_read,tup_returned,tup_fetched,tup_updated from pg_stat_database where | 1 | 1.60161e-07 | 1.60161e-07 | 1
select blks_hit from pg_stat_database where datname = $1 | 1 | 1.48597e-07 | 1.48597e-07 | 1
select pg_stat_reset() | 1 | 5.3776e-08 | 5.3776e-08 | 1
SET application_name = 'PostgreSQL JDBC Driver' | 2 | 5.2967e-08 | 2.64835e-08 | 0
SET extra_float_digits = 3 | 2 | 3.7514e-08 | 1.8757e-08 | 0
show shared_buffers--quiet | 1 | 2.1151e-08 | 2.1151e-08 | 0
(26 rows)

Here I can see that the main activity is on UPDATE and SELECT statements. There is no mention of the MYPGIO function except for its creation; I don’t even see the calls to it. We can see the same graphically in Orachrome Lighty for Postgres, where only the top-level statement is displayed:
[screenshot: the same activity in Lighty for Postgres]

This makes it very difficult to match those two views. From PG_STAT_ACTIVITY I know that I spend time on the MYPGIO function call and its system activity (here a lot of locks waiting on vacuum sessions and some latch contention on the WAL). We have sampling information, but nothing about database time or read/write measures. From PG_STAT_STATEMENTS we have all the measures, but with no link to the sampling activity, and therefore no link to the wait events. Coming from Oracle databases, the most important metric in Oracle tuning is DB time, which can be detailed from the system point of view (wait events, CPU, I/O) and from the user point of view (time model, SQL statements, segments). It can be a cumulative measure (each session measuring the time in a call or an operation) or a sampled one (counting the active sessions and the operations they are on). And ASH makes the link between all those dimensions. With this, we can directly address the user response time with our actions on the system, digging from the problem perceived by the end user down to the internal application or system root cause.

With current Postgres statistics, it seems that we can look at SQL statements on one side and at the system on the other side. But can we be sure that we really address the user problems? Especially when calling the statements from a function, which is the optimized way to avoid useless roundtrips and context switches. Postgres is evolving a lot thanks to the community. I hope that more and more information will be made available in PG_STAT_ACTIVITY so that it can be sampled and linked to all the dimensions related to the system and the users. As with any analytic query, we need all dimensions at the same level of granularity.

 

This article PGIO, PG_STAT_ACTIVITY and PG_STAT_STATEMENTS appeared first on Blog dbi services.

Lighty for PostgreSQL

If you follow this blog, you should know how I like Orachrome Lighty for Oracle for its efficiency in monitoring database performance statistics. Today Orachrome released the beta version of Lighty for Postgres: https://orachrome.com/news/la-beta-de-lighty-for-postgresql-est-ouverte/
The Cloud is perfect for short tests with more resources than my laptop, and especially for the predictability of performance, so I started a Bitnami Postgres Compute service on the Oracle Cloud and did some tests with pgbench and pgio.

The installation is easy:

  • unzip the front-end
  • add the licence key
  • install the sampling job in the server

In the beta I tested, the sampling job was run from a cron job on Linux and as a service on Windows, but as far as I know it will run as a service on Linux as well.
The sampling service takes a sample of pg_stat_activity every 10 seconds, and gathers more details about sessions and statements. It stores its data in a ‘lighty’ database created by the initialization script (you provide the name of a tablespace that you should create beforehand). The overhead is very small.
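
Conceptually, each sample is not far from something like this (the history table and column list here are illustrative, not the actual Lighty schema):

insert into lighty_activity_history
select now(), pid, datname, usename, state, wait_event_type, wait_event, query
from   pg_stat_activity
where  state = 'active';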

In my Oracle Cloud Bitnami environment I’ve added the following to the script called by crontab:

export PGPASSWORD=myPostgresPassword
export PATH=$PATH:/opt/bitnami/postgresql/bin

If you look at this ‘lighty’ database with the tool itself you may think that the overhead is important, because you will see one active session on average. This is just a side effect of sampling. At the time of sampling (every 10 seconds) this job will always be seen as active, which is right and which is the reason for the one active session. What you don’t see is that this job is inactive for the 10 seconds in between, and you don’t see this inactivity because you have no samples then. So, the one session active on average is just the consequence of the perfect synchronization that is specific to the tool looking at its own activity.
In my opinion, we should just ignore this activity and there’s an option to check “Exclude Lighty proc” to hide this from the ‘all databases’ view.

The main screen is the Activity Viewer, where you can see graphically the samples from the Postgres wait classes and CPU, as well as some details on the top-level SQL statements, the wait events, the sessions, … We have to enable the pg_stat_statements extension for that:

# to add in postgresql.conf after installing postgresql10-contrib
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

Here is how it looks:
[screenshot: the Activity Viewer]

Another important screen is the ‘real time’ performance views displayed as graphs, with information from the database as well as from the host operating system. There’s one non-default setting to get the full picture:

# to add in postgresql.conf to get Block R/W time
track_io_timing=on

Here is how it looks:
[screenshot: the real-time performance graphs]

As with Lighty for Oracle, there’s also a bunch of reports and a multi-database view which shows the main performance indicators (transactions per seconds, Block I/O, Tuples in/out, CPU and OS load average) for several databases on the same screen.

Of course, if you compare it with Lighty for Oracle, you will see some limitations. Not because of the tool but because of the statistics provided. Oracle has detailed wait events. They are more limited in postgres. Oracle has all information about what the current sessions are doing, in V$SESSION, and have even more information on V$ACTIVE_SESSION_HISTORY if you have Diagnostic Pack. In Postgres, PG_STAT_ACTIVITY is much more limited. And Lighty shows only what is available by default (and with the pg_stat_statements extension).

 

This article Lighty for PostgreSQL appeared first on Blog dbi services.

UTL_FILE_DIR and 18c

I wrote a blog post called The Death of UTL_FILE which attracted a comment from a reader:

“There is NO chance to stay at UTL_FILE as it is DESUPPORTED starting with database Version 18c”

This is not the case, but since I wanted to clarify what has changed in 18c, it warrants this small but separate blog post. When UTL_FILE first came into existence in Oracle 7, the concept of directory objects did not apply to UTL_FILE. Clearly we could not just let UTL_FILE write to any destination, otherwise a malicious person could write a little PL/SQL block like this:


declare
  f utl_file.file_type;
begin
  for i in ( select 
                regexp_substr(name,'(.*)\\(.*)', 1, 1, 'i', 1) path,
                regexp_substr(name,'(.*)\\(.*)', 1, 1, 'i', 2) name
             from v$datafile 
             order by file# desc ) 
  loop
    f := utl_file.fopen(i.path,i.name,'W');
  end loop;
  utl_file.fclose_all;
end;
/

and voila! No more database. So an initialization parameter was created to nominate which directories UTL_FILE was allowed to access.  This parameter was called UTL_FILE_DIR, and the above code hopefully is justification enough to show that you should never ever set UTL_FILE_DIR to “*”, meaning it could write to wherever the OS permissions on the Oracle software account would allow it.  Bye bye datafiles… bye bye database.

In more recent versions, UTL_FILE was improved so that the directory parameter could be supplied as a directory object. This is a much tighter implementation because read and write privileges on directory objects can be controlled from within the database.

In 18c, it is not UTL_FILE that has been de-supported, it is the ability to use the older style convention of hard-coded path names for the directory that is no longer allowed. You have to use directory objects. If you set the traditional UTL_FILE_DIR path in the spfile it will be ignored, and you’ll get a warning on startup.


SQL> alter system set utl_file_dir = 'c:\temp' scope=spfile;

System altered.


SQL> startup force
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.


--
-- alert log
--
Obsolete system parameters with specified values:
  utl_file_dir
End of obsolete system parameter listing

But even if you set UTL_FILE_DIR in the spfile, you will not be allowed to use OS directory paths in the UTL_FILE calls.


SQL> declare
  2    f utl_file.file_type;
  3  begin
  4    f := utl_file.fopen('c:\temp','demo.dat','W');
  5    utl_file.fclose_all;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-29280: invalid directory object
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at line 4
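
The directory object approach is the supported way forward. Here is a minimal sketch (the directory name, path and grantee are invented for illustration):

create directory temp_dir as 'c:\temp';
grant read, write on directory temp_dir to some_user;

declare
  f utl_file.file_type;
begin
  -- pass the directory OBJECT name, not an OS path
  f := utl_file.fopen('TEMP_DIR','demo.dat','W');
  utl_file.put_line(f,'some data');
  utl_file.fclose(f);
end;
/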

If you prefer a video version of this explanation, I had some fun riding my bike while talking about this when 18c first came out earlier this year.

Kernel panic – not syncing: Out of memory and no killable processes

This is a quick post to give a solution (maybe not the best one as this was just quick troubleshooting) if, at boot, you see something like:
Trying to allocate 1041 pages for VMLINUZ [Linux=EFI, setup=0x111f, size=0x41108d01]
and then:
Kernel panic - not syncing: Out of memory and no killable processes
Pid: 1228 comm: kworker Not tainted 3.8.13-118.17.4.el6uek.x86_64 #2


If you do not see the messages, then you may need to remove the ‘quiet’ option from the kernel command line and replace it with ‘nosplash’ – this is done from GRUB.

So, there’s not enough memory to start the kernel. I got this from a server that had been shut down to be moved to another data center, so at first I checked that the RAM was ok. Actually, the problem was with the configuration of hugepages (vm.nr_hugepages in /etc/sysctl.conf), which was allocating nearly the whole RAM, probably because the setting had been changed erroneously without checking /proc/meminfo.
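
A quick way to see how much RAM the hugepages setting will pin at boot (the figures in the comments are hypothetical):

# total RAM versus huge page size and count
grep -E 'MemTotal|Hugepagesize|HugePages_Total' /proc/meminfo
cat /proc/sys/vm/nr_hugepages
# e.g. with Hugepagesize = 2048 kB, vm.nr_hugepages = 2000 pins about 4GB
# at boot, which is then unavailable for normal allocations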

So, without any clue about what the problem was, I started by forcing the kernel to use 4GB (instead of getting the size from the hardware info). This is done by pressing any key on the GRUB menu, then ‘e’ to edit, and adding ‘mem=4g’ to the kernel line. Then the boot was ok. Of course, if the server tries to start the Oracle instances, the system starts to swap, so it is better to also change the runlevel.

This is sufficient to check the huge pages allocation and ensure that you leave enough memory for at least the kernel (but also consider the system, the PGA, …). So don’t forget that the mem= option, usually there to set a maximum limit, may also be useful to guarantee a minimum of RAM for the kernel.

 

This article Kernel panic – not syncing: Out of memory and no killable processes appeared first on Blog dbi services.

More triggers are better

Yes, you heard me correctly. If you have got one trigger on a table, then you might be surprised to find that perhaps having a second one will be a better option. Then again, I also love the sweet scent of a clickbaity, inflammatory blog post title to draw the readers in, so you’ll just have to read on to see which is true.

As much as I am not a fan of using triggers to populate metadata in tables, I also concede that it is a popular approach taken by many of us as we code up our applications. Sure, we’d like to have our application code store that all important “who did this change” information with each database row, but more often than not, this gets shoe-horned into the codebase via triggers as an afterthought to the development process.

But our well-read developer will often be thinking: “I’m a savvy developer – I’ve been reading blogs and websites that tell me that less triggers is better, so I’ll use as few as I can”, which leads to a scenario like the one I’ll present now.

I’ll start with a standard parent and child table setup, connected via an obvious foreign key


SQL> create table par (
  2    id           number(10)    not null primary key,
  3    description  varchar2(20)  not null,
  4    created_by   varchar2(20)  not null,
  5    updated_by   varchar2(20)  not null
  6    );

Table created.

SQL>
SQL> create sequence par_seq;

Sequence created.

SQL>
SQL> create sequence chd_seq;

Sequence created.

SQL>
SQL> create table chd (
  2    id           number(10)    not null primary key,
  3    par_id       number(10)    not null,
  4    constraint  fk foreign key ( par_id ) references par ( id )
  5    );

Table created.

SQL>
SQL>
SQL> create index chd_fk_idx on chd (par_id);

Index created.

I’ve got a sequence for each table, so in order to “bind” that sequence to the primary key for each table I’ll create a trigger.  So I’ll add some trigger code to implement some other common application requirements at the same time:

  • Populate the primary key with a sequence value,
  • Populate the CREATED_BY, UPDATED_BY columns on insert of a new row
  • Amend the UPDATED_BY column when I later update that row

Here is a trigger to implement that.


SQL> create or replace trigger trg_par
  2  before insert or update on par
  3  for each row
  4  begin
  5      if inserting then
  6         :new.id := par_seq.nextval;
  7         :new.created_by := user;
  8         :new.updated_by := user;
  9      end if;
 10
 11      if updating then
 12         :new.updated_by := user;
 13      end if;
 14  end;
 15  /

Trigger created.

Now my application is ready to go.  I’ll insert some data to verify that my trigger has not broken any functionality.


SQL> insert into par (description) values ('test');

1 row created.

SQL> insert into par (description) values ('test2');

1 row created.

SQL> select * from par;

  ID DESCRIPTION          CREATED_BY           UPDATED_BY
---- -------------------- -------------------- --------------------
   1 test                 MCDONAC              MCDONAC
   2 test2                MCDONAC              MCDONAC

2 rows selected.

SQL>
SQL> insert into chd (id,par_id) values (chd_seq.nextval,1);

1 row created.

SQL> insert into chd (id,par_id) values (chd_seq.nextval,2);

1 row created.

SQL> select * from chd;

  ID     PAR_ID
---- ----------
   1          1
   2          2

2 rows selected.

SQL>
SQL> commit;

Commit complete.

So far so good. Let us see now what happens when I do a simple update on the parent table on the DESCRIPTION column.  Note that this column is not involved with any primary key, or index or foreign key relationship – it’s just a simple string column.


SQL> update par set description = 'anything' where id = 1;

1 row updated.

SQL> select locked_mode, object_name
  2  from v$locked_object l, all_objects ob
  3  where ob.object_id =l.object_id;

LOCKED_MODE OBJECT_NAME
----------- ----------------------------------------
          3 PAR
          3 CHD

2 rows selected.

Notice the locks that have been taken. Understandably, I had to take a lock on the PAR table because I’ve just updated a row on the table, so I need to make sure that no-one does anything like drop it, or modify the structure whilst I’ve got an outstanding transaction. But here is the interesting part – we also took a lock on the child table CHD. That seems superfluous because as I said – we are not performing an update of anything that could possibly impact the child table in any way. 

Why did the database choose to add this extra lock?

The cause is due to the way I coded the trigger.  Notice that the trigger has a reference to the primary key in the trigger body:


  5      if inserting then
  6         :new.id := par_seq.nextval;

Even though that part of the trigger body is not executed (because I am performing an update not an insert) the mere presence of it is enough for the database to cautiously add that extra lock on the CHD table because we “just might” be messing around with the parent table primary key here.

So now I’ll roll back that update and convert the single triggers to two triggers – one for insert and one dedicated for update.


SQL> rollback;

Rollback complete.

SQL>
SQL> drop trigger trg_par ;

Trigger dropped.

SQL>
SQL> create or replace trigger trg1_par
  2  before insert on par
  3  for each row
  4  begin
  5      :new.id := par_seq.nextval;
  6      :new.created_by := user;
  7      :new.updated_by := user;
  8  end;
  9  /

Trigger created.

SQL>
SQL> create or replace trigger trg2_par
  2  before update on par
  3  for each row
  4  begin
  5      if updating then
  6          :new.updated_by := user;
  7      end if;
  8  end;
  9  /

Trigger created.

Now I’ll run the same update


SQL> update par set description = 'anything' where id = 1;

1 row updated.

SQL> select locked_mode, object_name
  2  from v$locked_object l, all_objects ob
  3  where ob.object_id =l.object_id;

LOCKED_MODE OBJECT_NAME
----------- ----------------------------------------
          3 PAR

1 row selected.

Because the reference to the primary key column of table PAR is no longer in the firing (update) trigger, I no longer see the lock on the CHD table.

In practice, it is unlikely that this additional lock is going to cause you a lot of harm – it will block some operations on the child table such as a truncate or DDL to make a structural change, so it is reasonable to assume that these are rare occurrences. But I’m a fan of the philosophy of: Have as few locks as possible, but always as many as required. So in this case, you might want to consider opting for two triggers instead of one.

And of course, perhaps getting to zero triggers might be the best option.

Hacking Profiles

Saturday’s posting about setting cursor_sharing to force reminded me about one of the critical limitations of SQL Profiles (which is one of those little reasons why you shouldn’t be hacking SQL Profiles as a substitute for SQL Plan Baselines). Here’s a demo (taking advantage of some code that I think Kerry Osborne published several years ago) of creating an SQL Profile from the current execution plan of a simple statement – first we create some data and find the sql_id and child_number for a simple query:

rem
rem     Script:         sql_profile_restriction.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2

create table t1
as
select
        rownum            n1,
        rownum            n2,
        lpad(rownum,10)   small_vc,
        rpad('x',100,'x') padding
from dual
connect by
        level <= 1e4 -- > comment to avoid WordPress format issue
;

alter system flush shared_pool;

select /*+ find this */ count(*) from t1 where n1 = 15 and n2 = 15;

column sql_id new_value m_sql_id
column child_number new_value m_child_number

select  sql_id , child_number
from    v$sql
where   sql_text like 'selec%find this%'
and     sql_text not like '%v$sql%'
;

Now I can create the SQL Profile for this query using the Kerry Osborne code:


declare
        ar_profile_hints        sys.sqlprof_attr;
        cl_sql_text clob;
begin
        select
                extractvalue(value(d), '/hint') as outline_hints
        bulk collect into 
                ar_profile_hints
        from
                xmltable(
                        '/*/outline_data/hint'
                        passing (
                                select
                                        xmltype(other_xml) as xmlval
                                from
                                        v$sql_plan
                                where
                                        sql_id = '&m_sql_id'
                                and     child_number =  &m_child_number 
                                and     other_xml is not null
                )
        ) d;

        select
                sql_fulltext
        into
                cl_sql_text
        from
                v$sql
        where
                sql_id = '&m_sql_id'
        and     child_number =  &m_child_number
        ;

        dbms_sqltune.import_sql_profile(
                sql_text        => cl_sql_text, 
                profile         => ar_profile_hints, 
                category        => 'DEFAULT',
                name            => 'PROFILE_LITERAL',
                force_match     =>  true
        );
end;
/

Note particularly that I have given the profile a simple name, put it in the DEFAULT category, and set force_match to true (which means that the profile ought to be used even if I change the literal values in the query). So now let’s check that the profile will be used as expected. First I’ll create an index that is a really good index for this query, then I’ll run the query to see if Oracle uses the index or obeys the profile; then I’ll change the query (literals) slightly and check again. I’ll also run a query that won’t be recognised as legally matching (thanks to the changed “hint”) to demonstrate that the index could have been used if the profile hadn’t been there:


alter system flush shared_pool;
set serveroutput off

prompt  =============================
prompt  Is the SQL Profile used ? Yes
prompt  =============================

select /*+ find this */ count(*) from t1 where n1 = 15 and n2 = 15;
select * from table(dbms_xplan.display_cursor);

select /*+ find this */ count(*) from t1 where n1 = 16 and n2 = 16;
select * from table(dbms_xplan.display_cursor);

select /*+ Non-match */ count(*) from t1 where n1 = 16 and n2 = 16;
select * from table(dbms_xplan.display_cursor);

Here (with a little cosmetic adjustment) are the three outputs from dbms_xplan.display_cursor():

SQL_ID  ayxnhrqzd38g3, child number 0
-------------------------------------
select /*+ find this */ count(*) from t1 where n1 = 15 and n2 = 15
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    24 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |     8 |    24   (5)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N1"=15 AND "N2"=15))

Note
-----
   - SQL profile PROFILE_LITERAL used for this statement


SQL_ID  gqjb8pp35cnyp, child number 0
-------------------------------------
select /*+ find this */ count(*) from t1 where n1 = 16 and n2 = 16
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    24 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |     8 |    24   (5)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N1"=16 AND "N2"=16))

Note
-----
   - SQL profile PROFILE_LITERAL used for this statement


SQL_ID  3gvaxypny9ry1, child number 0
-------------------------------------
select /*+ Non-match */ count(*) from t1 where n1 = 16 and n2 = 16
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE   |       |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| T1_I1 |     1 |     8 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=16 AND "N2"=16)

As you can see the SQL Profile is reported as used in the first two queries, and (visibly) seems to have been used. Then in the third query where we wouldn’t expect a match the SQL Profile is not used and we get a plan that shows the index would have been used for the other queries had the SQL Profile not been there. So far, so good – the profile behaves as everyone might expect.

Bind Variable Breaking

Now let’s repeat the entire experiment but first do a global find and replace to change every occurrence of the literal n2 predicate (“n2 = 15”, “n2 = 16”) to “n2 = :b1”. We’ll also change the name of the SQL Profile when we create it to PROFILE_MIXED, and we’ll put in a couple of lines at the top of the script to declare the variable b1 and set its value, then the final test in the script will look like this:


alter system flush shared_pool;
create index t1_i1 on t1(n1, n2);

exec :b1 := 15

select /*+ find this */ count(*) from t1 where n1 = 15 and n2 = :b1;
select * from table(dbms_xplan.display_cursor);

exec :b1 := 16

select /*+ find this */ count(*) from t1 where n1 = 16 and n2 = :b1;
select * from table(dbms_xplan.display_cursor);

And here are the execution plans from the two queries:


SQL_ID  236f82vmsvjab, child number 0
-------------------------------------
select /*+ find this */ count(*) from t1 where n1 = 15 and n2 = :b1
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    24 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |     8 |    24   (5)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("N1"=15 AND "N2"=:B1))

Note
-----
   - SQL profile PROFILE_MIXED used for this statement


SQL_ID  7nakm3tw27z3c, child number 0
-------------------------------------
select /*+ find this */ count(*) from t1 where n1 = 16 and n2 = :b1
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE   |       |     1 |     8 |            |          |
|*  2 |   INDEX RANGE SCAN| T1_I1 |     1 |     8 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=16 AND "N2"=:B1)


As you can see the execution plan for the original query is still doing a full tablescan and reporting the SQL Profile as used; but we’re not using (or reporting) the SQL Profile when we change the literal values – even though a query against dba_sql_profiles will tell us that the profile has force_matching = ‘YES’.

tl;dr

(Clarified in response to Mohammed Houri’s comment below)
If you use an SQL Profile with force_match => true to “hide” the literals in a statement that includes bind variables (even if they appear only in the select list, in fact) the mechanism will not be used, and the SQL Profile will apply only to the original statement.

Update

Christian Antognini has an elegant little script that uses the dbms_sqltune.sqltext_to_signature() function to highlight this point (among others).  Bear in mind, before you run the script, that you need to be licensed to use the dbms_sqltune package to do so.
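
As a quick illustration of the point (a sketch only, and I am assuming the (sql_text, force_match) signature of the function), you can compare the signatures Oracle generates for two statements that mix literals and binds:

set serveroutput on

declare
        sig1 number;
        sig2 number;
begin
        -- with force_match = true, literals are normally irrelevant to the signature,
        -- but the presence of a bind variable switches that normalisation off
        sig1 := dbms_sqltune.sqltext_to_signature(
                'select count(*) from t1 where n1 = 15 and n2 = :b1', true
        );
        sig2 := dbms_sqltune.sqltext_to_signature(
                'select count(*) from t1 where n1 = 16 and n2 = :b1', true
        );
        dbms_output.put_line(sig1 || ' / ' || sig2);
end;
/

If the two values differ, a force-matching profile created against the first statement cannot be picked up by the second.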

 

DDL for constraints – subtle things

The DBMS_METADATA package is very cool. I remember the days of either hand-crafting DDL statements based on queries to the data dictionary, or many a DBA will be familiar with running “imp show=y” or “imp indexfile=…” in order to then laboriously extract the DDL required from the import log file.  DBMS_METADATA removed all of those annoyances to give us a simple API to get the true and complete DDL for a database object.

But when extracting DDL from the database using the DBMS_METADATA package, you need to be aware of some subtleties especially if you plan on executing that DDL in the database.

Consider this example – I have a few tables and I want to extract the referential integrity constraints for one of them. Being a good cautious developer, I’ll just output the DDL first before attempting to execute any of the statements:


SQL> create table tab1(id number, name varchar2(100),
  2                      constraint pk_tab1_id primary key(id));

Table created.

SQL> create table tab2(id number, name varchar2(100),
  2                      constraint pk_tab2_id primary key(id));

Table created.

SQL> create table tab3(id number, name varchar2(100), int_id number,
  2                      constraint pk_tab3_id primary key(id),
  3                      constraint fk_tab1_id foreign key(int_id) references tab1(id),
  4                      constraint fk_tab2_id foreign key(int_id) references tab2(id));

Table created.

SQL>
SQL> set serverout on
SQL> begin
  2    for i in (
  3      select t.table_name,
  4          dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', table_name) ddl
  5      from   user_tables t
  6      where  table_name = 'TAB3'
  7      )
  8    loop
  9            dbms_output.put_line(i.ddl);
 10            --execute immediate i.ddl;
 11    end loop;
 12  end;
 13  /

  ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB1_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB1" ("ID") ENABLE

ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB2_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB2" ("ID") ENABLE

PL/SQL procedure successfully completed.

That all looks fine – I have my two ALTER statements ready to go.  So now I’ll comment back in the ‘execute immediate’ command and all should be fine.


SQL> set serverout on
SQL> begin
  2    for i in (
  3      select t.table_name,
  4             dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', table_name) ddl
  5      from   user_tables t
  6      where  table_name = 'TAB3'
  7      )
  8    loop
  9            dbms_output.put_line(i.ddl);
 10            execute immediate i.ddl;
 11    end loop;
 12  end;
 13  /

  ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB1_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB1" ("ID") ENABLE

ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB2_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB2" ("ID") ENABLE
begin
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option
ORA-06512: at line 10
ORA-06512: at line 10

This seems an odd result. Since in this simple example I’m just running the commands straight back into the same database, I might have expected a “Constraint already exists”, or “Object name exists” style of error, but this is different. This error is telling us that the statement is invalid – which obviously should not be the case if it came straight out of DBMS_METADATA. But a simple amendment to my anonymous block will reveal the answer. I will output a separator line each time I cycle through the cursor loop:



SQL> set serverout on
SQL> begin
  2    for i in (
  3      select t.table_name,
  4             dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', table_name) ddl
  5      from   user_tables t
  6      where  table_name = 'TAB3'
  7      )
  8    loop
  9            dbms_output.put_line('========================================');
 10            dbms_output.put_line(i.ddl);
 11            --execute immediate i.ddl;
 12    end loop;
 13  end;
 14  /
========================================

  ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB1_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB1" ("ID") ENABLE

ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB2_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB2" ("ID") ENABLE

PL/SQL procedure successfully completed.

And therein lies the issue. I got two ALTER commands back on screen, but in reality they both came back from a single row fetched from the cursor. The ALTER commands were separated by a carriage return, but if I try to execute that single row, then the statement is invalid because it is an attempt to run two ALTER commands in a single statement. The output looked like two rows from the cursor, but it was not.

That is a problem if I want to store that DDL in a table or a file, because I can’t use it as it currently stands, and I don’t want to have to write some scripts to parse that DDL to add semi-colons or split it into multiple commands, because one of the motivations for DBMS_METADATA in the first place was to avoid all that irritation.

There is an easy fix to this. Rather than getting the dependent DDL for a table, we can get the “direct” DDL for the constraints themselves.  In this way, you’ll get a row from the cursor for each constraint, and hence one DDL statement per constraint as well.



SQL> set serverout on
SQL> begin
  2    for i in (
  3         select t.table_name,
  4                dbms_metadata.get_ddl('REF_CONSTRAINT', constraint_name) ddl
  5         from user_constraints t
  6         where table_name = 'TAB3'
  7         and constraint_type = 'R'
  8      )
  9    loop
 10            dbms_output.put_line('========================================');
 11            dbms_output.put_line(i.ddl);
 12            --execute immediate i.ddl;
 13    end loop;
 14  end;
 15  /
========================================

  ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB1_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB1" ("ID") ENABLE
========================================

  ALTER TABLE "MCDONAC"."TAB3" ADD CONSTRAINT "FK_TAB2_ID" FOREIGN KEY ("INT_ID") REFERENCES "MCDONAC"."TAB2" ("ID") ENABLE

PL/SQL procedure successfully completed.