Oakies Blog Aggregator

UKOUG post conference geek update part 1 – ACFS for Oracle databases

One of the many interesting things I heard at the conference this time around was that Oracle’s future direction includes the use of database files on ACFS. When ACFS came out this was strictly ruled out, but it has been possible for a little while now, I believe since 12.1.0.1.0. With the Oracle Database Appliance (ODA) using this deployment option and hearing about it at the conference, a little further investigation was in order. During one of the presentations @OracleRACPM Markus Michalewicz had a reference on his slides to a script that I didn’t know. The script is called gDBClone, and I wanted to see how it works. The idea is that the script can be used to create a snap-clone of a database if the source is on ACFS and in archivelog mode.

As it turned out there were a few hurdles along the way and I will point them out so you don’t run into the same issues.

UPDATE: these hurdles might exist because Oracle databases aren’t supported on ACFS in Oracle Restart: https://docs.oracle.com/database/121/OSTMG/asmfilesystem.htm#OSTMG95961 Please consider yourself warned! For the rest of the article let’s pretend that this is a clustered environment. The article is therefore purely educational and not an encouragement to do this in real life.

The script and associated white paper assume that you have two systems: production and dev/test/uat. The goal of the clone procedure is to be able to create a copy of your live database on your dev/test/uat cluster. This database can then be used as the source for ACFS snapshots. This can be represented in ASCII art:


+------+   clone   +------------------+
| PROD |   ---->   |      MASTER      |
+------+           |      /     \     |  snap #1
                   |     /       \    |  snap #2
                   | CLONE1    CLONE2 |
                   +------------------+

It is by no means required to follow this approach, and if nothing else you can use the clone script to run an RMAN duplication in a single command. I once wrote a script to do the same, and it was a truly complex thing to do.

For this article I’ll assume that you clone PROD (named ORCL) to MASTER, and snap MASTER to CLONE1.

The setup

Since my “travel-lab” is not a RAC cluster I opted for an installation of 12.1.0.2.1 for Oracle Restart and the database to keep it simple.

ACFS storage will be provided by an ADVM volume from disk group DATA. In my case this was quite easy to accomplish. Since this was a 12.1 system anyway I created my DATA disk group with ASM, RDBMS and ADVM compatibility set to 12.1.0.2.0.
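
If you want to verify what a disk group is currently set to, the attributes are exposed in the ASM instance. A minimal sketch (run while connected to the ASM instance):

-- lists the compatible.* attributes per disk group
select  dg.name  diskgroup, a.name  attribute, a.value
from    v$asm_diskgroup dg, v$asm_attribute a
where   dg.group_number = a.group_number
and     a.name like 'compatible.%'
order by dg.name, a.name;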

You create the ASM Dynamic Volume Manager (ADVM) volume on top of the ASM disk group, in my case on DATA. This is a simple task and can be performed by a variety of tools; I opted for a call to asmcmd:

ASMCMD> volcreate
usage: volcreate -G <diskgroup> -s <size> [ --column <number> ] [ --width <stripe_width> ]
[--redundancy {high|mirror|unprotected} ] [--primary {hot|cold}] [--secondary {hot|cold}] <volume>
help:  help volcreate
ASMCMD> volcreate -G data -s 10G volume1
ASMCMD> volinfo --all
Diskgroup Name: DATA

         Volume Name: VOLUME1
         Volume Device: /dev/asm/volume1-162
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

Initially I got a message that ASM could not communicate with the (ASM) Volume driver.

SQL> /* ASMCMD */alter diskgroup data add volume 'volume1' size 10G
ORA-15032: not all alterations performed
ORA-15477: cannot communicate with the volume driver
ERROR: /* ASMCMD */alter diskgroup data add volume 'volume1' size 10G

This was solved by modprobe calls to the Oracle kernel modules.

[root@server5 ~]# modprobe  oracleacfs
[root@server5 ~]# modprobe  oracleadvm
[root@server5 ~]# modprobe  oracleoks
[root@server5 ~]# lsmod | grep oracle
oracleadvm            507006  7
oracleacfs           3307457  1
oracleoks             505749  2 oracleadvm,oracleacfs

There is a known issue with the execution of udev rules (/etc/udev/rules.d/55-usm.rules) that might delay the setting of permissions. On my system a udevadm trigger solved it. Still odd (Oracle Linux 6.6 / UEK 3 3.8.13-44.1.1.el6uek.x86_64), especially since a call to “acfsdriverstate supported” stated that the driver was supported.
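
If you hit the same symptom, the workaround on my system amounted to nothing more than the following rough sketch (the -162 suffix of the device name will be different on your system):

ls -l /dev/asm/volume1-162                 # check the current ownership of the volume device
udevadm trigger --subsystem-match=block    # re-run the udev rules, including 55-usm.rules
ls -l /dev/asm/volume1-162                 # the device should now be owned by the Grid Infrastructure owner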

Once the volume is created it needs to be formatted using ACFS. This can be done in this way:

[oracle@server5 ~]$  mkfs -t acfs /dev/asm/volume1-162
mkfs.acfs: version                   = 12.1.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/volume1-162
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Trying to register the file system in the ACFS registry pointed me to the first problem with the procedure on Oracle Restart:

[root@server5 ~]# acfsutil registry -a /dev/asm/volume1-162 /u01/oradata
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|status|add|remove|modify|update|getenv|setenv|
       unsetenv|config|upgrade|downgrade
    objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
  srvctl <command> -help [-compatible] or
  srvctl <command> <object> -help [-compatible]
PRKO-2012 : filesystem object is not supported in Oracle Restart
acfsutil registry: ACFS-03111: unable to add ACFS mount /u01/oradata within Oracle Registry

Interesting, but not a problem in the lab. I usually mount file systems where, in my opinion, they logically belong. In this case I mounted the file system on /u01/oradata. Spoiler alert: this is not what you are supposed to do if you want to use the gDBClone script.

To cut a long story short, the mount point is assumed to be /acfs for a snapped or cloned database. The script also assumes that your system is a RAC environment, and I found it not to work well in 12.1 at all due to the way it tries to get the database version from the OCR (OLR) profile. First the new ACFS file system is mounted, then made accessible to the oracle user:

[root@server5 ~]# mount -t acfs /dev/asm/volume1-162 /acfs
[root@server5 ~]# chown -R oracle:dba /acfs
[root@server5 ~]# mount | grep acfs
/dev/asm/volume1-162 on /acfs type acfs (rw)

Creating the MASTER database

My source system resides in ASM, and there is no way of creating COW clones in ASM. As a result the MASTER database must first be moved to ACFS, from where you can take storage snapshots.

A quick hack was required since I only had one machine, so I created the source database (PROD in the ASCII art example) as “orcl” using the following call to dbca, and then set it to archivelog mode (as sketched below):

dbca -silent -createDatabase -templateName General_Purpose.dbc  -gdbname orcl \
> -sid orcl -sysPassword pwd1 -systemPassword pwd2  -emConfiguration none  \
> -storageType ASM  -asmsnmpPassword pwd3 -diskGroupName data -recoveryGroupName reco  \
> -totalMemory 2048
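
Putting the new database into archivelog mode is then a quick job in SQL*Plus; a minimal sketch, run as SYSDBA against the freshly created database (archive log list should afterwards report “Archive Mode”):

shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list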

The next step is to create the MASTER database. The gDBClone script checks if a database is on ACFS in function checkIfACFS(). If a database is found to be on ASM (by checking the SPFILE location in the Clusterware profile) it requires it to be CLONED as opposed to SNAPped. Here is the command to clone ORCL to MASTER. You must set your environment to an RDBMS home before executing the script.

# ./gDBClone clone -sdbname orcl -sdbhost server5 -sdbport 1521 -tdbname master -acfs /acfs -debug

It uses an RMAN duplicate under the covers. The arguments are almost self-explanatory. It takes the location of the source database (which can be remote) and where you want to store the target database. Since I desperately want to store the clone on ACFS I specified it in the command. The -debug flag prints more verbose output; a lot of information is also found in /var/log/gDBName/. Note that the script is to be run as root :(

To get there a few tricks were necessary in Oracle Restart environments; I’ll feed them back to Oracle to see if they can be added to the script. You probably won’t encounter problems when using a clustered 11.2 installation.

At one point the script checks the VERSION flag in the database resource profile (crsctl stat res ora.<dbname>.db -p | grep ‘^VERSION’), and since that field no longer shows up in 12.1 the variable is undefined in perl and the execution fails. The problem with Oracle Restart is related to setting the database type to SINGLE on the local host (srvctl add database … -c SINGLE -x …); Oracle Restart doesn’t understand those switches. Also you can’t set the db_unique_name in 12c to a name of a database already registered in the OCR. Moving the step to register the database further down in the execution helped.

Please don’t get me wrong: the script is very neat in that it allows you to run an RMAN duplicate command over the network, potentially creating the backup on the fly. In 12c RMAN will pull existing backup pieces instead of creating a backup on the fly, to reduce its impact on the production database. Oh, and I forgot: you can even convert the database to a clustered database if it is not one already.

Creating an ACFS clone database

Next up is the creation of the CLONE database. My first attempts were unsuccessful. The main trick seems to be to keep the ACFS mount directly underneath the / (root) file system. Mounting it elsewhere caused the script to fail. If you can read perl, check the $acfs variable and the checkIfACFS() routine to understand why. Also, the compatibility of the ASM disk group containing the volume has to be at least 11.2.0.3.0 or you get this error:

[root@server5 ~]# ./gDBClone snap -sdbname master -tdbname clone1
2014-12-12 11:03:53: I Getting host info...
2014-12-12 11:03:53: I Starting.....
2014-12-12 11:03:53: I Validating environment.....

Enter the source MASTER SYSDBA password:
2014-12-12 11:03:56: I Getting OH version...
2014-12-12 11:04:04: I Checking SCAN listener server5.example.com:1521...
2014-12-12 11:04:05: I Checking database CLONE1 existence...
2014-12-12 11:04:05: I Checking registered instance CLONE1 ...
2014-12-12 11:04:10: I Checking if the Source Database MASTER it's on ASM
2014-12-12 11:04:10: I Source Database MASTER it's on ACFS
2014-12-12 11:04:10: I Checking snapshot CLONE1 existence
2014-12-12 11:04:10: I Setting up clone environment....
2014-12-12 11:04:10: I Starting auxiliary listener....
2014-12-12 11:05:10: I Creating ACFS snapshot.....
2014-12-12 11:05:10: I Start/stop MASTER to check consistency.
2014-12-12 11:05:42: I Checking if the source database MASTER is stored on an ACFS snapshot
acfsutil snap create: ACFS-03048: Snapshot operation could not complete.
acfsutil snap create: ACFS-03174: The Oracle ASM Dynamic Volume Manager (Oracle ADVM) compatibility
    attribute for the disk group is less than 11.2.0.3.0.
2014-12-12 11:05:42: E Error getting ACFS snapshot
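
The fix is to raise the relevant compatibility attributes of the disk group holding the volume. A minimal sketch, run in the ASM instance, assuming the DATA disk group from earlier and that no other database relies on the lower setting:

-- compatible.advm can only be raised to a value not exceeding compatible.asm
alter diskgroup data set attribute 'compatible.asm'  = '12.1.0.2.0';
alter diskgroup data set attribute 'compatible.advm' = '12.1.0.2.0';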

Raising the attributes fixes this quite easily, provided that you don’t break anything else: remember that compatibility can be raised but never lowered. With everything in place, the clone is quick:

[root@server5 ~]# ./gDBClone snap -sdbname master -tdbname clone1
2014-12-12 11:20:17: I Getting host info...
2014-12-12 11:20:17: I Starting.....
2014-12-12 11:20:17: I Validating environment.....

Enter the source MASTER SYSDBA password:
2014-12-12 11:20:19: I Getting OH version...
2014-12-12 11:20:27: I Checking SCAN listener server5.example.com:1521...
2014-12-12 11:20:27: I Checking database CLONE1 existence...
2014-12-12 11:20:28: I Checking registered instance CLONE1 ...
2014-12-12 11:20:31: I Checking if the Source Database MASTER it's on ASM
2014-12-12 11:20:31: I Source Database MASTER it's on ACFS
2014-12-12 11:20:31: I Checking snapshot CLONE1 existence
2014-12-12 11:20:31: I Setting up clone environment....
2014-12-12 11:20:31: I Starting auxiliary listener....
2014-12-12 11:21:31: I Creating ACFS snapshot.....
2014-12-12 11:21:31: I Start/stop MASTER to check consistency.
2014-12-12 11:21:58: I Checking if the source database MASTER is stored on an ACFS snapshot
2014-12-12 11:21:58: I Setting up snapshot database.....
2014-12-12 11:21:58: I Creating Clone parameter files
2014-12-12 11:22:01: I Activating clone database.....
PRKO-2002 : Invalid command line option: -j
2014-12-12 11:23:04: I Successfully created clone "CLONE1" database

Now what does this database look like? I remember a conversation with Andy Colvin about this since he saw it on our ODA first, but consider this:

[oracle@server5 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Fri Dec 12 11:51:22 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/acfs/.ACFS/snaps/CLONE1/MASTER/MASTER/datafile/o1_mf_system_b8p3pxs7_.dbf
/acfs/.ACFS/snaps/CLONE1/MASTER/MASTER/datafile/o1_mf_sysaux_b8p3pz4f_.dbf
/acfs/.ACFS/snaps/CLONE1/MASTER/MASTER/datafile/o1_mf_undotbs1_b8p3q2q0_.dbf
/acfs/.ACFS/snaps/CLONE1/MASTER/MASTER/datafile/o1_mf_users_b8p3qo2o_.dbf

Now the interesting thing is when you look into that top-level ACFS directory:

SQL> !ls -la /acfs/
total 76
drwxr-xr-x.  5 oracle dba       4096 Dec 12 10:35 .
dr-xr-xr-x. 27 root   root      4096 Dec 12 10:30 ..
drwx------.  2 oracle dba      65536 Dec 12 10:31 lost+found
drwxr-xr-x.  3 oracle oinstall  4096 Dec 12 10:52 MASTER

No .ACFS directory! Even more interestingly, you can actually see what’s in the .ACFS directory when referring to it directly:

SQL> !ls -l /acfs/.ACFS
total 8
drwxrwx---. 6 root root 4096 Dec 12 10:31 repl
drwxr-xr-x. 3 root root 4096 Dec 12 11:21 snaps
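
If you prefer not to poke around in the hidden directory, acfsutil can list the snapshots for the file system as well; a quick sketch, which should report the CLONE1 snapshot created above:

[root@server5 ~]# acfsutil snap info /acfs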

What I want to do next is to run a few performance benchmarks on a database in ACFS to see how it holds up. But that’s for another day…

push_pred – evolution

Here’s a query (with a few hints to control how I want Oracle to run it) that demonstrates the difficulty of trying to solve problems by hinting (and the need to make sure you know where all your hinted code is):


select
	/*+
		qb_name(main)
		leading (@main t1@main v1@main t4@main)
		push_pred(v1@main)
	*/
	t1.*,v1.*,t4.*
from
	t1,
	(
	select	/*+ qb_name(inline) no_merge */
		t2.n1, t3.n2, count(*)
	from	t2, t3
	where exists (
		select	/*+ qb_name(subq) no_unnest push_subq */
			null
		from	t5
		where	t5.object_id = t2.n1
		)
	and	t3.n1 = t2.n2
	group by t2.n1, t3.n2
	)	v1,
	t4
where
	v1.n1 = t1.n1
and	t4.n1(+) = v1.n1
;

Nominally it’s a three-table join, except the second table is an in-line view which joins two tables and includes an existence subquery. Temporarily I have made the join to t4 an outer join – but that’s just to allow me to make a point; I don’t want an outer join in the final query. I’ve had to include the no_merge() hint in the inline view to stop Oracle using complex view merging to “join then aggregate” when I want it to “aggregate then join”; I’ve included the no_unnest and push_subq hints to make sure that the subquery is operated as a subquery, but operates at the earliest possible moment in the inline view. Ignoring the outer join (which would make operation 1 a nested loop outer), this is the execution plan I want to see:


-------------------------------------------------------------------------------------------
| Id  | Operation                         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   1 |  NESTED LOOPS                     |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   2 |   NESTED LOOPS                    |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   3 |    NESTED LOOPS                   |       |    50 |  7400 |  4010   (1)| 00:00:21 |
|   4 |     TABLE ACCESS FULL             | T1    |  1000 |   106K|     3   (0)| 00:00:01 |
|   5 |     VIEW PUSHED PREDICATE         |       |     1 |    39 |     4   (0)| 00:00:01 |
|   6 |      SORT GROUP BY                |       |     1 |    16 |     4   (0)| 00:00:01 |
|   7 |       NESTED LOOPS                |       |     1 |    16 |     3   (0)| 00:00:01 |
|   8 |        TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
|*  9 |         INDEX UNIQUE SCAN         | T2_PK |     1 |       |     1   (0)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN         | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|  11 |        TABLE ACCESS BY INDEX ROWID| T3    |     1 |     8 |     1   (0)| 00:00:01 |
|* 12 |         INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |
|* 13 |    INDEX UNIQUE SCAN              | T4_PK |     1 |       |     0   (0)| 00:00:01 |
|  14 |   TABLE ACCESS BY INDEX ROWID     | T4    |     1 |   109 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - access("T2"."N1"="T1"."N1")
       filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST QB_NAME ("SUBQ") */ 0 FROM
              "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
  10 - access("T5"."OBJECT_ID"=:B1)
  12 - access("T3"."N1"="T2"."N2")
  13 - access("T4"."N1"="V1"."N1")

Note, particularly, operation 5: VIEW PUSHED PREDICATE, and the associated access predicate at line 9 “t2.n1 = t1.n1” where the predicate based on t1 has been pushed inside the inline view: so Oracle will evaluate a subset view for each selected row of t1, which is what I wanted. Then you can see operation 10 is an index range scan of t5_i1, acting as a child to the index unique scan of t2_pk of operation 9 – that’s Oracle keeping the subquery as a subquery and executing it as early as possible.

So what happens when I try to get this execution plan using the SQL and hints I’ve got so far ?

Here’s the plan I got from 10.2.0.5:


--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |    50 | 12750 |    62   (4)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |    50 | 12750 |    62   (4)| 00:00:01 |
|*  2 |   HASH JOIN                  |       |    50 |  7350 |    12  (17)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    VIEW                      |       |    50 |  1950 |     9  (23)| 00:00:01 |
|   5 |     HASH GROUP BY            |       |    50 |   800 |     9  (23)| 00:00:01 |
|*  6 |      HASH JOIN               |       |    50 |   800 |     7  (15)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL      | T2    |    50 |   400 |     3   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN      | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|   9 |       TABLE ACCESS FULL      | T3    |  1000 |  8000 |     3   (0)| 00:00:01 |
|  10 |   TABLE ACCESS BY INDEX ROWID| T4    |     1 |   108 |     1   (0)| 00:00:01 |
|* 11 |    INDEX UNIQUE SCAN         | T4_PK |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"."N1"="T1"."N1")
   6 - access("T3"."N1"="T2"."N2")
   7 - filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST QB_NAME ("SUBQ") */ 0
              FROM "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
   8 - access("T5"."OBJECT_ID"=:B1)
  11 - access("T4"."N1"="V1"."N1")

In 10g the optimizer has not pushed the join predicate down into the view (the t1 join predicate appears in the hash join at line 2); I think this is because the view has been declared non-mergeable through a hint. So let’s upgrade to 11.1.0.7:

------------------------------------------------------------------------------------------
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |       |    50 | 12950 |  4008K  (1)| 05:34:04 |
|   1 |  NESTED LOOPS                    |       |    50 | 12950 |  4008K  (1)| 05:34:04 |
|   2 |   MERGE JOIN CARTESIAN           |       |  1000K|   205M|  2065   (3)| 00:00:11 |
|   3 |    TABLE ACCESS FULL             | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    BUFFER SORT                   |       |  1000 |   105K|  2062   (3)| 00:00:11 |
|   5 |     TABLE ACCESS FULL            | T4    |  1000 |   105K|     2   (0)| 00:00:01 |
|   6 |   VIEW PUSHED PREDICATE          |       |     1 |    43 |     4   (0)| 00:00:01 |
|   7 |    SORT GROUP BY                 |       |     1 |    16 |     4   (0)| 00:00:01 |
|*  8 |     FILTER                       |       |       |       |            |          |
|   9 |      NESTED LOOPS                |       |     1 |    16 |     3   (0)| 00:00:01 |
|  10 |       TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
|* 11 |        INDEX UNIQUE SCAN         | T2_PK |     1 |       |     1   (0)| 00:00:01 |
|* 12 |         INDEX RANGE SCAN         | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|  13 |       TABLE ACCESS BY INDEX ROWID| T3    |  1000 |  8000 |     1   (0)| 00:00:01 |
|* 14 |        INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - filter("T4"."N1"="T1"."N1")
  11 - access("T2"."N1"="T4"."N1")
       filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST QB_NAME ("SUBQ") */ 0 FROM
              "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
  12 - access("T5"."OBJECT_ID"=:B1)
  14 - access("T3"."N1"="T2"."N2")

Excellent – at operation 6 we see VIEW PUSHED PREDICATE, and at operation 11 we can see the pushed join predicate “t2.n1 = t4.n1”.

Less excellent – we have a Cartesian Merge Join between t1 and t4 before pushing predicates. Of course, we told the optimizer to push join predicates into the view, and there are two join predicates, one from t1 and one from t4 – and we didn’t tell the optimizer that we only wanted to push the t1 join predicate into the view. Clearly we need a way of specifying where predicates should be pushed FROM as well as a way of specifying where they should be pushed TO.

If we take a look at the outline information from the execution plan there’s a clue in one of the outline hints: PUSH_PRED(@”MAIN” “V1”@”MAIN” 3 2) – the hint has a couple of extra parameters to it – perhaps the 2 and 3 refer in some way to the 2nd and 3rd tables in the query. If I test with an outer join to t4 (which means the optimizer won’t be able to use my t4 predicate as a join INTO the view) I get the plan I want (except it’s an outer join, of course), and the hint changes to: PUSH_PRED(@”MAIN” “V1”@”MAIN” 2) – so maybe the 2 refers to t1 and the 3 referred to t4, so let’s try the following hints:


push_pred(v1@main 2)
no_push_pred(v1@main 3)

Unfortunately this gives us the following plan:


--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |    50 | 12300 |    62   (4)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER          |       |    50 | 12300 |    62   (4)| 00:00:01 |
|*  2 |   HASH JOIN                  |       |    50 |  6900 |    12  (17)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    VIEW                      |       |    50 |  1500 |     9  (23)| 00:00:01 |
|   5 |     HASH GROUP BY            |       |    50 |   800 |     9  (23)| 00:00:01 |
|*  6 |      HASH JOIN               |       |    50 |   800 |     7  (15)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL      | T2    |    50 |   400 |     3   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN      | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|   9 |       TABLE ACCESS FULL      | T3    |  1000 |  8000 |     3   (0)| 00:00:01 |
|  10 |   TABLE ACCESS BY INDEX ROWID| T4    |     1 |   108 |     1   (0)| 00:00:01 |
|* 11 |    INDEX UNIQUE SCAN         | T4_PK |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"."N1"="T1"."N1")
   6 - access("T3"."N1"="T2"."N2")
   7 - filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST QB_NAME ("SUBQ") */ 0
              FROM "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
   8 - access("T5"."OBJECT_ID"=:B1)
  11 - access("T4"."N1"(+)="V1"."N1")

We don’t have join predicate pushdown; on the other hand we’ve got the join order we specified with our leading() hint – and that didn’t appear previously when we got the Cartesian Merge Join with predicate pushdown (our hints were incompatible, so something had to fail). So maybe the numbering has changed because the join order has changed and I should push_pred(v1 1) and no_push_pred(v1 3). Alas, trying all combinations of 2 values from 1,2, and 3 I can’t get the plan I want.

So let’s upgrade to 11.2.0.4. As hinted we get the pushed predicate with Cartesian merge join, but this time the push_pred() hint that appears in the outline looks like this: PUSH_PRED(@”MAIN” “V1”@”MAIN” 2 1) – note how the numbers have changed between 11.1.0.7 and 11.2.0.4. So let’s see what happens when I try two separate hints again, fiddling with the third parameter, e.g.:


push_pred(v1@main 1)
no_push_pred(v1@main 2)

With the values set as above I got the plan I want – it’s just a pity that I’m not 100% certain how the numbering in the push_pred() and no_push_pred() hints is supposed to work. In this case, though, it no longer matters as all I have to do now is create a SQL Plan Baseline for my query, transferring the hinted plan into the SMB against the unhinted SQL.
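
For reference, a minimal sketch of the usual dbms_spm route for that transfer; the substitution variables stand for the sql_id and plan_hash_value of the hinted statement and the sql_handle of the unhinted statement, which I’m assuming already has a baseline captured in the SMB (there is also an overload that takes the sql_text instead of a sql_handle):

set serveroutput on

declare
        plans   pls_integer;
begin
        plans := dbms_spm.load_plans_from_cursor_cache(
                        sql_id          => '&hinted_sql_id',
                        plan_hash_value => &hinted_plan_hash_value,
                        sql_handle      => '&unhinted_sql_handle'
                );
        dbms_output.put_line(plans || ' plan(s) loaded');
end;
/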

In passing, I did manage to get the plan I wanted in 11.1.0.7 by adding the hint /*+ outline_leaf(@main) */ to the original SQL. I’m even less keen on doing that than I am on adding undocumented parameters to the push_pred() and no_push_pred() hints, of course; but having done it I did wonder if there are any SQL Plan Baselines in 11.1.0.7 production systems that include the push_pred() hint that are going to change plan on the upgrade to 11.2.0.4 because the numbering inside the hint is supposed to change with version.

Footnote:

Loosely speaking, this blog note is the answer to a question posted about five years ago.

How Do I Know I Have The Latest SLOB Kit?

This is a quick blog post to show SLOB users how to determine whether they are using the latest SLOB kit. If you visit kevinclosson.net/slob you’ll see the webpage I captured in the following screenshot.

Once on the SLOB Resources page you can simply hover over the “SLOB 2.2 (Click here)” hyperlink and the bottom of your browser will show the full name of the tar archive. Alternatively you can use md5sum(1) on Linux (or md5 on Mac) to get the checksum of the tar archive you have and compare it to the md5sum I put on the web page (see the arrow).

 

[Screenshot: the SLOB Resources page at kevinclosson.net/slob, showing the SLOB 2.2 download link and its md5 checksum]

Filed under: oracle

Impressions from #ukoug_tech14

ACC Liverpool

The Oracle circus went to Liverpool this year for the annual conference of the UK Oracle User Group and it was a fantastic event! Top speakers and a very knowledgeable audience too; I was really impressed by the quality we experienced. Together with my friends and colleagues Iloon and Joel, I was waving the flag for Oracle University again – and it was really fun to do so :-)

The 3 of us

One little obstacle was that I actually did many presentations and roundtables, which left less time for me to listen to the high-quality talks of the other speakers…

Joel and I hosted three roundtables:

About Exadata, where we had amongst others Dan Norris (Member of the Platform Integration MAA Team, Oracle) and Jason Arneil (Solutions Architect, e-DBA) contributing

Exadata Roundtable, Jason and Dan on my left side, Joel and Iloon on my right

About Grid Infrastructure & RAC, where Ian Cookson (Product Manager Clusterware, Oracle) took many questions from the audience. We could have had Markus Michalewicz too, if only I had told him the day before during the party night – I’m still embarrassed about that.

About Data Guard, where Larry Carpenter (Master Product Manager Data Guard and Maximum Availability Architecture, Oracle) took all the questions as usual. AND he hit me for the article about the Active Data Guard underscore parameter, so I think I will remove it…

Iloon delivered her presentation about Apex for DBA Audience, which was very much appreciated and attracted a big crowd again, same as in Nürnberg before.

Joel had two talks on Sunday already: Managing Sequences in a RAC Environment (This is actually a more complex topic than you may think!) and Oracle Automatic Parallel Execution (Obviously complex stuff)

I did two presentations as well: The Data Guard Broker – Why it is recommended and Data Guard 12c New Features in Action

Both times the UKOUG was so kind as to give me very large rooms, and I can say that they didn’t look empty although I faced tough competition from other interesting talks. This is from the first presentation:

Uwe Hesse

A big THANK YOU goes out to all the friendly people of UKOUG who made this possible and who made Tech14 the great event it was! And also to the bright bunch of Oracle colleagues and Oracle techies (speakers and attendees included) who gave me good company there: you guys are the best! Looking forward to seeing you at the next conference :-)

Tagged: #ukoug_tech14

Oracle University Expert Summit 2015 in Dubai

Oracle University Expert Summit Dubai 2015

Together with Craig Shallahamer, Lucas Jellema, Mark Rittman, Martin Bach, Pete Finnigan and Tim Fox, I will be presenting in Dubai. My topic is Minimizing Downtime with Rolling Upgrade using Data Guard

Click on the picture for details, please!

Hope to see you there :-)

Day 2 of UKOUG Tech 14

Day 2 of the UKOUG Tech 14 conference for me started with the second session (I spent the first session typing up the blog post on Day 1 as there wasn’t a presentation I was interested in). Tammy Bednar was presenting on “Creating a Test / Dev Database in Minutes Using the Oracle Database Appliance”. Tammy is another old timer at Oracle. Like myself and Maria Colgan, she has been an Oracle employee for 20 years. I’d been discussing with Maria on day 1 that people either seem to move on from Oracle within a couple of years, or they stick around for a long time like us. For me, that’s because I’ve been able to move around and do different things that have kept my interest levels fresh.

But I digress. :) I wasn’t aware of what was happening in the ODA space with this topic. As is often the case at Oracle, there are multiple groups developing similar sorts of functionality. So we have the ODA snapshotting capability, Snap Clone on the EM side, and the ability to create a snapshot copy of a pluggable database in Oracle Database 12c. As I’ve been working on the Snap Clone functionality for some time, I was quite interested to see what the ODA folks had been up to as well. Tammy’s quite an accomplished presenter, so I really enjoyed her presentation.

Next up I went to Tim Gorman and Ron Ekins’ presentation, titled “Dueling Duplications – Physical vs Virtual Data Cloning”. Tim did most of the presenting, and of course, he’s a great presenter as well. The only issue I really had with the presentation was that they were comparing apples with oranges, as they were comparing the time taken to duplicate an entire database with the time taken to snapshot a thin clone, so of course the snapshotting was faster. :) Still, it was interesting to see the interface they were using, which I hadn’t seen in action before.

The next session was my first UKOUG presentation, which was on Snap Clone. I had heard from some other presenters that there had been some issues with getting the slides presenting properly, so I made sure that the room had one of the technicians ready. Unfortunately we had major dramas. Eventually even two technicians and I had to admit defeat. Every time we tried to put PowerPoint into presenting mode, it would present on my laptop screen rather than the big screen the attendees could see. The strange thing was that when I connected to the projector it would display my desktop background, but even when I tried to force the display onto the second monitor it would only come up on my laptop. The technicians finally got the slides onto the main display, but only in edit mode – if I tried to present from that configuration, PowerPoint would restart. So I presented the entire presentation in edit mode, losing all my build slides and so on, which was incredibly frustrating. Adding to that frustration, both my slots were in the rooms off the exhibition hall, which meant there was a lot of noise both from the hall itself and the adjoining rooms, as I mentioned in yesterday’s blog. As I mentioned in that blog, I would certainly recommend those rooms NOT be used for presenting from again!

While the next session was on, I spent some time with the organizers of the conference and they borrowed a different laptop from the conference centre, so my second presentation worked very smoothly. A huge thanks to Faye Wood for managing to get that organized. My second session, titled “Best Practices for Deployment in a Private Cloud – Fast Track to DBaaS”, was the session I was asked to do just last Thursday, so it wasn’t listed in the printed agenda. It was on the agenda boards in the conference centre and the mobile app for the conference, so thankfully a few people realized it was on and turned up (I have heard of two presenters so far who had zero attendees, which must be incredibly disheartening!)

That was the end of the day as far as sessions were concerned. There were drinks on in the exhibition hall again, and I spent some time catching up with Eric van der Sluis and Chris Lawless from DBvisit. It’s funny that I’d seen both of them just a couple of weeks ago on the other side of the world at the NZOUG Conference. :)

The conference finished on Tuesday with the Tech14 Social, but I took the opportunity instead to catch up with quite a number of my fellow Oak Table members over a few drinks and then dinner at the Smugglers Cove, a bar / restaurant just next door to the conference. A very enjoyable way to end the day!

Parse Time

Here’s a little query I wrote some time ago to see where my time went while running a query. It’s nothing sophisticated, just one of those simple things you can do with v$active_session_history (or dba_hist_active_sess_history, if you don’t get to the crime scene in time).


set null CPU

select
        sql_exec_id, in_parse, in_hard_parse, event, count(*)
from
        v$active_session_history
where
        sql_id = '{your choice here}'
group by
        sql_exec_id, in_parse, in_hard_parse, event
order by
        sql_exec_id, in_parse, in_hard_parse, count(*)
;

SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
   40341649 N N db file sequential read                                68
   40341649 N N CPU                                                    21
   40341649 N N db file scattered read                                  9
            Y Y CPU                                                     7

I had run the query that I was tracking exactly once, but my ASH query allows for, and separates, multiple executions of the same query by grouping on sql_exec_id (the thing that the SQL Monitor also uses). The last row looks a little odd, though: it doesn’t have a value for sql_exec_id; that’s because those are ASH samples taken while the query is being optimized, not while it is being executed – note that I’ve reported the columns in_parse and in_hard_parse – and both are set to “Y” for that row.

So (statistically speaking) it’s probably taken about 7 CPU seconds for Oracle to optimise the statement, and from the rest of the results you can see that it’s taken about 21 CPU seconds to run, with 68 seconds spent on random I/Os and 9 seconds spent on multiblock reads, for a total of 105 seconds elapsed.

Seven seconds sounds like quite a lot of time for parsing – but it was a fairly complex statement. However, the reason I’d been running the statement on a test system (a fairly good clone of production) was that I’d been seeing something stranger on production and I needed to get a baseline on the test system before I started trying to fix the problem. Here’s the equivalent ASH report for the same statement when it had run on production at a time that allowed me to capture its ASH samples.


SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  514257929 N N CPU                                                    21
  514257929 Y Y latch: row cache objects                                1
            Y Y CPU                                                   119

Note the 119 CPU seconds spent parsing to run a 22 second query ! But that wasn’t the worst of it – sometimes the results looked more like this:


SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  523748347 N N db file sequential read                                 1
  523748347 N N CPU                                                    32
            Y Y resmgr:cpu quantum                                     11
            Y Y latch: row cache objects                               38
            Y Y CPU                                                   415

That’s a pretty catastrophic optimisation time – especially since the statement can be optimised in seven seconds in another environment. You might note the resource manager kicking in there: the session is exceeding the CPU limit set for its resource group – though not very often, given how infrequently it seems to be waiting on “resmgr:cpu quantum”. But there’s another important wrinkle to this report – which you can see when you compare v$active_session_history with v$sql.


  1  select
  2     sql_id,
  3     round(cpu_time/1000000,2) cpu,
  4     round(elapsed_time/1000000,2) ela from v$sql
  5  where
  6     sql_text like '{some identifying text}'
  7* and        sql_text not like '%v$sql%'
SQL> /

SQL_ID               CPU        ELA
------------- ---------- ----------
2atyuc3vtpswy     285.81     516.13

The figures from v$sql don’t match very well with the summed results from ASH which has a total sample of 497 seconds and a CPU sample of 447 seconds. I think I can live with a statistical error of 4% ((516-497)/516) in a random sample for total time, but how do you explain the 36% error in the CPU time ?

The samples reporting “resmgr:cpu quantum” are a clue: the machine is overloaded; it’s trying to use far more CPU time than is available. As a result a process that gets pushed off the CPU by the operating system scheduler while it’s running can spend a long time in the run queue waiting to start running again. And if it’s an Oracle process that got pre-empted it doesn’t “know” that it’s not running, it didn’t put itself into a wait state so all it “knows” is that it’s not in a wait state.

So how do ASH and v$sql differ ? The code that derives the cpu_time for v$sql issues a call to the O/S asking “how much CPU have I used”. The code that takes an ASH sample says: “is this session active, if so is it in a wait state and if it’s not in a wait state then it’s either on the CPU or in the run queue waiting for the CPU”. So when we compare v$sql with ASH the difference in CPU is (statistically speaking) time spent in the run queue. So of our 447 seconds of CPU recorded by ASH, we spent 161 seconds in the CPU run queue waiting for CPU.
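
Since ASH samples active sessions roughly once per second, you can turn that comparison into a quick and very rough query; a sketch, where &m_sql_id stands for the sql_id of the statement you’re investigating:

select
        ash.cpu_samples                                  ash_cpu_seconds,       -- one "ON CPU" sample is roughly one second of CPU demand
        round(sq.cpu_time/1e6)                           vsql_cpu_seconds,      -- CPU actually consumed, according to v$sql
        ash.cpu_samples - round(sq.cpu_time/1e6)         approx_run_queue_secs  -- the difference: time spent queueing for CPU
from
        (
        select  count(*) cpu_samples
        from    v$active_session_history
        where   sql_id = '&m_sql_id'
        and     session_state = 'ON CPU'
        )       ash,
        (
        select  sum(cpu_time) cpu_time
        from    v$sql
        where   sql_id = '&m_sql_id'
        )       sq
;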

We still have to account for the difference between the 7 CPU seconds on a test system and the variation between 119 CPU seconds and 415 CPU seconds in optimisation on production. In a word – concurrency. Apart from everything else going on at the time there were, in the worst case, 16 slightly different versions of the same statement being called at the same time (on a machine with 24 cores) – all 16 statements were competing violently for the same resources at the same time, and the escalating conflict as more sessions joined in produced an exponential growth in time spent competing for resources rather than doing the job. (I managed to demonstrate the effect quite nicely during the evening by limiting the batch to 4 concurrent executions – and got a typical parse time of 40 CPU seconds).

I’ve often warned people about the problems of concurrency and encouraged them to think about how much time is being spent in competition rather than doing the job; I think this is the most extreme case I’ve seen in a production system. Given how dramatic the variation is, I can’t help wondering if the problem has been exaggerated by some corner case of sessions spinning for mutexes or latches; perhaps even an error in the code that allows resource management to put a session into “resmgr:cpu quantum” while it’s holding a latch or mutex. (I wasn’t able to emulate such an extreme display of the problem on a slightly newer version of Oracle, but I was able to construct a test that demonstrated the effect with a much smaller wastage of CPU.)

The solution (almost certainly): the statements are extremely similar, varying in just one predicate that is using a literal constant. It ought to be a relatively safe and simple change to make the query use a bind variable in that predicate. If the solution is adopted I’d expect to see the (once only) parse time on production drop back to about 7 seconds. Of course, if any other session tries to call the same statement at the same time it ought to end up reporting 7 seconds waiting on “cursor: pin S wait on X” before it starts executing – but that 7 seconds wait is a lot better than an extra 493 CPU seconds trying to optimise the “same” statement at the same time.
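
Just to illustrate the type of change, with a hypothetical table and column since I can’t show the real statement: instead of generating a new literal for every run, the batch would set a bind variable once and reuse the same cursor.

-- before (hypothetical): a new hard parse for every distinct literal
-- select count(*) from batch_data where run_id = 12345;

variable m_run_id number
execute :m_run_id := 12345

select  count(*)
from    batch_data                      -- hypothetical table standing in for the real query
where   run_id = :m_run_id              -- bind variable replaces the literal constant
;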

Footnote:

Running a minor variation on my ASH query to report the sql_plan_hash_value along with the sql_exec_id, I found that the “fix-up” code that updates older ASH rows with information that only becomes available later (e.g. plan_hash_value when optimising, or long time_waited values for a single wait event) only goes back 255 rows – so when I queried ASH for the statements that took 500 seconds to optimise, only 255 of the in_parse rows showed the final sql_plan_hash_value.

 

UKOUG Tech14 slides – Exadata Security Best Practices

I think 2 years is long enough to wait between posts!

Today I delivered a session about Oracle Exadata Database Machine Best Practices and promised to post the slides for it (though no one asked about them :). I’ve also posted them to the Tech14 agenda as well.

Direct download: UKOUG Tech14 Exadata Security slides

Day 1 of UKOUG Tech 14

Monday was the first day of the main part of the UKOUG Tech 14 conference, after SuperSunday’s additional content. I had a great night’s sleep and woke at 7 am, much to the disgust of Richard Foote, who had been up for four hours already and didn’t seem to appreciate my #NoJetLagHere hashtag in reply to his lack-of-sleep tweet. :) I managed to get to the conference relatively dry, but realized yet again that you don’t come to Liverpool in December for the weather. :)

The conference itself is at the Arena and Convention Centre in Liverpool. The conference and exhibition are spread over three floors of the conference centre, with the exhibition on floor 0 (it sounds better than the basement, I guess!) and most of the presentations and the speaker’s lounge on the 2nd floor. Some of the presentations are also held in rooms off the exhibition hall. The first session timeslot didn’t hold a lot to interest me specifically, so I went to the speaker lounge and caught up on a bit of email and the like. The second session included a presentation by Joze Senegacnik, a fellow Oak Table member that I had finally met in person the previous night at the Oracle ACE dinner. His presentation was titled “The Evolvement of Oracle Performance Troubleshooting”. When we met at the ACE dinner, he described his paper as a tour of performance since Oracle 2, so that sounded pretty interesting. Not many people have been around to use a lot of those early versions, so it’s always interesting to hear from someone who experienced just how limited some of that early stuff was!

Unfortunately, Joze’s presentation was in one of the rooms off the exhibition hall and was separated from another one of those rooms by a wall that didn’t reach the roof. The end result is that neither presenter could use a microphone (as they would have drowned out the presenter in the other room) and it was quite difficult to hear, particularly as my hearing isn’t the best anyway. Still, it gave me a heads-up of what it will be like for my two papers today, as both of them are in that same room. I think I’ll have a bit of an advantage over Joze as he is a fairly quietly spoken guy, whereas I’m much more of a “basso profundo”! :)

After Joze’s presentation I finally got to catch up with my lovely colleague Tania Le Voi. Tania and I had worked together in the group I was with in the Enterprise Manager product management team before I moved to the DBaaS team, so it was great to actually spend some time together before sharing lunch with her, James Kao and Jeff Barber (also fellow product managers in the EM team). I also caught up with a customer who has been a reader of my blog, so that’s always fun! Thanks for the positive feedback, Jon! :)

Straight after lunch we had the EM Roundtable slot, which is particularly valuable to us as product managers as these sessions are where we get to hear directly from customers about what they’re doing with the EM product suite, what pain points they might still have and so on. Again, this session was in one of the rooms off the exhibition hall, and if there’s one piece of feedback I would offer the UKOUG Tech 14 organizers, it would be to definitely NOT use these rooms for roundtables, as I had a lot of difficulty hearing some of the issues customers were raising. In fact, it would be better to not use these rooms at all. Those of us who have a hearing loss will realize just how difficult it can be to hear in rooms with competing presenters! The other times when I had problems hearing were in sessions on the second floor. From time to time during the day there were quite heavy periods of rainfall and even hail. I don’t know what sort of material the roof of the convention centre is made from, but the noise level when it was hailing was horrendous!

Unfortunately, I had to spend most of the rest of the afternoon working on my day job, which is one of the downsides of being at a conference. I managed to be finished in time for the exhibition drinks slot, which was held in the exhibition hall. Once again, I enjoyed catching up with a few people over drinks that I hadn’t had the chance to meet in person before. I had to leave after that, so I missed the community drinks session while I went off for dinner with some customers from Barclays, an event organized by Ian Carney, a colleague from my days back in the late 90’s / early 2000’s with Oracle in the US.

All in all, a fairly enjoyable day at UKOUG Tech 14. Looking forward to tomorrow when I have two presentations to do myself, as well as more time catching up with good friends old and new!

Cardinality Change

Here’s an entertaining little change across versions of Oracle, brought to my attention by Tony Hasler during UKOUG Tech 14. It’s a join cardinality estimate, so here are a couple of tables to demonstrate the issue – the only columns needed are the alpha_06 columns, but I reused some code from other demonstrations to create my test case, so there are lots of irrelevant columns in the create table script:


create table t1 nologging as
with generator as (
        select rownum id
        from dual
        connect by rownum <= 1000
)
select
        rownum                                          id,
        mod(rownum-1,200)                               mod_200,
        trunc(dbms_random.value(0,300))                 rand_300,
        mod(rownum-1,10000)                             mod_10000,
        trunc(sysdate) +
                trunc(dbms_random.value(0,1000))        date_1000,
        dbms_random.string('l',6)                       alpha_06,
        dbms_random.string('l',20)                      alpha_20
from
        generator,
        generator
where
        rownum <= 1e6
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

create table t2 nologging as select * from t1;
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 1')
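
As a quick sanity check on the figures I quote below, the column statistics can be read straight from the dictionary; a small sketch:

-- number of distinct values and nulls recorded for t1.alpha_06
select  column_name, num_distinct, num_nulls
from    user_tab_cols
where   table_name  = 'T1'
and     column_name = 'ALPHA_06'
;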

I’m going to join t1 to t2 with a predicate based on the alpha_06 columns – using a LIKE predicate. Before I do so I’ll point out that there are 1,000,000 rows in the table, and (checking the column stats) 985,920 distinct values for alpha_06. Here’s my query, with the execution plan I got from 11.1.0.7:


select
        count(*)
from
        t1, t2
where
        t2.alpha_06 like t1.alpha_06
;

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1122M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |    50G|   651G|  1122M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1123   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   | 50000 |   341K|  1122   (6)| 00:00:06 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The 50,000 cardinality estimate for t2 looks like the standard 5% guess for “column >= {unknown value}”, following which the join cardinality of 50G is the same 5% guess applied to the Cartesian join between t1 and t2 (1M * 1M * 0.05). It’s not a good estimate in my case because the right answer happens to be close to 1M rows, specifically 1,003,176. So let’s upgrade to 11.2.0.4 and see what we get instead:

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1050M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |  2014K|    26M|  1050M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1051   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   |     2 |    14 |  1050   (6)| 00:00:06 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The estimate has dropped from 50 Billion rows down to 2 Million – a factor of about 25,000: possibly an indicator that the algorithm has changed, and that a few people might find execution plans changing as they upgrade to a newer version of Oracle. The change occurred at 11.2.0.2 as revealed by fix control 9303766 which has the description: “use 1/NDV+1/NROWS for col1 LIKE col2 selectivities”.

Just as a quick check on the arithmetic: there are 1 million rows in table t2, with (as noted above) 985,920 distinct values in the column, so the selectivity should be: 1/1000000 + 1/985920 = 2.014281e-6. Multiply the selectivity by 1e6 and you get 2, the cardinality estimate for t2; multiply the selectivity by 1M*1M (the Cartesian join) and you get 2,014,281, the cardinality estimate of the join. QED.

There are workarounds, of course. One would be to reverse out the fix control, either as an initialisation parameter or in a session logon trigger (sketched at the end of this note); another might be to modify the SQL – I think the following would be equivalent:


select
        *
from    t1, t2
where
        t2.alpha_06 like substr(t1.alpha_06,1,length(t1.alpha_06))||'%'
and     t1.alpha_06 is not null
and     t2.alpha_06 is not null

This changes the critical predicate from the form “col1 like col2” to “col1 like {unknown value from function}”, i.e. back to a case where the optimizer uses the 5% guess, and the cardinality estimates go back to the original values.
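
For completeness, here’s a sketch of the other workaround, reversing out the fix control. The usual caveat applies: check with Oracle Support before fiddling with fix controls or underscore parameters on a production system, and the APP_USER schema in the trigger is purely hypothetical.

-- session level:
alter session set "_fix_control" = '9303766:off';

-- or revert it for every session of a given (hypothetical) application user:
create or replace trigger fix_9303766_off
after logon on database
begin
        if user = 'APP_USER' then
                execute immediate 'alter session set "_fix_control" = ''9303766:off''';
        end if;
end;
/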