Oakies Blog Aggregator

OOW 2011 – Oracle XML DB and Big Data

Last day of Oracle Open World and I am currently attending the last presentations. The first presentation, “Oracle XMLDB: A noSQL Approach to Managing all your Unstructured Data”, deals with the NoSQL approach and with using Oracle XML DB to manage “Big Data”, that is, unstructured data. The title of the …


Why Is My Index Not Being Used No. 2 Solution (The Narrow Way)

As many have identified, the first thing to point out is that the two queries are not exactly equivalent. The BETWEEN clause is equivalent to a ‘>= and <=’ predicate, whereas the original query only had a ‘> and <’ predicate. The additional equality conditions at each end are significant. The selectivity of the original query is basically costed [...]
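To make the boundary-condition point concrete, compare the two predicate forms (hypothetical table and values, purely for illustration):

-- Not equivalent: BETWEEN is inclusive of both end points.
select * from t1 where col1 between 100 and 200;    -- col1 >= 100 and col1 <= 200
select * from t1 where col1 > 100 and col1 < 200;   -- excludes the rows at exactly 100 and 200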

Adding another node for RAC 11.2.0.3 on Oracle Linux 6.1 with kernel-UEK

As I hinted in my last post about installing Oracle 11.2.0.3 on Oracle Linux 6.1 with Kernel UEK, I planned a follow-up article about adding a node to the cluster.

I deliberately started the installation of my RAC system with only one node, to give my moderately spec’d hardware a chance to cope before adding a second cluster node. In previous versions of Oracle there was a problem with node additions: the $GRID_HOME/oui/bin/addNode.sh script performed pre-requisite checks that used to fail when you had used ASMLib. Unfortunately, due to my setup, I couldn’t test whether that has been solved (I didn’t use ASMLib).

Cluvfy

As with many cluster operations on non-Exadata systems, you should use the cluvfy tool to ensure that the system you want to add to the cluster meets the requirements. Below is an example cluvfy session. Since I am about to add a node, the stage has to be “-pre nodeadd”. rac11203node1 is the active cluster node, and rac11203node2 is the one I want to add. Note that you run the command from any existing node, specifying the nodes to be added with the “-n” parameter. For convenience I have added the “-fixup” option to generate fixup scripts if needed. Also note that this is a lab environment; a real production environment would use dm-multipath for storage and a bonded pair of NICs for the public network. Since 11.2.0.2 you no longer need to bond your private NICs: Oracle does that for you now.

[oracle@rac11203node1 ~]$ cluvfy stage -pre nodeadd -n rac11203node2 -fixup

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac11203node1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.99.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.99.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.99.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.99.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/crs/11.2.0.3" is shared
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.99.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.100.0"

Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "192.168.101.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.99.0".
Subnet mask consistency check passed for subnet "192.168.100.0".
Subnet mask consistency check passed for subnet "192.168.101.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.99.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.99.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.100.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.100.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.101.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.101.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check failed
Check failed on nodes:
rac11203node2,rac11203node1
Free disk space check passed for "rac11203node2:/u01/crs/11.2.0.3"
Free disk space check passed for "rac11203node1:/u01/crs/11.2.0.3"
Free disk space check passed for "rac11203node2:/tmp"
Free disk space check passed for "rac11203node1:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "oracle"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "oracle" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Pre-check for node addition was unsuccessful on all the nodes.

A few points are worth noting here:

  • Swap space check failed: this one consistently fails for me, as I don’t see the point in providing swap space according to Oracle’s formula.
  • Each network has a multicast check performed: this is a huge step forward from 11.2.0.2, which you could install happily only to see it fail miserably during the root.sh/rootupgrade.sh stage because of a multicast problem (MOS note 1212703.1). I would have liked to see it also test the 224.0.0.251 address introduced by patch 9974223. If you want to test multicast yourself, see the sketch below.
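For manual multicast testing, MOS note 1212703.1 ships a small Perl script, mcasttest.pl. The invocation below is only a sketch based on the usage documented in the note (node list after -n, private interface(s) after -i); check the note for the exact current syntax:

[oracle@rac11203node1 mcasttest]$ perl mcasttest.pl -n rac11203node1,rac11203node2 -i eth1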

Node Addition

Now that everything seems OK, I can continue to add the node. To get rid of the swap space error preventing me from adding the node, I have to set the IGNORE_PREADDNODE_CHECKS environment variable to Y (remember, it’s a lab!). This time I specify the new node name and its VIP name:

[oracle@rac11203node1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac11203node1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac11203node2}" \
> "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac11203node2-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1023 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes rac11203node2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/crs/11.2.0.3
New Nodes
Space Requirements
New Nodes
rac11203node2
/u01/crs: Required 3.70GB : Available 8.97GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.4
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Server) 11.2.0.3.0
Installation Plugin Files 11.2.0.3.0
Universal Storage Manager Files 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Automatic Storage Management Assistant 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.3.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Oracle Net Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Enterprise Manager plugin Common Files 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.3.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Cluster Verification Utility Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle LDAP administration 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
PL/SQL 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Cluster Ready Services Files 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Monday, September 26, 2011 4:12:37 PM BST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, September 26, 2011 4:12:42 PM BST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, September 26, 2011 4:17:41 PM BST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac11203node2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac11203node2
/u01/crs/11.2.0.3/root.sh #On nodes rac11203node2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/crs/11.2.0.3 was successful.
Please check '/tmp/silentInstall.log' for more details.

I checked /tmp/silentInstall.log for any problems, and there weren’t any.

Root.sh

I simply followed the instructions and ran the orainstRoot.sh and root.sh scripts on the new node, rac11203node2. Here’s the output from root.sh, which is more interesting than that of orainstRoot.sh:

[root@rac11203node2 ~]# /u01/crs/11.2.0.3/root.sh 2>&1 | tee /tmp/root.sh.out
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/crs/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/crs/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac11203node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

That’s it: the cluster has been extended successfully! It has become an easy ride, but to be fair it always has been on Linux. I wouldn’t be so confident about the same operation on HP-UX, though.
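If you want to double-check the state of the extended cluster, the standard clusterware tools will confirm it. A quick sketch from any node (commands only; I haven’t reproduced the output here):

[oracle@rac11203node1 ~]$ olsnodes -n -s               # node names, numbers and Active/Inactive status
[oracle@rac11203node1 ~]$ crsctl check cluster -all    # CRS, CSS and EVM health on every node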

Book Review: Oracle Database 11g Performance Tuning Recipes (Part 2)

October 6, 2011. Hammering a Square Peg into a Round Hole: Fine Edges are Lost, Gaps in Detail. http://www.amazon.com/Oracle-Database-Performance-Tuning-Recipes/dp/1430... (Back to the Previous Post in the Series) In an effort to make my review fair, I have completed the review of the second half of the “Oracle Database 11g Performance Tuning Recipes” book (omitting the pages for chapters [...]

Rise of the Machines

This is the penultimate day of #OOW11 and I am here in the hotel lobby, trying to impose some order on the myriad nuggets of information I have picked up over the last several days.

The announcements this year have centered around the introduction of various new products from Oracle: Oracle Database Cloud, Cloud Control, Database Appliance, Big Data Appliance, Exalytics, the T4 SuperCluster, and so on. One interesting pattern emerging from the announcements, and different from all previous years, is the introduction of several engineered and assembled systems, each performing some type of task, specialized or generic. In the past Oracle announced machines too, but not so many at the same time, leading April Sims (Executive Editor, Select Journal) to observe that this year can be summed up in one phrase: Rise of the Machines.

But many of the folks I met in person or online were struggling to get their heads around the whole lineup. It's quite clear that they were very unclear (no pun intended) about how these products differ and which situation each one fits. It's perfectly normal to be a little confused about the sweet spots of each product, considering the glut of information on them and their seemingly overlapping functionality. In the Select Journal Editorial Board meeting we had earlier this morning, I committed to writing about the differences between the various systems announced at #OOW11, and their uses, in the Select Journal 2012 Q1 edition. I didn't realize at the time what a tall order that was. I need to reach out to several product managers and executives inside Oracle to understand the functional differences between these machines. Well, now that I have firmly put my foot in my mouth, I will have to do just that. [Update on 4/29/2012: I have done that. Please see below]

At the demogrounds I learned about Oracle Data Loader for Hadoop and Enterprise-R, two exciting technologies that will change the way we collect and analyze large data sets, especially unstructured ones. Another new technology, centered around Cloud Control, was Data Subsetting. It allows you to pull a subset of data from the source system to create test data, mask it if necessary, and even find sensitive data based on some format. A tool like this had been due for quite some time.

Again, I really need to collect my thoughts and sort through the information overload I was subjected to at OOW. This was the best OOW ever.

Update on April 29th, 2012

I knew I had to wrap my head around these announcements and sort through the features available in the engineered machines. And I did exactly that. I presented a paper of the same name, Rise of the Machines, at Collaborate 2012, the annual conference of the Independent Oracle Users Group. Here is the presentation. In that session I explained the various features of the six machines (Oracle Database Appliance, Exadata, Exalogic, SPARC SuperCluster, Exalytics and Big Data Appliance), the differences between them, and where each one should be used. Please download the session if you want to know more about the topic.

Runstats utility

A variation on Tom Kyte's invaluable RUNSTATS utility that compares the resource consumption of two alternative units of work. Designed to work in constrained developer environments, it builds on the original with enhancements such as "pause and resume" functionality, time model statistics, and the option to report on specific statistics. ***Update*** Now available in two formats: 1) as a PL/SQL package and 2) as a free-standing SQL*Plus script (i.e. no installation/database objects needed). January 2007 (updated October 2011)
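For orientation, this is the shape of a typical runstats session. The sketch follows Tom Kyte's original package (commonly installed as runstats_pkg), on which this utility builds; this variation's entry points, including the "pause and resume" calls, may be named differently, so check the package specification:

SQL> exec runstats_pkg.rs_start;        -- snapshot statistics before the first approach
SQL> -- ... run the first version of the workload ...
SQL> exec runstats_pkg.rs_middle;       -- snapshot between the two approaches
SQL> -- ... run the alternative version of the workload ...
SQL> exec runstats_pkg.rs_stop(1000);   -- report statistics that differ by more than 1,000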

Mystats utility

A variation on Jonathan Lewis's SNAP_MY_STATS package that reports the resource consumption of a unit of work between two snapshots. Designed to work in constrained developer environments, this version adds enhancements such as time model statistics and the option to report on specific statistics. ***Update*** Now available in two formats: 1) as a PL/SQL package and 2) as a free-standing SQL*Plus script (i.e. no installation/database objects needed). June 2007 (updated October 2011)
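The snapshot pattern is the same as in Jonathan Lewis's original package; a sketch of that original API (this variation's procedure names may differ):

SQL> execute snap_my_stats.start_snap   -- snapshot the session's statistics
SQL> -- ... run the unit of work to be measured ...
SQL> execute snap_my_stats.end_snap     -- report the deltas accumulated since start_snap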

OOW11: Monday and Tuesday…

Monday: I went to some presentations, hung around in the OTN lounge and ate at every possible opportunity. Tanel Poder‘s presentation on “Large-Scale Consolidation onto Oracle Exadata: Planning, Execution, and Validation” was pretty cool.

In the evening I planned to meet a former colleague at the OTN party. I decided the best way to find him was to visit every food station at the party, which of course meant sampling the goods. Unfortunately I spent too much time eating and not enough time looking for him. Sorry Ian! The cool thing about Open World is that you can enter a giant tent full of thousands of people and pretty much guarantee you will bump into loads of people you know. :)

Tuesday: I spent most of Tuesday helping out at RAC Attack in the OTN Lounge. I did manage to get to see Greg Rahn‘s presentation called “Real-World Performance: How Oracle Does It”, which focussed on Real-Time SQL Monitoring. Greg’s presentation style is really easy to listen to and you know this isn’t just theoretical knowledge. He’s in the trenches doing this stuff as part of the Real-World Performance Group.

As the afternoon progressed I felt a little tired, so I went back to the hotel, puked and fell asleep. I think this was more to do with being over-tired than anything else. That meant I missed some of the later sessions and didn’t hook up with anyone in the evening.

This morning I feel a little ropey, but I’m going to head on down to RAC Attack again and see if I can make myself useful. Tonight is the appreciation event, but I’m not sure if I will be able to “appreciate it” unless I get a major energy injection at some point today. :)

Cheers

Tim…

Oracle Database 11gR2 on OL6 / RHEL6: Certified or Not?

There seems to be a little confusion out there about the certification status of Oracle Database 11gR2, especially with the release of the 11.2.0.3 patchset, which fixes all the issues associated with RAC installs on OL/RHEL 6.1.

Currently, 11gR2 is *NOT* certified on OL6 or RHEL6. How do I know? My Oracle Support says so! Check for yourself like this:

  • Log on to My Oracle Support (support.oracle.com).
  • Click the “Certifications” link.
  • Type in the product name, like “Oracle Database”.
  • Select the product version number, like “11.2.0.3.0”.
  • Select the platform, like “Linux x86_64”, or a specific distro beneath it.
  • Click the “Search” button.

From the results you will see that Oracle Database 11.2.0.3 is certified on OL and RHEL 5.x. Oracle do not differentiate between the respins of a major version. You will also notice that it is not currently certified on OL6 or RHEL6.

Having said that, we can expect this certification really soon. Why? Because Red Hat has submitted all the certification information to Oracle and (based on previous certifications) expects it to happen some time in Q4 this year, which is any time between now and the end of the year.

With a bit of luck, by the time I submit this post the MOS certification will have been updated and I will happily be out of date… :)

Cheers

Tim…

HCC – 2

Just a little follow-up to my previous note on hybrid columnar compression. The following is the critical selection of code I extracted from the trace file after tracing a run of the advisor code against a table with 1,000,000 rows in it:


create table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID1
tablespace "USERS" nologging
as
select  /*+ FULL(mytab) NOPARALLEL(mytab) */ 
        rownum rnum, mytab.*
from    "TEST_USER"."T1"  mytab
where   rownum <= 1000001


create table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID2 
tablespace "USERS" nologging 
as 
select  /*+ FULL(mytab) NOPARALLEL(mytab) */ * 
from    "TEST_USER".DBMS_TABCOMP_TEMP_ROWID1 mytab 
where   rnum >= 1


alter table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID2 set unused(rnum)


create table "TEST_USER".DBMS_TABCOMP_TEMP_UNCMP
tablespace "USERS" nologging
as
select  /*+ FULL(mytab) NOPARALLEL (mytab) */ *
from    "TEST_USER".DBMS_TABCOMP_TEMP_ROWID2 mytab


create table "TEST_USER".DBMS_TABCOMP_TEMP_CMP
organization heap
tablespace "USERS"
compress for archive high
nologging
as
select  /*+ FULL(mytab) NOPARALLEL (mytab) */ *
from    "TEST_USER".DBMS_TABCOMP_TEMP_UNCMP mytab


drop table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID1 purge
drop table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID2 purge
drop table "TEST_USER".DBMS_TABCOMP_TEMP_UNCMP purge
drop table "TEST_USER".DBMS_TABCOMP_TEMP_CMP purge
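
For context, the advisor being traced here is dbms_compression.get_compression_ratio(). A minimal sketch of the call that drives the statements above, using the documented 11.2 signature and the table from this test:

set serveroutput on

declare
        l_blkcnt_cmp    pls_integer;
        l_blkcnt_uncmp  pls_integer;
        l_row_cmp       pls_integer;
        l_row_uncmp     pls_integer;
        l_cmp_ratio     number;
        l_comptype_str  varchar2(100);
begin
        dbms_compression.get_compression_ratio(
                scratchtbsname  => 'USERS',             -- tablespace for the scratch tables
                ownname         => 'TEST_USER',
                tabname         => 'T1',
                partname        => null,
                comptype        => dbms_compression.comp_for_archive_high,
                blkcnt_cmp      => l_blkcnt_cmp,
                blkcnt_uncmp    => l_blkcnt_uncmp,
                row_cmp         => l_row_cmp,
                row_uncmp       => l_row_uncmp,
                cmp_ratio       => l_cmp_ratio,
                comptype_str    => l_comptype_str
        );
        dbms_output.put_line('Compression ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
end;
/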

Note: in my example the code seems to pick the first 1M rows of the table; if this is the way Oracle works for larger volumes of data, this might give you an unrepresentative set of data and misleading results. I would guess, though, that this may be a side effect of using a small table in the test; it seems likely that with a much larger table, perhaps in the tens of millions of rows, Oracle would use a sample clause to select the data. If Oracle does use the sample clause, then the time to do the test will be influenced by the time it takes to do a full tablescan of the entire data set.
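For what it’s worth, a sample-based version of the first CTAS might look something like this; purely a sketch (the trace from my small table showed no sample clause):

create table "TEST_USER".DBMS_TABCOMP_TEMP_ROWID1
tablespace "USERS" nologging
as
select  /*+ FULL(mytab) NOPARALLEL(mytab) */
        rownum rnum, mytab.*
from    "TEST_USER"."T1" sample block (10) mytab        -- hypothetical 10% block sample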

Note 2: The code to drop all four tables runs only at the end of the test. If you pick a large sample size you will need enough free space in the tablespace to create three tables holding data of around that sample size, plus the final compressed table. This might be more space, and take more time, than you initially predict.

Note 3: There are clues in the trace file suggesting that Oracle may choose to sort the data (presumably by adding an order by clause in the final CTAS) to maximise compression.
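A sketch of what such a sort-aware CTAS might look like; the sort column is hypothetical, and the best sort key depends entirely on the data:

create table "TEST_USER".T1_SORTED
compress basic          -- swap in "compress for archive high" for the HCC equivalent
pctfree 0
nologging
as
select  *
from    "TEST_USER"."T1"
order by
        low_cardinality_col     -- hypothetical column: sorting clusters repeated values together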

Note 4: You’ve got to wonder why Oracle creates two copies of the data before coming up with the final compressed copy. You might also wonder why the UNCMP copy isn’t created with PCTFREE 0 to allow for a more reasonable comparison between the “free” option for archiving the table and the compressed version. (It would also be more useful to have a comparison between the free “basic compression” and the HCC compression, rather than the default 10% free space copy.)

For reference (though not to be taken too seriously) the following figures show the CPU and Elapsed times for creating the four tables:

Table      CPU    Ela
-----    -----    ---
Rowid1    1.12  6.67
Rowid2    0.70  0.20
UNCMP     0.59  7.31
CMP      18.29  0.04

Don’t ask me why the elapsed times don’t make sense; but do note that this was 11.2.0.2 on 32-bit Windows running in a VM.

And a few more statistics for comparison, showing the size (in blocks) of the 1M-row test table under different options:


Original size:                           10,247
Data size reported by dbms_compression:  10,100
Final size reported by dbms_compression:  2,438
Original table recreated at pctfree 0:    9,234
Original table with basic compression:    8,169
Optimal sort and basic compression:       6,781

There’s no question that HCC can give you much better results than basic compression – but it’s important to note that the data patterns and basic content make a big difference to how well the data can be compressed.

Footnote: The question of how indexes work with HCC tables came up in one of the presentations I went to. The correct answer is: “not very well”.