Deferred Segment Creation

Implementing Deferred Segment Creation After an Upgrade

I have written previously about Deferred Segment Creation. Each empty table and its indexes use only 64Kb (assuming an 8Kb blocksize and locally managed tablespaces), but in a PeopleSoft system there can be tens of thousands of such tables, and that adds up to a saving worth making.

If you are upgrading your database to 11gR2, you might want to make sure that you are using it. Deferred segment creation was introduced in Oracle 11.2.0.2 and became the default in 11.2.0.3. However, any table created in a previous version will still have a physical segment.
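One option for releasing the segments of such pre-upgrade empty tables is the DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS procedure, available from 11.2.0.2 onwards. A minimal sketch (SYSADM is the typical PeopleSoft schema owner, assumed here for illustration):

```sql
-- Drop the segments of all empty tables in the SYSADM schema (11.2.0.2+).
-- Tables that contain rows are left untouched; the affected empty tables
-- become segment-less, as if created with SEGMENT CREATION DEFERRED.
exec dbms_space_admin.drop_empty_segments(schema_name => 'SYSADM');
```

The procedure can also be restricted to a single table (and partition) via its table_name and partition_name parameters, which may be safer for a first test than running it schema-wide.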

This problem could affect any system, but it also manifests itself in PeopleSoft in a particular way.


Deferred Segment Creation in PeopleSoft

This note has been sitting in my to-do folder for a while, since I found these two excellent blog entries about Deferred Segment Creation by Christian Antognini.

They made me think about the use of this feature in PeopleSoft. Most PeopleSoft systems contain lots of empty tables, sometimes because not all the modules delivered in the database are in use, but also because not all the temporary table instances have ever been used. This blog entry from Tom Kyte feels very close to home.
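A rough way to spot candidate empty tables that still carry segments is to compare the data dictionary's row counts with the segment list. A sketch, assuming optimizer statistics are reasonably current (num_rows comes from the stats, not a live count) and using SYSADM as the assumed PeopleSoft schema owner:

```sql
-- Empty tables (per optimizer statistics) that nevertheless own a segment
select t.owner, t.table_name, s.bytes
from   dba_tables   t
join   dba_segments s
on     s.owner = t.owner
and    s.segment_name = t.table_name
where  t.num_rows = 0
and    t.owner = 'SYSADM'
order by t.table_name;
```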


ORA-14404 / 14405 and Deferred Segment Creation

Have a look at the following SQL*Plus output snippet:

SQL> select count(*) from dba_segments where tablespace_name in ('TS_TO_DROP', 'TS_TO_NOT_DROP');

COUNT(*)
----------
0

SQL> drop tablespace ts_to_drop including contents and datafiles cascade constraints;
drop tablespace ts_to_drop including contents and datafiles cascade constraints
*
ERROR at line 1:
ORA-14404: partitioned table contains partitions in a different tablespace

How is it possible to get a message claiming that a partitioned table has partitions both in the tablespace to be dropped and in other tablespaces, when no segment at all is contained in the tablespace to be dropped?

Could it be a dictionary corruption, or a problem with the recycle bin?
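As the section title hints, the likely culprit is deferred segment creation: from 11.2.0.2 onwards partitions can exist without segments, yet the dictionary still records their tablespace assignment, and it is the dictionary that the DROP TABLESPACE check consults. A sketch with made-up names illustrating the state:

```sql
-- A list-partitioned table with one partition in each tablespace,
-- created segment-less (11.2.0.2+ supports this for partitioned tables)
create table t_part (n number)
partition by list (n)
(
  partition p_keep values (1) tablespace ts_to_not_drop,
  partition p_drop values (2) tablespace ts_to_drop
)
segment creation deferred;

-- The dictionary knows the tablespace assignments even without segments...
select partition_name, tablespace_name
from   user_tab_partitions
where  table_name = 'T_PART';

-- ...so DROP TABLESPACE can raise ORA-14404 although dba_segments
-- shows nothing in either tablespace.
```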

11.2 New Features - Subtle Restrictions

This is just a short note that some of the new features added in 11.2 come with subtle restrictions that are not properly documented.

1. Hybrid Columnar Compression and Function-Based Indexes

Hybrid Columnar Compression (HCC), introduced in 11.2 and in 11.2.0.1 only enabled for tablespaces residing on Exadata storage, has a subtle restriction: the mere presence of a function-based index, whether created explicitly or indirectly via an index on a virtual column, completely disables HCC on that segment for direct-path / parallel DML inserts. The load silently falls back to no compression at all, rather than using, for example, BASIC or OLTP compression instead.

This can be easily reproduced using the following simple test case:

set echo on timing on

spool hcc_function_based_indexes_testcase.log

drop table t_hcc_fbi purge;

-- Create table with HCC enabled
create table t_hcc_fbi compress for query low as select * from dba_objects where 1 = 2;

truncate table t_hcc_fbi;

-- Direct path inserts support HCC by default
insert /*+ append */ into t_hcc_fbi select * from dba_objects;

commit;

-- This is supposed to return DBMS_COMPRESSION.COMP_FOR_QUERY_LOW = 8
select distinct dbms_compression.get_compression_type(USER, 'T_HCC_FBI', rowid) from t_hcc_fbi;

-- So it is supported to have an FBI on HCC data
create index t_hcc_fbi_idx on t_hcc_fbi (lower(object_type));

truncate table t_hcc_fbi;

-- Let's make the FBI unusable, so it doesn't have to be maintained by the load
alter index t_hcc_fbi_idx unusable;

insert /*+ append */ into t_hcc_fbi select * from dba_objects;

commit;

-- But this will return DBMS_COMPRESSION.COMP_NOCOMPRESS = 1
select distinct dbms_compression.get_compression_type(USER, 'T_HCC_FBI', rowid) from t_hcc_fbi;

-- So the mere existence of an FBI, whether usable or unusable, will prevent HCC with direct-path inserts / parallel DML
-- It is however fully supported to have them on HCC data, but you need to drop / re-create them in order to load the data
-- via direct-path inserts / parallel DML with HCC enabled

spool off

And this is the output I get from 11.2.0.1:

SQL>
SQL> drop table t_hcc_fbi purge;

Table dropped.

Elapsed: 00:00:00.13
SQL>
SQL> -- Create table with HCC enabled
SQL> create table t_hcc_fbi compress for query low as select * from dba_objects where 1 = 2;

Table created.

Elapsed: 00:00:00.09
SQL>
SQL> truncate table t_hcc_fbi;

Table truncated.

Elapsed: 00:00:00.01
SQL>
SQL> -- Direct path inserts support HCC by default
SQL> insert /*+ append */ into t_hcc_fbi select * from dba_objects;

72545 rows created.

Elapsed: 00:00:01.24
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> -- This is supposed to return DBMS_COMPRESSION.COMP_FOR_QUERY_LOW = 8
SQL> select distinct dbms_compression.get_compression_type(USER, 'T_HCC_FBI', rowid) from t_hcc_fbi;

DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER,'T_HCC_FBI',ROWID)
-------------------------------------------------------------
8

Elapsed: 00:00:09.51
SQL>
SQL> -- So it is supported to have an FBI on HCC data
SQL> create index t_hcc_fbi_idx on t_hcc_fbi (lower(object_type));

Index created.

Elapsed: 00:00:00.23
SQL>
SQL> truncate table t_hcc_fbi;

Table truncated.

Elapsed: 00:00:00.03
SQL>
SQL> -- Let's make the FBI unusable, so it doesn't have to be maintained by the load
SQL> alter index t_hcc_fbi_idx unusable;

Index altered.

Elapsed: 00:00:00.02
SQL>
SQL> insert /*+ append */ into t_hcc_fbi select * from dba_objects;

72546 rows created.

Elapsed: 00:00:01.01
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL>
SQL> -- But this will return DBMS_COMPRESSION.COMP_NOCOMPRESS = 1
SQL> select distinct dbms_compression.get_compression_type(USER, 'T_HCC_FBI', rowid) from t_hcc_fbi;

DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER,'T_HCC_FBI',ROWID)
-------------------------------------------------------------
1

Elapsed: 00:00:08.93
SQL>

I've been told that this restriction is going to be lifted in some future version, but I haven't had the chance yet to test this with the 11.2.0.2 patch set, in particular because the patch set is not yet released as an upgrade for existing Exadata installations - it is only available for new Exadata deployments.

2. Deferred Segment Creation and Parallel DML

11.2 introduced deferred segment creation, which, interestingly, is enabled by default in a fresh installation. I haven't done a migration from a previous version to 11.2 yet, so I don't know whether it is also enabled in migrated environments, but I assume so.

Note that the 11.2.0.2 patch set extends the deferred segment creation to partitioned objects which wasn't supported in the initial 11.2.0.1 release.

Deferred segment creation has some subtle side effects. For example, sequences seem not to start with the defined START WITH value: the recursive transaction fails initially, the segment gets created, and the next sequence value is then used for the actual DML insert operation.
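The sequence side effect can be sketched as follows (made-up names; behaviour observed on 11.2.0.1, so treat the outcome as version-dependent rather than guaranteed):

```sql
-- A segment-less table and a sequence defined to start with 1
create table t_seq_demo (id number) segment creation deferred;
create sequence s_demo start with 1 increment by 1;

-- The first insert triggers the recursive segment-creation transaction,
-- which may consume the first sequence value
insert into t_seq_demo values (s_demo.nextval);
commit;

-- May show 2 rather than the expected 1
select id from t_seq_demo;
```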

Another side effect that is not properly documented is that parallel DML is not supported on "segment-less" objects.

So if you have a freshly created, segment-less table that is supposed to be populated via parallel DML inserts, the first execution will silently fall back to a serial direct-path insert.

Once the segment has been created, parallel DML is used from then on, which adds to the confusion, since the fallback is only reproducible under particular circumstances.

So if you have migrated an application to 11.2 and wonder why load operations sometimes take significantly longer than before, but mostly do not, this might be a viable explanation.

This behaviour is not part of the deferred segment creation restrictions described in the official documentation, but it is mentioned towards the end of the MOS note "11.2 Database New Feature Deferred Segment Creation [ID 887962.1]".

According to that MOS note this still seems to apply to 11.2.0.2, but I haven't tested it myself yet. So it looks like this significant restriction has not been lifted with the current patch set.

Update July 2011: See comment below by Radoslav, the problem seems to be fixed in 11.2.0.2.

Here is a simple test case demonstrating the issue:

set echo on timing on

set linesize 160 trimspool on tab off pagesize 0

spool deferred_segment_creation_parallel_dml_testcase.log

drop table t_source purge;

drop table t_deferred_parallel_dml purge;

-- Create a simple table as source for the insert
create table t_source parallel nologging
as
select * from dba_objects;

exec dbms_stats.gather_table_stats(null, 't_source')

-- No AUTO DOP for this session to prevent any side-effects of this feature
alter session set parallel_degree_policy = manual;

-- Note that even a CTAS operation can be segment-less
-- if the query doesn't return any rows
create table t_deferred_parallel_dml
segment creation deferred
as
select * from t_source where 1 = 2;

-- No segment although CTAS has been used
select count(*) from user_segments where segment_name = 'T_DEFERRED_PARALLEL_DML';

alter session enable parallel dml;

explain plan for
insert /*+ append parallel(t) */ into t_deferred_parallel_dml t select * from t_source;

-- This plan does not show parallel DML
select * from table(dbms_xplan.display);

drop table t_deferred_parallel_dml purge;

create table t_deferred_parallel_dml
segment creation immediate
as
select * from t_source where 1 = 2;

-- One segment
select count(*) from user_segments where segment_name = 'T_DEFERRED_PARALLEL_DML';

explain plan for
insert /*+ append parallel(t) */ into t_deferred_parallel_dml t select * from t_source;

-- This plan does show parallel DML
select * from table(dbms_xplan.display);

spool off

And this is the output I get from 11.2.0.1:

SQL>
SQL> drop table t_source purge;

Table dropped.

Elapsed: 00:00:00.04
SQL>
SQL> drop table t_deferred_parallel_dml purge;

Table dropped.

Elapsed: 00:00:00.01
SQL>
SQL> -- Create a simple table as source for the insert
SQL> create table t_source parallel nologging
2 as
3 select * from dba_objects;

Table created.

Elapsed: 00:00:03.32
SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't_source')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.64
SQL>
SQL> -- No AUTO DOP for this session to prevent any side-effects of this feature
SQL> alter session set parallel_degree_policy = manual;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> -- Note that even a CTAS operation can be segment-less
SQL> -- if the query doesn't return any rows
SQL> create table t_deferred_parallel_dml
2 segment creation deferred
3 as
4 select * from t_source where 1 = 2;

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> -- No segment although CTAS has been used
SQL> select count(*) from user_segments where segment_name = 'T_DEFERRED_PARALLEL_DML';
0

Elapsed: 00:00:00.00
SQL>
SQL> alter session enable parallel dml;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> explain plan for
2 insert /*+ append parallel(t) */ into t_deferred_parallel_dml t select * from t_source;

Explained.

Elapsed: 00:00:00.01
SQL>
SQL> -- This plan does not show parallel DML
SQL> select * from table(dbms_xplan.display);
Plan hash value: 1453990251

------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 73251 | 6938K| 117 (1)| 00:00:02 | | | |
| 1 | LOAD AS SELECT | T_DEFERRED_PARALLEL_DML | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T_SOURCE | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------

12 rows selected.

Elapsed: 00:00:00.06
SQL>
SQL> drop table t_deferred_parallel_dml purge;

Table dropped.

Elapsed: 00:00:00.03
SQL>
SQL> create table t_deferred_parallel_dml
2 segment creation immediate
3 as
4 select * from t_source where 1 = 2;

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> -- One segment
SQL> select count(*) from user_segments where segment_name = 'T_DEFERRED_PARALLEL_DML';
1

Elapsed: 00:00:00.01
SQL>
SQL> explain plan for
2 insert /*+ append parallel(t) */ into t_deferred_parallel_dml t select * from t_source;

Explained.

Elapsed: 00:00:00.01
SQL>
SQL> -- This plan does show parallel DML
SQL> select * from table(dbms_xplan.display);
Plan hash value: 4111574378

------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 73251 | 6938K| 117 (1)| 00:00:02 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | P->S | QC (RAND) |
| 3 | LOAD AS SELECT | T_DEFERRED_PARALLEL_DML | | | | | Q1,00 | PCWP | |
| 4 | PX BLOCK ITERATOR | | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T_SOURCE | 73251 | 6938K| 117 (1)| 00:00:02 | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------

12 rows selected.

Elapsed: 00:00:00.06
SQL>
SQL> spool off

In general I would advise disabling deferred segment creation, and enabling it only in environments where a clear benefit can be seen and where the side effects are known, understood and tested.

This can be accomplished by simply setting "deferred_segment_creation = false" at instance level.
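A minimal sketch of doing so, noting that the parameter only changes the default - an explicit SEGMENT CREATION clause on a CREATE TABLE statement still takes precedence:

```sql
-- Disable the feature instance-wide (dynamic parameter, no restart needed)
alter system set deferred_segment_creation = false scope = both;

-- Individual tables can still opt in explicitly where it has been tested
create table t_example (n number) segment creation deferred;
```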