I was on a customer site recently where I needed to add a NOT NULL constraint to a table of 200 million rows – without taking any downtime. It’s not difficult (provided you are happy with a check constraint rather than a column definition):
alter table t1 add constraint t1_ck_colX_nn check (colX is not null) enable novalidate ;
The first step creates the constraint and enables it – but doesn’t validate it. This means that future data (and changes) will obey the constraint, but there may be illegal data already in the table that will not be checked. You will have an interruption to service doing this, as your session will wait to lock the table in share mode (mode 4) to add the constraint – so will be blocked by current update transactions, and will block new update transactions. In a typical OLTP system this should result in just a brief pause.
The second step validates the constraint, which needs a slightly more complex piece of code – perhaps something like the following:
declare
        resource_busy exception;
        pragma EXCEPTION_INIT(resource_busy, -54);
begin
        loop
                begin
                        execute immediate 'alter table t1 modify constraint t1_ck_colX_nn validate';
                        dbms_output.put_line('Succeeded');
                        exit;
                exception
                        when resource_busy then
                                dbms_output.put_line('Failed');
                end;
                dbms_lock.sleep(0.01);
        end loop;
end;
/
This code tries to validate the constraint and goes into a loop, sleeping for 1/100 second, if it hits the “resource busy” error. Unlike the call to add the constraint, the call to validate it doesn’t wait for a lock – it requests an immediate lock, so it’s going to fail immediately if there are any active transactions on the table. Once it has the lock it drops it, so you’re not going to shut your users out while the validation takes place.
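The retry logic itself generalises beyond PL/SQL. Here’s a minimal Python sketch of the same pattern – note that ResourceBusy and the validate callback are hypothetical stand-ins for ORA-00054 and the ALTER TABLE … VALIDATE call, not a real Oracle API:

```python
import time

class ResourceBusy(Exception):
    """Stand-in for Oracle's ORA-00054 'resource busy' error."""

def validate_with_retry(validate, pause=0.01):
    """Call validate() until it stops raising ResourceBusy.

    Sleeps for `pause` seconds (1/100 second by default, matching
    the PL/SQL example) after each failed attempt, and returns the
    number of attempts that were needed.
    """
    attempts = 0
    while True:
        attempts += 1
        try:
            validate()          # e.g. issue the ALTER TABLE statement here
            return attempts
        except ResourceBusy:
            time.sleep(pause)   # back off briefly, then try again
```

In production you would probably also want a cap on the number of attempts (or a total time budget) so a permanently busy table doesn’t leave the loop spinning forever.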
A couple of years ago I wrote about a poster on the OTN db forum who was puzzled by the fact that when he started rebuilding tables they got bigger. (Bad luck, sometimes that’s what happens !)
A few days ago a related question appeared: I rebuilt some indexes and my query got slower. (Bad luck, sometimes that’s what happens – again!)
If you rebuild an index it’s physically different and its statistics are different. Plans can change and go slower because the index stats look sufficiently different; plans can stay the same and go slower because the index is physically different.
I’ve added a couple of comments to the thread – there may still be some further mileage in it.
I think anyone who has read Wolfgang Breitling’s material about the optimizer will be familiar with the concept of Cardinality Feedback and one particular detail that when Oracle gets a cardinality estimate of one for a “driving” table then there’s a good chance that the execution plan will go wrong. (That’s not a rule, by the way, just a fairly common observation after things have gone wrong.)
A recent note on OTN reminded me of a particular scenario where this specific problem can occur. It’s not particularly common, but it may hit people who are building data warehouses from multiple different sources. We start with an unlikely looking data set and very simple query:
drop table t1;

create table t1
as
select
        rownum  id1,
        rownum  id2
from
        all_objects
where
        rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1');

set autotrace traceonly

select  count(*)
from    t1
where   id1 = id2
;
What do you think Oracle’s estimated cardinality will be for this predicate? We know, because we saw the data being built, that we’re going to identify 10,000 rows. But the optimizer doesn’t see it that way – check line 2 of the execution plan. The optimizer thinks it will find just one row:
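Here’s a sketch of the arithmetic that produces that estimate. The assumption (mine, not stated by Oracle) is the usual 1/max(num_distinct) selectivity model for a “column = column” predicate with no histograms in play – with 10,000 distinct values in each column, the predicted cardinality collapses to a single row:

```python
# Both columns were built as "rownum", so each has 10,000
# distinct values over 10,000 rows.
num_rows = 10_000
ndv_id1  = 10_000
ndv_id2  = 10_000

# Assumed model: selectivity of "id1 = id2" is
# 1 / max(num_distinct(id1), num_distinct(id2))
selectivity = 1 / max(ndv_id1, ndv_id2)

estimated_rows = round(num_rows * selectivity)
print(estimated_rows)   # 1
```

The optimizer has no way of knowing that the two columns are perfectly correlated, so its model predicts one row where the query will actually return 10,000.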
I’ve just been reminded of a thread on OTN which turned into a Q&A about index space usage and various related topics. On re-reading it, I decided it was too good to waste in the backwaters of OTN, so here’s a link to it.
This is part one of my thesis that “all joins are nested loop joins – it’s just the startup overheads that vary”; there will be a note on “Joins – HJ” and “Joins – MJ” to follow. In some ways, the claim is trivially obvious – a join simply takes two row sources and compares [...]
This note is part four of a four-part series, and covers Index fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation
Table Fragmentation
Index Fragmentation – this bit

4. Index “fragmentation”. The multiple extent and ASSM “fragmentation” that I described in the previous note about table fragmentation applies equally well to indexes, of course, [...]
This note is part three of a four-part series, and covers Table fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation
Table Fragmentation – this bit
Index Fragmentation

3. Table “fragmentation”. In the introduction we discussed one type of table fragmentation that doesn’t (usually) matter – the fragmentation of a table into [...]
This note is part two of a four-part series, and covers Disk and Tablespace fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation – this bit
Table Fragmentation
Index Fragmentation

2.1 Disk “fragmentation”. Tablespaces are made up of files, and files are stored on discs – which are often “logical volumes” rather than [...]
From time to time people run into problems with UNDO tablespaces that have grown much larger than expected (perhaps due to a rogue process doing far too much work) and refuse to shrink. The workaround is to create a new undo tablespace and switch the instance to use it – but even this simple procedure [...]
This note started life as a nutshell until I realised that it was going to be more of a coconut than a hazel nut and decided to turn it into a short series instead. I should manage to post four parts over the next two weeks:

Introduction – this bit
Disk and Tablespace Fragmentation
Table [...]