There are many suggestions floating around the internet about identifying which Oracle indexes to rebuild. One of these involves running the validate (or analyze index validate) command on an index and checking the resulting figures in the view index_stats to determine whether del_lf_rows exceeds some percentage (usually 20%) of lf_rows, rebuilding the index if it does. Since this suggestion came up, yet again, in a recent OTN thread about Oracle index rebuilding, I thought I’d write up a quick note highlighting one of the obvious flaws in the argument.
I’m not going to bother pointing out the threat inherent in the table-locking when you use the “validate index” command (or the “analyze index validate” equivalent) but I think it’s worth making some comments about the misunderstanding built into this policy. So let’s start by building some data, creating an index on it, then deleting 90% of the data:
create table t1
as
with generator as (
    select  --+ materialize
        rownum  id
    from    dual
    connect by rownum <= 10000
)
select
    1000000 + rownum    id,
    lpad(rownum,10,'0') small_vc
from
    generator   v1,
    generator   v2
where
    rownum <= 40000
;

create index t1_i1 on t1(id);

validate index t1_i1;

select 100 * del_lf_rows/lf_rows deletion_percent
from   index_stats
;

delete from t1 where mod(id,100) >= 10;
commit;

validate index t1_i1;

select 100 * del_lf_rows/lf_rows deletion_percent
from   index_stats
;
I haven’t bothered to collect statistics in this code as I’m not interested in execution plans, only in the amount of data deleted and what this does to the physical structure of the index. Here’s the output of my script starting from the moment just after I’ve created the index:
Index analyzed.

DELETION_PERCENT
----------------
               0

1 row selected.

36000 rows deleted.

Commit complete.

Index analyzed.

DELETION_PERCENT
----------------
              90

1 row selected.
According to the general advice, this index is now in need of a rebuild since del_lf_rows is far more than 20% of lf_rows – but before we rebuild the index let’s delete a little more data.
delete from t1 where mod(id,100) = 0;
commit;

validate index t1_i1;

select 100 * del_lf_rows/lf_rows deletion_percent
from   index_stats
;
My first delete statement got rid of 90% of the data, leaving the 4,000 rows where mod(id,100) was between zero and nine. So my second delete has eliminated 10% of the remaining 4,000 rows. Let’s see what we get when we validate the index:
400 rows deleted.

Commit complete.

Index analyzed.

DELETION_PERCENT
----------------
              10

1 row selected.
How wonderful – by deleting a few more rows we’ve got to a state where we don’t need to rebuild the index after all !
This note is not intended to tell you when you should, or should not, rebuild an index. It is simply trying to highlight the fact that anyone who treats a figure exceeding 20% for del_lf_rows / lf_rows as a signal to rebuild does not have a proper understanding of how indexes work, and is basing their assumption on an over-simplistic model. In this case the flaw in the model is that it allows you to miss indexes which might actually benefit from some action.
The problem is based on a fundamental misunderstanding, which is this: if you believe that an index entry reserves a spot in an index and leaves a little hole that can never be reused when it is deleted (unless you re-insert exactly the same value), then inevitably you will think that the del_lf_rows figure is some sort of measure of the space that could be reclaimed.
But, as the Oracle myth busters like Richard Foote have been saying for years, that’s not how Oracle’s B-tree indexes work. When you delete an index entry, Oracle marks it as deleted but leaves it in place. When you commit your transaction Oracle does nothing to the index entry – but other processes now know that the entry can be wiped from the block allowing the space to be re-used.
This is what del_lf_rows is about – it’s the number of rows that are marked as deleted by transactions that have committed; and since the validate command can only run if it has locked the table in exclusive mode, any index entries marked for deletion will inevitably be committed deletes. So after I had deleted (and committed) 36,000 rows there were 36,000 entries in the index marked as deleted and committed; when I deleted a few more rows my second transaction wiped the deleted entries from any leaf blocks it visited, tidying the blocks (with a “heap block compress”) before marking a few more rows as deleted.
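You don’t need another delete to trigger that clean-out; any DML that visits a leaf block holding committed deleted entries can tidy it. As a purely illustrative extension of the demo (this insert statement is my own addition, not part of the original test), re-inserting values that fall inside the range covered by the deleted entries should let Oracle reuse the space in any leaf block the inserts touch, and the deletion percentage reported by a subsequent validate should drop for those blocks:

-- Sketch only: the index is non-unique, so duplicates of the
-- surviving rows don't matter. Results will vary depending on
-- which leaf blocks the inserts happen to visit.
insert into t1
select 1000000 + rownum, lpad(rownum,10,'0')
from   dual
connect by rownum <= 1000
;
commit;

validate index t1_i1;

select 100 * del_lf_rows/lf_rows deletion_percent
from   index_stats
;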
The upshot of this is that many systems (especially OLTP systems) will see del_lf_rows as a fairly small fraction of lf_rows because most systems tend to do little updates scattered randomly across lots of leaf blocks – constantly wiping out the rows marked as deleted by earlier transactions.
It’s really only in the case where a single large delete has taken place in the recent past that you’re likely to see a lot of deleted rows still in the index when you validate it. And, as most people are now aware, a process that is supposed to do a very large delete is a process that should be considered in the design phase as a candidate for special treatment, such as dropping or disabling some indexes before the delete and rebuilding them afterwards (sketched below). It won’t be a process where you have to validate the index to decide roughly how much data you’ve deleted, and where in the index that data was; it’s a process where you’ll know what’s going to happen before it happens.
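If you’ve never seen that pattern written down, here is a minimal sketch against the demo table above – my own illustration, and it assumes skip_unusable_indexes is at its default value of true (as it has been since 10g) so that the delete can run while the index is unusable:

-- Sketch only: take the index out of play for the big delete,
-- then rebuild it once, rather than maintaining it row by row.
alter index t1_i1 unusable;

delete from t1 where mod(id,100) >= 10;
commit;

alter index t1_i1 rebuild;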
In Part I of my series on Oracle Database 11g Release 2 Direct NFS Clonedb, I offered short videos with a presentation and a demonstration of the feature. I received a significant amount of email which essentially asked the same few questions. So, instead of answering a bunch of email individually, I’ll address the questions here:
Q. Why is your demonstration modeled around cloning an RMAN backup stored in a file system? All Oracle customers use ASM!
A. I’ve put out Part I as an introduction to the technology. I’ll have more on ASM later. I’m certain that not all customers use ASM. Some might even comment on this post accordingly.
Q. What are the NFS server requirements for the CLONE_FILE_CREATE_DEST?
A. Any NFS mount that supports Oracle Database 11g Direct NFS. Now, this is a bit tricky. Since we are talking about test and development instances I’m not convinced it has to be a commercial-grade filer. After all, the only data that will be stored on NFS with this model is the changed block files (COW) and any new datafiles the test/development instance creates. I have tested on a simple HP Proliant storage server running Linux and exporting NFS shares, but that shouldn’t be misconstrued as a support statement.
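For what it’s worth, “supports Direct NFS” just means the export is visible to the database’s dNFS client. A minimal oranfstab entry for such a Linux storage server might look like the following sketch; the server name, path and export/mount values here are made up for illustration:

server: mynfsserver
path:   192.168.1.50
export: /export/clonedb mount: /u02/clonedb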
Q. What is the My Oracle Support note number mentioned in Part I?
A. The MOS note is 1210656.1. Keep an eye out for it.
Q. Where can I get the clonedb.pl script?
A. Once the MOS note is online the perl script will be available there. It is just a script that automates a few important tasks and generates SQL (very helpful by the way). I’ll offer a copy at the following link: clonedb.pl
Note, this copy of the script is suitable for clonedb usage with production 11.2.0.2 Oracle. After the performance patch (10403302) is applied this rendition of the script will not work. With that performance patch, the clonedb instance needs to boot with the new init.ora parameter clonedb set to true. The new script will generate the requisite text into the auto-generated clone init.ora.
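In other words, once the patch is on, the clone’s init.ora will need a line like the one below (shown as an assumption based on the description above; the updated script is said to generate it for you):

# required by clonedb instances once patch 10403302 is applied
clonedb=true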
Q. Does the RMAN backup need to be stored in an NFS mount?
A. No. The copy-on-write/thin-provisioning aspect of the feature is implemented in libodm.so. For this reason the CLONE_FILE_CREATE_DEST assignment needs to point to an NFS mount. The RMAN backup itself can be elsewhere. More on that later.
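To make that separation concrete, here is a hypothetical sketch of how the pieces might be laid out before running clonedb.pl. The variable names and script arguments are assumptions based on this post’s description; the authoritative interface is in MOS note 1210656.1:

# Hypothetical layout: backup can live anywhere, clone files must be on NFS
export MASTER_COPY_DIR=/u01/rman_backup        # RMAN backup copy (any storage)
export CLONE_FILE_CREATE_DEST=/u02/clonedb     # must be a Direct NFS mount
export CLONEDB_NAME=CLONE1

# generates the create-controlfile and rename SQL for the clone instance
perl clonedb.pl initSOURCE.ora crtdb.sql dbren.sql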
Q. What about hot RMAN backups and incremental backups?
A. As I mentioned in Part I, I’ll be going into more detail about such topics as how the snapshot features of commercial NFS filers can augment the Oracle Database 11g Direct NFS clonedb feature. I’ll go over hot backups in that post.
Folks, thus far the intent was to get introductory materials out so we can all end 2010 thinking about a new way to do something old. I’ve left questions unanswered because, after all, we are only at Part I (and a half) in the series.
It’s almost Christmas, the end of 2010, and I keep wondering what will happen, Oracle-wise, in 2011. I have had my ideas about what might, or will, happen if I were Mr. Larry Ellison. Being “in the trade” as an Oracle geek since 1993, I have seen some movements: “the raw iron project”, Mr. Ellison buying nCube, Oracle Powerbrowser, the Oracle network computer, the arrival of InterOffice, Collaboration Suite and Beehive, the buying of data connector and security service and product companies, and the building of Oracle Fusion from scratch. Most impressive are those fully optimized hardware machines like Exadata and ExaLogic, and the supporting OS, Oracle Linux. But hold your horses: wasn’t Oracle the “data company”…
Does Oracle still fit in the internet age? Things move fast; ROI and Time to Market are what matter most. If you miss the change, the new trend, it can kill your company almost instantly (iPhone, Android, oops: Symbian…). Whatever you think of Oracle and Mr. Ellison’s strategies, IMHO he has vision but is sometimes too fast in implementing it. Just like “Google Wave”: you can have a hell of an app or idea, but if it comes too early, no one will jump into the water after you to bring it ashore… The nerds will like it, but if it doesn’t sell, you’re betting on a dead horse. So what makes Oracle tick, money-wise? I think it’s mostly licenses for their main products, such as the database and Oracle E-Business Suite, but the “old arena” for those products doesn’t show that much growth. The solution to this, IMHO, is the internet and this much-buzzed hype called “Cloud” (in all its variations). The internet has the ability to reach everyone, at any time, everywhere…
So if I were Mr. Ellison, why wouldn’t I make use of this enormous market out there that is internet-enabled?
Why provide my services and software only to the classic environment, the in-house company Oracle architectures which cannot easily be pushed into new features or new methods, or be scaled up? In principle, the latest stuff, with a high level of ROI and a short Time to Market, beating the competition by bringing their core business faster to the customer and the market, is exactly what my (his) customers want. And I can deliver it, being Mr. Ellison…
I have Exadata, I have ExaLogic, I have the whole stack that is cloud-service enabled, including the new Oracle Fusion Apps. So my step would be to buy a company like Amazon Web Services (I have already read rumors on Twitter indicating such a possibility), place my hardware stack of Exadata and ExaLogic machines, with all those software goodies like Oracle Fusion and whatever else can be directly serviced, licensed and packaged by Oracle, in a data center on the internet, and open up this new customer base. From that moment on, if you (temporarily) need a piece of ExaLogic, Exadata, Oracle Fusion, Oracle Cloud Open Office or anything else, you just go to the Oracle web portal, pay your Oracle fee or license by credit card for the specific needs you have, and with a click everything is at hand (almost) instantly… Voilà! No need to buy software, or to install, configure, upgrade or secure it: Oracle provides the instant service needed. Optimized services, hardware and software, configured and maintained by their best people. No intermediate companies needed. Shop via your OneClick@Oracle account and start your business. Need new software, more functionality, or something else? Oracle provides it, via your secured one-click account over the internet.
Hold on, I hear you say: what about backup and recovery, and, above all, security?
Backup and recovery can easily be provided by Oracle, somewhere worldwide, via a back-office data center solution. Security would probably be better maintained than in most current in-house “secure environments”, where sometimes zillions of client machines connected via the internet (using old “secure” software) connect directly to the back-office systems, presenting zillions of possible entry points for an attack. I think this, together with their complexity, is probably one of the reasons that big customer environments are so incredibly difficult to upgrade or maintain: most people have no idea what will tumble if one domino gets pushed (or upgraded…). So most of those environments still run Internet Explorer 6, old operating system versions, etc, etc, etc… How rigid can your environment get?
If Oracle were to provide, for example, an Oracle Fusion Cloud service, then there would be only one “portal/entrance”, or at least an architecture which can be far more easily secured due to the limited number of possible attack options. Of course, if you don’t dare to take this step, you could always create your private cloud based on a dedicated (outsourced, nearby) data center. But what is the big difference…? The only reason I can think of is customization, but if parts of your needs can be generalized, such an Oracle Cloud Software service could be a big advantage, especially regarding ROI and Time to Market – worldwide and globally!
Don’t be scared; we are already doing this… I read my newspaper on an iPad, check my mail via Google Mail, watch television shows in specialized web portals, and socialize my brains out via Twitter, Facebook and others. So why wouldn’t I also do my Oracle software-based business via the internet? Buy my Oracle licenses online, and check their validity via an Oracle licensing website. Use my Oracle Open Office via the internet, and generate reports and file, check and manage my business via an internet Oracle app. Need a virtual desktop environment, fully secured, for your Oracle Fusion Cloud environment? Check your business via your internet-enabled devices, like an iPhone or iPad or others: anytime, everywhere, worldwide. Oracle can provide.
AFAIK, there is no reason (that cannot be overcome) not to do it…
I was wondering: if this were to happen, how would I, an Oracle DBA, adapt? Would I have to adapt to a different environment and different needs? My workplace would change, but as long as there is data, I will manage. There will always be a need to solve data problems beyond the scope of average architectures and methods… and, that said, I already manage most of that via the internet… The world is changing; it’s time to adapt, or to change the world yourself.
Oracle Database 11g Release 2 has a bit of a “stealth feature” that few are aware of. The feature is called clonedb, and it is functionality built into Oracle Database 11g Direct NFS (DNFS). The best way to explain this feature is to pose a short list of questions:
If you say yes to most of these then you’ll appreciate the clonedb feature.
Database Clones Without Storage Snapshot/Clone Technology?
This is Part I in the series so at this stage I’ll clearly point out that with Oracle Database 11g Direct NFS clonedb functionality you can create a fully read/write clone database without storage snapshots or clones. Moreover, the clonedb feature is a thin-provisioning approach.
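Under the covers, the thin provisioning comes down to pointing the clone’s datafiles at the backup copies through the dbms_dnfs package, so that only changed blocks ever land on the NFS destination. The following is a sketch only, with made-up paths; the full procedure is covered in the videos and the forthcoming MOS note:

-- Sketch: map one clone datafile onto its backup copy.
-- The clone instance must be mounted, and the destination file
-- must sit on a Direct NFS mount (the CLONE_FILE_CREATE_DEST).
begin
  dbms_dnfs.clonedb_renamefile(
    '/u01/rman_backup/system01.dbf',   -- backup copy of the datafile
    '/u02/clonedb/system01.dbf'        -- sparse copy-on-write file on NFS
  );
end;
/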
I could type a lot more words about this new feature, but this is a Part I blog entry, and since I have a video presentation and a video demonstration of creating and using a Direct NFS clone, I’ll just offer the following links. At only 9 and 3 minutes for the presentation and demonstration videos respectively, I hope you’ll find the time for a viewing.
I’m excited about this feature. In terms of administrative effort, it is by far the easiest way to provision clones for development/test instances that I am aware of. In my assessment it is simple, stable and it performs. I don’t get to say that as often as I’d like to about technology.
Next In This Series
In Part II I’ll cover how adding storage snapshot functionality adds value to Oracle Database 11g Direct NFS clonedb. I’ll also blog about large-scale provisioning with this feature, based on a recent test I performed of 64 clones running OLTP on an 8-socket Nehalem EX system running Linux. Future installments in this blog thread will cover upcoming patches that improve performance, and a reference to a MOS note, so you needn’t fear that this is some sort of unsupported feature.