Even though a number of articles and blog posts have already been written on this topic (e.g. on Pete Finnigan’s site I found references dating back to 2003), from time to time I’m still asked “How do I trace predicates generated by VPD?”. Hence, here’s yet another blog post about this topic…
Let’s set up the scene before [...]
This story is long overdue and no, it’s NOT about the Oracle Database 11g Database File System (DBFS). It’s about an “undocumented” NFS functionality that, maybe someday, will be serviced by the XMLDB XDB Protocol Adapter. This post is “long overdue” because my actual attempts to figure it out were made during the bank holidays between X-mas and New Year 2009.
So what is it all about? I once discovered in the Oracle 11gR1 documentation a small entry in the xdbconfig.xsd XML Schema regarding NFS elements that look like they are, or will be, used for enabling NFS functionality based on the Oracle XMLDB Protocol Server architecture. In those days, when Oracle 11gR1 was just off the shelf, I made a few attempts, based on the xdbconfig.xsd XML Schema, to adjust the corresponding xdbconfig.xml file that controls the XDB Protocol Server functionality, just to see what would happen. At the time I only got this far (see the picture), and I promised myself that I would look deeper into it to figure out whether I could get it working and what concepts made it tick in the XMLDB architecture. But somewhere down the line I never got around to it, and it was “forgotten” due to my daily DBA workload.
Recently I discovered that the size of an ORACLE_HOME for a given release varies from machine to machine although the machines are of the same architecture and run the same operating system. A small difference in size can be explained by the fact that one ORACLE_HOME was re-linked in the past while the other wasn’t, […]
I admit it. I'm a binge blogger (I borrowed this term from a friend of mine who posted on the same topic). I make several blog entries over the course of a week or so and then it may be a month or two before I show up again. I suppose my good intentions of blogging regularly just get swept under the rug of day-to-day reality and how/where I choose to spend my time. But, regardless of frequency, I suppose it's fairly obvious from this post that I'm still here.
I was doing a bit of blog surfing tonight to catch up on a long list of blogs I enjoy reading and came across the following from a recent Seth Godin post:
"Firing the customers you can't possibly please gives you the bandwidth and resources to coddle the ones that truly deserve your attention and repay you with referrals, applause and loyalty."
NoSQL is an unfortunate name – it doesn’t give any description of what the product does except what query language it will not support. What’s worse, it makes people treat the various non-relational databases as interchangeable, while in fact many of them solve completely different problems and have different trade-offs, strengths, etc.
What is common to all these DBs is that they don’t do ACID in an attempt to improve scalability, most of them are distributed and most of them were built to handle semi-structured or unstructured data.
The theoretical case for these databases starts from the CAP theorem which says you can’t have consistency, availability and partition tolerance all at once. Partition tolerance is the prevention of split-brain in a cluster or distributed system – you don’t want network failures to allow data corruptions or incorrect results.
Since you can’t have all three, you choose two. So RAC does partition tolerance and consistency at the expense of availability – if the voting disk crashes or loses network connectivity, the entire cluster will go down.
NoSQL databases keep availability and partition tolerance at the expense of consistency. They have something called “Soft-State” and “Eventual Consistency”. To the best of my understanding, “Eventual Consistency” means that all the DML statements in the transaction are inserted into a queue (or some equivalent), from which they are executed at different times by different servers. Eventually they are all executed and you reach a consistent state, but you don’t know when. Of course, with such a system, it appears nearly impossible to prevent lost updates.
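The queue-based model described above can be sketched in a few lines of Python. This is a toy illustration (the class and method names are my own, not any real NoSQL API): writes are acknowledged immediately but only queued per replica, so a read can return stale data until the queues drain, and two concurrent read-modify-write sequences can silently lose an update.

```python
from collections import deque

class EventuallyConsistentStore:
    """Toy model: writes are queued per replica and applied later."""

    def __init__(self, n_replicas=2):
        self.replicas = [{} for _ in range(n_replicas)]
        self.queues = [deque() for _ in range(n_replicas)]

    def write(self, key, value):
        # The write is acknowledged immediately; replicas apply it later.
        for q in self.queues:
            q.append((key, value))

    def read(self, key, replica=0):
        # Reads hit one replica, which may not have applied all writes yet.
        return self.replicas[replica].get(key)

    def sync(self):
        # "Eventually": drain every queue so all replicas converge.
        for store, q in zip(self.replicas, self.queues):
            while q:
                k, v = q.popleft()
                store[k] = v

store = EventuallyConsistentStore()
store.write("balance", 100)
print(store.read("balance"))   # None - the replica hasn't applied the write yet
store.sync()
print(store.read("balance"))   # 100 - all replicas have converged

# Lost update: two clients read the same stale balance and both add 10.
a = (store.read("balance") or 0) + 10
b = (store.read("balance") or 0) + 10
store.write("balance", a)
store.write("balance", b)
store.sync()
print(store.read("balance"))   # 110, not 120 - one update was lost
```

The point of the sketch is the window between `write` and `sync`: during that window different replicas disagree, which is exactly the consistency these systems trade away for availability and partition tolerance.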
For anyone who is interested in recording the process and CPU elapsed time via instrumentation, I have made the elapsed-time collection module I’m currently using available at the Expert Oracle Practices: Database Administration from the Oak Table source code download site. The first step is to download the Instrumentation Library for Oracle (ILO) version 2.3 from SourceForge. Next, install ILO per the
Unknown to me anyway until just this week.
Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.
The poster's RMAN scripts were written so that the only connection while making backups was a local one using the control file only for the RMAN repository.
rman target sys/manager nocatalog
After the backups were made, a connection was made to the RMAN catalog and a SYNC command was issued.
The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:
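The two connection styles can be sketched as follows; the SYS password and catalog connect string here are placeholders (following the poster's `sys/manager` example), not real credentials. The first invocation uses only the control file as the repository, so the backup succeeds even while the catalog database is down; the second depends on the catalog connection and fails if it is unreachable:

```
# control-file-only connection: backups succeed even if the catalog is down
rman target sys/manager nocatalog

# catalog-dependent connection: fails when the catalog is unavailable
rman target sys/manager catalog rman/rman@rcat
```

After a control-file-only backup, issuing RESYNC CATALOG from a catalog-connected session brings the repository back in step.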
In my previous series regarding “dNFS” vs “kNFS” I referenced a script that monitors dNFS traffic by sampling the v$dnfs_stats view. A reader requested the script, so I thought it might be useful to a wider audience. This simple script samples some values from the view and outputs a date/timestamp along with rate information. I hope it is useful.
------ mondnfs.sql -------
set serveroutput on format wrapped size 1000000

create or replace directory mytmp as '/tmp';

DECLARE
  n       NUMBER;
  m       NUMBER;
  x       NUMBER := 1;
  bnio    NUMBER;
  anio    NUMBER;
  nfsiops NUMBER;
  fd1     UTL_FILE.FILE_TYPE;
BEGIN
  fd1 := UTL_FILE.FOPEN('MYTMP', 'dnfsmon.log', 'w');
  -- samples forever; interrupt the session to stop (the FCLOSE below is never reached)
  LOOP
    bnio := 0;
    anio := 0;
    -- total dNFS read+write operations before and after a 5-second sleep
    SELECT SUM(nfs_read + nfs_write) INTO bnio FROM v$dnfs_stats;
    n := DBMS_UTILITY.GET_TIME;
    DBMS_LOCK.SLEEP(5);
    SELECT SUM(nfs_read + nfs_write) INTO anio FROM v$dnfs_stats;
    m := DBMS_UTILITY.GET_TIME - n;
    -- GET_TIME ticks in centiseconds, so scale by 100 to get I/Os per second
    nfsiops := (100 * (anio - bnio) / m);
    UTL_FILE.PUT_LINE(fd1, TO_CHAR(SYSDATE, 'HH24:MI:SS') || '|' || nfsiops);
    UTL_FILE.FFLUSH(fd1);
    x := x + 1;
  END LOOP;
  UTL_FILE.FCLOSE(fd1);
END;
/
========================
I just couldn’t resist. One of the great myths in Oracle is that bitmap indexes are only suitable and should only be used with columns that have so-called low cardinality (few distinct) values. A classic example of this myth being propagated is in this latest Burleson news item, “Oracle bitmap index maximum distinct values“, from Feb 16 [...]