Indexing Null Values - Part 2

In the previous post I demonstrated that Oracle has trouble making efficient use of B*Tree indexes when an IS NULL condition is followed by IN / OR predicates that are also covered by the same index - the trailing predicates are not used to navigate the index structure efficiently but are applied as filters to all index entries identified by the IS NULL condition.
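A minimal sketch of the problematic shape (hypothetical table and column names):

create table t (id number, col1 number, col2 number);
create index t_idx on t (col1, col2);

-- The IN predicate is typically applied as a filter on all index
-- entries where COL1 IS NULL, rather than being used to navigate
-- the index structure
select * from t
where  col1 is null
and    col2 in (1, 2, 3);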

In this part I'll show the results I got when repeating the same exercise using Bitmap indexes - after all, Bitmap indexes include NULL values anyway, so no special tricks are required to use them for an IS NULL search. Let's start again with the same data set (actually not exactly the same, but very similar) and an index on the single expression that gets searched via IS NULL - results are again from 18.3.0:
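For reference, a hedged sketch of that kind of setup (hypothetical names, continuing the table above):

-- Unlike B*Tree indexes, Bitmap indexes contain entries for NULLs,
-- so IS NULL can be resolved directly from the index
create bitmap index t_bidx on t (col1);

select * from t
where  col1 is null
and    col2 in (1, 2, 3);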

Being generous to the optimizer

In a perfect world, the optimizer would reach out from the server room and say to us: “Hey, let’s grab a coffee and have a chat about that query of yours”. Because ultimately, that is the task we bestow on the optimizer – to know what our intent was in running a query, in a way that meets the performance needs of our applications. It generally does a pretty good job even without the coffee, but if we keep that caffeine hit in mind, we can do our bit as SQL developers to give the optimizer as much assistance as we can.

Hacking together faster INSERTs

Most developer tools out there have some mechanism to unload a table into a flat file, either as CSV or Excel, and some even allow you to unload the data as INSERT statements. The latter is pretty cool because it gives you a self-contained file that needs no Excel, no DataPump, nor any tool other than the one you’re probably already using to unload the data.

SQLcl and SQL Developer are perhaps the easiest to use for such an extract. You simply add the pseudo-hint INSERT to get the output as INSERT statements. For example:
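A hedged sketch of what that looks like (hypothetical EMP table; in SQLcl, set sqlformat insert achieves the same thing):

-- The format "hint" is just a comment in the query; run as a script,
-- each row of the result set is rendered as a ready-to-run INSERT
select /*insert*/ * from emp;

The output is one INSERT INTO ... VALUES (...) statement per row, giving you a self-contained, replayable extract.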

Indexing Null Values - Part 1

Indexing null values in Oracle is something that has been written about a lot in the past already. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes by combining them with something guaranteed to be non-null, be it another column or simply a constant expression.
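The widely documented trick, sketched with a hypothetical table t:

-- A B*Tree index skips rows where all indexed values are null, so
-- appending a constant guarantees every row is indexed
create index t_null_idx on t (col1, 0);

-- This IS NULL predicate can now be resolved via the index
select * from t where col1 is null;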

Not too long ago, Jonathan Lewis published a note showing an oddity when dealing with IS NULL predicates. In the end it turned out not to be a real threat, but rather a quirk in how Oracle displays the access and filter predicates when accessing an index via IS NULL combined with other trailing predicates.

PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring

I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor. There is also just a little about cursor sharing.

I/O Benchmark Minor Update

I've recently published a new version 1.03 of the I/O benchmark scripts on my github repository (ideally pick the IO_BENCHMARK.ZIP containing all the scripts).

Bloom Filter Efficiency And Cardinality Estimates

I recently came across an interesting observation that I haven't seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle, it looks like the efficiency of Bloom filter operations depends on the optimizer's cardinality estimates. In particular, a cardinality under-estimate can make a dramatic difference in how efficiently a Bloom filter based on that estimate works at runtime. Since Bloom filters are crucial for efficient processing, particularly when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.
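For context, a hedged sketch of the kind of query where this matters (hypothetical tables; the JOIN FILTER CREATE / JOIN FILTER USE operations in the execution plan are where the Bloom filter is built and probed):

-- In parallel / Exadata-style plans the optimizer may build a Bloom
-- filter (e.g. :BF0000) from the dimension rows and probe it while
-- scanning the fact table. The filter is sized from the cardinality
-- estimate of its source row set, so an under-estimate can lead to
-- many false positives at runtime.
select /*+ parallel(4) */ count(*)
from   dim d
join   fact f on f.dim_id = d.id
where  d.attr = 'X';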

Quick and easy masking

I had a request from a client a while back regarding masking of data. They had an application with sensitive data in the Production environment (where access and audit were very tightly controlled), but the issue was how to respect that sensitivity in non-Production environments whilst still preserving full-size data volumes for application testing.

After some conversations about requirements, it turned out that since (even in non-Production environments) all access to application components was logged and audited, the issue was simply protecting against “inadvertent” access to sensitive data. For example, in the application, if I searched for “males with black hair with an interest in technology” I should never see the name “Connor McDonald” on screen in the results, because simply viewing that data could be treated as a breach of privacy.
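One lightweight way to meet that kind of requirement (a sketch only, not necessarily the approach the post settles on; hypothetical table and columns) is to scramble character data with TRANSLATE over a shuffled alphabet:

-- Repeatable, length-preserving, and destroys readability - enough
-- to stop "inadvertent" recognition of a real name on screen
create or replace view customers_masked as
select cust_id,
       translate(upper(cust_name),
                 'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                 'QWERTYUIOPASDFGHJKLZXCVBNM') as cust_name,
       hair_colour,
       interest
from   customers;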

More chances to bulk process

I’m sure most of us have read, or heard at a conference, about the benefits of array fetching and array binding when it comes to passing data back and forth to the database. And we’ve all seen the numerous demo scripts in PL/SQL along the lines of:


FORALL i in 1 .. n 
   INSERT ...

As such, there is a misconception out there that bulk binding is only available for basic DML operations. So I thought I’d share this example that came in via AskTOM recently. We were asked if there was any means of improving the performance of a row-by-row operation where the DML involved a complex Text index search, with the additional complication that, on a row-by-row basis, the DML may fail - but this was an anticipated outcome that needed to be handled and moved past. The scenario presented was as follows:
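The PL/SQL construct that makes "bulk, but tolerate individual row failures" possible is FORALL with SAVE EXCEPTIONS - a hedged sketch with hypothetical tables src and tgt standing in for the Text-search DML from the question:

declare
  type num_list is table of number;
  l_ids num_list;
  bulk_errors exception;
  pragma exception_init(bulk_errors, -24381);
begin
  select id bulk collect into l_ids from src;

  -- Array-bind the whole batch; failed rows do not abort the FORALL
  forall i in 1 .. l_ids.count save exceptions
    insert into tgt (id) values (l_ids(i));
exception
  when bulk_errors then
    -- Each failed row is reported here and can be handled and moved past
    for j in 1 .. sql%bulk_exceptions.count loop
      dbms_output.put_line('row ' || sql%bulk_exceptions(j).error_index ||
        ' failed: ' || sqlerrm(-sql%bulk_exceptions(j).error_code));
    end loop;
end;
/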

Describe Upgrade

Here’s an odd little change between Oracle versions that could have a stunning impact on application performance if the thing that generates your client code happens to use an unlucky selection of constructs. It’s possible to demonstrate the effect remarkably easily – you just have to describe a table, doing it lots of times to make it easy to see what’s happening.
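To make that concrete, a hedged sketch of driving a describe in a loop from PL/SQL (hypothetical table t1; client tools typically describe via OCI, but DBMS_SQL shows the shape of the operation):

declare
  c      integer;
  n_cols integer;
  cols   dbms_sql.desc_tab;
begin
  -- Describe the same table many times to make the effect visible
  for i in 1 .. 10000 loop
    c := dbms_sql.open_cursor;
    dbms_sql.parse(c, 'select * from t1', dbms_sql.native);
    dbms_sql.describe_columns(c, n_cols, cols);
    dbms_sql.close_cursor(c);
  end loop;
end;
/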