Oakies Blog Aggregator

Exadata Storage Indexes – Part I (Beginning To See The Light)

Thought I might discuss Exadata Storage Indexes: explore what they are, how they work, and their advantages and disadvantages. What follows is but a brief introduction to this very cool Exadata feature. A Storage Index basically maintains summary information about database table data down on the Exadata storage servers. The information consists of the minimum value, the maximum […]

Oracle OpenWorld 2012 : Day 3

Day 3 was similar to day 2 for me. I spent a lot of time hanging around the RAC Attack sessions. A lot of people pass through, so in addition to helping people set up virtualized RAC, there are always interesting people to talk to.

I did some serious demo grounds cruising again, including:

  • Data Pump Enhancements in 12c.
    - You can now do all the transportable tablespace stuff in a single hit using NETWORK_LINK.
  • PL/SQL Enhancements in 12c.
    - I’ll talk more about these in a summary of a Tom Kyte presentation in a later post.
  • Global Data Services in 12c.
    - Define a single service that can load balance across multiple replicated databases. This is independent of replication method.
  • Oracle Linux. I went back and spent some time chatting with Lenz Grimmer again. He’s really good value!
  • Oracle Text in 12c (Rodger Ford).

I finally got to meet Jeff Smith and David Haimes in person. It’s always a bit odd meeting people for the first time when you feel you already know them…

I went back to the hotel to dump my bag before heading out and accidentally went to bed, so that’s a second night of no social events for me… :)




Openworld day 3

Yesterday was another day of good quality sessions at OpenWorld. Yes, it's true that there are many sessions that could only be described as pure marketing, but with a little effort there are gems in the agenda!

It started with partitioning facilities that are coming in 12c.

Interval/Reference partitioning will be supported, although no word on whether you can have subpartitions under interval-based range partitions. Time will tell.

One of the nice things was the CASCADE option coming for truncate/exchange partition. In the past, when you had foreign keys between tables (as of course we always should :-) ), a truncate (being DDL) would require a lot of manual FK maintenance before and after the truncate command.

This restriction has been lifted to a reasonable degree in 12c.
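A minimal sketch of the idea, assuming the syntax as presented (table names invented; note the child's FK must be declared ON DELETE CASCADE for the cascading truncate to be permitted):

create table parent (id number primary key);
create table child (
  id        number primary key,
  parent_id number references parent (id) on delete cascade
);

-- pre-12c this raises ORA-02266 unless the FK is disabled first
truncate table parent cascade;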

Next up was a session on optimizer hints by Maria Colgan, who is product manager for the optimizer team. As she put it: “I’m the chief apologiser for the Oracle optimizer”. That's a bit harsh to be honest – I’ve generally found most optimizer issues are due to a lack of understanding rather than any particular lack of function within the product itself.

The session on hints was good; in particular, there was a strong push toward implementing “hints” via baselines rather than code changes. I’m not entirely sure I agree with that policy, given that a hint in the code is “self-documenting” whereas a baseline is just that little bit less visible.
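For example, here's a hedged sketch of capturing an already-executed (hinted) statement from the cursor cache as a baseline, so the hint itself could later come out of the code; the sql_id and plan_hash_value are placeholders you'd look up in v$sql:

declare
  n pls_integer;
begin
  -- load the chosen plan as a SQL plan baseline
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '&sql_id',          -- placeholder
         plan_hash_value => &plan_hash_value);  -- placeholder
end;
/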

Another thing coming in the 12c optimizer is “adaptive execution plans”: at run time the database may recognise that the plan chosen at parse time was possibly not the best one, and may change it on the fly. Sounds impressive, but also sounds a little risky in terms of ensuring consistency of performance. Time will tell.

Finished the day with some clam chowder! Yum.


Partitioning 12c

Most useful presentation of OOW so far, from Hermann Baer of Oracle on improvements in partitioning features in 12c – and there are lots of useful ones, including:

Online move of a partition – so easy to compress a partition when it has reached its final “read-only” state
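A sketch of what that might look like (table and partition names invented; the exact compression clause may differ):

alter table sales move partition p2011_q4 compress online;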

multiple partition maintenance in one operation – e.g. merge three monthly partitions into one quarterly partition, or split one partition into 24 (think about “how do I partition a non-partitioned table”, and 12c has just made it easier and quicker – exchange it into an empty partitioned table, then do one massive split). A sketch follows below.
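For example, the multi-partition merge might look like this (names invented; SPLIT PARTITION ... INTO ( ... ) gets the same multi-way treatment):

alter table sales
  merge partitions p_jan, p_feb, p_mar into partition p_q1;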

partial indexing – define which partitions of a partitioned table should be included in the indexes you create on the table – and the optimizer also knows that different partitions need different plans (an enhancement of “table expansion”).
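A hedged sketch of the syntax (all names invented): indexing is switched on or off per partition, and an index created with INDEXING PARTIAL then only covers the “indexing on” partitions:

create table sales (
  sale_id number,
  sale_dt date not null
)
indexing off
partition by range (sale_dt)
( partition p_2011 values less than (date '2012-01-01') indexing off,
  partition p_2012 values less than (date '2013-01-01') indexing on
);

create index sales_dt_ix on sales (sale_dt) indexing partial;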

interval-reference partitioning – you can create an interval-partitioned table, and use ref-partitioning to create child (and further descendant) tables, and their partitions will automatically be generated, truncated, and dropped as the parent is extended, truncated or dropped (needs enabled foreign key constraints).
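A sketch of the combination (names invented; the reference-partitioned child picks up partitions as the parent's intervals appear):

create table orders (
  order_id number primary key,
  order_dt date not null
)
partition by range (order_dt)
interval (numtoyminterval(1, 'MONTH'))
( partition p0 values less than (date '2012-01-01') );

create table order_lines (
  line_id  number primary key,
  order_id number not null,
  constraint ol_fk foreign key (order_id) references orders
)
partition by reference (ol_fk);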

Lots more details – and lots of stress-testing to be done – but I’m off to hear “the optimizer lady” talk about hints.


Oracle has a “safe harbour” slide at the start of all presentations about future developments pointing out that the information presented is an indication of direction, but not guaranteed to make it into production.


Excellent presentation from Maria Colgan, as usual. Had to have a word with her afterwards because she said at one point “… so the optimizer ignored your hint …”  (she didn’t really mean it though).

To the person who asked me afterwards about subverting the optimizer’s join order algorithm – there is a hidden parameter which changes the choice of algorithm, but you might be better off trying to address the problem by disabling Cartesian joins (_optimizer_cartesian_enabled = false); I haven’t done any experimentation with this, so it might not do what its name and description suggest; and it may, or may not, be one of those parameters that can be set with the /*+ opt_param() */ hint.

There’s also the parameter _optimizer_search_limit, which defaults to 5. I don’t have a complete picture of what this parameter does, but in some tests I did about 10 years ago one of its effects was to limit the number of Cartesian joins that could appear in the examined join orders; so, possibly, if you set this to zero you would end up with the initial join order that’s causing a problem. Again, it might be possible to set it up for the query with the /*+ opt_param() */ hint.
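For reference, the two ways you would scope such a setting look like this sketch (heavily hedged: as noted above, neither parameter is guaranteed to respond to opt_param, the behaviour itself is speculative, and t1/t2 are made-up tables):

alter session set "_optimizer_cartesian_enabled" = false;

select /*+ opt_param('_optimizer_cartesian_enabled', 'false') */ count(*)
  from t1, t2
 where =;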

As ever, check with Oracle support before playing around with hidden parameters.

Openworld Day 2

Yesterday I went to a presentation on histograms in 12c. Oracle has made a number of good improvements in this area, coming in 12c.

Particular emphasis seems to be placed on some of the shortcomings of height-balanced histograms. In 12c, we will have TOP-N histograms (where the "top scorers" get their own frequency bucket, and the other values get collapsed together). Similarly, there is a hybrid form of height-balanced histogram. Lots of playing around to be done there.

The limit on histogram buckets has been bumped up to 2048, although you have to ask the question – if you have 2000 special values, then maybe you don’t actually have *any* special values :-)
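For what it's worth, you ask for these the same way as before; it's the gathering code that decides which histogram type you get. A sketch (table and column names invented, 2048 being the new ceiling):

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'SALES',
    method_opt => 'for columns status size 2048');
end;
/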

Later in the day, I went to Bryn Llewellyn’s PLSQL enhancements talk. PLSQL features in new releases are always interesting to me: it's such a mature product, so does an absence of "headline" new features mean that it's no longer cared for, or that it is functionally complete? You never can tell.

Better SQL and PLSQL interaction seemed to be the main thrust. You can have functions within a WITH clause, as a means of leveraging the ‘procedural’ elements of PLSQL without having to recast all of that into SQL expressions.
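A minimal sketch of that syntax (names invented; note that in SQL*Plus the statement wants the slash terminator):

with
  function vat(p_amount number) return number is
  begin
    return p_amount * 0.20;  -- procedural logic, kept out of the SQL expression
  end;
select order_id, vat(amount) as vat_due
  from orders
/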

There is also a new pragma to make PLSQL that is called from SQL even faster. I think this will get a lot of take-up from those who move to 12c, because calling PLSQL from SQL has become so commonplace nowadays.
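Presumably this is PRAGMA UDF; a sketch (function name invented):

create or replace function vat(p_amount number) return number is
  pragma udf;  -- assumption: compile for efficient calls from SQL
begin
  return p_amount * 0.20;
end;
/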

UTL_CALL_STACK is a new package which (finally) gives some structure to the call stack that we all used to parse out of DBMS_UTILITY.
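A sketch of walking the structured stack, using the documented dynamic_depth, unit_line, subprogram and concatenate_subprogram calls:

declare
  v_depth pls_integer := utl_call_stack.dynamic_depth;
begin
  for i in 1 .. v_depth loop
    dbms_output.put_line(
      utl_call_stack.unit_line(i) || ': ' ||
      utl_call_stack.concatenate_subprogram(utl_call_stack.subprogram(i)));
  end loop;
end;
/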

Oracle OpenWorld 2012 : Day 2

The cold continues…

I decided that rather than trying to sit in sessions and struggle to cope, I would do my stint at RAC Attack and spend the day on the demo grounds…

The demo grounds are a really awesome way to get information because you are often speaking to the developers and product specialists working on the code itself. In my opinion, you get a much better fix on what is going on than sitting in a presentation.

I got to speak to a bunch of people for a few hours about:

  • Oracle Linux (Lenz Grimmer).
  • Unstructured Data and XML DB (Mark Drake).
    - XML DB is mandatory in 12c.
    - The new Enterprise Manager Express runs from within the database using the XML DB infrastructure. This replaces the deprecated Database Control.
  • Optimizer enhancements in 12c (Alison Lee and Maria Colgan).
    - Adaptive Plans : The optimizer can change the plan during execution if it determines the cardinality estimates it made are not accurate.
    - Adaptive Statistics : The optimizer can store information about adaptive plans to allow it to make better estimates in future.
    - Existing information, like cardinality feedback can now be persisted, rather than being aged out and lost.
  • DBFS Enhancements in 12c.
  • Edition-Based Redefinition Enhancements in 12c.
    - Marking objects as non-editionable.
  • Pluggable Databases in 12c.
    - Separating the internal Oracle metadata (container database – CDB) from the user metadata (pluggable database – PDB).
    - Easier upgrades, clones and migrations.
    - Reduced namespace clashes. Public synonyms and DB links for each pluggable database within the same instance.
  • Compression Enhancements in 12c (Graham Wood).
  • Far Sync Server (Graham Wood).
  • SQL Developer.
  • Data Redaction in 12c.
  • Data masking and data subsetting in Cloud Control 12cR2.

Sorry if I missed anyone or anything out of the list.

I think the Blogger meetup may have been last night, but I was too tired, so I went back to the hotel and dropped into a coma for a few hours. :)




Oracle OpenWorld 2012 : Day 1

Day 1 turned out to be a little awkward for me because I was feeling pretty under the weather. I’ve been nursing a very minor cold for a couple of weeks and I think the lack of sleep has made it kick up a gear.

I went to a DB tuning session and an ADF deployment session, then I was out for the count. At lunchtime I went and watched a movie (review to come), then I went back to my room and vegetated for the rest of the day. It felt like a bit of a waste of a day, but I really wasn’t able to cope with anything more.

In the evening it was the ACE Dinner. I ate loads of food and talked until my throat was dead. It was fun, but I was glad to get back to my bed.




OOW 2012 – XQuery Update (HOL)

As promised, here are the Hands-On Lab Oracle OpenWorld XQuery Update example statements. The following XQuery Update code will work from Oracle database and upwards, and is fully supported from this version onwards. For more information, see some of the blog posts here on this site or the OTN XMLDB forum for more examples. The code …


OOW 2012 – And was there news on the XMLDB front…?

Just finished my Birds of a Feather XMLDB panel session in the Marriott Hotel, and now I'm looking back on an event-full day. It all started off with a keynote session with, among others, Mark Hurd. The most interesting bit, in my honest opinion, was the announcement of more details on Oracle 12c. I will …


overlapping ranges with priority

A customer of ours (a leading Italian consumer goods retailer) has asked us to solve the following problem, which occurs quite frequently and is not trivial to solve efficiently – and which is very interesting to design for and fun to blog about!
the problem
Prices of skus (i.e. goods) have validity ranges (time intervals) that can overlap; on an overlapping range, the strongest priority (lower number) wins. In pictures:

[figure: sku1 is priced $200 at priority 0 over the range b..d, and $300 at priority 1 over the range c..e]
the expected output is, since priority 0 is stronger than 1:

[figure: expected output – $200 over the range b..d, then $300 over the range d..e]
I'm going to illustrate in detail the pure-SQL algorithm I have designed, and then discuss its performance and efficiency. As usual, everything I discuss is supported by an actual demo. Please note that the algorithm uses analytic functions very, very heavily.
pure SQL solution
The input table:

create table ranges (
  sku   varchar2(10) not null,
  a     int not null,
  b     int not null,
  prio  int not null,
  price int not null
);

alter table ranges add constraint ranges_pk primary key (sku, a, b);

Let's provide the opening example as input:

insert into ranges(sku, a, b, prio, price) values ('sku1', ascii('b'), ascii('d'), 0, 200);
insert into ranges(sku, a, b, prio, price) values ('sku1', ascii('c'), ascii('e'), 1, 300);

The algorithm is implemented as a single view; let's comment on each step and show its output over the example:

create or replace view ranges_output_view as

The instants in time where the ranges start or end:

with instants as (
  select sku, a as i from ranges
  union
  select sku, b as i from ranges
),

Output: b,c,d,e.
The base ranges, i.e. the consecutive ranges that connect all the instants:

base_ranges as (
  select *
    from (
  select sku,
         i as ba,
         lead(i) over (partition by sku order by i) as bb
    from instants
         )
   where bb is not null
),

The original ranges factored over the base ranges; in other words, "cut" by the instants:

factored_ranges as (
  select i.sku,,, i.a, i.b, i.prio, i.price
    from ranges i, base_ranges bi
   where i.sku = bi.sku
     and (i.a <= and < i.b)
),

Then, let's filter out the factored ranges with weaker priority (that have a stronger priority range with the same extremes "covering" them):

strongest_factored_ranges as (
  select sku, ba, bb, prio, price
    from (
  select sku, ba, bb, prio, price,
         dense_rank () over (partition by sku, ba, bb order by prio) as rnk
    from factored_ranges
         )
   where rnk = 1
),

The problem could now be considered solved, if you could live with consecutive intervals showing the same price (such as b--c and c--d above). If you can't for whatever reason (I couldn't), we can join them using analytics again, in a way similar to this AskTom technique (look at the bottom for "Analytics to the Rescue (Again)").
First, we calculate "step", a non-negative number that will be zero if a range can be joined to the previous one, since:
a) they are consecutive (no gap between them);
b) they have the same price:

ranges_with_step as (
  select sku, ba, bb, prio, price,
         decode ( price, lag(price) over (partition by sku order by ba),
                  ba - lag(bb) over (partition by sku order by ba),
                  1000 ) step
    from strongest_factored_ranges
),

RANGE_CODED                    STEP
------------------------ ----------
b---(0,$200)---c               1000
c---(0,$200)---d                  0
d---(1,$300)---e               1000

Then we compute the integral of step over the ranges; joinable ranges will hence have the same value for "integral", since step is zero:

ranges_with_step_integral as (
  select sku, ba, bb, prio, price, step,
         sum(step) over (partition by sku order by ba
                         rows between unbounded preceding and current row) as integral
    from ranges_with_step
),

RANGE_CODED                INTEGRAL
------------------------ ----------
b---(0,$200)---c               1000
c---(0,$200)---d               1000
d---(1,$300)---e               2000

The joined joinable ranges:

ranges_joined as (
  select *
    from (
  select sku, ba, bb, prio, price, step, integral,
         min(ba) over (partition by sku, integral) as a,
         max(bb) over (partition by sku, integral) as b
    from ranges_with_step_integral
         )
   where step > 0
)
select sku, a, b, price from ranges_joined;
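With the sample data above (remember that a and b hold ascii codes: b=98, d=100, e=101), the view should return the two joined ranges:

SKU                 A          B      PRICE
---------- ---------- ---------- ----------
sku1               98        100        200
sku1              100        101        300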

The first desirable property of this view is that a predicate on sku (an equality predicate, say, but it works even for the "between" operator, less-than, etc.) can be pushed down through the view to the base tables. For:

select * from ranges_output_view where sku = 'k100';

the plan is:

| Id  | Operation                            | Name      |
|  10 |           TABLE ACCESS BY INDEX ROWID| RANGES    |
|* 11 |            INDEX RANGE SCAN          | RANGES_PK |
|* 17 |                INDEX RANGE SCAN      | RANGES_PK |
|* 18 |                INDEX RANGE SCAN      | RANGES_PK |
11 - access("I"."SKU"='k100')
17 - access("SKU"='k100')
18 - access("SKU"='k100')

That means that only the required sku(s) are fed to the view, and suitable indexes (such as RANGES_PK in this case) can be used. So, if you need to refresh only a few skus, the response time is going to be almost instantaneous - provided that you have only a sane number of ranges (a few) per sku. Hence you can use the same view both for calculating the prices of all skus (say, in a nightly batch) and for calculating a small subset of skus (say, online), and that is a great help for maintenance and testing.
running in parallel
Another desirable property is that the view can operate efficiently in parallel, at least in the version I tested (I have not tested other versions):

| Operation                                   |IN-OUT| PQ Distrib |
| SELECT STATEMENT                            |      |            |
|  PX COORDINATOR                             |      |            |
|   PX SEND QC (RANDOM)                       | P->S | QC (RAND)  |
|    VIEW                                     | PCWP |            |
|     WINDOW SORT                             | PCWP |            |
|      VIEW                                   | PCWP |            |
|       WINDOW SORT                           | PCWP |            |
|        VIEW                                 | PCWP |            |
|         WINDOW BUFFER                       | PCWP |            |
|          VIEW                               | PCWP |            |
|           WINDOW SORT PUSHED RANK           | PCWP |            |
|            HASH JOIN                        | PCWP |            |
|             PX RECEIVE                      | PCWP |            |
|              PX SEND HASH                   | P->P | HASH       |
|               PX BLOCK ITERATOR             | PCWC |            |
|                TABLE ACCESS FULL            | PCWP |            |
|             PX RECEIVE                      | PCWP |            |
|              PX SEND HASH                   | P->P | HASH       |
|               VIEW                          | PCWP |            |
|                WINDOW SORT                  | PCWP |            |
|                 PX RECEIVE                  | PCWP |            |
|                  PX SEND HASH               | P->P | HASH       |
|                   VIEW                      | PCWP |            |
|                    SORT UNIQUE              | PCWP |            |
|                     PX RECEIVE              | PCWP |            |
|                      PX SEND HASH           | P->P | HASH       |
|                       UNION-ALL             | PCWP |            |
|                        PX BLOCK ITERATOR    | PCWC |            |
|                         INDEX FAST FULL SCAN| PCWP |            |
|                        PX BLOCK ITERATOR    | PCWC |            |
|                         INDEX FAST FULL SCAN| PCWP |            |

There's no point of serialization (all servers communicate parallel-to-parallel), the rows are distributed evenly using a hash distribution function (probably over the sku), and all operations are parallel.
sku subsetting and partitioning
It is well known that analytic functions use sort operations heavily, and that means (whether or not you are running in parallel) that the temporary tablespace may be used a lot, possibly too much - as actually happened to me, leading to (in my case) unacceptable performance.
Side note: as I'm writing this, I realize that I had probably been hit by the bug illustrated by Jonathan Lewis in Analytic Agony; but of course, overuse of temp could happen, for large datasets, without the bug kicking in.
A possible solution is to process only a sub-batch of the skus at a time, to keep the sorts running in memory (or with one pass to temp), leveraging the predicate-pushability of the view. In my case, I went one step further: I partitioned the table "ranges" by "sku_group", replaced every occurrence of "sku" in the view with the pair "sku_group, sku", and then ran something like the following (priced_ranges being a hypothetical stand-in for wherever the results go):

for s in (select distinct sku_group from ranges) loop
  insert into priced_ranges  -- hypothetical target table
  select sku, a, b, price from ranges_output_view
   where sku_group = s.sku_group;
end loop;

The predicate gets pushed down to the table, hence partition elimination kicks in: I can visit the input table only once, one partition at a time, using a fraction of the resources at any moment, vastly improving performance.
And that naturally leads to "do-it-yourself parallelism": running a job for every partition in parallel. I'm going to implement it, since the customer is salivating over it ... even if it is probably over-engineering :D .
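A sketch of what that might look like, with refresh_prices as a hypothetical procedure wrapping the loop body above (and assuming sku_group values make legal job-name fragments):

begin
  for s in (select distinct sku_group from ranges) loop
    dbms_scheduler.create_job(
      job_name   => 'PRICE_JOB_' || s.sku_group,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin refresh_prices(''' || s.sku_group || '''); end;',
      enabled    => true);  -- one independent job per partition
  end loop;
end;
/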