Oakies Blog Aggregator

Index Usage

The question of how to identify indexes that could be dropped re-appeared (yet again) on the OTN database forum last week. It’s not really surprising that it recurs so regularly – the problem isn’t an easy one to solve but new (and even less new) users keep hoping that there’s a quick and easy solution.

There are, however, strategies and pointers that can help you to optimise the trade-off between effort, risk, and reward. Broadly the idea is to spend a small amount of effort finding a relatively small number of “expensive” indexes that might be safe to drop, so that when you do the detailed analysis you have a good chance that the time spent will be rewarded by a positive result.

Before we get to some results posted on OTN, it’s worth thinking about the global impact and what we’re trying to achieve, and the threats that go with our attempt to achieve it.

The key detail, of course, is that index maintenance is an expensive process. We could insert 1,000 rows into a table at a cost of writing about 25 table blocks plus a few undo blocks plus something like half a megabyte of redo (assuming, for the purposes of illustration, that each row is about 200 bytes on insert). Add one index to the table and we might have to locate and modify 1,000 separate index leaf blocks. The increment on the redo might be about a quarter of a megabyte and we may have to access 1,000 different undo blocks for read-consistency reasons, but the simple fact that we may need 1,000 buffers to be able to maintain that index is likely to be a significant extra cost on the insert. Make that 10 indexes, or 70 (as one unhappy DBA once told me), and the probability of being able to do high-speed inserts becomes rather low.
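If you want to put numbers like these to the test on your own system, here’s a minimal sketch (the table name and sizes are just for illustration, an 8KB block size is assumed, and you need select access on v$mystat and v$statname) that measures the redo generated by a 1,000-row insert before and after an index exists:

create table t_demo(id number, padding varchar2(200));

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
and    sn.name = 'redo size';

insert into t_demo
select rownum, rpad('x',190)
from   dual
connect by level <= 1000;

commit;

-- re-run the 'redo size' query and note the delta; then add an index
create index t_demo_i1 on t_demo(id);

-- ... and repeat the insert to see the extra redo and buffer activity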

Of course we hope that our indexes will allow our queries to operate efficiently with great precision, but inevitably we get to a point where the benefit of precision is outweighed by the cost of maintenance. Our target, then, is to design the set of indexes that makes it possible for the optimizer to find good paths for all the important queries and “good enough” paths for the rest. By the time the system is live, though, it’s too late for “proper design”, and the only option is for damage limitation, a bit of guesswork, and some live testing with fingers crossed (thank goodness for invisible indexes).

The starting point is usually an attempt to identify “the indexes we are not using”, which is typically translated into “the indexes that do not appear in execution plans” – but that’s not actually a good target, for various reasons:

  • Problem 1: If we are using an index it’s possible that we shouldn’t be and that there’s an alternative index available that ought to be more efficient. A corollary to this is that if you do identify and drop such an index you may find that the optimizer doesn’t use the alternative index you were expecting it to use until you take some action to help the optimizer recognise that the alternative is a good choice.
  • Problem 2: If we aren’t using a particular index then perhaps we should be using it, and would use it if we dropped one of the other indexes on the table. (And there’s always the possibility that we didn’t happen to use it during the interval we were checking but do use it at other times.)
  • Problem 3: The optimizer is capable of using information about the number of distinct keys in a multi-column index to select an execution plan even though it may not use that index in the plan it finally chooses. We may be able to work around this problem in current versions of Oracle by creating a column group (extended statistics) that matches the definition of each index we drop – but there’s a limit of 20 column groups per table.
  • Problem 4: There are some indexes we might not be using but which must exist to avoid the “foreign key locking” problem. It should be easy enough to check, before dropping an index, whether it has to exist to match a foreign key (see the sketch below); and even then it may be possible to show that nothing in the application would cause the locking problem to appear – and as a safety measure you could disable locks on the (child) table to ensure that the application doesn’t grind to a halt because of foreign key locking problems.
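For problem 4, a dictionary query can produce a first-pass list of foreign-key columns that don’t appear at the matching position of any index. This is a deliberately simplified sketch (it insists on a same-position match, so it can raise false alarms where an index covers the constraint columns in a different order):

select c.table_name, c.constraint_name, cc.column_name, cc.position
from   user_constraints  c,
       user_cons_columns cc
where  c.constraint_type  = 'R'
and    cc.constraint_name = c.constraint_name
and    not exists (
         select null
         from   user_ind_columns ic
         where  ic.table_name      = c.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = cc.position
       )
order by c.table_name, c.constraint_name, cc.position;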

Provided you remember that problems like these exist, and think carefully about the indexes that your strategy suggests, there are various ways you could approach the problem of identifying indexes that don’t get into execution plans.

v$object_usage

The ink had barely dried on the manual pages for this view before several people (including me) had written notes explaining why this view wasn’t particularly helpful. (I think I even said something about this in Practical Oracle 8i). I won’t repeat the discussion here but it revolves around the fact that an index is flagged as “used” even if it has only been used once in a single execution of a single statement – so you don’t get any idea of the real importance of the index.
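For reference, the mechanism works like this – monitoring is switched on per index, and v$object_usage reports only the indexes owned by the user you are connected as (the index name is just an example):

alter index t1_i1 monitoring usage;

-- some time later, connected as the index owner
select index_name, monitoring, used, start_monitoring, end_monitoring
from   v$object_usage;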

v$sql_plan et al.

If you review the set of in-memory execution plans (and the AWR or Statspack equivalents) you can identify indexes which definitely have been used – but (a) it’s expensive to scan v$sql_plan frequently and (b) the AWR/Statspack repositories only capture a subset of the more expensive plans, so it’s easy to miss indexes which have been used and are relatively important but aren’t in the repository and don’t happen to be in memory at the moments you look.
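If you do take the occasional snapshot, a query along these lines (the schema filter is an assumption) will list the index access operations currently visible in the library cache:

select object_owner, object_name, options,
       count(distinct sql_id) as cursor_count
from   v$sql_plan
where  operation like 'INDEX%'
and    object_owner = 'MY_SCHEMA'
group by object_owner, object_name, options
order by object_owner, object_name;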

Review the definitions

If you examine the index definitions you may spot indexes which look very similar. If one index starts with the same columns, in the same order, as another index, there is a good chance that you could reduce two indexes to one – especially if the whole of one of the indexes is the “leading edge” of the other – for example:

  • (dp_datetime_date)
  • (dp_datetime_date, dp_compid)

Even if the leading edges match and the trailing edges differ we might be able to collapse two indexes into one – depending on how selective the leading columns are and how the indexes are used – for example:

  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, dp_datetime_date)
  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, pnr_cfrqsj_date)

which could perhaps be replaced by one of:

  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, dp_datetime_date, pnr_cfrqsj_date)

or

  • (dp_compid, ddzt, cirmhcx, ct_nxr_mhcx, pnr_cfrqsj_date, dp_datetime_date)

Guessing about the use of a typical date column, though, the current trailing date columns may well be used with range-based predicates, in which case this strategy won’t be effective for this pair of indexes.

Even if the order of later columns in the index doesn’t match you may still find that a pair of indexes could be reduced to a single index – for example the pair:

  • (dp_datetime_date, dp_compid)
  • (dp_datetime_date, ddzdt, dp_compid, ct_nxrdh, ct_smsmobilno)

which could perhaps be replaced by just:

  • (dp_datetime_date, dp_compid, ddzdt, ct_nxrdh, ct_smsmobilno)

As a safety measure, of course, you would probably create a new index, then make the subject indexes invisible, and wait for at least a week to see whether any performance problems appear (remembering that one automatic performance threat would be the increase in workload as yet another index – temporarily – has to be maintained).
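To generate a starting list of candidates for this kind of review, a sketch like the following reports any index whose complete column list is the leading edge of another index on the same table (restricted to the user_ views for simplicity; function-based and descending indexes would need extra care):

select i1.table_name,
       i1.index_name as possibly_redundant,
       i2.index_name as superset_index
from   user_indexes i1,
       user_indexes i2
where  i1.table_name  = i2.table_name
and    i1.index_name != i2.index_name
and    not exists (
         select null
         from   user_ind_columns c1
         where  c1.index_name = i1.index_name
         and    not exists (
                  select null
                  from   user_ind_columns c2
                  where  c2.index_name      = i2.index_name
                  and    c2.column_name     = c1.column_name
                  and    c2.column_position = c1.column_position
                )
       );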

The difficulty of eliminating indexes by examination is that it takes a lot of effort to investigate all the possibilities, so you really need some way of choosing a relatively small subset of indexes that might be worth the effort. This brings me to the principal topic of this posting – using segment statistics to help you pick which indexes might be worth the effort.

v$segstat / v$segment_statistics

Oracle records a number of workload statistics for each object in memory. The view v$segstat is an efficient version of these statistics, and v$segment_statistics is a friendlier version that joins v$segstat to tables user$, obj$ and ts$, with a filter against ind$ to turn meaningless numbers into names.

SQL> desc V$segstat
 Name                    Null?    Type
 ----------------------- -------- ----------------
 TS#                              NUMBER
 OBJ#                             NUMBER
 DATAOBJ#                         NUMBER
 STATISTIC_NAME                   VARCHAR2(64)
 STATISTIC#                       NUMBER
 VALUE                            NUMBER

SQL> desc V$segment_statistics
 Name                    Null?    Type
 ----------------------- -------- ----------------
 OWNER                            VARCHAR2(30)
 OBJECT_NAME                      VARCHAR2(30)
 SUBOBJECT_NAME                   VARCHAR2(30)
 TABLESPACE_NAME                  VARCHAR2(30)
 TS#                              NUMBER
 OBJ#                             NUMBER
 DATAOBJ#                         NUMBER
 OBJECT_TYPE                      VARCHAR2(18)
 STATISTIC_NAME                   VARCHAR2(64)
 STATISTIC#                       NUMBER
 VALUE                            NUMBER

For each segment Oracle records the following statistics (according to v$segstat_name – but there are a couple more hidden statistics reported in the underlying x$ksolsstat object):

NAME                             SAMPLED
-------------------------------- -------
logical reads                    YES
buffer busy waits                NO
gc buffer busy                   NO
db block changes                 YES
physical reads                   NO
physical writes                  NO
physical read requests           NO
physical write requests          NO
physical reads direct            NO
physical writes direct           NO
optimized physical reads         NO
optimized physical writes        NO
gc cr blocks received            NO
gc current blocks received       NO
ITL waits                        NO
row lock waits                   NO
space used                       NO
space allocated                  NO
segment scans                    NO

Both Statspack (at level 7) and the AWR report have several “Top N” sections for segment statistics. If we examine these stats for all the indexes on a given table we can get some clues about which indexes are likely to be worth further investigation to see if they could be dropped.

One very simple measure is the number of “physical reads” (which, for indexes, will generally be very similar to “physical read requests”). Since a (real) physical read is generally going to take a significant amount of time, segments with very large numbers of physical reads could be contributing a lot of time to the total database time – so it’s worth finding out why an index is responsible for so many physical reads, and worth cross-checking with v$sql_plan (and its historic equivalents) to see which statements seem to be using or modifying that index.
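A first pass might therefore be as simple as ranking indexes by that statistic (the schema filter is an assumption):

select *
from   (
        select owner, object_name, value as physical_reads
        from   v$segment_statistics
        where  object_type    = 'INDEX'
        and    statistic_name = 'physical reads'
        and    owner          = 'MY_SCHEMA'
        order by value desc
       )
where  rownum <= 10;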

Even if it turns out that the index is absolutely necessary, you might still be able to spot opportunities to improve efficiency. If it is subject to a significant number of physical reads it may be that the index is just very large. Could you make it smaller by rebuilding it with compression on some of the leading columns? Is it an index which (for some reason you can identify) tends to degenerate over time and waste a lot of space, so that you should rebuild it occasionally? It might be possible (depending on the predicates used) to re-arrange the column order in such a way that the activity is focused onto a particular section of the index rather than being spread across the entire index – or you could even find that by careful choice of global partitioning (which is legal on even a non-partitioned table) you might be able to isolate the activity to a small section of the index.
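On the compression option, a “validate structure” pass can suggest how many leading columns are worth compressing – bearing in mind that it blocks concurrent DML while it runs, and that the index name here is hypothetical:

analyze index big_index validate structure;

select opt_cmpr_count, opt_cmpr_pctsave from index_stats;

alter index big_index rebuild compress 2;  -- if opt_cmpr_count reported 2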

A more interesting measure, though, comes from comparing the “logical reads” with the number of “db block changes”; and that’s the point of this posting – except that I’ve spent so much time on it already that I’m going to have to write part 2 some time next week.
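In the meantime, one plausible first cut at that comparison (a sketch only – not necessarily the approach part 2 will take) is to pivot the two statistics side by side and look for indexes where the block changes dominate the logical reads:

select owner, object_name,
       max(case statistic_name when 'logical reads'    then value end) as logical_reads,
       max(case statistic_name when 'db block changes' then value end) as db_block_changes
from   v$segment_statistics
where  object_type    = 'INDEX'
and    statistic_name in ('logical reads', 'db block changes')
group by owner, object_name
order by db_block_changes desc;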

 

MobaXterm 8.1

MobaXterm 8.1 has been released.

The downloads and changelog are in the usual places.

If you are a Windows user and you use SSH and X Emulation, you need this in your life!

Cheers

Tim…


MobaXterm 8.1 was first posted on August 17, 2015 at 7:58 am.

KeePass 2.30

KeePass 2.30 was released about a week ago. This passed me by as I was distracted with the whole tour thing. :)

The downloads and changelog are in the usual places.

You can read how I use KeePass (Windows & Linux) and KeePassX2 (Mac) here.

Cheers

Tim…


KeePass 2.30 was first posted on August 17, 2015 at 7:50 am.

Loading file data … easier than you think

So a request comes in from a colleague…

“Hi Connor. I’ve created this new table, and I need to populate it with some data. I’ve got it in Excel – can you help me load it?”

Now, at this point, my heart normally sinks, because whilst the Oracle tools to load flat file data are comprehensive, I know that I now have to

  • go find out the table definition, column names and data types
  • write up a SQL Loader control file, or
  • write up an external table definition script
  • repeat this 10 times until I get the syntax right

all of which is going to be a drain on my time…

What I would really like is to do this:

1) Check that my table is ready for loading


SQL> desc EMP_TAB
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)
 
SQL> select * from EMP_TAB;

no rows selected

 

2) Have a quick squizz at the data, make sure it looks sensible

7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30
7521,WARD,SALESMAN,7698,22-FEB-81,1250,500,30
7566,JONES,MANAGER,7839,02-APR-81,2975,,20
7654,MARTIN,SALESMAN,7698,28-SEP-81,1250,1400,30
7698,BLAKE,MANAGER,7839,01-MAY-81,2850,,30
7782,CLARK,MANAGER,7839,09-JUN-81,2450,,10
7788,SCOTT,ANALYST,7566,09-DEC-82,3000,,20
7839,KING,PRESIDENT,,17-NOV-81,5000,,10
7844,TURNER,SALESMAN,7698,08-SEP-81,1500,0,30
7876,ADAMS,CLERK,7788,12-JAN-83,1100,,20
7900,JAMES,CLERK,7698,03-DEC-81,950,,30
7902,FORD,ANALYST,7566,03-DEC-81,3000,,20
7934,MILLER,CLERK,7782,23-JAN-82,1300,,10

 

3) And then just load it!

I mean seriously… why can’t I just run sqlldr, pass the table name, and just have the thing work!

Well…maybe….just maybe….

C:\temp>sqlldr userid=scott/tiger data=emp.dat table=emp_tab

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Aug 17 11:08:33 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMP_TAB
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table EMP_TAB:
  14 Rows successfully loaded.

Check the log files:
  emp_tab.log
  emp_tab_%p.log_xt
for more information about the load.

Yup… Once you get to version 12c, there is now an express mode for SQL*Loader: for all those simple loads that don’t need all the bells and whistles, you can just jump straight in… and load your data. Very cool indeed.

The log file reveals what is going on “under the covers”:

creating external table "SYS_SQLLDR_X_EXT_EMP_TAB"

CREATE TABLE "SYS_SQLLDR_X_EXT_EMP_TAB" 
(
  "EMPNO" NUMBER(4),
  "ENAME" VARCHAR2(10),
  "JOB" VARCHAR2(9),
  "MGR" NUMBER(4),
  "HIREDATE" DATE,
  "SAL" NUMBER(7,2),
  "COMM" NUMBER(7,2),
  "DEPTNO" NUMBER(2)
)
ORGANIZATION external 
(
  TYPE oracle_loader
  DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
  ACCESS PARAMETERS 
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
    BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'emp_%p.bad'
    LOGFILE 'emp_tab_%p.log_xt'
    READSIZE 1048576
    FIELDS TERMINATED BY "," LRTRIM 
    REJECT ROWS WITH ALL NULL FIELDS 
    (
      "EMPNO" CHAR(255),
      "ENAME" CHAR(255),
      "JOB" CHAR(255),
      "MGR" CHAR(255),
      "HIREDATE" CHAR(255)
        DATE_FORMAT DATE MASK "DD-MON-RR",
      "SAL" CHAR(255),
      "COMM" CHAR(255),
      "DEPTNO" CHAR(255)
    )
  )
  location 
  (
    'emp.dat'
  )
)REJECT LIMIT UNLIMITED

executing INSERT statement to load database table EMP_TAB

INSERT /*+ append parallel(auto) */ INTO EMP_TAB 
(
  EMPNO,
  ENAME,
  JOB,
  MGR,
  HIREDATE,
  SAL,
  COMM,
  DEPTNO
)
SELECT 
  "EMPNO",
  "ENAME",
  "JOB",
  "MGR",
  "HIREDATE",
  "SAL",
  "COMM",
  "DEPTNO"
FROM "SYS_SQLLDR_X_EXT_EMP_TAB"

dropping external table "SYS_SQLLDR_X_EXT_EMP_TAB"

Table EMP_TAB:
  14 Rows successfully loaded.

Run began on Mon Aug 17 11:08:33 2015
Run ended on Mon Aug 17 11:08:36 2015

Elapsed time was:     00:00:03.21
CPU time was:         00:00:00.09

Learn more about SQL Loader Express here

Hyper-Extended Oracle Performance Monitor 7.0 Beta

August 16, 2015 – Three and a half years ago I offered the Hyper-Extended Oracle Performance Monitor 6.0 Beta for download – that Beta version expired a year later.  This program has been somewhat of a pet project for the last 13 years (almost 10 years at the time of the previous Beta offering), so I was […]

My Glamorous Life : Just so you don’t misunderstand…

If you’ve subscribed to my YouTube channel, you will have noticed me posting some videos with the title “My Glamorous Life : …“.

I had several distinct plans for this trip:

  • Do the OTN tour itself. That is of course the real reason we are doing this!
  • Collect some video footage of the conferences so I could produce a little montage for each, just to help me remember it. I’ll do that when I get home and can sift through the footage to see if any is usable. Fingers crossed.
  • Film Machu Picchu. I kind-of failed there because I got ill, but I do have this little montage of the journey.
  • Document how boring, tedious and stressful the logistics of doing these tours really are.

I started on that last task with the footage of Charles de Gaulle airport and Buenos Aires airport, which I think pretty much summed up how dull travelling is. It’s not a criticism of the airports themselves. Just that most of your time on these tours is spent sitting in airports, planes, taxis and sleeping in hotels. There is very little time actually in each country.

After those first two videos, I went a bit off the plan and started to film the hotel rooms, which are actually rather glamorous really, at least to me anyway. Added to that, we were rushing around airports so much I kept forgetting to film them. So this series that was meant to convince you how bad travelling can be, now looks more like two weeks in the life of a budget Kim Kardashian.

That makes me a little nervous, as I don’t want people to get the wrong message about what we are doing here. Just to clear things up, here are a few things to keep in mind:

  • We use Oracle approved hotels, typically with an Oracle discount, unless we can get it cheaper than the corporate rate. In most cases, this discount makes them a similar price to staying in a Travelodge in London. So despite how cool some of these places look, they are really rather cheap. If you booked them yourself they are crazily expensive, but with the corporate discount, they are a bargain.
  • Several people on the tour travel for work and have airline and hotel status, allowing them to sign mere mortals like me into executive lounges to get freebies, like breakfast and evening meals, which means I’m not having to pay for them myself. Without this, the tour would be even more expensive as we can’t claim those expenses back.
  • All sightseeing discussed is naturally at our own expense. We (Debra really) arranged flight times to maximise the time we spent in cities, so we could fit in the odd tour, but if we had gone for midday flights we would have seen pretty much nothing of any of the cities, as it was conference-fly-conference-fly pretty much all the way through.
  • Since this tour finished in Peru, Debra and I decided to tag on an extra couple of days to go and see Machu Picchu. All flights, transport, hotels etc. during this time came out of our own pockets.
  • During my trip home from Peru I spent the day in a hotel because of a long layover (14 hours) and upgraded my flight home to business class. These costs came out of my own pocket. They are not paid for by the ACE Program.

I guess I’m getting a bit paranoid now, but it does make me nervous to think I might be giving people the wrong impression about these tours. They are bloody hard work. Anything else you can fit in around them is a bonus, but certainly not the main focus.

Anyway, enough of my paranoid wittering. I’m off to eat more food in an airport executive lounge, which I paid for myself. :)

Cheers

Tim…


My Glamorous Life : Just so you don’t misunderstand… was first posted on August 16, 2015 at 5:10 pm.

Lima to Amsterdam

I left the hotel a little late, but the airport was literally across the road, so it was no big deal. Having a business class ticket meant I checked in immediately (+1) and even had time to hit the lounge (+2). High class swanky time, and without needing to be signed in for once. :)

Boarding the flight was pretty straightforward. Once again, the business class ticket gives priority boarding (+3), without me having to tag along with Debra.

The KLM flight from Lima, Peru to Amsterdam, Netherlands was about 12 hours and 30 minutes, but it was a great flight. Upgrading to business class was a great move. I find it really hard to sleep in an upright position, so being able to lie flat is awesome (+4). I was in a seat with nobody either side of me, so I felt really isolated, which made sleeping even easier. These long flights are so much better if you can get some sleep!

Aside from sleeping, I watched:

  • Wild Card : Not too bad. I like quite a few of the films Jason Statham has been in. Even the bad ones. :)
  • Seventh Son : Typical fantasy stuff. Witches, dragons and slayers etc. Quite good, but Jeff Bridges’ voice annoyed me.
  • The Big Lebowski : Seeing Jeff Bridges in the previous film made me want to re-watch this film, where his voice does not annoy me. :)
  • The Amityville Horror : Slept through a lot of it. I’ve seen it before. It’s an OK remake I guess.
  • The Green Lantern : OK. I know it is a pretty poor film, but I just scanned through to find clips that looked cool. :)

The staff were really pleasant and helpful. All in all a very good experience and well worth the money in my opinion.

On arriving in Amsterdam, I headed over to the lounge to see if I could get in. I’m not sure how other lounges work, but KLM allow you in on arrival as well as departure (+5), which is awesome, because I’m stuck here for about 6 hours in total. If I had spent 14 hours in Lima airport and 12.5 hours in economy, I would be feeling totally psycho by now. As it is, I’m feeling pretty good. Hopefully, by the time I get home I will be tired enough to sleep and I can wake up and go to work as normal tomorrow…

So for me, that was +5 for the flight upgrade. Thanks KLM! I could get addicted to this, and very poor. :)

I’ll write a wrap-up post when I get home… :)

Cheers

Tim…

PS. I’ve also got some quick montage videos of the conferences to edit when I get home, provided the footage I’ve got works OK…


Lima to Amsterdam was first posted on August 16, 2015 at 3:59 pm.

Parallel Projection

A recent case at a client reminded me of something that isn't really new but not so well known - Oracle by default performs evaluation at the latest possible point in the execution plan.

So if you happen to have expressions in the projection of a simple SQL statement that runs parallel it might be counter-intuitive that by default Oracle won't evaluate the projection in the Parallel Slaves but in the Query Coordinator - even if it was technically possible - because the latest possible point is the SELECT operation with the ID = 0 of the plan, which is always performed by the Query Coordinator. Of course, if you make use of expressions that can't be evaluated in parallel or aren't implemented for parallel evaluation, then there is no other choice than doing this in the Query Coordinator.

The specific case in question was a generic export functionality that allowed exporting report results to some CSV or Excel-like format, and some of these reports had a lot of rows and complex - in that case CPU-intensive - expressions in their projection clause. When looking at the run time profile of such an export query it became obvious that although it was a (very simple) parallel plan, all of the time was spent in the Query Coordinator, effectively turning this at runtime into a serial execution.

This effect can be reproduced very easily:


create table t_1
compress
as
select /*+ use_nl(a b) */
       rownum as id
     , rpad('x', 100) as filler
from
       (select /*+ cardinality(1e5) */ * from dual
        connect by level <= 1e5) a,
       (select /*+ cardinality(20) */ * from dual
        connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel cache;

-- Run some CPU intensive expressions in the projection
-- of a simple parallel Full Table Scan
set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
;

-- The plan is clearly parallel
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 192M| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 4 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------

-- But the runtime profile looks more serial
-- although the Parallel Slaves get used to run the Full Table Scan
-- All time spent in the operation ID = 0
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| ReadB | ReadReq | Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | 3 | 136 | 120 | #################### | 1:sqlplus.exe(120)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | ################################ | @@@@@@@@@@@@@@@@@@@ ( 98%) | ON CPU(120) |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | | | 119 | 1 | 1 | # | 1:sqlplus.exe(1)[2000K],P008(0)[0],P009(0)[0],P00A(0)[0],P00B(0)[0] | | ( .8%) | ON CPU(1) |
| 2 | 1 | PX SEND QC (RANDOM)| :TQ10000 | 4 | 2000K | | | 66 | 11 | 2 | ## | 2:P00B(1)[508K],P00A(1)[490K],P008(0)[505K],P009(0)[497K],sqlplus.exe(0)[0] | | (1.6%) | PX qref latch(2) |
| 3 | 2 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
|* 4 | 3 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 23M | 74 | | | | | 0:P00B(0)[508K],P008(0)[505K],P009(0)[497K],P00A(0)[490K],sqlplus.exe(0)[0] | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Fortunately there is a simple and straightforward way to make use of the Parallel Slaves for evaluation of projection expressions that can be evaluated in parallel - simply add a suitable NO_MERGE hint for the query block whose projection you want to be evaluated in the Parallel Slaves.

If you don't want to have side effects on the overall plan shape by not merging views, you can always wrap the original query in an outer SELECT and prevent merging of the now inner query block. There seems to be a rule that the projection of a view always gets evaluated at the VIEW operator, and if we check the execution plan we can see that the VIEW operator is marked parallel:


set echo on timing on time on

set autotrace traceonly statistics

set arraysize 500

select /*+ no_merge(x) */ * from (
select
regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as some_cpu_intensive_exp1
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') as some_cpu_intensive_exp2
, regexp_replace(filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') as some_cpu_intensive_exp3
from t_1
) x
;

-- View operator is marked parallel
-- This is were the projection clause of the VIEW will be evaluated
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2000K| 11G| 221 (1)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 2000K| 11G| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
| 4 | PX BLOCK ITERATOR | | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| T_1 | 2000K| 192M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
---------------------------------------------------------------------------------------------------------------

-- Runtime profile now shows effective usage of Parallel Slaves
-- for doing the CPU intensive work
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH | Parallel Execution Skew ASH| Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 5 | 2000K | | | | | 0:sqlplus.exe(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | | | |
| 1 | 0 | PX COORDINATOR | | 5 | 2000K | 17 | 63 | 10 | # ## # #### | 1:sqlplus.exe(10)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] | #### | * (5.6%) | resmgr:cpu quantum(10) |
| 2 | 1 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 2000K | 5 | 61 | 10 | ## # ## ## ## # | 3:P002(5)[544K],P001(4)[487K],P000(1)[535K],P003(0)[434K],sqlplus.exe(0)[0] | # | (5.6%) | ON CPU(7),resmgr:cpu quantum(3) |
| 3 | 2 | VIEW | | 4 | 2000K | 2 | 82 | 69 | #################### | 4:P003(42)[434K],P001(35)[487K],P000(26)[535K],P002(22)[544K],sqlplus.exe(0)[0] | ############ | @@@@@@@@@@@@@@@@@@@ ( 70%) | ON CPU(125) |
| 4 | 3 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P002(0)[544K],P000(0)[535K],P001(0)[487K],P003(0)[434K],sqlplus.exe(0)[0] | | | |
|* 5 | 4 | TABLE ACCESS FULL| T_1 | 52 | 2000K | 3 | 78 | 29 | ###### ####### # ### | 4:P000(11)[535K],P002(8)[544K],P001(8)[487K],P003(7)[434K],sqlplus.exe(0)[0] | ### | ***** ( 19%) | resmgr:cpu quantum(30),ON CPU(4) |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

At runtime the duration of the query now gets reduced significantly and we can see the Parallel Slaves getting used when the VIEW operator gets evaluated. Although the overall CPU time used is similar to the previous example, the duration of the query execution is less since this CPU time is now spent in parallel in the slaves instead of in the Query Coordinator.

Summary

By default Oracle performs evaluation at the latest possible point of the execution plan. Sometimes you can improve runtime by actively influencing when the projection will be evaluated - by preventing view merging and thereby introducing a VIEW operator at which the projection clause will be evaluated.

The optimizer so far doesn't seem to incorporate such possibilities in its evaluation of possible plan shapes, so this is something you need to do manually, up to and including Oracle 12c (version 12.1.0.2 at the time of writing).

Cusco to Lima

It was a 3:30 start, which after broken sleep and the events of the day before had me a little worried. We got a taxi to the airport in Cusco, which is the coldest airport I have ever experienced. After checking in, we headed to the departure gate, which was also freezing. The departure gate was interesting. The lady brought her own laptop, microphone and speaker to make the announcements. :)

We got on to the coldest plane I’ve ever been on. I don’t remember seeing people on a plane in coats and woolly hats before. :) After a quick flight we got to Lima airport, where I said goodbye to Debra, who is flying back to Northern Ireland, via Miami and London.

Having a 14 hour layover in Lima, I decided to check in to a hotel at the airport and sleep for a while. I also upgraded my flight home to a business class flight. The combination of the Machu Picchu trip, airport hotel and business class flight home have added up to quite a lot of money, but if I get home in a reasonable state, it will be worth it. :)

Cheers

Tim…


Cusco to Lima was first posted on August 15, 2015 at 9:40 pm.

Machu Picchu

At about 04:00 we were queuing for the bus ride to the base of Machu Picchu. I started to feel a bit ill again. A little after 05:00 we were on the bus driving up to the base of Machu Picchu. It took about 30 mins to get there, most of which I spent trying not to puke.

I was very disappointed with the entrance to Machu Picchu. It felt like the entrance to a theme park. There was even Machu Picchu WiFi. We were there to witness wonder and spectacle, but seemed to be getting Disneyland. After being on the verge all morning, I puked and felt much better.

When we eventually got through the turnstiles, we started to walk up the hill. The stairs are quite steep, but nothing I would be worried about if it weren’t for the altitude. It makes it feel like hard work, so you have to take it slow. I used the turns to my advantage and mostly hid the fact I was repeatedly throwing up. After a few minutes we got to the area that overlooks the former residential area of Machu Picchu. If you’ve ever seen a picture of Machu Picchu, chances are you’ve seen the one taken from this spot. A few levels up and we were at the guardhouse, which gives an even better view. I puked there too. :) For me, this was all I wanted to do as far as Machu Picchu was concerned. I wanted to stand there and see this for myself. Everything else was a bonus. People visit several times and spend days there. This was really all I wanted. :)

After that we walked down towards the residential area. At that point, I really felt like I was done for. I told Debra to carry on and I walked down to the entrance to look for medical attention. I finally got to see the medic, and puke in her bin a few times. She injected me with a concoction of anti-nausea drugs and electrolytes and left me to sleep for a while. By the time Debra returned I was feeling much better. Interestingly, it was nothing to do with the altitude. My blood O2 was fine. It was pretty similar to what happened to me in India. I’m starting to think it’s nausea caused by a type of migraine, induced by lack of sleep.

Anyway, after my rather brief visit to Machu Picchu, we were heading down the mountain in the bus. We got some food and chilled out before boarding the train to take us back to Cusco and the rest of our luggage.

The train journey back took about 3.5 hours. Lots of great sights, only marred by some intensely annoying children, who were complaining about being bored. Why do adults drag children along to this stuff? They don’t enjoy it and ruin it for everyone else!

Back at Cusco, it was a quick taxi ride to the hotel, where I puked and went to bed. We were hoping to have a brief look at Cusco, but it gets dark so early in Peru that there really wasn’t time.

I would like to say I got a good night’s sleep, but the hotel we stayed at was so noisy. I woke several times in the night because of fireworks, music and general noise in the town, which made the 03:30 start the next day even harder to cope with.

Now I know this all sounds really negative and bad, but it was worth it. Machu Picchu is one of those places I always hoped to see before I died. The fact it nearly killed me in the process is beside the point. :) I’m pretty sure if I hadn’t been so beaten up by two weeks of travelling and presenting it would have been a breeze. Part of me thinks it would be nice to go back and see it again, but part of me thinks I’ve done all I wanted to do. It is a very expensive experience, but worth it in my opinion.

I wasn’t really in a fit state to take photos, but fortunately Debra was and she let me have a copy of them, which you can see here. I especially like the ones of me looking dreadful. :)

Cheers

Tim…

Update 1: I think it is great how much work they are doing to preserve the Machu Picchu site, but the amount of rebuilding is a bit of a concern. At the moment, about 30% of the site has been rebuilt and the work is continuing. If too much is done, it ceases to be an ancient site and becomes a modern site in the style of an ancient one. They need to tread very carefully, or risk taking the final step and completing the transition to Disneyland!

Update 2: At no point did I see Pikachu! Apparently, Machu Picchu and Pikachu are not the same thing. Who’da thunk it?


Machu Picchu was first posted on August 15, 2015 at 9:03 pm.