
Oakies Blog Aggregator

Job Interview…

I went for a job interview today… (pause while Debra Lilley picks herself off the floor)

It’s probably been about 8 years since I’ve had a formal interview like this. I admit I was a little nervous going in, but it turned out to be pretty good fun.

It started with a 1 hour written exam. It’s been about 20 years since I’ve written more than a signature with a pen, so having to write with a pen for 1 hour was pretty terrifying. I’m not big on remembering syntax (that’s what the manuals are for :) ), but hopefully I showed that I understood what was going on. Some of the questions could have been answered with “Read this:….”. Pity that wasn’t an option. :)

After the written exam came a regular interview. I like interviews. I like talking to people about technical stuff, so I had a good time. I hope they did too. Nobody fell asleep and I didn’t see any ears bleeding, so I guess I didn’t bore them too much. I get quite “enthusiastic” when I’m talking about technology. I guess that can be a little daunting to some people. I hope I didn’t scare anybody off. :)

I guess it’s now down to them to decide if I’m a good fit for the job or if one of the other candidates will fit in better. Time will tell.

It would be really neat to get the job, but I’m lucky in so much as if I don’t get it, it will not really impact on my life, so I’m not tense about the outcome. Whatever will be, will be.



Job Interview… was first posted on May 14, 2012 at 12:42 pm.

Tale of a platform migration: Solaris 10 SPARC to Linux

This is as much a note to myself on how to do this in the future as it is something hopefully worth reading for you. The requirement was as precise as always: migrate a database from 10.2 on SPARC to 11.2 on Linux. In the process, go from Veritas to ASM, and make it quick!

I like short briefings, but this was too short. Since the database was reasonably large I opted for the transportable tablespace approach; however, I now think that a massively parallel impdp with network_link could have saved me quite a bit of time.

The following is by no means the complete story, but hopefully gives you an idea of how to do these things. Always check, then document, then test (rinse and repeat). Only when proper signoff is received should you try such a process in production. Remember to script it and have at least one clean run of the scripts! This process is not super-quick; if you have low downtime requirements then consider Streams or, better, Golden Gate.

The source database was originally not on the terminal release, and due to certain problems with the Data Pump API it was moved to the terminal release before the migration. The destination was 11g Release 2 patchset 1 with the April PSU applied.

Things to think about

Since I couldn’t simply go for a subset of the database with my transportable tablespace set (TTS) I had to ensure that a lot of metadata was carried across. Personally I think that TTS works best for tables and indexes!

The process of transporting/converting tablespaces is short and sweet (excluding dealing with the application):

  1. Define a self-contained set of tablespaces. In other words, the tablespaces you export from the source must not contain dictionary references to other, non-exported tablespaces. For instance, you cannot export a tablespace containing a table that has an index in a tablespace outside of the transportable set.
  2. Set the tablespaces you want to export read-only. This is an outage in production!
  3. Export the metadata associated with the tablespaces from the source.
  4. Copy the tablespaces' data files to their destination.
  5. Perform the platform conversion.
  6. Optionally make the tablespaces read-write. Thanks to Jerry for pointing this out.
  7. Import the tablespace metadata.
  8. Make the tablespaces read-write again in the source.
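Sketched as commands, steps 2, 3, 7 and 8 look roughly like this. Note this is an illustration only: the directory objects, dump file, diskgroup, and tablespace names here are invented, not the ones from the actual migration.

```
# Step 2: make the tablespace read only on the source (repeat per tablespace)
sqlplus -s "/ as sysdba" <<EOF
ALTER TABLESPACE tbs_data01 READ ONLY;
EOF

# Step 3: export the TTS metadata on the source
expdp \"/ as sysdba\" directory=exp_dir dumpfile=exp_tts_metadata.dmp \
  transport_tablespaces=TBS_DATA01

# Steps 4/5: copy the data files across and RMAN-convert them (shown later)

# Step 7: plug the metadata in on the destination
impdp \"/ as sysdba\" directory=imp_dir dumpfile=exp_tts_metadata.dmp \
  transport_datafiles='+DATA/mydb/datafile/tbs_data01.dbf'

# Step 8: make the tablespace read-write again on the source
sqlplus -s "/ as sysdba" <<EOF
ALTER TABLESPACE tbs_data01 READ WRITE;
EOF
```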

You can either convert the tablespaces and data files at the source or target. The above assumes the conversion will happen on the destination. Since the source database is a clone and not used by anyone I can actually export the database mountpoint via NFS and perform the conversion/copy into ASM in one step.
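The NFS part can be sketched as follows; the host name and mount options are my assumptions, not from the original setup. Solaris 10 exports with share, and the Linux side mounts read-only:

```
# On the Solaris 10 source (as root): export the datafile mountpoint read-only
share -F nfs -o ro /u01/oradata

# On the Linux destination (as root): mount it under /mnt
mount -t nfs -o ro,hard,rsize=32768,wsize=32768 sparcbox:/u01/oradata /mnt
```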

So what’s in the TTS at all? This is not so well documented, but visible in the output of the expdp command. I strongly recommend the use of expdp over exp! Since I’m converting LOTS of tablespaces in one go I decided to use a parameter file:


The tablespace names were different of course, and a lot longer. Instead of keeping them all on one line (and running into a string buffer bug in expdp) you can create a line per tablespace, which is very nice and makes the file more readable. Note that at this stage you don't need to specify data files, only tablespace names. I didn't run the DBMS_TTS.TRANSPORT_SET_CHECK() procedure you'd normally run to check if the TTS is self-contained: remember that I'm taking everything except SYS, SYSAUX, UNDOTBS1, TEMP (and other internal tablespaces). You should definitely run the transport_set_check() procedure!
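For illustration, such a parameter file might look like the following; the tablespace, directory, and file names are made up, and the real file listed every application tablespace, one per line:

```
# exp_tts_metadata.par -- illustrative sketch only
directory=exp_dir
dumpfile=exp_tts_metadata.dmp
logfile=exp_tts_metadata.log
transport_tablespaces=TBS_DATA01,
TBS_DATA02,
TBS_INDEX01
```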

With this I can create a dump file:

$ expdp parfile=exp_tts_metadata.par

Export: Release - 64bit Production on Friday, 11 May, 2012 4:43:26

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA parfile=exp_tts_metadata.par
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 04:55:40

From the output you can see that it took quite a bit of information with it, all of which belongs to the data stored in the tablespaces. The bit to take away is that OBJECT_GRANTs do not need to be exported from the source; they are imported later on when you plug the tablespaces in.


The documentation will tell you that the users the data belongs to must exist in the destination database. For most data loading operations in data warehouses that's not a problem, but when you are migrating you need to be careful. For my database migration I needed these:

  • (public) synonyms
  • sequences
  • system grants
  • roles
  • packages
  • package bodies
  • functions
  • procedures

You might potentially need more; check the DATABASE_EXPORT_OBJECTS view for different paths and compare with the objects in your database. Maybe you are using contexts for VPD or types for pipelined functions? Database links? You get the idea.

To get and save the metadata I decided to do a full metadata export of the database, as in

$ expdp full=y content=metadata_only dumpfile=exp_full_metadata.dmp logfile=exp_full_metadata.log exclude=table exclude=index

Depending on how busy your system is, that can take a while. On this multi-TB database it helped to append the exclude parameters for tables and indexes.

Based on the metadata dump file you can do all sorts of magic, such as getting a lot of DDL. I decided against generating the SQL for roles, as the resulting sqlfile has revoke commands embedded into it. Roles can only be exported on a database-export level, and I didn’t want to waste time on tweaking the command. So I went for a simpler alternative:

set verify off
set long 10000000
set feedback off
set heading off
set pagesize 0
set linesize 220
set echo off

col cmd for a200 wrap

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

spool rolegrants_&&role..sql

SELECT DBMS_METADATA.GET_DDL('ROLE', '&&role')||';' cmd FROM dual;

-- granted roles and system privileges for the role
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', '&&role') cmd FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', '&&role') cmd FROM dual;

spool off

This little script will prompt you for a role to export and get all the other granted roles, as well as the system grants, and saves all of that in a file "rolegrants_<role>.sql" for later execution. It may produce a few ORA- errors when it can't find role or system grants, but that was not a problem when I ran the file in the destination database. The script will not overwrite or otherwise alter roles, so it's safe to run; any conflicts will throw an error, and there are only additional grants, no revokes. Notice how there is no query for OBJECT_GRANTs; these are already in the TTS metadata export.

The next step is to create all the users. This is a two-part process: in part one, before the metadata import, you need to have all the users in place, but you don't need them to have all their object and role grants yet. The owners of data are the ones that matter, and you can identify them in the data dictionary using this query:

select count(owner),owner
from dba_objects
group by owner
order by owner;

Be sure to ignore the Oracle internal users (XDB, ORDSYS, MDSYS, SYS, SYSTEM, etc). Since I’m lazy I used the full database metadata export to generate the users:
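One way to turn the full metadata dump into user-creation DDL is a Data Pump SQLFILE run; a sketch, with the directory and file names assumed from the export shown earlier:

```
# writes CREATE USER statements into create_users.sql without importing anything
impdp \"/ as sysdba\" directory=exp_dir dumpfile=exp_full_metadata.dmp \
  sqlfile=create_users.sql include=USER
```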


I reviewed and executed the script. I used the following script to export the system privs and role grants for the data owners:

set trimspool on
set verify off
set long 10000000
set feedback off
set heading off
set pagesize 0
set linesize 220
set echo off

col text for a200 wrap

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

spool usergrants_&&user..sql

-- system privileges and role grants for the data owner
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', '&&user') text FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', '&&user') text FROM dual;

spool off
Change as necessary; this is for one user only. The purpose is the same as described before.

RMAN convert

Quite simple if you know how to! I shut down the source (because I could!) and exported the /u01/oradata mountpoint via NFS to the Linux host, where it was mounted under /mnt. The RMAN convert file has the following format:

convert datafile
TO PLATFORM="Linux x86 64-bit"
FROM PLATFORM="Solaris[tm] OE (64-bit)"

You can create the datafile clause using something like "select '''' || replace(file_name, '/u01/oradata/', '/mnt/') || ''',' from dba_data_files" if there are really lots of them. Also note how the parallelism is set to 4 and the format indicates that the converted file should go straight into ASM. No need for an intermediate step.

When you run the RMAN script, be sure to check the output to see if it all went OK. Depending on your network you might have to reduce the parallelism. Unless you are good at network troubleshooting and monitoring it is difficult to work out how long the conversion might take. Best to let it run in a screen session overnight.


Plug them in

With all the necessary dictionary metadata imported, and the conversion finished, it's time to plug the tablespaces into the destination! That requires a fairly lengthy parameter file when you have lots of data files. The file has this format:


This is a critical point, so go over your checklist. Some checks you might want to consider:

  • All users created
  • All roles created
  • Privileges dealt with
  • Number of datafiles to be plugged in does not exceed the control file maximum or the db_files parameter. You may have to recreate your destination's control file and change MAXDATAFILES in the create controlfile statement.
  • No conflicting tablespace names (USERS, TOOLS?) in destination
  • Non-default cache sizes (db_nk_cache_size)
  • etc

This list is NOT exhaustive! Ensure you have those adequately covered.
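Some of those checks are easy to script on the destination; for example (a sketch, run as SYSDBA):

```sql
-- datafile headroom: current file count versus the db_files limit
SELECT (SELECT COUNT(*) FROM v$datafile)                       AS current_files,
       (SELECT value FROM v$parameter WHERE name = 'db_files') AS max_files
  FROM dual;

-- tablespace names already present (look for clashes with the transport set)
SELECT tablespace_name FROM dba_tablespaces ORDER BY tablespace_name;
```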

Then use the impdp command with the previously created parameter file and off you go!

Aftermath and Summary

You may have to fine-tune grants and privileges to users, and copy stored procedures and such over to the destination. You can again make use of DBMS_METADATA:

Some object types might prove problematic; check XMLTypes, materialized views and other replication mechanisms. Compare counts in DBA_OBJECTS on both sides and script the missing objects to be created automatically next time.

You would of course not try to migrate a 3TB database that way, you’d use foresight, planning and a complete test plan. You’d ensure (double/triple) that all the dictionary objects have been migrated and that the whole process is scripted. Don’t even think of typing in commands during the migration process other than the script names. Good luck!


In a recent thread on oracle-l I found this gem.

The references are quite useful for anyone going to do this:

  • Note 1389592.1 – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
  • Note 1166564.1 – Master Note for Transportable Tablespaces (TTS) — Common Questions and Issues
  • Note 1454872.1 – Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
  • Note 406586.1 – How to Handle Materialized Views When You Upgrade or Clone a Database

Credit to GregG for listing these.

Quick Reference README File For SLOB – The Silly Little Oracle Benchmark

This is just a quick blog entry with the main README file from SLOB – The Silly Little Oracle Benchmark. I frequently find myself referring folks to the README, so I thought I'd make it convenient. I've also uploaded it in PDF form here.

            SLOB - Silly Little Oracle Benchmark


This kit does physical I/O. Lots of it.

The general idea is that schema users connect to the instance and 
execute SQL on their own tables and indexes so as to eliminate 
as much SGA *application* sharing as possible. SLOB aims to stress Oracle 
internal concurrency as opposed to application contention. It's all about 
database I/O (both physical and logical), not application scaling.

The default kit presumes the existence of a tablespace called IOPS. If 
you wish to supply another named tablespace it can be given as an 
argument to the script. More on this later in this README.

To create the schemas and load data simply execute the setup script as 
the Oracle sysdba user. The script takes two arguments, the first being the 
name of the tablespace and the second being how many schema users to load. 
A high-end test setup will generally load 128 users. To that end, 128 is 
the default.

To run the test workload use the script. It takes two arguments, the first 
being the number of sessions that will attach and perform modify 
DML (UPDATE) on their data (writer.sql), and the second directing how many 
sessions will connect and SELECT against their data (reader.sql).

The key to this kit is to run with a small SGA buffer pool to force physical 
I/O. For instance, a 40MB SGA will be certain to result in significant physical 
IOPS when running with about 4 or more reader sessions. Monitor free buffer waits 
and increase db_cache_size to ensure the run proceeds without free buffer waits.
Oracle SGA sizing heuristics may prevent you from creating a very small SGA
if your system has a lot of processor cores. There are remedies for this. 
You can set cpu_count in the parameter file to a small number (e.g., 2) and this
generally allows one to minimize db_block_buffers. Another approach is
to create a recycle buffer pool. The setup script uses the storage 
clause of the CREATE TABLE command to associate all SLOB users' tables
with a recycle pool. If there happens to be a recycle pool when the 
instance is started then all table traffic will flow through that pool.

1. First, create the trigger tools. Change directory to  ./wait_kit 
   and execute "make all"
2. Next, execute the script, e.g., sh ./ IOPS 128
3. Next, run the kit such as sh ./ 0 8

When the setup script executes it produces a drop_users.sql file. If you need to 
re-run, it is optimal to execute drop_users.sql first and then 
proceed to re-execute the setup.

The kit will produce a text AWR report named awr.txt. The "awr" directory 
scripts can be modified to produce an HTML AWR report if so desired.

SLOB is useful for the following I/O and system bandwidth testing:

1. Physical I/O (PIO) - Datafile focus
    1.1 This style of SLOB testing requires a small db_cache_size
    setting. Small means very small, such as 40MB. Some users find
    that it is necessary to override Oracle's built-in self-tuning
    even when supplying a specific value to db_cache_size. If you
    set db_cache_size small (e.g., 40M) but SHOW SGA reveals an
    override situation, consider setting cpu_count to a very low
    value such as 2. This will not spoil SLOB's ability to stress I/O.
    1.2 Some examples of PIO include the following:
        $ sh ./ 0 32   # zero writers 32 readers
        $ sh ./ 32 0   # 32 writers zero readers
        $ sh ./ 16 16  # 16 of each reader/writer
2. Logical I/O (LIO)
    2.1 LIO is a system bandwidth and memory latency test. This 
    requires a larger db_cache_size setting. The idea is to 
    eliminate physical I/O. The measurement in this testing mode 
    is logical I/O as reported in AWR as Logical reads.
3. Redo Focused (REDO)
    3.1 REDO mode also requires a large SGA. The idea is to 
    have enough buffers so that Oracle does not need to
    activate DBWR to flush. Instead, LGWR will be the
    only process on the system issuing physical I/O. This 
    manner of SLOB testing will prove out the maximum theoretical
    redo subsystem bandwidth on the system. In this mode
    it is best to run with zero readers and all writers.

I recommend starting out small and scaling up. So, for instance,
a loop of PIO such as the following:
    $ for cnt in 1 2 4 8; do sh ./ 0 $cnt; done

Take care to preserve the AWR report in each iteration of the loop.
The best recipe for the number of SLOB sessions is system specific. 
If your system renders, say, 50,000 PIOPS with 24 readers but starts
to tail off beyond 24, then stay with 24.

In general I recommend thinking in terms of SLOB sessions per core.

In the LIO case it is quite rare to run with more reader.sql sessions than the
number of cores (or threads in the case of threaded cores). On the other 
hand, in the case of REDO it might take more than the number of cores 
to find the maximum redo subsystem throughput--remember, Oracle does 
piggy-back commits so over-subscribing sessions to cores might be 
beneficial during REDO testing.

The SLOB install directory has README.{PLATFORM} files and 
user-contributed, tested scripts under the ./misc/user-contrib directory.


IOsaturationtoolkit-v2 with Exadata IORM and AWESOME text graph

I've got a new version of the IO saturation toolkit, which you can download here. It has a cool script called "smartscanloop" that shows you the Smart Scan MB/s per database across the Exadata compute nodes. It samples every 2 seconds, so that's pretty fine-grained performance data and a near real-time text graph. Very useful for doing IORM demos and monitoring which database is currently hogging the IO resources, and since it's presented in a consolidated view you don't have to go to each Enterprise Manager performance page and have a bunch of browser windows open.
SECTION 1 is what I usually use to validate the IO numbers on the database side from my Orion (see oriontoolkit here) and Calibrate IO runs. I've been using it for quite a while on new RAC/non-RAC installations at client sites, and I used it heavily on my R&D server while continuously enhancing the toolkit.

SECTION 2 gives you a standard tool to demonstrate the behavior of IORM. Let's say you are playing around with IORM percentage allocations for, say, 3 databases: the "saturate" script works well to generate load on each database, and then you can observe the effects of the percentage allocations on the IO bandwidth/latency of each database.

When you run

./saturate 4 dbm1 2 exadb1
it will create 4 sessions on dbm1 and 2 sessions on exadb1, all doing parallel SELECTs, and it outputs a log file for each database session. Each session log file has details on the start and end time, elapsed time, and MB/s, which is pretty much everything you need to know to quantify the performance from a session-level perspective. You'll appreciate this session-level output and be impressed by what IORM can do when you start investigating IO prioritization, as you see sessions from one database getting higher MB/s and lower elapsed times than the other while you play with different IORM scenarios.
cat *log | grep benchmark
Sample output below:
 benchmark ,instance       ,start            ,end              ,elapsed   ,MBs
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    537
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:30,        62,    554
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:32,        63,    545
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    537
 benchmark ,exadb1         ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    539
 benchmark ,exadb1         ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    539
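From those per-session lines you can get quick aggregates with awk. A small self-contained illustration; the sample lines are copied from the output above, and field 6 of the comma-separated record is the MB/s:

```shell
# average MB/s per instance from the grep'd benchmark lines
cat > /tmp/bench_sample.txt <<'EOF'
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    537
 benchmark ,dbm1           ,05/13/12 19:18:28,05/13/12 19:19:30,        62,    554
 benchmark ,exadb1         ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    539
 benchmark ,exadb1         ,05/13/12 19:18:28,05/13/12 19:19:32,        64,    539
EOF
awk -F',' '{ sum[$2] += $6; n[$2]++ }
           END { for (i in sum) printf "%s %.1f\n", i, sum[i]/n[i] }' /tmp/bench_sample.txt
```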
So the output of smartscanloop gives the high-level IO numbers across the cluster, and the log files are your session-level detail. Below is the simple output, which just shows the Smart Scan MB/s per database

This AWESOME text graph is similar to what you see in the Enterprise Manager performance page IO tab. Note that you'll be seeing higher MB/s numbers in smartscanloop compared to EM because of the more fine-grained interval (2secs), which is also the same behavior you get when you measure IO latency, as I explained here (avg latency issue).

Then I modified the script to have the advanced output that shows the hierarchy of Exadata IO. See the updated README for more details on how to use it. Below is the output

What's good about this is that the numbers are about the same when you do a per-10secs snapshot of AWR. Compare the AAS and latency (avgwt ms) columns of the image above and below

Again, this is pretty useful for monitoring the high-level Smart Scan IO that's happening across your Exadata cluster. If you are in an environment where there's separation of duties, you can even hand off this script to the sysadmins who are monitoring the storage cells with their home-grown alerting scripts, kSar, or Nagios, so it will serve as their view of the database side of things.

And if any of your clients haven't adopted IORM, this is very useful for demos at customer sites to showcase IORM capabilities. And if you don't want to show the latency and other columns, you can opt to just use the simple output which only shows the Smart Scan MB/s (see get_smartscan.simple in the README). Most of the time, the simpler the output the easier it is for the users to understand.

Sweet! right?!?

Wait.. Does the toolkit work on non-Exadata DBs?

The SECTION 1 works on Exa and non-Exa (I’ve also mentioned this at the beginning of this post)

SECTION 2 works well on Exa because the underlying scripts just make use of dcli commands, some shell and SQL. But if you are interested in having the smartscanloop output in a non-Exa environment, I have the script get_smartscan.nonexa in the toolkit that shows "physical read total bytes" instead of "cell physical IO bytes eligible for predicate offload".

Then on each node do this, if it's a non-Exa RAC:

$ while : ; do ./get_smartscan.nonexa | grep "%" ; echo "--" ; sleep 2 ; done
%,05/12/12 12:57:33,DEMO1     ,      0,
%,05/12/12 12:57:35,dbm1      ,      0,
%,05/12/12 12:57:38,DBFS1     ,      0,
%,05/12/12 12:57:40,exadb1    ,      0,
%,05/12/12 12:57:42,PRIMARY1  ,      0,
%,05/12/12 12:57:46,DEMO1     ,      1,
%,05/12/12 12:57:48,dbm1      ,   2078, @@@@@@@@@@@@@@@@@@@@@
%,05/12/12 12:57:50,DBFS1     ,      0,
%,05/12/12 12:57:52,exadb1    ,    972, @@@@@@@@@@
%,05/12/12 12:57:55,PRIMARY1  ,      0,
%,05/12/12 12:57:59,DEMO1     ,      0,
%,05/12/12 12:58:01,dbm1      ,   1088, @@@@@@@@@@@
%,05/12/12 12:58:03,DBFS1     ,      0,
%,05/12/12 12:58:05,exadb1    ,   2356, @@@@@@@@@@@@@@@@@@@@@@@@
%,05/12/12 12:58:07,PRIMARY1  ,      0,
%,05/12/12 12:58:11,DEMO1     ,      0,
%,05/12/12 12:58:14,dbm1      ,   1000, @@@@@@@@@@
%,05/12/12 12:58:16,DBFS1     ,      0,
%,05/12/12 12:58:18,exadb1    ,   2336, @@@@@@@@@@@@@@@@@@@@@@@
%,05/12/12 12:58:20,PRIMARY1  ,      0,
%,05/12/12 12:58:24,DEMO1     ,      0,
%,05/12/12 12:58:26,dbm1      ,   1031, @@@@@@@@@@
%,05/12/12 12:58:29,DBFS1     ,      0,
%,05/12/12 12:58:31,exadb1    ,   2071, @@@@@@@@@@@@@@@@@@@@@
%,05/12/12 12:58:33,PRIMARY1  ,      0,

But wait! There's more!

Since the output is in comma-separated format, you can just do the following
./smartscanloop > smartscanloop.txt
and leave it running. You can do this when you want to characterize the IO of a batch run or reporting across multiple databases; the text file that gets generated is pretty small, and it doesn't hurt in terms of CPU/IO resources because it's just doing distributed SSH and some SELECTs on V$ views.
Then from the text file you can easily see the highest MB/s by executing the command below

$ less smartscanloop.txt | sort -nk5 | tail
pd01db03: %,05/10/12 21:29:28,biprd2    ,   7358, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/11/12 08:57:12,biprd2    ,   7363, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 14:55:54,biprd2    ,   7454, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/11/12 09:29:57,biprd2    ,   7556, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 18:17:50,biprd2    ,   7785, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 19:12:19,biprd2    ,   7880, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/11/12 10:31:24,biprd2    ,   7886, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 19:15:48,biprd2    ,   8112, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/11/12 10:31:40,biprd2    ,   8138, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 18:49:46,biprd2    ,   9315, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

And since it's a CSV file you can then easily graph it in Tableau! (Just put a header on it first.)

host       , time            , inst     , smartscan,
pd01db01: %,05/10/12 13:08:14,hcmprd1   ,      0,
pd01db01: %,05/10/12 13:08:16,paprd1    ,      0,
pd01db01: %,05/10/12 13:08:18,rmprd1    ,      0,
pd01db01: %,05/10/12 13:08:20,lmprd1    ,      0,
pd01db01: %,05/10/12 13:08:22,DBFSPRD1  ,      0,
pd01db01: %,05/10/12 13:08:24,hcrprd1   ,      0,
pd01db02: %,05/10/12 13:08:14,hcrprd2   ,      0,
pd01db02: %,05/10/12 13:08:16,rmprd2    ,      0,
pd01db02: %,05/10/12 13:08:18,paprd2    ,      0,
pd01db02: %,05/10/12 13:08:20,hcmprd2   ,      0,
pd01db02: %,05/10/12 13:08:22,DBFSPRD2  ,      0,
pd01db02: %,05/10/12 13:08:24,lmprd2    ,      0,
pd01db03: %,05/10/12 13:08:14,biprd2    ,   3412, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
pd01db03: %,05/10/12 13:08:16,DBFSPRD3  ,      0,
pd01db04: %,05/10/12 13:08:14,DBFSPRD4  ,      0,
pd01db04: %,05/10/12 13:08:16,mvwprd2   ,      0,
pd01db04: %,05/10/12 13:08:18,fsprd2    ,    763, @@@@@@@@
pd01db04: %,05/10/12 13:08:20,mtaprd112 ,      0,
... output snipped ...

and hack the data! Remember, in the advanced output you can slice and dice it by host, time, inst, aas, latency, returned, ic, smartscan.. :)
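Since it is just CSV, the shell alone is enough for quick slicing. A tiny self-contained illustration with made-up sample lines in the same format (the fourth comma-separated field is the Smart Scan MB/s):

```shell
# made-up sample in the smartscanloop output format
cat > /tmp/smartscan_sample.txt <<'EOF'
pd01db01: %,05/10/12 13:08:14,hcmprd1   ,      0,
pd01db04: %,05/10/12 13:08:18,fsprd2    ,    763, @@@@@@@@
pd01db03: %,05/10/12 13:08:14,biprd2    ,   3412, @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
EOF

# sort numerically on the MB/s column and show the busiest sample
sort -t',' -nk4 /tmp/smartscan_sample.txt | tail -1
```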

Here’s the IO graph per instance

and IO graph per instance and host

Hope I've shared some good stuff with you ;)

MOATS-like sqlplus “top” utility for RAC


So, you think MOATS was cool?! Check this out by Jagjeet Singh :)


Oracle E-Learning Resources

There is a lot of new stuff out there nowadays regarding products and/or functionality. This is also true for Oracle, and in short it's not easy to follow all those innovations and/or new products. So from time to time I watch stuff that interests me on the YouTube Oracle E-Learning channel.

Read More…


Here's a final warning of the busy time I'm having next week. It starts with the E4 conference in Dallas (full agenda here); my topic is "Due Diligence examining Exadata". Then it's on to Minneapolis where I'll be doing a one-day event on finding and reading execution plans (Agenda and Registration). After that I head out to San Francisco for the NoCOUG summer conference (my keynote is on Creating Test Cases, followed by a breakout session on stats collection in 11g, where I'll be describing the approximate NDV and the benefits of column groups). Finally I'll be ending the week with another one-day session on Beating the Oracle Optimizer (Agenda and Registration).

Oracle’s Timeline, Copious Benchmarks And Internal Deployments Prove Exadata Is The World's First (Best?) OLTP Machine – Part II

There Is No Such Thing As “Pure OLTP”
There is no such thing as “pure OLTP.” How true! And that’s why you are supposed to buy Exadata for your Oracle OLTP/ERP deployment—at least that’s what I’ve heard.

Part I of this series on the topic of Oracle OLTP/ERP on Exadata Database Machine has brought quite a bit of feedback my way.  Most of the feedback came from independent consultants who have built a practice around Exadata. I did, however, hear from an Oracle customer that has chosen to migrate their Oracle Database 10g ERP system from a cluster of old AMD 2300 “Barcelona” Opteron-based servers (attached to a circa 2007 technology SAN) to Exadata. This customer also cited the fact that there is no such thing as “pure OLTP” and since it is a fact I don’t refute it.

No Such Thing As “Pure OLTP” – What Does That Mean
Oracle-based OLTP/ERP systems generally have an amount of batch processing and reporting that takes place in support of the application. That’s true. Batch processing and reporting must surely require massive I/O bandwidth and, indeed, massive I/O would naturally benefit from Exadata offload processing. That is how the sales pitch goes.

I won’t argue for a moment that Exadata offers significant I/O bandwidth. There is 3.2 GB/s of realizable storage bandwidth (Infiniband) for data flow to/from each server in an Exadata configuration. That’s roughly equivalent to 2 active 16GFC HBA ports. It’s a lot. However, since I’ve just spelled out the conventional storage connectivity required to match the 3.2 GB/s, the question boils down to whether or not the Exadata storage offload processing feature (Smart Scan) adds value to the type of reporting and batch activity common in Oracle OLTP/ERP environments.  I can’t prove a negative in this regard but I can say this:

Batch / reporting queries are not candidates for improvement by Smart Scan technology unless the plans are access method full (table or index fast full scan)

Most batch processing I've seen is quite compute-intensive as well as index-based (and not full scan). But, as I said, I cannot prove a negative. So, (pun intended) I'll stop being negative. I'd like instead to defer to the positive proof Oracle offers on this topic: Oracle's own benchmarks, designed to prove platform suitability for Oracle E-Business Suite. The benchmark I have in mind is the Oracle E-Business Suite 12.1.3 Standard Extra-Large Payroll (Batch) Benchmark.

Oracle’s own description of the workload speaks volumes:

“Extra-Large”, “Batch”, Extremely Useful In Sizing And Capacity Planning
On April 10 2012, Oracle put out a press release highlighting the Sun Fire X4270 M3 payroll batch benchmark result. The press release made the following point about the result and the workload (emphasis added by me):

The Oracle E-Business Suite R12.1.3: Oracle’s Sun Fire X4270 M3 server posted the fastest results on the Payroll batch component of the Oracle E-Business Suite R12 X-large benchmark, completing the workload in less than 20 minutes. This result demonstrates that Oracle’s x86-based servers, running Oracle Linux, can deliver excellent throughput and are well suited for customers running batch applications in conjunction with Oracle Database 11g R2 (5).

I need to quickly point out two things. First, Oracle has an entire suite of their own benchmarks, yet never has there been a published result for Exadata. I know, that is old news and seemingly uninteresting to Oracle customers considering Exadata. Second, I highlighted "fastest results" because it just turns out that this result is in fact the first posted result with this version of the benchmark:

I know, I’m being petty, right? Why would anyone insist on OLTP/ERP benchmarks when considering a platform (Exadata) optimized for DW/BI workloads? I know, I’m sorry, petty again.

So What Is The Point?
If a single Sun Fire X4270 M3 attached to non-Exadata storage can achieve excellent results with an Oracle-defined E-Business Suite batch benchmark, don’t we have proof that batch processing on x86 without Exadata is nothing to fear? If Exadata added significant value (the sort that helps one absorb the sticker shock) to batch processing, wouldn’t there be published results?

Has Oracle simply not had enough time to publish Exadata benchmark results? Not even enough time given the fact that the benchmarks I speak of are their own benchmark specifications? I can answer those questions—but I won’t. Instead, I’ll focus on some of the particulars of the 4270 M3 result that would actually make it a very difficult workload for even a half-rack Exadata Database Machine X2-2!

The following table comes from Oracle’s full report on the batch benchmark result:

This table shows us a difficult profile–a batch processing profile. A batch profile that warrants the term “Extra-Large” in the name of the benchmark. Please notice that the peak write IOPS is 14,772. A half-rack Exadata Database Machine (X2) has the (datasheet) capacity for 12,500 random mirrored writes per second (WIOPS) thus 14,772 is more WIOPS than a half-rack Exadata can sustain. But what about all those extra processors an Exadata half-rack would offer over this server? Indeed, this was a 2s16c32t Xeon E5-2600 (Sandy Bridge) single server result. A half-rack Exadata (X2-2) has 48 Xeon 5600 cores. Surely the Sandy Bridge 2S server was totally out of gas, right?  No. The full report for the benchmark includes processor utilization:

There is no such thing as “pure OLTP.” Oracle has proven that fact with the Payroll Batch benchmark. Oracle has further proven that a single 2s16c32t E5-2600 server is capable of achieving a world record result on their own benchmark (“Extra Large”) and that particular achievement was possible without Exadata. In fact, it was possible without even saturating the single server E5-2600 CPUs–but, hey, at least the WIOPS demand was higher than a half-rack Exadata Database Machine X2 can sustain!
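Using only the two figures quoted above (14,772 peak write IOPS from the benchmark report against the 12,500 datasheet mirrored WIOPS of a half-rack X2), the comparison is a one-liner. This is my own illustrative arithmetic, not anything from the benchmark kit:

```python
# Compare the Payroll batch benchmark's peak write IOPS demand with
# the datasheet mirrored-write capacity of a half-rack Exadata X2
# (both figures as quoted in the text above).
peak_benchmark_wiops = 14_772  # peak write IOPS observed in the benchmark
half_rack_x2_wiops = 12_500    # datasheet mirrored WIOPS, half-rack X2

shortfall = peak_benchmark_wiops - half_rack_x2_wiops
print(f"Demand exceeds half-rack capacity by {shortfall} WIOPS")  # -> 2272
```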

You need Exadata to handle the batch requirements for modern E-Business Suite?

You spend a lot on Oracle Database, Applications and support. Spend wisely on the platform.

Filed under: oracle

Death of the Enterprise Architect?

Last week I read an interesting article about how cloud computing is changing the role of the enterprise architect and it got me thinking about the bad rap many architects are getting in the brave new agile, cloud, big data world.

From what I’ve been reading, there’s been a bit of a straw man argument going on — enterprise architects are often described as uber-control freaks who attempt to dictate software architectures in a repressive way to implementation teams. Mention the term “reference architecture” and you’ll often raise the hackles of the new developer-led world.

To be sure, there are many enterprise architects who match that description. And they’re the ones who give architects a bad name, just like undisciplined developers can give agile a bad rap too.

I tend to agree with the article in many respects. The idea that software architectures can be fully contained and described in a prescriptive way, one that potentially limits, or raises the cost of, software that adds value to the business, really isn’t good. To me, it almost always comes back to the value question: does the architect add net positive business value?

One of the best roles I’ve seen for enterprise architects is in attention to technical debt — measuring the growth of it, assisting agile teams in ways to address emerging technical debt, and generally ensuring that the business isn’t accumulating potentially crippling amounts of it. To me, the companies which have these kinds of architects never have to “stop all development” in order to do vendor-software upgrades, re-platforming, or huge re-write / re-factoring efforts.

Again, I don’t think this is about the enterprise architect defining an architecture and enforcing it — it’s a lot more about being an architectural scout: staying out in front of the agile development teams to bring back ideas and help them choose concepts which keep the system flexible in the face of change. Not in defining the ultimate generic architecture — we’ve been down that path before with SOA, “message-bus”, integration servers and broker architectures.

I hope you’re lucky enough to work with such an enterprise architect.

Recursive Subqueries for FizzBuzz

So in yesterday’s post I mentioned that I wasn’t happy with my solution. Among the reasons were the limit to the first 100 numbers and the fact that I hadn’t used what I think is a more elegant approach: recursive subqueries.

Also, I received a comment about problems with the MarkDown plugin — which was also affecting my code posting, so I disabled MarkDown and can now handle code posting much, much better.

I couldn’t let it rest, so I dug into the recursive sub-queries with a vengeance and wrote up something more elegant.

with
fb0 (input) as (
select      'buzz,fizz,fizzbuzz,fizz,buzz,fizz,fizz' input
from        dual
),
fb2 (n,pos,n_start,n_str) as (
select      /* Anchor query of root nodes for recursion */
            fb1.column_value n,
            1 pos,
            fb1.column_value n_start,
            to_char(fb1.column_value) n_str
from        fb0,
            table(sys.odcinumberlist(3,5,6,9,10,12,15)) fb1
where       /* Limit root nodes to ones that match the first word of input */
            case regexp_substr(fb0.input,'\w+')
              when 'fizz' then mod(fb1.column_value,3)
              when 'buzz' then mod(fb1.column_value,5)
              when 'fizzbuzz' then mod(fb1.column_value,15)
            end = 0
union all
select      /* Build "child" nodes by walking fizzbuzz matching to input string */
            case
              when mod(fb2.n,15) in (5,9) then fb2.n+1
              when mod(fb2.n,15) in (3,10) then fb2.n+2
              when mod(fb2.n,15) in (0,6,12) then fb2.n+3
            end,
            fb2.pos + 1,
            fb2.n_start,
            fb2.n_str || ',' ||
            case
              when mod(fb2.n,15) in (5,9) then fb2.n+1
              when mod(fb2.n,15) in (3,10) then fb2.n+2
              when mod(fb2.n,15) in (0,6,12) then fb2.n+3
            end
from        fb2,
            fb0
where       case
              when mod(fb2.n,15) in (0,5,6,10) then 'fizz'
              when mod(fb2.n,15) in (3,9) then 'buzz'
              when mod(fb2.n,15) in (12) then 'fizzbuzz'
            end =
            regexp_substr(fb0.input,'\w+',1,fb2.pos + 1)
),
fb3 (input, n_str, n_length, n_start, min_length, min_start) as (
select      /* Measure lengths of resulting matches, discard ones that are not long enough */
            fb0.input,
            fb2.n_str,
            fb2.n - fb2.n_start n_length,
            fb2.n_start,
            min(fb2.n - fb2.n_start) over () min_length,
            min(fb2.n_start) over (partition by fb2.n - fb2.n_start) min_start
from        fb2,
            fb0
where       fb2.pos = nvl(length(regexp_replace(fb0.input,'\w')),0)+1
),
fb4 (input, n_str) as (
select      /* Retain the matches which are shortest and start at the smallest value */
            fb3.input,
            fb3.n_str
from        fb3
where       fb3.n_length = fb3.min_length
and         fb3.n_start = fb3.min_start
)
select      /* Display output */
            fb4.input,
            fb4.n_str
from        fb4;
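For anyone who wants to sanity-check the SQL’s logic outside the database, here is a small Python sketch of the same idea. This is my own illustration, not part of the original post, and it uses simple brute force rather than recursion: render the fizz/buzz words for consecutive multiples of 3 and 5 from each candidate start, and return the first (smallest) start whose rendering matches the input.

```python
def fizzbuzz_word(n):
    """Fizzbuzz rendering for a multiple of 3 and/or 5; None otherwise."""
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return None

def find_start(words, search_limit=1000):
    """Smallest run of consecutive fizz/buzz multiples rendering as `words`."""
    for start in range(1, search_limit):
        if fizzbuzz_word(start) != words[0]:
            continue
        seq, n = [start], start
        while len(seq) < len(words):
            n += 1
            if fizzbuzz_word(n):          # next multiple of 3 or 5
                seq.append(n)
        if [fizzbuzz_word(m) for m in seq] == words:
            return seq                    # first hit = smallest start
    return None

print(find_start("buzz,fizz,fizzbuzz,fizz,buzz,fizz,fizz".split(",")))
# -> [10, 12, 15, 18, 20, 21, 24]
```

The same number sequence is what the recursive query’s n_str column builds up as it walks the input words.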