Oakies Blog Aggregator

Loading XML documents into an Oracle database (2)

If only I could do the following…(but it seems that it isn’t supported yet)… That is…based on the first “Loading XML documents into an Oracle...

Loading XML documents into an Oracle database (1)

Very often people want to load XML documents into an Oracle database so, for example, they can shred the needed values from those XML documents...

When Storage is REALLY Fast Even Zero-Second Wait Events are Top 5. Disk File Operations I/O: The Mystery Wait Event.

The SLOB code that is generally available here differs significantly from what I often test with in labs. Recently I was contorting SLOB to hammer an EMC XtremIO All-Flash Array in a rather interesting way. Those of you in the ranks of the hundreds of SLOB experts out there will notice two things quickly in the following AWR snippet:

1)   Physical single block reads are being timed by the Oracle wait interface at 601 microseconds (3604/5995141 == .000601) and this is, naturally for SLOB, the top wait event.
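As a quick sanity check on that arithmetic, the average latency is simply total wait time divided by wait count; a minimal sketch:

```ruby
# Sanity check of the AWR numbers quoted above: average single block
# read latency is total wait seconds divided by the number of waits.
total_wait_seconds = 3604
wait_count         = 5_995_141

avg_latency_us = (total_wait_seconds.to_f / wait_count) * 1_000_000
puts avg_latency_us.round  # 601 microseconds
```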

2)   Disk file operations I/O is ranking as a top 5 timed event. This is not typical for SLOB.



The 601us latencies for XtremIO are certainly no surprise. After all, this particular EMC storage array is an All-Flash Array so there’s no opportunity for latency to suffer as is the case with alternatives such as flash-cache approaches. So what is this blog post about? It’s about Disk file operations I/O.

I needed to refresh my memory on what the Disk file operations I/O event was all about, so I naturally went to consult the Statistics Descriptions documentation. Unfortunately there was no mention of the wait event there, so I dug further and found it documented in the Description of Wait Events section of the Oracle Database 11g documentation, which states:

This event is used to wait for disk file operations (for example, open, close, seek, and resize). It is also used for miscellaneous I/O operations such as block dumps and password file accesses.

Egad. A wait is a blocking system call. Since open(2)/close(2) and seek(2) are non-blocking on normal files, I suppose I could have suffered a resize operation–but wait, this tablespace doesn’t allow autoextend. I suppose I really shouldn’t care that much given the fact that the sum total of wait time was zero seconds. But I wanted to understand more, so I sought information from the user community–a search that landed me happily at Kyle Hailey’s post here. Kyle’s post had some scripts that looked promising for providing more information about these waits, but unfortunately in my case the scripts returned no rows.
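For anyone wanting to poke at this event themselves, the wait interface exposes each event's parameter names through V$EVENT_NAME; a quick lookup (standard fixed-view columns, shown here only as a starting point for investigation):

```sql
SELECT name, parameter1, parameter2, parameter3, wait_class
  FROM v$event_name
 WHERE name = 'Disk file operations I/O';
```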

So, at this point, I’ll have to say that the sole value of this blog post is to point out that a) the Oracle documentation specifically covering statistics descriptions is not as complete as the Description of Wait Events section, and b) the elusive Disk file operations I/O wait event remains, well, elusive. This is now part I in a multi-part blog series until I learn more. I’ll set up some traces and see what’s going on. Perhaps Kyle will chime in.




Filed under: oracle

Planned Maintenance This Weekend!

Just a quick heads-up to you folks…

On Saturday I’ll be transferring my website and blog to a new server. It’s with the same hosting company, so hopefully the DNS changes will not be too drastic.

The site is already in place on the new box and it seems to be working fine, so on Saturday morning I will do a backup of the MySQL databases and transfer them to the new machine, then initiate the DNS change.

The last time I did this, which was between different hosting companies, there were some issues in some geographical regions for a couple of days, but most people were seeing the site as normal within about an hour.

Note: If you happen to post on the forum or comment on a blog post during Saturday, it might be lost depending on the timing. Also, my email address may go wonky for a little while…

Fingers crossed… :)




Planned Maintenance This Weekend! was first posted on July 18, 2014 at 9:14 am.

AWR Warehouse Webinar from ODTUG

The webinar is over, but you haven’t missed out on everything I presented on the console feature, under the hood and behind the scenes!

You can access the slides from today’s presentation, as I’ve uploaded them to slideshare and the scripts are easy to locate on the scripts page here on DBAKevlar.

Thanks to everyone who attended and a big thanks to GP for doing the introduction and ODTUG for hosting us! :)






Copyright © DBA Kevlar [AWR Warehouse Webinar from ODTUG], All Rights Reserved. 2014.

I/O benchmarking chat with Uday Vallamsetty from Delphix

Uday Vallamsetty from the Delphix performance group just posted a great blog post on evaluating I/O performance in Amazon AWS with EBS. I had a chance to talk with him a bit about I/O benchmarking, some of its surprises and challenges, and the importance of producing a report card on any I/O subsystem one is using.

Interesting SLOB Use Cases – Part I. Studying ZFS Fragmentation. Introducing Bart Sjerps.

This is the first installment in a series of posts I’m launching to share interesting use cases for SLOB. I have several installments teed up but to put a spin on things I’m going to hit two birds with one stone in this installment. The first bird I’ll hit is to introduce a friend and colleague, Bart Sjerps, who I just added to my blogroll. The other bird in my cross-hairs is this interesting post Bart wrote some time back that covers a study of ZFS fragmentation using SLOB.

Bart Sjerps on ZFS Fragmentation. A SLOB study.

As always, please visit the SLOB Resources Page for SLOB kit and documentation.


Filed under: Silly Little Oracle Benchmark, SLOB, ZFS, ZFS Performance

Delivering DBaaS using EM

One of the jobs I’m currently working on is writing a technical whitepaper on Delivering DBaaS using EM12c. I realized as I was starting it that a technical whitepaper really isn’t the place for a step-by-step how-to sort of description, so I thought I’d better blog about it instead. Actually, I’m going to write a […]

Cross-platform transportable tablespaces - part 1

There are a couple of ways to do a heterogeneous migration of an Oracle database, but starting with 12c there is a whole set of new RMAN commands to transport data across different platforms.
I was looking for the best method to move tablespaces from HP-UX to Linux, and after some research I found this presentation by Martin Bach from Enkitec (you can watch it online here). Martin describes an Oracle Perl script (MOS ID 1389592.1) which allows you to convert tablespaces on an 11g database, including Exadata. At first sight it looked like a solution for me, but to use that script I would need to create a new backup of all the tablespaces I wanted to move. That could be an option, but I already had a daily updated copy of all data files in the FRA. So in the next step I started to investigate how the script works and how to convert a backup set from HP-UX to Linux and apply an incremental backup to files not registered in the database. The solution was easy to predict: use the PL/SQL RMAN interface, the DBMS_BACKUP_RESTORE package.

It was not the first time I had looked into it, and it reminded me of an Oracle 8i database with a corrupted control file and no RMAN catalog which had to be recovered. For those who forget, RMAN in Oracle 8i had no functionality to re-catalog backup pieces, so you had to treat all control files and the RMAN catalog with extra care. But using knowledge of the file and backup names, PL/SQL and DBMS_BACKUP_RESTORE, it was possible to restore everything manually.

But let's come back to the current problem. I was keen to use PL/SQL, but before that I decided to check what Oracle introduced new in 12c, and there was a nice surprise: all the operations performed by the Oracle Perl script are now possible from the RMAN interface. So as the next step I decided to do a little test with existing copies of the data files from the smallest tablespace, called 'USERS'.
This small test was successful, and now I need to document and describe all the steps; that is material for a next post. It works a little better than the Oracle script, as there is no need to convert the backup set: RECOVER can apply it and do the conversion on the fly.

New syntax to learn and investigate more:

RMAN> recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy 'C:\TEMP\USERS01.DBF' from backupset 'c:\temp\inc15_2.bkp';

Starting restore at 15-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file C:\TEMP\USERS01.DBF
channel ORA_DISK_1: reading from backup piece c:\temp\inc15_2.bkp
channel ORA_DISK_1: foreign piece handle=C:\TEMP\INC15_2.BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 15-JUL-14


DevOps & Delphix : Chef recipes

Delphix Engines expose all features via a stable WEB API built on top of HTTP and JSON.

Clients choose an HTTP client to interact with Delphix and integrate within their environment.

Delphix Engines are bundled with a command line interface which guides users and supports automation and integration with third-party tools.

Delphix CLI example

Adding a SQL Server Source Environment:

Enter these commands through the command line interface:


   set type=HostEnvironmentCreateParameters;

   set hostEnvironment.type=WindowsHostEnvironment;
   set hostEnvironment.proxy=;

   set hostParameters.type=WindowsHostCreateParameters;

   set primaryUser.credential.type=PasswordCredential;
   set primaryUser.credential.password=;




Delphix Curl API example

Example of refreshing a virtual database back to the newest version of the source database

  curl -v -X POST -k --data @- http://delphix-server/resources/json/delphix/database/ORACLE_DB_CONTAINE...  \
    -b ~/cookies.txt -H "Content-Type: application/json" <
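The same call can be made from a scripting language. Below is a minimal Ruby sketch of the JSON body such a refresh request might carry; note that "OracleRefreshParameters" and the timeflow point fields are assumptions modelled on the API's type-name convention seen above, not taken from the Delphix documentation:

```ruby
require 'json'

# Hypothetical refresh payload; the type names are assumptions based on
# the Delphix JSON API naming convention, not confirmed against the docs.
def refresh_payload
  {
    :type => "OracleRefreshParameters",
    :timeflowPointParameters => {
      :type     => "TimeflowPointSemantic",
      :location => "LATEST_SNAPSHOT"
    }
  }
end

puts JSON.generate(refresh_payload)
```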



 Chef as a database provisioning tool

Chef is an automation platform for provisioning physical or virtual environments to a specific state in a controlled and repeatable way.

Chef can install binaries, manage users and apply configuration.

Chef as a database provisioning tool:

  • Install binaries of the DBMS
  • Configure the DBMS
  • Provision data

Installing the binaries and configuring the database might be standard work for Chef, but actually provisioning the data, such as a copy of a source database, is generally out of the purview of Chef, at least until now. Now, with Delphix, provisioning the data, no matter the size, can be done in minutes with a few API calls.

With Chef, one usually does not describe actions to take (“provision a VDB”) but the desired state of the system (“there must be a VDB running on that port”). This allows rules to be applied to heterogeneous systems in different states, with the actual state continuously compared against the desired state of the system (for instance, if a new server is added after the fact, Chef will notice that it needs to provision a VDB to get to the desired state). This is achieved through the separation of resource providers and recipes. The resource provider is the “library” which knows about the low-level details of the implementation (is there a VDB running? how do I connect to a Delphix Engine? which steps need to be taken to go from the current state to a VDB being provisioned?). The “recipe” describes the state, leveraging the resource providers.
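The desired-state idea can be boiled down to a few lines. A toy Ruby sketch (illustrative names only, not the Chef API): act only when the current state differs from the desired one, so re-running the same rule is a no-op.

```ruby
# Toy illustration of desired-state convergence: provision only when
# the desired VDB is not already present, otherwise do nothing.
def converge(current_vdbs, desired_vdb)
  current_vdbs.include?(desired_vdb) ? :nothing_to_do : :provision
end

puts converge(["CRM"], "HR")        # provision
puts converge(["CRM", "HR"], "HR")  # nothing_to_do
```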


Chef: Provisioning data from Delphix

Chef cookbooks (recipes) can use the Delphix Engine API to provision data.


Chef recipes describe the desired state of the system.

Chef & Delphix example to provision a virtual database

In the following, “dlpx_pgsql_vdb” is the block in the recipe which indicates the state of the system. It means “we want a PostgreSQL (pgsql) VDB running”.

dlpx_pgsql_vdb "HR" do
        action :provision
        port node[:dlpx][:port] || 5433
        container "HR"
        delphix_server node[:dlpx][:delphix_server]
end
In this code the details mean the following:
dlpx_pgsql_vdb "HR" do
We are describing the desired state of a PostgreSQL Delphix VDB, and the Chef name for that is "HR".
action :provision
We want the VDB to be provisioned (as opposed to deleted or refreshed daily, for instance).
port node[:dlpx][:port] || 5433
The port is read from the host configuration and defaults to 5433 (the recipes could be shared by many hosts or servers, each with their own config).
container "HR"
We want Delphix to name that VDB "HR".
delphix_server node[:dlpx][:delphix_server]
Look at the configuration for the hostname and credentials of the Delphix Engine.
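The node[:dlpx] lookups above imply host-level configuration. A hypothetical Chef attributes file supplying them might look like this (attribute names mirror the recipe; the values are placeholders):

```ruby
# attributes/default.rb (hypothetical): per-host settings read by the recipe
default[:dlpx][:port]           = 5433
default[:dlpx][:delphix_server] = "delphix.example.com"
```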

Chef Providers build the library of utilities which can be leveraged in recipes. A Delphix Chef Provider can be built on top of the HTTP API.

def provision(group, mount_base, name)
    # POST the provision call to the Delphix HTTP API (assuming the
    # rest-client gem; @url and @cookies are set at login)
    response = RestClient.post("#{@url}/database/provision",
      {
        :type => "OracleProvisionParameters",
        :container => {
          :type  => "OracleDatabaseContainer",
          :name  => name,
          :group => group
        },
        :source => {
          :type      => "OracleVirtualSource",
          :mountBase => mount_base
        }
      }.to_json,
      :content_type => 'application/json',
      :cookies => @cookies)
end

In this example:

  • source – hard-coded as OracleVirtualSource
  • group – the Delphix user group
  • mount_base – the NFS mount location on the target
  • name – the new VDB name
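As a self-contained illustration, the request body built in the provider can be factored into a plain method and inspected without an engine at hand. The field names come from the example above; the argument values here are placeholders:

```ruby
require 'json'

# Build the provision request body as a plain hash; field names are
# taken from the provider example, the values are illustrative.
def provision_payload(group, mount_base, name)
  {
    :type => "OracleProvisionParameters",
    :container => {
      :type  => "OracleDatabaseContainer",
      :name  => name,
      :group => group
    },
    :source => {
      :type      => "OracleVirtualSource",
      :mountBase => mount_base
    }
  }
end

puts JSON.generate(provision_payload("Sources", "/u01/mnt", "HR"))
```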