Oakies Blog Aggregator

Friday Philosophy – At What Point Can You Claim a Skill?

I’ve just installed Oracle 12C on my laptop {I know, why only now?}. I went for the option to have a Container database with a pluggable database within it. {It is easy and free to install Oracle on your own home machine – so long as it is for personal use only and you are signed up to OTN (which is also free)}.

12C with pluggable databases (PDBs) is a little different to the last few versions of Oracle as it introduces this whole concept of the Container database that holds portions of the data dictionary and, within that, what we used to think of as Oracle instances plugged in underneath it. It is not *quite* like that – but this post is not about the technical aspects of Oracle 12C multitenant databases. And you will see why.

Whenever something I know well has changed more than a bit, I tend to hit this wall of “Whoa! it’s all changed!”. It isn’t all changed, but sometimes some of the fundamentals, the basics are different. For the last 15 years, once I have my database up and running I will have created my test users and some objects within 10 minutes and be playing away. Not this time. How do you create a user in a multi-tenant DB? How do I tell Oracle to create my users in that PDB? Hang on, how do I even check what I called my PDB? My ignorance is huge.

I popped over to Tim Hall’s site, ORACLE-BASE, and the section on creating users under multi-tenant Oracle, and scanned Bryn Llewellyn’s white paper on it. A few Google searches as well and soon I was there. My standard test to make sure the DB is alive is “select sysdate from dual” – only I altered it to show the PDB:

select SYSDATE from Dual
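For anyone hitting the same wall, here is a minimal sketch of the steps I eventually pieced together (all the names here are examples, not the real ones from my install):

```sql
-- Connect to the root container as a privileged user, find the PDB,
-- then switch into it and create an ordinary (local) user there.
-- PDB name and user name below are illustrative.
conn / as sysdba
select name, open_mode from v$pdbs;     -- hang on, what did I call my PDB?
alter session set container = MY_PDB;   -- point my session at the PDB
show con_name                           -- confirm where I am
create user mdw identified by mdw
  default tablespace users quota unlimited on users;
grant create session, create table to mdw;
-- and the classic "is it alive" check, extended to show the container:
select sysdate, sys_context('userenv', 'con_name') from dual;
```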

So I am logged into my working PDB on 12C, I have selected sysdate from DUAL and created my new user. I have used Oracle 12C and multitenant.

Next step?

Update CV to claim 12C expert and experience of Multi-tenant Oracle Database

This is of course a joke on my part.

Sadly, some people would actually do this.

It is something that has always annoyed me and often seems rife in the I.T. industry – people claiming skills or even expertise in something they have barely touched, let alone understood. And often they are about a thousand miles away from any legitimate claim to “expert”. I chortle whenever I see a CV from someone with only 2 or 3 years’ experience of Oracle who lists 20 areas they are expert in. Before I throw the CV in the bin.

Maybe part of the issue is that I.T. moves so fast and people feel they need to be seen to be on top of the changes to be worth employing or being listened to. Well, it’s nice to be leading edge – for much of my career I’ve been lucky enough to be exposed to the latest version of Oracle either as soon as it is out or even before (beta programs). But much more important is to have some integrity. Claiming to be an expert when you are not is incredibly dangerous as anyone who really does know the subject is going to suss you out in no time at all. And you will be exposed as a fraud and a liar. Gaining any respect after that is going to be really hard work, and so it should be.

Sadly, you do get the situation where people get away with this sort of deceit, usually by managing to deceive non-technical management but annoying the real technicians around them. Many of us have suffered from this.

This issue of claiming a skill before you have it was very common with Exadata when it came out. Lots of people, it seemed, read the white papers, looked at some blogs and maybe saw a couple of talks – and then started talking to people about Exadata as though they knew it inside out. I actually saw a “professional” presentation like this at a conference, on Exadata, where it was soon clear that the presenter had probably never got as far as “select sysdate from dual;” on an Exadata box (not that there is any difference for that statement :-) ). I could not help but interrupt and query a statement that was utterly untrue, and at that point the presenter checked his “facts” with a more senior member of his company in the crowd. To his shame, the senior member of staff repeated the error of claiming knowledge he also did not have, to back the presenter up. Every time I come across that company now, I think of that.

So when can you claim a skill? If you look at my screen shot you will see that I failed to actually log into my PDB database with my new user – #fail. Of course I can’t claim these skills based on reading some information, seeing some talks and all of an hour’s practical experience.

I think you can only claim a skill once you can tell for sure if someone else also has that skill. Or more significantly, tell when they are claiming a skill they lack. Personally, I tend towards not claiming a skill if I doubt my abilities. Don’t worry, my huge ego balances that British self-doubt quite well :-)

I used to give introductory talks on Exadata as I got so tired of the poor information I saw being given on the subject. Also, all the best talks were soon about the details of smart scans, the storage cells and patching. Not much for newbies. Interestingly, even as an intro talk, most times I did the talk I learnt something new in discussions at or after the talk. But I’ve retired that talk now. Why? Well Exadata has moved forward 2 versions since I last used it and 3 since I used it in anger. I could no longer tell you if something someone claimed for V5 of Exadata was true or not. So I am no longer skilled in Exadata.

Only claim skills you have.
Distrust those who claim skills they lack.
Try to teach those who seek your skills – you will only get better for it.

Unique Oracle Security Trainings In York, England, September 2015

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page pdf flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Oracle Midlands : Event #10

Just a quick heads-up about the next Oracle Midlands event. It’s good to encourage new speakers, so Mike is giving this new, unknown kid a shot at the limelight. I hope you will all come along to show your support.




Oracle Midlands : Event #10 was first posted on June 25, 2015 at 9:47 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]

Why do people show Azure so much love?

The title of this post is taken from a tweet I saw a few weeks ago and it keeps coming back to haunt me, so I thought I would comment on it.

Let me start by saying I don’t have any context as to why the tweeter thought people were showing Azure so much love. From my perspective, I kind-of like Azure and I think it is what my employer will end up using, but I’m not a crazed fan-boy about it. :)

Also, I fully understand a move to the cloud is not the right thing for everyone, so this post is focused on those people who do want/need to move to the cloud. Just because it is not right for you, it doesn’t mean it’s not right for anyone. So when I’m talking about running services on the cloud, it is not a recommendation. I’m not telling you you’ve got to. I’m speaking about cloud services to try to explain why someone might say something like the title of this post. I’m hoping this paragraph will stem the hate-comments that invariably come when you mention the cloud. :)


The Azure interface is pretty neat. It’s clean and reasonably intuitive. I’m a casual user, so I can’t say how I would feel about it if I were managing hundreds or thousands of resources, but from my brief time with it, I like it.

I don’t dislike the AWS interface, but it does feel a bit more cluttered and ugly than the Azure interface. I guess that could be enough to put off some people maybe.


Coming from the Oracle world, we tend to think of UNIX/Linux as being the centre of the universe, but if I think back to the companies I’ve worked for over the years, the majority of their kit has been Windows-based, with the exception of the bits I work on. :) Since most corporate desktops are still Windows-based, Outlook, Office and Active Directory tend to rule the roost. If you are thinking of moving those services on to the cloud, Azure seems the “obvious choice”. Am I saying they are the best products and Azure is the best place to run them? No. What I’m saying is it will be seen as the “obvious choice” for many people wanting to move to the cloud.

The same goes with SQL Server. I happen to like the AWS RDS for SQL Server implementation, but I’m guessing a lot of SQL Server folks will get a warmer and fuzzier feeling about running SQL Server on Azure. Lots of decisions in IT are based on gut instinct or personal bias of the buyers, not necessarily fact. I can see how someone will “feel happier” there.

Once the Oracle Cloud becomes generally available, we may see a similar issue there. People may feel happier about running Oracle products on the Oracle Cloud than on AWS or Azure. Time will tell.

What’s under the hood?

This is where cloud really turns stuff on its head. If I want to run a Linux VM, I can do that on AWS, Azure, Oracle Cloud, VMware vCloud Air etc. From my perspective, if the VM stays up and gives me the performance I paid for, do I really care about what’s under the hood? You can be snobbish about hypervisors, but do I care if Oracle are using less hardware to service the same number of VMs as Azure? No. Where infrastructure as a service (IaaS) is concerned, it is all about the price:performance ratio. As I’ve heard many times, it’s a race for the bottom.

Call me naive, but I really don’t care what is happening under the hood of a cloud service, provided I get what I pay for. I think this is an important factor in how someone like Microsoft can go from zero to hero of the cloud world. If they provide the right services at the right price, people will come.


Q: Why do people show Azure so much love?

A: Because it does what it is meant to do. It provides the services certain companies want at a price they are willing to pay. What’s not to love?

Q: So it’s the best cloud provider right?

A: That depends on your judging criteria. No one cloud provider is “the best”. For some people Azure will be the best option. For others it might be the worst.



Why do people show Azure so much love? was first posted on June 25, 2015 at 7:56 am.

Good new SQL tuning book

People ask me from time to time what are some good SQL tuning books. It’s a tough question to answer. There are some seminal books such as “SQL Tuning” by Dan Tow and “Cost-Based Oracle Fundamentals” by Jonathan Lewis, but these are advanced books that few I know have actually read from cover to cover. If you are looking for practical SQL tuning cookbooks you might want something a little less advanced. For a practical approach, I love “Refactoring SQL Applications” by Stephane Faroult, which is an easy read; again, it’s not a straightforward SQL tuning book.

Finally there is a book that provides a pragmatic, down-to-earth approach to tuning SQL queries: “Oracle Performance Tuning and Optimization: It’s all about the Cardinalities” by Kevin Meade.

I’ve been meaning to put together a general review but as that’s a lot to tackle I thought I’d go through some of the material in the book chapter by chapter.

Chapter 1 lays out “Driving Table and Join Order”

Meade lays out the 4 parts of a query as

  1. driving table
  2. join order
  3. access method
  4. join method

This is a great way to look at query optimization. As I’ve written about before with Visual SQL Tuning, based on Dan Tow’s work, the join order is the most important part of query optimization and the first step in join order is finding the driving table. The goal is

“Remove as many rows as possible as early as possible in query execution”

Big question is: how do we remove as many rows as possible as early as possible in the execution? We do that by finding the best driving table. The best driving table is the table that has the predicate filter that eliminates the most rows. To find the best predicate filter we have to go through the query and find which tables have predicate filters, and then determine how many rows are returned after the predicate filter versus the rows in the table. Calculating these filter ratios can be a good bit of busy work. I like my busy work automated, so when I was at Embarcadero, I worked with our DB Optimizer tool team to do this automatically. It was pretty cool. Below is a diagram produced from a query that Jonathan Lewis put together for an article on how to analyze a query visually.



The blue numbers to the  bottom right of certain tables are the filter ratios. A low ratio means that a low percentage of the table is returned after applying the predicate filter.
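The filter ratio for a single table can be checked by hand with one pass over that table. A sketch, using the customers table and the LONDON predicate from the example query in this post:

```sql
-- Filter ratio = rows surviving the predicate / total rows, as a percentage.
-- One full scan gives both counts at once.
select count(case when location = 'LONDON' then 1 end) as filtered_rows,
       count(*)                                        as total_rows,
       round(100 * count(case when location = 'LONDON' then 1 end)
                 / nullif(count(*), 0), 1)             as filter_ratio_pct
from   customers;
```

Doing this for every filtered table in a big query is exactly the busy work Kevin’s script below automates.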

How do you calculate these predicate filter ratios in an automated way without DB Optimizer? One of my favorite things about Chapter 1 is a query that Kevin Meade wrote to automatically calculate predicate filter ratios.

For example if I run

SQL> explain plan for
-- (the join columns were mangled in this page's formatting and are
--  reconstructed here; the id/id_* names are assumed)
SELECT /*+ gather_plan_statistics */ order_line_data
FROM            customers cus
     INNER JOIN orders      ord  ON ord.id_customer = cus.id
     INNER JOIN order_lines orl  ON orl.id_order    = ord.id
     INNER JOIN products    prd1 ON prd1.id         = orl.id_product
     INNER JOIN suppliers   sup1 ON sup1.id         = prd1.id_supplier
WHERE   cus.location = 'LONDON'
    AND ord.date_placed BETWEEN sysdate - 7
                        AND     sysdate
    AND sup1.location = 'LEEDS'
    AND EXISTS ( SELECT null
                 FROM  alternatives    alt
                       INNER JOIN     products prd2
                         ON prd2.id = alt.id_product_sub
                       INNER JOIN     suppliers sup2
                         ON sup2.id = prd2.id_supplier
                 WHERE    alt.id_product = prd1.id
                       AND sup2.location != 'LEEDS' )

Then I run

SQL> @showplanfrpspreadsheetcode11g.sql

I’ll see some output like

      with frp_data as (
                    select '   11' id,'DELPHIXDB' table_owner,'PRODUCTS' table_name,'PRD2' table_alias,1999 num_rows,count(*) rowcount,1999 cardinality,cast(null as number) filtered_cardinality from DELPHIXDB.PRODUCTS PRD2 union all
                    select '   17' id,'DELPHIXDB' table_owner,'CUSTOMERS' table_name,'CUS' table_alias,14576 num_rows,count(*) rowcount,49 cardinality,count(case when "CUS"."LOCATION"='LONDON' then 1 end) filtered_cardinality from DELPHIXDB.CUSTOMERS CUS union all
                    select '    2' id,'DELPHIXDB' table_owner,'SUPPLIERS' table_name,'SUP1' table_alias,99 num_rows,count(*) rowcount,50 cardinality,count(case when "SUP1"."LOCATION"='LEEDS' then 1 end) filtered_cardinality from DELPHIXDB.SUPPLIERS SUP1 union all
                    select '    8' id,'DELPHIXDB' table_owner,'SUPPLIERS' table_name,'SUP2' table_alias,99 num_rows,count(*) rowcount,50 cardinality,count(case when "SUP2"."LOCATION"<>'LEEDS' then 1 end) filtered_cardinality from DELPHIXDB.SUPPLIERS SUP2 union all
                    select '   14' id,'DELPHIXDB' table_owner,'PRODUCTS' table_name,'PRD1' table_alias,1999 num_rows,count(*) rowcount,1999 cardinality,cast(null as number) filtered_cardinality from DELPHIXDB.PRODUCTS PRD1 union all
                    select '   18' id,'DELPHIXDB' table_owner,'ORDERS' table_name,'ORD' table_alias,71604 num_rows,count(*) rowcount,17898 cardinality,count(case when "ORD"."DATE_PLACED">=SYSDATE@!-7 AND "ORD"."DATE_PLACED"<=SYSDATE@! then 1 end) filtered_cardinality from DELPHIXDB.ORDERS ORD union all
                    select '    9' id,'DELPHIXDB' table_owner,'SUPPLIERS' table_name,'SUP2' table_alias,99 num_rows,count(*) rowcount,99 cardinality,cast(null as number) filtered_cardinality from DELPHIXDB.SUPPLIERS SUP2 union all
                    select '   12' id,'DELPHIXDB' table_owner,'ALTERNATIVES' table_name,'ALT' table_alias,17900 num_rows,count(*) rowcount,17900 cardinality,cast(null as number) filtered_cardinality from DELPHIXDB.ALTERNATIVES ALT union all
                    select '   19' id,'DELPHIXDB' table_owner,'ORDER_LINES' table_name,'ORL' table_alias,286416 num_rows,count(*) rowcount,286416 cardinality,cast(null as number) filtered_cardinality from DELPHIXDB.ORDER_LINES ORL union all
                    select null,null,null,null,null,null,null,null from dual
                  )
select frp_data.*,round(frp_data.filtered_cardinality/case when frp_data.rowcount = 0 then cast(null as number) else frp_data.rowcount end*100,1) actual_frp,decode(frp_data.filtered_cardinality,null,cast(null as number),round(frp_data.cardinality/case when frp_data.num_rows = 0 then cast(null as number) else frp_data.num_rows end*100,1)) plan_frp
from frp_data
where id is not null
order by id

If I spool this to a file like kmo.sql (Kevin Meade out ) and run it I’ll get the filter ratios

   ID OWNER     TABLE_NAME   ALIA   NUM_ROWS   ROWCOUNT CARDINALITY FILTERED_CARDINALITY ACTUAL_FRP PLAN_FRP
----- --------- ------------ ---- ---------- ---------- ----------- -------------------- ---------- --------
    2 DELPHIXDB SUPPLIERS    SUP1	  99	     99 	 50		      49       49.5	50.5
    8 DELPHIXDB SUPPLIERS    SUP2	  99	     99 	 50		      50       50.5	50.5
   11 DELPHIXDB PRODUCTS     PRD2	1999	   1999        1999
   12 DELPHIXDB ALTERNATIVES ALT       17900	  17900       17900
   14 DELPHIXDB PRODUCTS     PRD1	1999	   1999        1999
   17 DELPHIXDB CUSTOMERS    CUS       14576	  14576 	 49		      49	 .3	  .3
   18 DELPHIXDB ORDERS	     ORD       71604	  71604       17898		    8941       12.5	25.0
   19 DELPHIXDB ORDER_LINES  ORL      286416	 286416      286416

  • NUM_ROWS – rows from table statistics
  • ROWCOUNT – actual count(*)
  • CARDINALITY –  optimizer expected cardinality
  • ACTUAL_FRP – actual filter ratio
  • PLAN_FRP – expected filter ratio

From here I can see that the best (lowest) filter ratio is on Customers, and that’s where I should start my query.
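To act on that, one option (a sketch, not from the book) is to pin the candidate join order with a LEADING hint while testing, putting whichever table you pick as the driver first. The aliases are those of the example query, and the join columns shown are assumed:

```sql
-- Test a candidate driving table / join order explicitly;
-- here customers (cus) is assumed to drive, then orders, and so on.
SELECT /*+ gather_plan_statistics leading(cus ord orl prd1 sup1) */
       order_line_data
FROM   customers cus
       INNER JOIN orders      ord  ON ord.id_customer = cus.id
       INNER JOIN order_lines orl  ON orl.id_order    = ord.id
       INNER JOIN products    prd1 ON prd1.id         = orl.id_product
       INNER JOIN suppliers   sup1 ON sup1.id         = prd1.id_supplier
WHERE  cus.location = 'LONDON';
```

Comparing the actual rows from DBMS_XPLAN against the optimizer’s estimates then tells you whether the forced order was the right call.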

Chapter 1 of Kevin Meade’s book is available online here.

A full set of scripts from his book is available here (scripts.rar).

A short Word doc on tuning from Kevin, along with some examples, is available here (Information needed in Tuning a SQL Query.docx).

And here is a thread on Orafaq from Kevin.

Upgraded and can’t see the tables you could before ?

If you take a look at the "alter user" command in the old 9i documentation, you’ll see this:


Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle enables default roles at logon without requiring the user to specify their passwords.

And if you do a similar examination of the 10g documentation, it’s pretty much the same, with some more elaboration.


Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user

  • Roles granted through other roles

  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle Database enables default roles at logon without requiring the user to specify their passwords or otherwise be authenticated. If you have granted an application role to the user, you should use the DEFAULT ROLE ALL EXCEPT role clause to ensure that, in subsequent logons by the user, the role will not be enabled except by applications using the authorized package.

So let’s take a look at the 11.2 documentation for the same section:


Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement, or roles created by the user with the CREATE ROLE privilege. You cannot use the DEFAULT ROLE clause to specify:

  • Roles not granted to the user

  • Roles granted through other roles

  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

  • Roles that are enabled by the SET ROLE statement, such as password-authenticated roles and secure application roles

Notice the new 4th bullet point, which has replaced the handling of default roles from previous versions. If a role is password protected, then even if it is set as a default role, it will not be enabled automatically. We can see this with a simple example:

SQL> conn / as sysdba


SQL> create user NO_PRIVS identified by blah;

User created.

SQL> grant create session to NO_PRIVS;

Grant succeeded.

SQL> create role SUPER_DOOPER identified by SECRET_PASSWORD;

Role created.

SQL> create role DOES_NOTHING;

Role created.

SQL> grant delete any table, select any table to SUPER_DOOPER;

Grant succeeded.


SQL> grant SUPER_DOOPER to NO_PRIVS;

Grant succeeded.

SQL> grant DOES_NOTHING to NO_PRIVS;

Grant succeeded.

SQL> alter user NO_PRIVS default role all;

User altered.

SQL> conn NO_PRIVS/blah


SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> set role SUPER_DOOPER identified by SECRET_PASSWORD;

Role set.

SQL> select count(*) from hr.employees;




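A quick way to see the difference between what is granted and what is actually enabled in the session (a sketch using the standard USER_ROLE_PRIVS and SESSION_ROLES views):

```sql
-- As NO_PRIVS: which roles are granted to me, and which are enabled now?
select granted_role, default_role from user_role_privs;
select role from session_roles;
```

Before the SET ROLE, SUPER_DOOPER appears in the first query (flagged as a default role) but not in the second; after the SET ROLE it appears in both.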

Empty Leaf Blocks After Rollback Part II (Editions of You)

In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]

A First Technical Look At Hybrid Cloning in EM12c Release 5

The sales, support and technical teams were brought into the Denver Tech Center office to do some advanced training in Hybrid Cloud. There were many take-aways from the days we spent in the office, (which is saying a lot – most of you likely know how much I hate working anywhere but from home… :)) and I thought I would share a bit of this information with those that are aching for more details on this new and impressive offering from release 5.

If you’re new to Hybrid Cloud and want to know the high level info, please see my blog post on the topic.

Cloud Control Cloning Options

Cloud Control now includes a number of new options for database targets in EM12c. These new drop-down options include cloning, to ease access to the new hybrid cloning. Once you’ve logged into Cloud Control, go to Targets → Databases and then choose a database you wish to implement cloning features for. Right click on the target and the drop-downs will take you to the cloning options under Oracle Database → Cloning.


There will be the following choices from this drop down:

  • Clone to Oracle Cloud – ability to directly clone to the cloud from an on-premise database.
  • Create Full Clone – full clone from an image copy or an RMAN backup.
  • Create Test Master – create a read-only test master database from the source target.
  • Enable as a Test Master – use the database target as a test master, which will render it read-only; it would rarely be an option for a production database.
  • Clone Management – manage existing cloning options.

Using a test master is essential for snap clones, which offer great space savings and eliminate the time that is required for standard cloning processes. The test master is in a read-only mode, so it will need to be refreshed or recreated with an up-to-date copy, (which will then be another option in the drop-down, “Disable Test Master”) for new cloning procedures.

Snapshot Clone

For the example today, we’ll use the following production database:


We’ll use an existing test master database to perform our clone from:


We can right click on the database and choose to create a clone. This is going to be created via a snapclone, so keep this in mind as we inspect the times and results of this process.


Upon choosing to create a snapshot clone of a pluggable database, Cloud Control will then create a snapshot clone: each clone is just a copy of the file headers, with block changes tracked on the read-only or read-write clone.
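Cloud Control drives all of this through the wizard, but for context, the underlying database feature is similar to a SQL-level snapshot copy clone of a PDB. A sketch, where the container and PDB names are examples and the files are assumed to sit on storage that supports snapshots:

```sql
-- From the root of the CDB holding the test master (names illustrative):
alter session set container = CDB$ROOT;
create pluggable database PDB_DEV_CLONE
  from PDB_TESTMASTER
  snapshot copy;   -- copy-on-write: only changed blocks consume new space
alter pluggable database PDB_DEV_CLONE open;
```

This copy-on-write behaviour is why the clone of a large database can complete in well under a minute.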

Once you fill out the pertinent data for the clone, using the correct preferred credentials with SYSDBA privileges, name the new pluggable database, choose the name you’d like displayed in Cloud Control, and enter the PDB administration credentials, password and password confirmation. Once that’s done, choose if you’d like to clone it to a different container, (CDB) than the one the source resides on, and then add the database host and ASM credentials.

Once you click next, you have more advanced options to view and/or setup:


The advanced options allow you to change the sparse disk group to create the clone on, storage limits and database options.


You can then choose to have data masking to protect any sensitive data, but keep in mind, once you do so, you will no longer be using a snapclone due to the masking, but the option to implement it at this step is an option.  You can also set up any pre, post or SQL scripts that need to be run as part of the clone.  This could include resetting sequences, user passwords, etc.

The next step allows you to schedule the clone in the future or run immediately.


You can also choose the type of notifications, as this is simply an EM Job that is submitted to perform the cloning process.  Once you’ve reviewed the cloning steps chosen via the wizard, you can then submit.


Once the jobs been submitted, the submitter can monitor the job steps:




Once the clone has completed, you can view each of the steps, including the time each took.


The source database was over 500 GB and was cloned in less than one minute! You will also see the new cloned database in the targets list:


If curious, note that this is a fully cloned database that is on ASM, which you can view, just as you would for any other database.

Again, note the size and that this can be managed like any other database that you would have created via a DBCA template or through a standard creation process.


More to come soon and thanks to Oracle for letting us get our hands on the new hybrid cloning!

Copyright © DBA Kevlar [A First Technical Look At Hybrid Cloning in EM12c Release 5], All Right Reserved. 2015.

Empty Leaf Blocks After Rollback Part I (Empty Spaces)

There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after large update that was rolled back“. Setting aside the odd scenario of updating a column that previously had 20 million distinct values to the same value on a 2 billion row table, the key questions raised are why the blank index leaf blocks […]