Oakies Blog Aggregator

Friday Philosophy – Make a Team by Letting Them Burn

The title of today’s Friday Philosophy sounds extreme, but it’s true.

Sir John Harvey-Jones

Recently, I was watching a TV program about “experts” helping companies turn around. A couple of decades ago the BBC had a program called “The Troubleshooter” in which a gentleman called Sir John Harvey-Jones gave advice to companies in trouble {a similar idea to what we have today, but without the cheap and tacky elements of making the targets of the program cry and over-emote for good TV}. John just gave solid advice. But one thing that was true of the program back then, and is still true of the tacky 21st-century take on it, is that eventually you have to let the new team make mistakes.

This resonated with me: when I was managing teams it was something I knew I should do – and struggled to allow. A new team and, especially, a new team leader have to be given space to make mistakes.

I have always found this very hard to watch. When you become a team leader yourself, or a subject matter expert, or the “lead” on anything, you make mistakes. You just do: it’s new to you, you have not done it before and you lack the experience and knowledge to know what works and what does not. If you are lucky you might have a mentor you can talk to or think back on, maybe a team leader you enjoyed working for or an expert you admire. But often it is just you and the new role and a whole green field of requirements into which you can drop your own cow-pats. It is challenging, exciting, frightening, worrying… Looking back, those are the times that have been most demanding in my career and have also been the times I learnt the most.

I would say they were also the best times in my working life (and that would fulfill the usual mythology and story-telling shtick at this point) but that would be a lie. 50% of the time they were, the other 50% of the time I hated it. Am I not supposed to say that? Well, it’s true. Half the time, breaking new ground is not the Star Trek/Friends/Movie-of-the-month feel-good ride to betterment that society sells us it is. Sometimes it is hard work, bruising and sucks. Am I wrong?

So, I know it is not always a nice ride. And, irrespective of that, I’ve learnt some lessons over the years – and I do not see why people under my tutelage cannot just inherit, without the pain, the lessons I learned painfully, with me guiding them. I want what is in my head and my experience to be available to you.

What is wrong with that? Well, three things at least.

Firstly, my way is not your way. I am me, you are you. Ignoring for the minute that I could get things wrong {As if!!! (British ironic humour)}, just because the way I handle a situation, my team or a tricky customer interaction works for me, it may not work for you – as you have a different personality and different strengths. I’m pretty good at dealing with companies that try to rip me off. I face them down and I bloody well let them have it. My wife does not do that; she keeps calm, is passive (in my eyes) and does not point out their stupidity. But she nails them with reasonable logic and calm {though she will go for the throat if reason fails}. We both usually win. She maybe wins more often (please don’t tell her that). We all have to find what works for us as individuals, and that varies.

Secondly, though I would like to save you from pain, if you do not make your own decisions and live with them then it was not your work. If you do what I told you to do then it was partly my work. You will know that. What will you do when I am not there? I’m not arguing against seeking advice; that is always (in my book) correct. But if you are in charge of something, you need to BE IN CHARGE so that you learn to know you can (or cannot, let’s be honest) do it. You have to decide whether you take any advice, and it would be questionable of me to dictate that you take mine (though there are times and situations when that would be correct of me). If you succeed because I told you what to do, you have learned a way to handle that situation. If you resolved the problem yourself, you also learned that you can do it. If you mess up, then you learnt a way not to do things and you now have to learn another vital management skill:

To be a good leader you need to accept your mistakes – and sort them out.

That is what I mean about letting the team burn.

The third point, the one I do not like looking at, is that. Well. My way may not be best. Your way might not just be different and better suited to you and your abilities, it might simply be better. If I over-ride a minion when they are not doing it My Way then I am preventing them from learning, I am preventing them from doing it their way, and I am potentially preventing them from doing it a better way.

My job as a manager is getting the best out of those I manage. That may include them doing a better job than me. If that is a problem, it is my problem.

This is also true of teaching and mentoring and explaining. If I teach you SQL programming and you become a better SQL programmer than me, I’ll be hurt – how dare you be better than me? I bloody taught you! I would like to feel that, as I get older, I can live more comfortably with achieving that aim of someone I teach becoming better than me.

So getting back to the title. If I manage a team leader, I have to let that team leader… lead. I advise, I help, I highlight what I think they missed… And then, if I can over-ride my damned ego, I shut up. I have to risk letting them burn.

If they burn, I try to put out the fire with them.

If they do not burn, they have learnt and will be better.

If they shine, then they have exceeded me and we might be swapping roles one day.

I would like to think that is how I operated at the end of my time managing teams.

SLOB 2.3 Is Getting Close!

SLOB 2.3 is soon to be released. This version has a lot of new, important features but also a significant amount of tuning in the data loading kit. Before sharing where the progress is on that front, I’ll quickly list some of the new important features that will be in SLOB 2.3:

  1. Single Schema Support. SLOB historically avoids application-level contention by having database sessions perform the SLOB workload against a private schema. The idea behind SLOB is to exert maximum I/O pressure on storage while utilizing the minimum amount of host CPU possible. This lowers the barrier to entry for proper testing as one doesn’t require dozens of processors festering in transactional SQL code just to perform physical I/O. That said, there are cases where a single, large active data set is desirable–if not preferred. SLOB 2.3 allows one to load massive data sets quickly and run large numbers of SLOB threads (database sessions) to drive up the load on the system.
  2. Advanced Hot Spot Testing. SLOB 2.3 supports configuring each SLOB thread such that every Nth SQL statement operates on a hot spot sized in megabytes as specified in the slob.conf file. Moreover, this version of SLOB allows one to dictate the offset for the hot spot within the active data set. This allows one to easily move the hot spot from one test execution to the next. This sort of testing is crucial for platform experts studying hybrid storage arrays that identify and promote “hot” data into flash for example.
  3. Threaded SLOB. SLOB 2.3 allows one to use either multiple SLOB schemas or the new Single Schema, and to drive up the load one can specify how many SLOB threads per schema will be active.

 

To close out this short blog entry I’ll make note that the SLOB 2.3 data loader is now loading 1TB scale Single Schema in just short of one hour (55.9 minutes exactly). This procedure includes data loading, index creation and CBO statistics gathering. The following was achieved with a moderate IVB-EP 2s20c40t server running Oracle Linux 6.5 and Oracle Database 12c and connected to an EMC XtremIO array via 8GFC Fibre Channel. I think this shows that even the data loader of SLOB is a worthwhile workload in its own right.

SLOB 2.3 Data Loading 1TB/h

Filed under: oracle

DHCP: Virtualbox vs VMware on laptops

I’ve been using VMware Fusion on my Mac, but it costs about $60. Not that much, but VirtualBox is free, so I use VirtualBox as well.

For my VMs I use DHCP, and this has caused some problems, mainly on VirtualBox. With VMware Fusion (and VMware Workstation) everything works fine.

VMware

With VMware, under “Edit Hardware Settings” then “Network Adaptor” I just set the network

  • Mac:  “Share with my Mac”
  • PC:  NAT

This causes the system to act as a router, isolating the VMs on their own network while allowing outbound internet connectivity, as well as letting me connect to the VMs from my laptop and letting the VMs connect to one another.

VirtualBox

  • Bridge Adaptor – my typical default; doesn’t work when not connected to the network (like when wifi is off or flaky)
  • Host-Only Adapter – works on the laptop, but cannot connect to the outside network
  • Nat Network Adaptor – works with or without a network connection, but requires port forwarding

Bridge adaptor is the easiest and works when you are connected to the network. It doesn’t work when you are off the network, or when the network is wifi and spotty.

In that case you can use “Host-Only Adaptor”, which does work when you are off the network; but once you are back on the network and want the VMs to connect to the outside network, they can’t.

The best of both worlds is the “Nat Network Adaptor“. The problem with it is that you have to create a Nat Network and set up port forwarding rules.

Create a NAT">
Create a NAT Network

  • Click menu “VirtualBox -> Preferences” then “Network”
  • Click button “Nat Network”
  • Click box with + sign on right to create a Nat Network
  • Click the screwdriver to the right
  • Keep the default Nat Network name or customize
  • Make sure “Supports DHCP” is checked
  • Click “Port Forwarding”

Under port forwarding you add your rules on how to talk to your VMs.

For example I have a VM called “Linux Target” with an IP of 10.0.2.6

I’ll add a rule to allow me to SSH to this VM. SSH is on port 22.

I’ll give the rule a meaningful name for me. In my case “SSH to Linux Target”

I’ll be  using my local host for Host IP, so I give it 127.0.0.1

I’ll pick out a safe but meaningful port  on my local host that will forward to my VM’s port 22. I pick out local host port 2022 and map that to my VM’s port 22 on 10.0.2.6.

The rule will look like the last line in the following table:

Screen Shot 2015-05-22 at 11.49.15 AM

In the above table I’ve added rules both for HTTP connections to port 1080, which my web application on the VM uses, and for SSH connections to port 22, for the 3 VMs on my laptop.

Now the trick is, in order to connect to the VM from the laptop, I don’t use the VM’s name or IP. I use the name “localhost” and specify the local host port that is forwarded to my VM. For example to ssh to “Linux Target” I would use

ssh -p 2022 localhost

and to connect to HTTP on “Linux Target”, in a browser I would type

http://localhost:2080

(to connect from one VM to the other VM use the actual IP address)

Thanks to Adam Bowen for stepping me through these options. See his post at Delphix on the subject at https://community.delphix.com/delphix/topics/basic-virtualbox-networking-setup-for-landshark-2-x

One year at Delphix

It’s been over a year since I leapt into the void.

OK, more than a little melodramatic.  In many respects, I was leaping from the void by joining a promising and exciting startup company like Delphix.

Business was still brisk as an independent consultant at EvDBT, but for the past several years, I was experiencing what I called “just-in-time engagements”.  That is, new consulting engagements were just mysteriously showing up at the right time just before the current one was ending.  Frankly, it was getting a bit spooky, and I had been on pins and needles for a couple years watching it happen, wondering like a farmer if the day would come when the rain did not appear on time.  That day had shown up previously, during the recession of 2001 – 2002, when I experienced about 2-3 weeks of no work early in 2002, but that was the only dry spell I encountered in almost 16 years at EvDBT.  However, I wasn’t eager to see another one…

So a little over twelve months ago, on 01-May 2014, I left the world of Oracle technology consulting that I had first entered on January 15, 1990.  Well, I haven’t really left Oracle technology, but I’m no longer making a living at it.  Oracle is a big part of Delphix, but only a part.

What have been the highlights during my first year at Delphix?

  • learning data virtualization
  • learning how to tune virtual databases for Oracle and SQL Server
  • learning VMware and earning my VCA certification
  • carving a personal niche within a small company
  • becoming proficient at Windows PowerShell    <- no kidding!  true!
  • continuing to present at Oracle conferences, often traveling and presenting with my brilliant wife, Kellyn Gorman

Yes, Kellyn and I got married during the past year!  It took us each a few tries at marriage, but we each hung in there, and got it right this time.  They say that remarriage is the triumph of optimism over experience, and I’m delighted to say that optimism trumps experience.  We did the deed in grand style, spending the week at Oracle Open World in San Francisco, coming home on Friday and getting married that Sunday in a tiny ceremony with immediate family only.  We finished up the RMOUG TD2015 call-for-papers on Wednesday and Kellyn published the agenda on Thursday, and then on Saturday we flew to Europe to do a joint-keynote and present at the Slovenian Oracle Users Group in Ljubljana and the Croatian Oracle Users Group in Rovinj.  After spending time with Joze and his lovely wife Lili, we blasted out of Croatia and scooted over to beautiful Venice for a quiet, blissful week-long honeymoon, meeting with Lothar and his lovely wife Birgit during the week as well.

Then, it was back to reality through the end of the year, getting swept up in the preparations for the RMOUG conference.

Delphix had a spectacular Q4 ending on 31-January 2015, where that financial quarter alone equaled the earnings from the entire previous fiscal year.  Lots of celebrations, victory laps, and high fives at the company all-hands meeting in early February, but what none of us in the Professional Services division saw was the looming tsunami of freshly-sold new deployments cresting just over our heads.  That wave crested and crashed down on us, and I found myself buried in work.  Just now, four months later in the new fiscal year, I’m finally able to look up and look around to find that winter and spring have passed and summer has arrived.

My second year at Delphix has begun, and I’m curious as to what it will bring.

I’m continuing to heed the advice of Tim Minchin, who counsels against pursuing one’s Big Dream and instead suggests passionate dedication to short-term goals, in short that one be “micro-ambitious”.  That is to say: that thing that is right in front of you right now?  Do it the very best you can, and put your back into it.  Whether it is a blog post, cutting the lawn, an email, or a new wax ring for the toilet in the basement.  Especially the new wax ring – don’t bugger that up!  And then do the next thing the same way.  And the next.  And the next.  Before you know it, you’ve climbed a mountain and have made a pretty good career besides.

Not unexpectedly at a small fast-growing company, hints have been made of a transition from the technical track to the managerial track.  This would finally reverse the track switch I made over 20 years ago while at Oracle, when I stepped off the managerial track in favor of the technical track.  I’ve never looked back on that decision, but should I be “micro-ambitious” here as well, take the new task right in front of me, and work to excel?  Or stick to my guns, stay with the technical track?

I’ve learned that it is a mistake to just “go with the flow” and bob along with the prevailing current.  If one is to succeed at something new, it must be a whole-hearted plunge accompanied by a full-throated war cry.

So, if you hear a strange noise rather like a cross between a “Tarzan yell” and someone choking on an avocado pit, don’t be alarmed.  Just listen a little longer to find whether you hear the “splat” of my corpse hitting pavement, or the “whoosh” as I learn to fly again.

And rest assured, that wax ring in the downstairs toilet?  Nailed it.

With Modern Storage the Oracle Buffer Cache is Not So Important.

With Oracle’s move towards engineered systems we all know that “more” is being done down at the storage layer and modern storage arrays have hundreds of spindles and massive caches. Does it really matter if data is kept in the Database Buffer Cache anymore?

Yes. Yes it does.

Time for a cool beer

With much larger data sets and the still-real issue of fewer disk spindles per GB of data, the Oracle database buffer cache is not less important than it was. It is even more important.

I could give you some figures but let’s put this in a context most of us can easily understand.

You are sitting in the living room and you want a beer. You are the Oracle database; the beer is the block you want. Going to the fridge in the kitchen to get your beer is like going to the Buffer Cache to get your block.

It takes 5 seconds to get to the fridge, 2 seconds to pop it open with the always-to-hand bottle opener and 5 seconds to get back to your chair. 12 seconds in total. Ahhhhh, beer!!!!

But – what if there is no beer in the fridge? The block is not in the cache. So now you have to get your car keys, open the garage, get the car out and drive to the shop to get your beer. And then come back, pop the beer in the fridge for half an hour and now you can drink it. That is like going to storage to get your block. It is that much slower.

It is only that much slower if you live a 6-hour drive from your beer shop. Think taking the scenic route from New York to Washington DC.

The difference in speed really is that large. If your data happens to be in the memory cache in the storage array, that’s like the beer already being in a fridge – in that shop 6 hours away. Your storage is SSD-based? OK, you’ve moved house to Philadelphia, 2 hours closer.

Let’s go get beer from the shop

To back this up, some rough (and I mean really rough) figures. Access time to memory is measured in microseconds (“us” – millionths of a second) down to hundreds of nanoseconds (“ns” – billionths of a second); somewhere around 500ns seems to be an acceptable figure. Access to disc storage is more like milliseconds (“ms” – thousandths of a second). Go check an AWR report or statspack or OEM or whatever you use, and you will see that db file scattered reads are anywhere from the low teens down to, say, 2 or 3 ms, depending on what your storage and network is. For most sites that speed has hardly altered in years: though hard discs get bigger, they have not got much faster – and often you end up with fewer spindles holding your data, as you get allocated space, not spindles, from storage (and the total sustainable speed of hard disc storage is limited to the total speed of all the spindles involved). Oh, the storage guys tell you that your data is spread over all those spindles? So is the data for every other system, so you have maximum contention.
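If you want to sanity-check the disc-side figures on your own system rather than taking mine on trust, a quick sketch against v$system_event gives the average wait time for the common read events (the view and its EVENT, TOTAL_WAITS and TIME_WAITED_MICRO columns are standard, but do verify against your own version):

-- Rough average read latencies from the cumulative instance statistics.
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_wait_ms
from   v$system_event
where  event in ('db file sequential read', 'db file scattered read')
order  by event;

Compare the avg_wait_ms figures with the roughly 0.0005ms it takes to read from memory and the fridge-versus-road-trip ratio above stops looking like an exaggeration.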

However, memory speed has increased over that time, and so has CPU speed (though individual CPU speed has largely stopped improving now; the gains come more from having more CPUs).

Even allowing for latching and pinning and messing around, accessing a block in memory is going to be at the very least 1,000 times faster than going to disc, maybe 10,000 times. Sticking to a conservative 2,000 times faster for memory than disc, that 12-second trip to the fridge equates to 24,000 seconds of driving. That’s 6.66 hours.

This is why you want to avoid physical IO in your database if you possibly can. You want to maximise the use of the database buffer cache as much as you can, even with all the new Exadata-like tricks. If you can’t keep all your working data in memory, in the database buffer cache (or in-memory, or via the results cache), then you will have to do that achingly slow physical IO, and then the intelligence-at-the-hardware level comes into its own – true Data Warehouse territory.

So the take-home message is – avoid physical IO, design your database and apps to keep as much as you can in the database buffer cache. That way your beer is always to hand.

Cheers.

Update. Kevin Fries commented to mention this wonderful little latency table. Thanks Kevin.

“Here’s something I’ve used before in a presentation. It’s from Brendan Gregg’s book – Systems Performance: Enterprise and the Cloud”

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link:   Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data when they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, no tuning nor configuration involved when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block-contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in such publications as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators, with prior ACO experience, are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…
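As a quick aside (not part of the Lab Report excerpt): for anyone who has not used ACO, enabling it on a table is a single clause. A minimal sketch with made-up table names, using the 11g COMPRESS FOR OLTP syntax (12c renames this ROW STORE COMPRESS ADVANCED):

-- Hypothetical table, for illustration only.
create table orders_aco (
  order_id   number primary key,
  customer   varchar2(30),
  order_date date
) compress for oltp;

-- An existing (hypothetical) table can be altered as well; only blocks written
-- after the ALTER are compressed until the segment is rebuilt (ALTER TABLE ... MOVE).
alter table orders_history compress for oltp;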

 

Filed under: oracle

Smart Scan enhancements in Oracle Exadata 12c – inline LOBs

As part of the 12c release of the database and the cell software, Oracle introduced the ability to perform Smart Scans on inline LOBs. The enhancement is for Secure Files only. And as always, if you would like to read up about Secure Files you can head over to Tim Hall’s website for the details.

To demonstrate the new behaviour I have used an 11.2.0.3 BP 22 database on Exadata 11.2.3.3.1 and a 12.1.0.2.2 database on Exadata 12.1.2.1.0. The test case revolves around the following table:

CREATE TABLE loboffload (
 id number primary key,
 comments clob)
enable row movement 
tablespace users
lob (comments) store as securefile (
 enable storage in row
);

This is the short version; Oracle fills in the blanks and converts the table DDL to:

SQL> select dbms_metadata.get_ddl('TABLE','LOBOFFLOAD') from dual;

DBMS_METADATA.GET_DDL('TABLE','LOBOFFLOAD')
--------------------------------------------------------------------------------

  CREATE TABLE "MARTIN"."LOBOFFLOAD"
   (    "ID" NUMBER,
        "COMMENTS" CLOB,
         PRIMARY KEY ("ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
 LOB ("COMMENTS") STORE AS SECUREFILE (
  TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192
  NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES
  STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))  ENABLE ROW
 MOVEMENT

The idea is that I am using the comments lob column to simulate a common approach in application design where you define a free text field on a GUI screen for the user to provide additional information about the record to be stored in the database. The design is not very practical though.

The important information is in the section about the LOB. Here you find the request to store the LOB as a SECUREFILE in-line with the table. Out of line LOBs are not offloadable as of 12.1.2.1.0 as far as I know.
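If you want to double-check how the LOB actually ended up stored, the data dictionary records both properties. A quick sketch against USER_LOBS (a standard dictionary view; SECUREFILE and IN_ROW are YES/NO flags):

-- Confirm the CLOB is a SecureFile and is stored in row.
select table_name, column_name, securefile, in_row
  from user_lobs
 where table_name = 'LOBOFFLOAD';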

To feed some data into the table I added an initial batch of rows and then used the insert … append technique to multiply it:

alter session enable parallel dml;

-- with nods to Jonathan Lewis who published this in one of his presentations on generating data
insert /*+ append parallel(4) */ into loboffload
with v1 as (
 SELECT rownum n FROM dual CONNECT BY level <= 10000
)
SELECT  rownum id,
  CASE
    WHEN mod(rownum,100000) = 0     THEN 'THIS WAS EXCEPTIONAL' 
    WHEN mod(rownum,10000) = 0      THEN 'THIS WAS VERY GOOD'
    WHEN mod(rownum,1000) = 0       THEN 'THIS WAS GOOD'
    WHEN mod(rownum,100) = 0        THEN 'OK'
    ELSE 'NO COMPLAINTS'
  END 
FROM v1,
  v1
WHERE rownum <= 1e6;

create sequence loboffload_s cache 10000 start with 1000001;


insert /*+ append parallel(4) */ into loboffload
select /*+ parallel(4) */ loboffload_s.nextval, comments from loboffload;

I ended up with 16 million rows. I used impdp over a network link to move the table to the other database.

Test with 11.2.0.3

The test in 11.2.0.3 is used to confirm that inline LOBs are not offloadable. Here is the query and result:

MARTIN@ora11203> select /*+ monitor noparallel full(t) */ count(*) from loboffload t where comments like '%GOOD%';

  COUNT(*)
----------
     15840

Elapsed: 00:01:33.48

MARTIN@ora11203> -- removing irrelevant statistics from the output
MARTIN@ora11203> @mystats stop t=1
==========================================================================================
MyStats report : 25-MAY-2015 02:33:24
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  95.28
TIMER   CPU time used (seconds)                                                      93.00


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                     9,303
...
STAT    DB time                                                                      9,353
...
STAT    cell flash cache read hits                                                   9,454
STAT    cell physical IO interconnect bytes                                  1,233,436,672
...
STAT    consistent gets                                                            150,794
STAT    consistent gets - examination                                                    3
STAT    consistent gets direct                                                     150,558
STAT    consistent gets from cache                                                     236
STAT    consistent gets from cache (fastpath)                                          231
STAT    cursor authentications                                                           3
...
STAT    execute count                                                                   13
...
STAT    logical read bytes from cache                                           14,393,344
STAT    no work - consistent read gets                                             150,558
...
STAT    parse count (hard)                                                               1
STAT    parse count (total)                                                             13
STAT    parse time cpu                                                                   1
STAT    parse time elapsed                                                               1
STAT    physical read IO requests                                                    9,459
STAT    physical read bytes                                                  1,233,436,672
STAT    physical read requests optimized                                             9,454
STAT    physical read total IO requests                                              9,459
STAT    physical read total bytes                                            1,233,436,672
STAT    physical read total bytes optimized                                  1,233,395,712
STAT    physical read total multi block requests                                     9,255
STAT    physical reads                                                             150,566
STAT    physical reads cache                                                             8
STAT    physical reads direct                                                      150,558
...
STAT    table scan blocks gotten                                                   150,558
STAT    table scan rows gotten                                                  16,000,000
STAT    table scans (direct read)                                                        1
STAT    table scans (long tables)                                                        1
...

------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

So none of the essential cell% statistics were recorded, except for cell flash cache read hits. Since the cell scans statistic was not incremented by 1 (for the table segment), you can conclude that no Smart Scan happened here.

Test on 12.1.0.2/12.1.2.1.0

The first result was not too surprising, as it is the documented behaviour. The second test should hopefully reveal some offloading.

MARTIN@ORA12c> select /*+ monitor noparallel full(t) */ count(*) from loboffload t where comments like '%GOOD%';

  COUNT(*)
----------
     15840

Elapsed: 00:00:01.65

MARTIN@ORA12c> @mystats stop t=1
==========================================================================================
MyStats report : 25-MAY-2015 02:29:46
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  14.61
TIMER   CPU time used (seconds)                                                       0.03


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                         9
...
STAT    cell IO uncompressed bytes                                           1,234,296,832
STAT    cell blocks helped by minscn optimization                                  150,666
STAT    cell blocks processed by cache layer                                       150,671
STAT    cell blocks processed by data layer                                        150,671
STAT    cell blocks processed by txn layer                                         150,671
STAT    cell flash cache read hits                                                   1,143
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                1,234,296,832
STAT    cell physical IO interconnect bytes                                      6,273,368
STAT    cell physical IO interconnect bytes returned by smart scan               6,273,368
STAT    cell scans                                                                       1
...
STAT    consistent gets                                                            151,053
STAT    consistent gets direct                                                     150,671
STAT    consistent gets examination                                                      3
STAT    consistent gets examination (fastpath)                                           3
STAT    consistent gets from cache                                                     382
STAT    consistent gets pin                                                            379
STAT    consistent gets pin (fastpath)                                                 379
...
STAT    parse count (total)                                                             13
STAT    physical read IO requests                                                    1,205
STAT    physical read bytes                                                  1,234,296,832
STAT    physical read requests optimized                                             1,143
STAT    physical read total IO requests                                              1,205
STAT    physical read total bytes                                            1,234,296,832
STAT    physical read total bytes optimized                                  1,171,423,232
STAT    physical read total multi block requests                                     1,189
STAT    physical reads                                                             150,671
STAT    physical reads direct                                                      150,671
...
STAT    table scan blocks gotten                                                    15,744
STAT    table scan disk non-IMC rows gotten                                      1,670,692
STAT    table scan rows gotten                                                   1,670,692
STAT    table scans (direct read)                                                        1
STAT    table scans (short tables)                                                       1
...

------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

Good news: what Oracle said is true. The same query that did not offload in 11.2.0.3 is now offloaded in 12.1.0.2 with Exadata 12.1.2.1.0. You find proof in the existence of all the cell% statistics, especially cell scans plus the “… processed by … layer” statistics.

Temp Table Transformation Cardinality Estimates - 1

Having recently published two notes about the Temp Table Transformation highlighting the heuristics-based decision and other weaknesses, for example regarding the projection of columns, it's time to publish some more notes about it. The transformation can also have a significant impact on cardinality estimates, both join and single-table cardinality. Look at the difference in the join cardinality estimates of the following simple example:


create table t1
as
select
rownum as id
, mod(rownum, 10) + 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

alter session set tracefile_identifier = 'temp_trans_join_card';

alter session set events '10053 trace name context forever, level 1';

explain plan for
with
cte as (
select /* inline */ id + 1 as id from t1 t
where 1 = 1
)
select /*+
--opt_estimate(@"SEL$2" join("A"@"SEL$2" "B"@"SEL$2") rows=1000)
no_merge(a) no_merge(b)
*/ * from cte a, cte b
where a.id = b.id
;

alter session set events '10053 trace name context off';

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 |
| 1 | TEMP TABLE TRANSFORMATION | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D660C_27269C | | |
| 3 | TABLE ACCESS FULL | T1 | 1000 | 4000 |
|* 4 | HASH JOIN | | 1 | 26 |
| 5 | VIEW | | 1000 | 13000 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
| 7 | VIEW | | 1000 | 13000 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("A"."ID"="B"."ID")

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 10000 | 253K|
|* 1 | HASH JOIN | | 10000 | 253K|
| 2 | VIEW | | 1000 | 13000 |
| 3 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
| 4 | VIEW | | 1000 | 13000 |
| 5 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("A"."ID"="B"."ID")

the following becomes obvious:

- There are vastly different cardinality estimates possible depending on whether the transformation gets used or not. In principle, due to the NO_MERGE hints used, the transformation shouldn't have any impact on the estimates, but it does.
- Looking at the optimizer trace file, some information seems to get lost when the transformation gets used, in particular column-related statistics.
- This information loss, as in the example here, can lead to join cardinality estimates of 0 (rounded to 1 in the plan output).
- And even worse, at least in pre-12c versions, such a 0 cardinality estimate can't be corrected via OPT_ESTIMATE hints, since somehow the correction gets ignored/lost although being mentioned in the optimizer trace:


11.2.0.1:
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Join Card: 0.000000 = = outer (1000.000000) * inner (1000.000000) * sel (0.000000)
>> Join Card adjusted from 0.000000 to: 1000.000000, prelen=2
Adjusted Join Cards: adjRatio=1.00 cardHjSmj=0.000000 cardHjSmjNPF=0.000000 cardNlj=0.000000 cardNSQ=1000.000000 cardNSQ_na=0.000000
Join Card - Rounded: 1 Computed: 0.00

The behaviour regarding the OPT_ESTIMATE hint changes in 12c, but then there are other oddities introduced in 12c that are not there in pre-12c - have a look at the "Query Block" section when using the INLINE variant of the query - there are two identical fully qualified object names, clearly a bug, making hinting using global hint syntax impossible for that query block.

Although my simple example here can be corrected via extended statistics on the join column expression used in the CTE query, my point is that, depending on whether the transformation gets used or not, vastly different and extreme cardinality estimates are possible - and those extreme cases can't even be corrected in pre-12c versions.

For example, I recently had a real-life case where two columns were joined that had a significant number of NULL values, one coming from a temp table transformation row source. Without the transformation the join cardinality estimates were reasonable, but the transformation again led to such a 0 cardinality estimate (that couldn't be corrected via a (correctly specified) OPT_ESTIMATE hint), ruining the whole plan.
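For reference, a minimal sketch of the extended-statistics workaround mentioned above, creating an expression statistic on (ID + 1) for T1 (standard DBMS_STATS calls; make sure the expression text matches the one used in the query):

-- Create an extension for the expression used in the CTE, then re-gather stats
-- so the optimizer has column statistics for it.
select dbms_stats.create_extended_stats(null, 't1', '(id + 1)') from dual;

exec dbms_stats.gather_table_stats(null, 't1')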

Datapump TTS quick tip

I was trying to transport a large number of tablespace datafiles from one database to another, and impdp kept crashing with “PL/SQL: numeric or value error: character string buffer too small.”

It turns out there is a limit on what you can put on a single line in a parameter file.  Hence if you have something like:

transport_datafiles=/long/path/file1,/long/path/file2,/long/path/file3,....

then you might run into trouble. It’s easily fixed, however – just put the entries one per line:

transport_datafiles=/long/path/file1
transport_datafiles=/long/path/file2
transport_datafiles=/long/path/file3
....

Oracle 12c – PL/SQL Implicit Result Sets

Implicit Result Sets

Beginning with Oracle 12c it is possible for SELECT statements to send results back to a client from PL/SQL without the need for REF CURSOR definitions. Previously, PL/SQL required the definition of a REF CURSOR as an OUT parameter, adding complexity (and precision) not found in other products (argh… the older I get the more I believe that a module should do exactly one thing (and do it well)). However, it is common in some database products to use a Stored Procedure to return multiple result sets.
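For contrast, here is a minimal sketch of that pre-12c pattern, with the result set exposed as an OUT REF CURSOR parameter (the procedure name is made up; the EMP columns match the example further down):

-- Pre-12c style: the caller must bind an OUT SYS_REFCURSOR to receive the rows.
create or replace procedure get_dept_emps (
  in_deptno in  emp.deptno%type,
  out_emps  out sys_refcursor
) as
begin
  open out_emps for
    select empno, ename, job, sal
      from emp
     where deptno = in_deptno
     order by ename;
end get_dept_emps;
/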

Using Oracle 12c’s newly implemented implicit result sets will allow reuse of SQL Server (and other database) stored procedures that return implicit result sets. Implicit result sets were originally targeted at JDBC clients but will probably work well in other environments like C#.NET or SQL Server TSQL.

Oracle 12c PL/SQL’s DBMS_SQL.RETURN_RESULT procedure makes result cursors available to the client easily (a procedure does not need an OUT parameter to return cursor results). The general process is as follows:

  • In PL/SQL Procedure
    • Open cursorname FOR SELECT
    • Call DBMS_SQL.RETURN_RESULT(cursorname)
  • In Java JDBC
    • Call stored procedure returning implicit result sets
    • Locate next result set from procedure with getMoreResults() or getMoreResults(nn)
    • Retrieve result set using getResultSet()
  • In PL/SQL
    • Supported in PL/SQL with DBMS_SQL.GET_NEXT_RESULT

Example PL/SQL Procedure using Implicit Result Set

The procedure below creates two implicit result sets that may be read by client software such as Java, C#, TSQL, or PL/SQL:

create or replace PROCEDURE MY_IMPLICIT_TABLES
  (in_deptid in emp.deptno%type)
as
  mycursor  sys_refcursor;
  mycursor2 sys_refcursor;
begin
  if in_deptid is not null
  then
    open mycursor for
      select empno, ename, job, sal
        from emp
       where deptno = in_deptid
       order by ename;
    dbms_sql.return_result(mycursor);

    /* now for 2nd cursor */
    open mycursor2 for
      select employee_id, last_name, job_id, salary
        from hr.employees
       where department_id = in_deptid
       order by last_name;
    dbms_sql.return_result(mycursor2);
  end if;
END MY_IMPLICIT_TABLES;

 The following JDBC code calls the stored procedure, then processes each of the result sets returned. Note that returning two or more result sets means that the client-side code must have some intelligence about how to process each result set in turn; this is pretty fragile design so be careful.

The sample Java program, followed by its resulting output, is shown below:

package samples;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

/**
 * Sample use of Oracle Procedure with Implicit Results.
 *
 * Note: Exception handling omitted to simplify example.
 *
 * @version 1.01 04/01/2015
 * @author  John Jay King
 */
public class OraThinJDBCImplicitResults {

    public OraThinJDBCImplicitResults() throws SQLException {
        // Load the Oracle JDBC driver - not needed for JDBC 4 Direct Connection
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());

        // Direct connection to database; "real applications" should
        // probably use a Connection Pool for best system performance
        Connection conn =
            DriverManager.getConnection("jdbc:oracle:thin:@192.168.0.61:1539/pdb1.localdomain",
                "john", "king");

        // Create a Statement
        CallableStatement stmt1 = conn.prepareCall("{CALL my_implicit_tables(?)}");
        ResultSet rs1 = null;
        int rsetCount = 0;
        stmt1.setInt(1, 20);
        stmt1.executeQuery();
        while (stmt1.getMoreResults()) {
            System.out.println("Processing Data From Result Set " + ++rsetCount);
            rs1 = stmt1.getResultSet();
            processResults(rs1);
        }

        if (rs1 != null) {
            rs1.close();
        }
        if (stmt1 != null) {
            stmt1.close();
        }
        if (conn != null) {
            conn.close();
        }
    }

    private void processResults(ResultSet rset) throws SQLException {
        ResultSetMetaData myRSMetaData = rset.getMetaData();
        System.out.println("ResultSet");
        System.out.println("\tColumn Count:" + myRSMetaData.getColumnCount());
        int ctr = 0;
        int coltype;
        int colcount = myRSMetaData.getColumnCount();
        while (rset.next()) {
            for (ctr = 1; ctr < (colcount + 1); ctr++) {
                System.out.println("\t** Column " + ctr + " ****************");
                System.out.println("\t\tName: " + myRSMetaData.getColumnName(ctr));
                System.out.println("\t\tColtype: " + myRSMetaData.getColumnType(ctr));
                coltype = myRSMetaData.getColumnType(ctr);
                System.out.print("\t\tValue is ");
                switch (coltype) {
                    case Types.VARCHAR:
                        System.out.println(rset.getString(ctr));
                        break;
                    case Types.DOUBLE:
                        System.out.println(rset.getDouble(ctr));
                        break;
                    case Types.INTEGER:
                        System.out.println(rset.getInt(ctr));
                        break;
                    case Types.NUMERIC:
                        System.out.println(rset.getDouble(ctr));
                        break;
                    case Types.TIMESTAMP:
                        System.out.println(rset.getDate(ctr));
                        break;
                    default:
                        System.out.println(rset.getString(ctr));
                }
            }
        }
    }

    public static void main(String args[]) throws SQLException {
        OraThinJDBCImplicitResults myir = new OraThinJDBCImplicitResults();
    } // end main
} // end OraThinJDBCImplicitResults

Processing Data From Result Set 1

ResultSet
Column Count:4
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7876.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is ADAMS
** Column 3 ****************
Name: JOB
Coltype: 12
Value is CLERK
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 1100.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7902.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is FORD
** Column 3 ****************
Name: JOB
Coltype: 12
Value is ANALYST
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 3000.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7566.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is JONES
** Column 3 ****************
Name: JOB
Coltype: 12
Value is MANAGER
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 2975.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7788.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is SCOTT
** Column 3 ****************
Name: JOB
Coltype: 12
Value is ANALYST
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 3000.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7369.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is SMITH
** Column 3 ****************
Name: JOB
Coltype: 12
Value is CLERK
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 800.0

Processing Data From Result Set 2

 ResultSet
Column Count:4
** Column 1 ****************
Name: EMPLOYEE_ID
Coltype: 2
Value is 202.0
** Column 2 ****************
Name: LAST_NAME
Coltype: 12
Value is Fay
** Column 3 ****************
Name: JOB_ID
Coltype: 12
Value is MK_REP
** Column 4 ****************
Name: SALARY
Coltype: 2
Value is 6000.0
** Column 1 ****************
Name: EMPLOYEE_ID
Coltype: 2
Value is 201.0
** Column 2 ****************
Name: LAST_NAME
Coltype: 12
Value is Hartstein
** Column 3 ****************
Name: JOB_ID
Coltype: 12
Value is MK_MAN
** Column 4 ****************
Name: SALARY
Coltype: 2
Value is 13000.0

Oracle’s Implicit Result set capability will allow reuse of existing stored procedure designs (if desired); be prepared for the complexity of code needing to know about the number of result sets and their contents.
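Finally, as noted in the process list above, implicit result sets can also be consumed from PL/SQL itself via DBMS_SQL.GET_NEXT_RESULT. A minimal sketch calling the MY_IMPLICIT_TABLES procedure from earlier (both result sets happen to have the same column shapes; to my understanding GET_NEXT_RESULT raises NO_DATA_FOUND once the results are exhausted, so check the documentation for your version):

set serveroutput on

declare
  l_cur   integer;
  l_dummy integer;
  l_rc    sys_refcursor;
  l_num1  number;         -- empno / employee_id
  l_vc1   varchar2(100);  -- ename / last_name
  l_vc2   varchar2(100);  -- job   / job_id
  l_num2  number;         -- sal   / salary
begin
  l_cur := dbms_sql.open_cursor;
  dbms_sql.parse(l_cur, 'begin my_implicit_tables(:d); end;', dbms_sql.native);
  dbms_sql.bind_variable(l_cur, ':d', 20);
  l_dummy := dbms_sql.execute(l_cur);
  loop
    begin
      dbms_sql.get_next_result(l_cur, l_rc);  -- fetch the next implicit result set
    exception
      when no_data_found then exit;           -- no more result sets
    end;
    loop
      fetch l_rc into l_num1, l_vc1, l_vc2, l_num2;
      exit when l_rc%notfound;
      dbms_output.put_line(l_num1 || ' ' || l_vc1 || ' ' || l_vc2 || ' ' || l_num2);
    end loop;
    close l_rc;
  end loop;
  dbms_sql.close_cursor(l_cur);
end;
/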