Oakies Blog Aggregator

The User Group Tour

Those of you who have been around Oracle for some time may already be aware of the various OTN tours. These tours bring well-known speakers to some of the smaller (relatively speaking) regions that often don’t get to see some of the big-name speakers, simply because of audience size. Over the past couple of years, I’ve been involved in the OTN APAC tour, and recently returned from the New Zealand leg of this year’s tour. I presented two workshops – one on Database Lifecycle Management and one on DBaaS – as well as standing in for a sick presenter to cover Snap Clone in Enterprise Manager 12c. For me, the best value of the conference was (as it so often is) the time spent interacting both with customers and with other speakers and exhibitors. It was great to catch up with so many people I haven’t seen for a long time, both from within Oracle and outside of it.

In many ways, the New Zealand leg of the OTN APAC Tour is the start of my very own user group tour. Over the next few months, I’ll be presenting at UKOUG, RMOUG and possibly Collaborate 15 and Kscope 15. Should be a fun time, provided my back and knee hold out. I was on crutches at the NZOUG conference and will be for UKOUG as well.

UKOUG Tech14

This will be my first time at the UKOUG conference. I hear it’s one of the better user group conferences to get to. Of course, in my case getting there is going to be the problem since I’ll still be on crutches. I’m flying from Canberra to Manchester with two stops:

  • Canberra to Sydney – the shortest leg, just to prepare me for the next leg. :) That’s only a 55 minute flight, so most of the time will be getting to and from cruising height. I suspect this one will be a Dash 8, which means I’ll have to check my laptop bag as premium luggage (taken from and delivered to me at the steps into the plane) and have to carry on the laptop itself. All these things are relatively easy when you’re not on crutches, but adding the crutches adds a whole new dimension of complexity!
  • Sydney to Dubai – after a 2 hour layover in Sydney, this is the real ugly one – 15 hours, 28 minutes estimated travel time. UGH! :( At least this time I can carry the laptop bag on. I had to get a new one so I could carry it on my back since wheeling a bag around isn’t the easiest of options with crutches! :)
  • Dubai to Manchester – after a 3 hour 40 minute layover in Dubai, a short 7 hours, 32 minutes – well, short compared to the leg before it anyway!

After landing in Manchester, I’ll be met by a good friend who’s volunteered to drive me to Liverpool. I wonder what state I’ll be in by that time!

Once I’m at the conference, I’ll kick things off with Super Sunday, probably attending most of the OakTable stream. I’m looking forward to meeting a few of my colleagues on the Oak Table from Europe whom I haven’t yet met, and of course there’s the ACE Dinner that night that the organizers have kindly invited me to. During the Tech14 conference itself, I’ll be helping out with the Enterprise Manager Round Table at 2:30 on Monday, as well as presenting on Snap Clone on Tuesday at 3:00. I’m also visiting Barclays on the Thursday to discuss EM and how it can help them in their environment, before the long haul home again.

RMOUG Training Days

The RMOUG Training Days conference is one of my personal favourites, and to me one of the best grass-roots user group conferences out there. I’ve presented at it multiple times, and have always enjoyed meeting up with my good friends on the organizing committee. This time round, I have two hands-on workshops (one on Delivering Pluggable Database as a Service (PDBaaS) with Enterprise Manager, the other on Delivering Schema as a Service with Enterprise Manager), and I’m doing the Snap Clone presentation as well.

The two workshops are really designed to show you Database as a Service (DBaaS) in two different versions of the Oracle database. Schema as a Service works with releases prior to Oracle Database 12c, and as we all know, there are a lot of people out there who haven’t made the switch to DB12c as yet. Of course, there can be multiple reasons for that, but if you can, I really do suggest you make the move to DB12c and then look at using PDBaaS instead of Schema as a Service. It’s a much more straightforward implementation at the application level, since it addresses the namespace collision issue you need to deal with in Schema as a Service (see my posts here and here on Schema as a Service if what I just said is double dutch to you. ;) )
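
To make the namespace collision point concrete, here’s a tiny illustrative sketch of my own (the PDB name, file paths and passwords are all made-up placeholders):

-- In a single pre-12c database, two tenants cannot both own a schema
-- called APP_OWNER, so Schema as a Service forces one of them to rename:
create user app_owner identified by tenant1_pwd;        -- tenant 1: OK
create user app_owner identified by tenant2_pwd;        -- tenant 2: fails with ORA-01920

-- With DB12c and PDBaaS, each tenant gets its own pluggable database, so the
-- same schema name can exist in every PDB and the application needs no renaming:
create pluggable database tenant1_pdb
        admin user pdb_admin identified by pwd1
        file_name_convert = ('/pdbseed/', '/tenant1_pdb/');
alter pluggable database tenant1_pdb open;
alter session set container = tenant1_pdb;
create user app_owner identified by tenant1_pwd;        -- no collision with other PDBs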

Once the conference is over, I’m looking forward to at least seeing some snow close up. I doubt I’ll be of much use for skiing or similar with my back and knee problems, but I can always hope to see Carol Dacko disappear from sight as she attempts a snow angel again! :)

Collaborate 15

This is another conference that I haven’t attended before, and I’m hoping to get to it this time round. At the time I’m writing this, they haven’t announced acceptances for the conference, so I may or may not get there – it’s a conference I’ve been wanting to attend but just haven’t had the opportunity as yet. The only downside for me is that it’s in Las Vegas, a city I visited plenty of times for work when I was based in the US. It’s one of those places I wanted to visit to tick off my bucket list, but it really doesn’t hold a lot of attraction for me now that I’ve been there. I’m not a gambler so the gaming side doesn’t interest me at all, but the shows are usually worth investigating. :)

Kscope 15

Like Collaborate 15, this is another conference I haven’t been to. I submitted a couple of papers for it as some of my colleagues said this was a great conference to get to, so we’ll see what comes of those!

So there you have it – a bunch of travel coming up to present at a variety of user group conferences. That’s one of the parts of my job as a database architect in the DBaaS group within EM that really makes me happy. I love talking with customers about what EM and DBaaS in particular can do to help them, so if you see me hobbling around on crutches at any of these conferences, come up and say hi!

Conferences and the Etiquette of Meeting New People

One of the reasons I like user group conferences is meeting new people in my “area” – these people at the conference not only like technology and probably some of the same technology I do but, as they are at a conference, are probably inclined to learn stuff, meet people and maybe share ideas. I’m not actually very good face-to-face socially with people I do not know, so I like to tilt things in my favour if I can!

But sometimes you have odd experiences.

I was at a conference a couple of years back that was not in the UK. I was loving it, as I was meeting some people I knew electronically but had never met, and others I had only met when they visited UK conferences – but most people there I had no real connection to, so I was a bit quiet (yes, loud-mouthed, always-comments-in-talks me was quiet), especially as it was the first day. I was late for the lunch event, I don’t remember why, but it was a little unusual in being a sit-down meal, and there were to be some meet-the-expert tables. I wanted to be able to identify some people I knew by name but had never met, and the big signs on the tables would be a bit of a help there.

As I came in I saw half the tables were full and most of my friends were already on a full table. I wandered over to a half-full table and as I sat down I said “Hello, I’m Martin Widlake, How you doing?” or whatever to the half-dozen guys there. They all looked at me. A couple nodded or said “hi” but one said “We’ve got other friends turning up soon”. Hmm, that was code for “sod off”, I think.

I turned on the full English Accent so they could tell I was not from around those parts. “That’s nice – always good to have friends when you are at a conference….especially one where you don’t know many people?”. Some smiled, they all understood my mild reprimand. Mr Friendly who had mentioned all his other friends did not smile though. After this opening “chat” no one else really said anything more to me.

The Starter turned up and the guys all spoke to each other – and ignored me. Some other lone gun wandered up and asked me if he could sit next to me – “Sure, feel free – I’m Martin Widlake, I’m from the UK”. He introduced himself and sat down. Mr Friendly piped up “There are more people joining us at this table, I’m not sure there is gonna be room”. Some of his already-present friends had the decency to look a bit apologetic and I simply said “Well, it’s pretty full on all the tables now – and he’s got his starter” as the waitress put down a plate. And I pointedly started talking to the new chap.

The Main turns up and so do a couple of late members of Mr Friendly’s group, who sit down at the remaining spare seats. “I told you,” he says, “you might have to move.”

I’m losing my patience a bit now. “Well they can sit somewhere else I’m sure, seeing as they are late.”

Mr Friendly is getting angry “I warned you when you sat down – when the others turn up, you’ll move”.

“I won’t move, this is my seat. I’m staying here”.

“Who the hell do you think you are?” demands Mr Friendly. Oh, thank you, I think to myself. I’m so going to enjoy this….

“Well, I did introduce myself when I arrived and….” pointing to the large sign with ‘Martin Widlake’ on it above the table “there is a large reminder of my name just there”. I paused a couple of seconds before adding “So this is my seat and my table and I’m kind of obliged to stay here, whether you want to talk to me or not”.

Maybe I could have handled the situation better from the start and stressed that the reason I was staying was that I was supposed to sit at my table. But I was smarting a little from the fact that no one apparently wanted to come to my table and talk to me. Maybe not surprising, as I don’t think I had done a presentation at the conference at that point – but my massive ego was already bruised.

So what about the etiquette of meeting people at conferences? It was just a title for a story I felt like telling…

However, there are a couple of things I do want to mention about the etiquette of meeting people at conferences. If you do not know many people there – just talk to people. People you don’t know. Just make a few observational or open comments, nothing too direct – “This queue for the coffee is a bit long”, “Have you seen any good/bad presentations yet?”, “What do you think about ANSI join syntax?” (OK, that last one is a bad suggestion). Most people will respond, and those that do not are probably just very nervous – more nervous than you! – and almost no one will be like Mr Friendly above. And if they are like Mr Friendly, well, there are a few hundred other people you can go and try the odd comment on to see if you get a response.

At the social events you can see dozens and dozens of people just at the side or wandering around, not speaking to anyone. If you are one of them, few people are likely to come up to you and start a conversation (I’ve tried approaching the odd lone person but I stopped when I got Seriously Stalked at one of the UKOUG conferences). But if you go talk to other people, most of us will respond. And if someone does respond – Don’t stalk them!!! – have a conversation and then, having found it is possible, go and try some other people. The next day, if you see the people who responded last night, go and have a small chat again. But Don’t stalk them!!!.

Finally, talk to the presenters. We are actually the easy targets and not the hard ones. Those of us who present tend to be attention seekers so we are happy for you to come up and chat. And if you pretend you liked our talks we will certainly warm to you, so it’s an easy opening. However, it’s not like we are pop-stars or TV celebrities, we are just average people and you can come and chat to us (actually, I feel the same about pop-stars and TV celebrities, I don’t get easily star-struck but I know a lot of people do, even over Oracle Names).

But Don’t stalk them!!!.

And if someone insists on joining you at a table that has a name above it – listen carefully when they introduce themselves…

Why Women Should Consider a Smartwatch

I keep having this conversation over and over, in retail stores, restaurants and other establishments – women coming up to me and asking, “Can you tell me about that watch you have on?”

It made me realize a few things:

1.  It can be really intimidating to ask questions of the sales staff in most stores.

2.  The benefits I notice from the smartwatch are not always the same ones men would highlight.

3.  Seeing a woman wearing one lets other women know it’s something they may want to consider.

So, even though I’m hoping everyone will benefit from this post, I really am hoping to make up for the lack of marketing of smartwatches to women by listing some of the benefits of owning one.

So let’s start with the model that I currently possess – the Moto360.

It has a round face and comes in both the black model and the silver with leather band.  I have the brushed aluminum with leather band and it’s my second, very successful pairing of a smart watch with an Android smartphone.  Mine is available to pair with Apple iPhone, too, so I am happy to say that I recommend it highly for both phone OS options.

(Photo: Moto 360 hands-on – http://egadget.com)

My introduction to the smartwatch world was a Christmas present from Tim last year (at my request): the Sony Smartwatch2.


I started out with this paired to my Samsung S4, then migrated it to my Samsung Note3, which I still have at this time.  Pairing these watches is as simple as a software download from the Play Store (or the iOS App Store for those with iPhones…) and then using the Bluetooth on your phone to connect to the watch.

Differences that Caused Me to Upgrade

1.  The Sony Smartwatch2 used an Android-based OS that was compatible with my phone, unlike the Moto360, which uses the newer Android Wear OS.  This means that instead of having to find apps that are compatible with my watch, everything on my smartphone is automatically available at some level of notification with Android Wear, and I can choose what to exclude.

2.  The Moto360 has a smaller face and it’s round.  This means it doesn’t get caught on EVERYTHING – when I’m putting a sweater on, taking a coat off or even reaching my hand into my purse, it’s much less likely to get hung up.

Top Features of a Smartwatch

1. Solid notifications, as our smartphone is rarely on our person, but in our purse.  Women’s clothes commonly do not have functional pockets that can hold a smartphone.  The pairing of a smartphone and smartwatch is pure genius for simplifying our lives and never missing out on important notifications.

2.  Size-  Our wrists are often much smaller than men’s and we need something that doesn’t look like a monstrosity on our arm.  I have a tiny, tiny wrist, (ring size is 4 1/2, so keep that in mind as you look at the photos) yet the watch doesn’t stand out like it’s ridiculous and the wrist band fits comfortably.

3.  The rest of the list:

  • The ability to change the face of the watch to something less masculine if we want! :)  There are a TON of watch faces out there, but I chose to use my own background with the standard offering.


  • Vibration notifications that we can control.

  • The ability to exclude applications and silence any notifications that don’t provide value.
  • The ability to answer, (at least via bluetooth headset) or send calls to voicemail after noting the callerID info.
  • Ability to quickly reply to text messages with pre-recorded list of responses vs. getting our phone out of our purse.

4.  Social Media

  • Ability to like Facebook posts.
  • Option to favorite or native retweet Twitter posts.
  • Simple options to interact with other social media platforms.


5.  Remote camera

  • Our hands are commonly smaller, making it difficult to click the shutter button on the camera screen when taking selfies and other pictures.  Having the option to do this on our watch is helpful.

6.  Health Monitoring

  • Any of those cool Fitbit features you’ve been eyeing?  Yeah, there is a version out for Android Wear and you can just download it.  The sensors are already on the Moto360 to perform these functions.
  • Check your pulse
  • Create goals for yourself and check your progress during the day.


What can the watch also do?

  • Let you know when an item has shipped (very helpful right now, with Christmas shopping upon us!)


  • Check the weather with a few swipes on your wrist before the kids head out, to make sure both you and they bundle up (or not).


The Watch is Easy To Interact With

Turning your wrist to face you will light up the watch face, unless you have ambient mode set to on (it isn’t by default, and it does use more battery life…)

Swiping the face up and down will access different AndroidWear “cards”, which are active notifications you can view.

Swiping right, you can see different options available to you on the watch.

A quick tap on the watch face will bring up “Ok Google” with options that are voice activated, or you can scroll down to choose one:

  • Send a text
  • Send an email
  • Set a reminder
  • Play music
  • Set an alarm
  • Show your current status for activity goals.

Application Enhancements

If you do purchase one, I would highly recommend a few apps that made my experience much more beneficial:

Connect-  Moto360 app to tell you specific data about your Moto360 watch, including battery levels and health monitor profiles.

Wear App Manager-  Tells you a bit more about the app and allows you to manage just the Android Wear apps separately.

Wear Mini Launcher-  Extra management options for AndroidWear users, including the ability to hide apps, (good for password apps) and global shutoff for notifications.

1Weather-  Android app that has a great weather interface and notifies on multiple locations more accurately than some I’ve seen.  Very important when you live somewhere like Colorado where the weather changes every 5 minutes… :)

Wear Aware-  Vibration notification if your watch and phone become separated.  It’s also free, whereas some of the others want $$.

Wear Mail-  A mail app that offers advanced email notifications.  I like it as it seems to notify faster than my other email clients, so I just exclude those from notifications and use Wear Mail instead.

Silliest Feature I Love

I know, this is going to be funny, but I love that while charging, it works as a clock.  The blue-lit display isn’t intrusive and doesn’t disturb me, but if I wake up at night, it’s easy to look over and see the time.  I don’t really use an alarm (I wake up about 15 min. before any alarm, which is very annoying… :)), so my smartphone does the trick and I haven’t owned an alarm clock for years.


Retrieving Bind Values from SQL Monitor in EM12c Release 4

I know others may want to know how to do this, and I had challenges until Gagan Chawla, a teammate from Oracle, was kind enough to point out how to still get to this hidden little gold nugget, so I’m posting it here for others!

Up until database plug-in 12.1.0.5, while on the SQL Monitor SQL ID details page, you could click a button called View Report and quickly view a large amount of valuable data about a SQL statement that had executed.  One of the sections in this report was Binds, which listed the values being passed for the bind variables.


If you are investigating a performance issue for the execution of a SQL statement, having bind values can give you a significant advantage.  It can tell you:

  1. Is the value outside the min/max value on an existing histogram?
  2. Do the statistics lean towards another value being more prevalent?
  3. Is the value being passed in not in the correct format?
  4. Does the value searched for have an impact because it differs from the known values, or because the expected counts are off?

There are a number of other reasons, but having this data easily accessible at your fingertips is very beneficial to the person troubleshooting.

Post that database plug-in release, the feature is no longer where it once was.  From SQL Monitoring’s Monitored SQL Executions page, if you click a SQL ID of interest, you are taken to the SQL Details page.


There is a new report called “SQL Details Active Report“, but it doesn’t contain the bind values data.  This report is still very, very valuable:


It shows all the above data, along with a wait event vs. resource usage graph at the bottom of the report.  You can save or mail the report and all its relevant data.  It would still be nice to have the previous report with the bind values that was once available from the details page – and you can still get to it, you just need to make a few more clicks.

Go back to the main Monitored SQL Executions page and locate the SQL that you are interested in:


Bring your cursor to the Status column for that SQL ID and double-click.  This will take you to the Monitored SQL Executions SQL Detail page, and on the right-hand side you will see the View Report button.


This button will bring you to the previous SQL ID Details report that includes the bind data.  Another thing to remember is that you must also be viewing a database that supports the feature, which means Oracle 11.2 or higher.
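
If you’d rather skip the clicking altogether, the same SQL Monitor data can also be pulled from SQL*Plus with DBMS_SQLTUNE.REPORT_SQL_MONITOR (available from Oracle 11.2 onwards). A minimal sketch, assuming the statement was monitored and that bind values were captured for it (&sql_id is a placeholder):

set long 1000000 longchunksize 1000000 pagesize 0 linesize 250

select
        dbms_sqltune.report_sql_monitor(
                sql_id       => '&sql_id',      -- the SQL ID picked from Monitored SQL Executions
                type         => 'TEXT',         -- 'HTML' and 'ACTIVE' are also available
                report_level => 'ALL'           -- ask for all sections of the report
        )
from    dual;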

 

 




12.1.0.2 Introduction to Zone Maps Part III (Little By Little)

I’ve previously discussed the new Zone Map database feature and how they work in a similar manner to Exadata Storage indexes. Just like Storage Indexes (and conventional indexes for that manner), they work best when the data is well clustered in relation to the Zone Map or index. By having the data in the table […]

Baselines

I’m not very keen on bending the rules on production systems, I’d prefer to do things that look as if they could have happened in a completely legal fashion, but sometimes it’s necessary to abuse the system and here’s an example to demonstrate the point. I’ve got a simple SQL statement consisting of nothing more than an eight table join where the optimizer (on the various versions I’ve tested, including 12c) examines 5,040 join orders (even though _optimizer_max_permutations is set to the default of 2,000 – and that might come as a little surprise if you thought you knew what that parameter was supposed to do):

select
	/*+ star */
	d1.p1, d2.p2, d3.p3,
	d4.p4, d5.p5, d6.p6,
	d7.p7,
	f.small_vc
from
	dim1		d1,
	dim2		d2,
	dim3		d3,
	dim4		d4,
	dim5		d5,
	dim6		d6,
	dim7		d7,
	fact_tab	f
where
	d1.v1 = 'abc'
and	d2.v2 = 'def'
and	d3.v3 = 'ghi'
and	d4.v4 = 'ghi'
and	d5.v5 = 'ghi'
and	d6.v6 = 'ghi'
and	d7.v7 = 'ghi'
and	f.id1 = d1.id
and	f.id2 = d2.id
and	f.id3 = d3.id
and	f.id4 = d4.id
and	f.id5 = d5.id
and	f.id6 = d6.id
and	f.id7 = d7.id
;

It’s useful to have such extreme examples because they make it easy to notice certain features of Oracle’s behaviour – in this case the difference between SQL Plan Baselines and SQL Profiles. After I had created a baseline for this statement the optimizer still examined 5,040 join orders because that’s the way that baselines work – the optimizer first tries to optimize the statement without the baseline in case it can produce a better plan (for future evaluation and evolution) than the plan dictated by the baseline.
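
For reference, the baseline itself can be captured straight from the cursor cache – here’s a minimal sketch, assuming you know the SQL_ID (and, optionally, the plan hash value) of the statement; both substitution variables are placeholders:

declare
        n_loaded        pls_integer;
begin
        n_loaded := dbms_spm.load_plans_from_cursor_cache(
                        sql_id          => '&m_sql_id',
                        plan_hash_value => &m_plan_hash_value,
                        fixed           => 'NO',
                        enabled         => 'YES'
                    );
        dbms_output.put_line('Plans loaded: ' || n_loaded);
end;
/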

In my example this wasn’t really a problem (apart from the memory requirement in the SGA) but one of my clients has a production query that takes 7 seconds to optimize then runs in a few seconds more, so I’d like to get rid of that 7 seconds … without touching the code. Adding a baseline won’t reduce the time to optimize. (Note: 7 seconds may not seem like a lot, but when several copies of this query are executed concurrently using a different literal value as an input, that 7 seconds can go up to anything between 40 and 500 seconds of CPU parse time per query).

If I take a different tack and write some code to acquire the content of the SQL Plan Baseline (viz. the outline section from the in-memory execution plan) but store it as an SQL Profile the optimizer simply follows the embedded hints and examines just one join order (because the set of hints includes a leading() hint specifying exactly the join order required). This is why I will, occasionally, take advantage of the code that Kerry Osborne and Randolf Geist produced some time ago to raid the library cache for a child cursor and store its plan outline as an SQL profile.
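
The scripts I’m thinking of are Kerry Osborne’s and Randolf Geist’s, but the heart of the trick looks something like the following sketch (the substitution variables for SQL_ID and child number are placeholders, and you need access to the v$ views plus execute on dbms_sqltune):

declare
        ar_profile_hints        sys.sqlprof_attr;
        cl_sql_text             clob;
begin
        -- pull the outline hints out of the child cursor's plan
        select
                extractvalue(value(d), '/hint') as outline_hints
        bulk collect into
                ar_profile_hints
        from
                v$sql_plan p,
                table(xmlsequence(
                        extract(xmltype(p.other_xml), '/*/outline_data/hint')
                )) d
        where
                p.sql_id = '&m_sql_id'
        and     p.child_number = &m_child_no
        and     p.other_xml is not null;

        -- the statement text the profile will be attached to
        select  sql_fulltext
        into    cl_sql_text
        from    v$sqlarea
        where   sql_id = '&m_sql_id';

        -- store the outline hints as a SQL profile so the optimizer follows them directly
        dbms_sqltune.import_sql_profile(
                sql_text        => cl_sql_text,
                profile         => ar_profile_hints,
                name            => 'PROFILE_&m_sql_id',
                force_match     => true
        );
end;
/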

Footnote:

This dirty trick doesn’t always solve the problem - the first example I tried to use for my demonstration did a complex concatenation that took a couple of minutes to optimise, and storing the baseline as a profile didn’t help.

 

 

Webinar Followup (Nov. 12) - In Search of Plan Stability - Part 2

Sorry for the delay in getting this posted, but thanks to everyone who attended my November 12th webinar entitled In Search of Plan Stability - Part 2. You can download the presentation materials from these links:

Presentation PDF
Recording

Thanks!

#DOAG2014 bits of info for DBAs

Just wanted to share some pieces of information from the recent DOAG annual conference that you may find interesting.

From Mike Dietrich’s presentation about Database Upgrade:

Database Replay is extremely useful to predict after-upgrade performance on a test system – especially since we can record the production load even on 10g.

From Carsten Czarski’s talk about XML DB:

With 12c, XML DB is mandatory and it provides an easy way to upload BLOBs via ftp into the database.

From Ulrike Schwinn’s talk about the Resource Manager I took away that

The resource manager becomes more and more popular and important, especially for Multitenant

- something Hans Forbrich reinforced later on.

I particularly liked the way she presented later on about ADO: very many live demonstrations – that’s how I try to do my own presentations too :-)

Frank Schneede did a great job debunking Exadata myths. For example,

You don’t need to have all cores enabled with Exadata X4 in order to save license cost. That’s called Capacity on Demand.

If I had to name the one presentation that was most useful for me, it would probably be Frank’s.

Markus Michalewicz delivered an excellent talk as expected about RAC cache fusion:

Two important messages:

RAC scales well (far) beyond three nodes because there are never more than three nodes involved for cache fusion intercommunication.

And Multitenant and RAC are a perfect fit.

One Data Guard snippet out of Larry Carpenter’s talk about Global Data Services (GDS):

GDS makes it possible to automate the failover of the Real-Time Query service to the primary in case the physical standby has an outage.

Hans Forbrich talked about Multitenant. He showed great presentation skills and although I knew the technical details before, the way he highlighted certain aspects was still very helpful for me.

One key message was that Multitenant is here to stay. DBAs should learn about it and become familiar with it as soon as possible, because sooner rather than later it will have to be administered in production!


Parallel Costs

While creating a POC of a SQL rewrite recently I received a little surprise as I switched my query from serial execution to parallel execution and saw the optimizer’s estimated cost increase dramatically. I’ll explain why in a moment, but it made me think it might be worth setting up a very simple demonstration of the anomaly. I created a table t1 by copying view all_source – which happened to give me a table with about 100,000 rows and 1117 blocks – and then ran the query ‘select max(line) from t1;’ repeating the query with a /*+ parallel(t1 2) */ hint. From 11.2.0.4 here are the two execution plans I got:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |   153   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| T1   | 99173 |   387K|   153   (4)| 00:00:01 |
---------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 |     4 |  1010   (1)| 00:00:05 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |     4 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |     4 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |     4 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          | 99173 |   387K|  1010   (1)| 00:00:05 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| T1       | 99173 |   387K|  1010   (1)| 00:00:05 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------
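
For reference, the setup was nothing more sophisticated than the following sketch (on your system all_source may give a different row count and block count, and you’ll want statistics gathered before running the queries):

create table t1 as select * from all_source;

begin
        dbms_stats.gather_table_stats(user, 't1');
end;
/

select max(line) from t1;

select /*+ parallel(t1 2) */ max(line) from t1;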

It seems that the optimizer thinks that running the query parallel 2 will take five times as long as running it serially! Is it, perhaps, some cunning fix to the optimizer that is trying to allow for process startup time for the parallel execution slaves? Is it a bug? No – it’s just that I hadn’t got around to setting my system stats and they were complete garbage thanks to various other tests I had been running over the previous couple of weeks. Critically, I had not adjusted the “parallel-oriented” statistics to be consistent with the “serial-oriented” stats.

Here, from the 10053 trace file for the parallel run, is the section on the Single Table Access costing, together with the system stats and base statistical information:


SYSTEM STATISTICS INFORMATION
-----------------------------
  Using WORKLOAD Stats
  CPUSPEED: 1110 millions instructions/sec
  SREADTIM: 4.540000 milliseconds
  MREADTIM: 12.440000 millisecons
  MBRC:     21 blocks
  MAXTHR:   3000000 bytes/sec
  SLAVETHR: 1000000 bytes/sec

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: T1  Alias: T1
    #Rows: 99173  #Blks:  1117  AvgRowLen:  76.00  ChainCnt:  0.00
Access path analysis for T1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  Table: T1  Alias: T1
    Card: Original: 99173.000000  Rounded: 99173  Computed: 99173.00  Non Adjusted: 99173.00
  Access Path: TableScan
    Cost:  151.13  Resp: 1010.06  Degree: 0
      Cost_io: 147.00  Cost_cpu: 20826330
      Resp_io: 1007.76  Resp_cpu: 11570183

I’m going to walk through the optimizer’s calculations that got the serial I/O cost (cost_io: 147.00) and the parallel I/O cost (Resp_io: 1007.76), but before I do that I’ll point out how inconsistent some of the system stats are. The multiblock read time (mreadtim) is 12.44 milliseconds, to read an average of 21 blocks (mbrc), which, converting to bytes per second, means (21 * 8192) * 1000/12.44 = 13,828,938 bytes per second; but the I/O rate for a single parallel execution slave (slavethr) is only 1,000,000 bytes per second – which means a single serial session can (apparently) tablescan nearly 14 times as fast as an individual parallel execution slave. It’s not surprising that somehow the optimizer thinks a serial tablescan will be faster than parallel 2 – but let’s check exactly how the arithmetic falls out.

Serial:

  • #Blks: 1117, MBRC = 21 => read requests = 1117/21 = 53.19
  • sreadtim = 4.54 milliseconds, mreadtim = 12.44 milliseconds = 2.74 * sreadtim
  • Cost = 53.19 * 2.74 + 1 (_tablescan_cost_plus_one = true) = 146.74 — which looks close enough.

Parallel:

  • #Blks: 1117, block size = 8KB => total I/O requirement = 9,150,464 bytes
  • slavethr: 1,000,000 bytes/sec, degree 2 => total throughput 2,000,000 bytes/sec => elapsed I/O time = 4.575232 seconds
  • sreadtim = 4.54 milliseconds => cost = 4.575232 / 0.00454 = 1007.76 QED.
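
If you want to sanity-check that arithmetic from SQL*Plus, the following query reproduces it from the trace figures above (block size assumed to be 8KB, intermediate values rounded to two decimal places as in the working above):

select
        round( round(1117/21, 2) * round(12.44/4.54, 2) + 1, 2 )       as serial_io_cost,
        round( (1117 * 8192) / (2 * 1000000) / 0.00454, 2 )            as parallel_io_cost
from
        dual;

--  SERIAL_IO_COST  PARALLEL_IO_COST
--  --------------  ----------------
--          146.74           1007.76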

Two final thoughts:

First, if I increase the degree of parallelism to 3 the cost drops to 673 (671.84 plus a bit of CPU); if I increase the degree any further the cost doesn’t drop any further – that’s because the maxthr (maximum throughput) is 3,000,000. The optimizer uses the smaller of maxthr and (degree * slavethr) in the parallel arithmetic.

Secondly, and most importantly, the way the optimizer behaves will be affected by the various parameters relating to parallelism, so here’s a list of the relevant settings on the instance I was using when creating this example:


SQL> show parameter parallel

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     TRUE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     FALSE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     80
parallel_min_percent                 integer     0
parallel_min_servers                 integer     0
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     FALSE
parallel_server_instances            integer     1
parallel_servers_target              integer     32
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0

Note, particularly, that I have not enabled parallel_automatic_tuning.

For further details on parallel costs and the cost/time equivalence that the optimizer uses in recent versions, here’s a link to an article by Randolf Geist.

Why is “Dave Unknown” Trying to Social Media With Me?

I know some people share my opinion on this and others totally disagree – but I fail to appreciate why people I have never met, spoken with or care about want to Social Media with me. If we have not met but there is a high probability we share unusual interests then OK, perhaps – but the fact that we both can spell Oracle or know what a gene is does not count as unusual shared interests. Maybe I am just too old to “get it” or just too grumpy to appreciate their efforts.

I’m not the biggest fan of Social Media but I don’t actively dislike it either. I mean, I’m blogging so that means I have some appreciation for it. I have a Twitter account and sometimes I Twit. But not a lot. I don’t have time or inclination to log on every day and see what people have seen that they think is funny/odd/outrageous/titillating on the web, which airport they are currently being bored in or what publication/talk/blog post of theirs they want to big up. Or what cereal they have just eaten {really? Some of you think this would interest anyone?} But occasionally I hang out there and swap twit twaddle and follow links and maybe even put up my own links to my fabulous blog utterings. But I don’t follow people I don’t in some way know or have a reason to be interested in {and I don’t include seeing them on TV as my being interested in them – I followed a couple of people on twitter early on that I thought would be interesting, based on their Popular Culture output. And very quickly decided I’d stand a better chance of continuing to like them if I was not being informed of all the dross that crossed their minds when they had not rehearsed their material}.

For me, the main Social Media thing that baffles and slightly annoys me is LinkedIn Wannabes. Why are you contacting me if I don’t know you and you don’t know me? I don’t know 7.039 billion people. OK, you know some Oracle – so do probably 0.7039 million people (wow, what a worrying thought) that I also don’t know. It’s not personal that I have no interest in being LinkedIn with you, it’s the opposite. I impersonally don’t feel a need to link with you.

Do I want to link in with Dave in Denver CO, USA who is a Java developer? I’ve nothing against you, Dave, but I’m highly unlikely to meet you and we probably have little to talk about, especially as I almost never communicate with people via LinkedIn {and I don’t know anyone who does really communicate via LinkedIn}. I struggle to keep up with people I have met in the flesh or I absolutely know I have shared interests with, so random LinkedIn Wannabes, no chance. If I met you in person I’d probably like to have a chat and I might even buy you a beer, and perhaps we would become friends and I’d welcome your LinkedIn invite with open keyboard. But frankly, until you’re drinking that Carlsberg I just got from the bar for you, you are one in 7.039 billion unknown people to me.

So am I being unfriendly? Well, when I get a LinkedIn request I almost always check out the person. Is it someone I have worked with or met at a conference and it might be nice to maintain some sort of vague contact with? Occasionally it is. Once in a blue moon it turns out to be someone I actually know (or know of) quite well and I feel daft that I did not recognise them. Sometimes it is someone I don’t know but they know 15 people I do (hopefully mostly the ones I like  :-) ) and I can see they share strong work interests with me.  I link in. But most of the time I don’t know them and *they have over 500 contacts*. 

Over 500 contacts? Really? Really? And you know all these people? No, you don’t Dave. You are just collecting stamps. I’m as important to you as that. So now, though I know nothing much about you, I know I am unimportant to you, I’m just a stamp. I definitely do NOT want to be LinkedIn with you.

Occasionally it is worse. I’m not a stamp, I’m a little bit of potential collateral, a maybe-bit-of-income for them. The person is a recruitment consultant or a salesperson or a company representative who has figured out that for every 200 people they bother they get a lead. So they contact thousands of us. Well, you can really stuff your invite up where the sun does not shine.

But most of the time it is stamp collecting. This seems very common with our South Asian friends. I don’t know why, maybe it is a cultural thing, maybe the universities there tell their students that this is a good way to progress (I can’t see that it is but I’m happy to be corrected if I am wrong), I don’t know – but 75% of LinkedIn invites I get from people with 500+ contacts are from that part of the world.

I’ve noticed one key thing about LinkedIn stamp collecting (or potential-collateral) invites – none of them have bothered to change the standard invite text.

Hi M

I’d like to add you to my professional network on LinkedIn

- Dave Unknown

Hint – if you really want to link with me, change the text to something, anything and I mean *anything* else. Try

Oi, Martin

I’ve met you and you smell of fish and your jokes are pathetic. Link in to me else I will throw things at you next time you present

- Dave Unknown

That’ll get my attention.

What kicked off this diatribe by me? It was when we got the below at work:

 

It really tickled me. This person is so desperately stamp collecting that they are trying to link to Admin in Technical Services. Of course I removed names to protect the guilty but, really, Ramzan “the import/export professional” – I think you should take a bit more care in your stamp collecting efforts.