Oakies Blog Aggregator

Big Numbers

It’s hard, sometimes, to get an instinctive idea of how big a “big number” really is. I’ve just heard a brilliant description of a billion (American style: 10^9) that really gives you a gut feeling for how big the number is:

If you owed someone a billion dollars, and paid it back at the rate of $1 per second – how long would it take to pay off the debt (and don’t even think about the interest accruing)?

The answer is about 32 years.
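
A quick sanity check of the arithmetic: 10^9 seconds ÷ (60 × 60 × 24 × 365 seconds/year) ≈ 31.7 years.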

The way I read it, that turns a big number into a number that’s small enough to comprehend but still big enough to feel enormous.

Censorship: A confusing subject…

I used to be totally against any form of censorship, but as I’ve got older and I have two young nephews I find myself becoming a bit of an old fuddy-duddy about it. I kinda want to protect their innocence as much as possible.

When I’m in the car with my nephews and the news starts I try to flick to a CD to avoid the talk about sex, drugs, rape and murder. Call me a coward, but I want to avoid the barrage of questions on those subjects and the whole can of worms they will open. I remember one such occasion when nephew number 1 asked me what they meant by, “found her body in a ditch”, after listening to the news on our national radio station (Radio1). Shudder…

I kinda like the idea of everyone taking responsibility for themselves on the censorship front. Having said that, you can’t be with them 100% of the time.

So now on to the item that prompted this post…

My favorite advert at the moment is from Google. It’s one of those generic, “Use all Google products because they are hip and so, by extension, using them makes you hip!”, type ads. Why do I like it? Purely because of the music. It’s a track called Traktor by Wretch 32, which I love. Why did it prompt this post? As with most hip-hop songs, it contains some adult language and themes. So what did Google choose to censor? The words in bold were blanked in the ad.

“My lifestyle’s terribly wild, but you’ll never catch me on the Jeremy Kyle show”

“Explosive, terrorist style. Don’t think I’m a blow I’ll get you therapy now woah”

“I ride this mothafucking beat like a tractor. I ride this mothafucking beat like a train”

“Zoom zoom down the rabbit hole. More women more alcohol here”

So “Jeremy Kyle” is censored, which I can understand because his show is truly worthy of the “Jeremy Vile” nickname. The word “terrorist” is censored, which seems a little odd to me. The song is not promoting terrorism, so it hardly seems worth messing with. What is surprising is that “mothafucking” (not my spelling) is not blanked. Well, to be fair, they do play some traffic noises over it, but it is still clearly audible in the chorus. That seems a little strange for an ad that runs throughout the day, especially since this was blanked on the radio edit of the song. As for the last quote, I assume somebody at Google understood what that meant before they OKed it in a national advert, right? :)

Censorship is a very tricky subject, because as this advert shows, we all have different opinions on what should (if anything) be censored.

Cheers

Tim…




Friday Philosophy – Tainted by the Team

A while ago whilst working on one project, a colleague came back to his desk next to mine and exclaimed “I hate working with that team! – they are so bad that it makes everyone who works with them look incompetent!”

Now there is often an argument to be made that working with people who are not good at their job can be great for you, as you always look good in comparison {it’s like the old adage about hanging around with someone less attractive than you – but I’ve never found anyone I can do that with…}. It is to an extent true of course, and though it can seem a negative attitude, it is also an opportunity to teach these people and help them improve, so everyone is potentially a winner. I actually enjoy working with people who are clueless, so long as they will accept the clues. You leave them in a better state than when you joined them.

However, my friend was in the situation where the team he was dealing with was so lacking in the skills required that if you provided them with code that worked as specified, which passed back the values stated in the correct format derived from the database with the right logic… their application code would still fall over with exceptions – because it was written to a very, very “strict” interpretation of the spec.

In one example, the specification for a module included a “screen shot” showing 3 detail items being displayed for the parent object. So the application team had written code to accept only up to 3 detail items. Any more and it would crash. Not error, crash. The other part of the application, which the same people in the application team had also written, would let you create as many detail items for the parent as you liked. The data model stated there could be many more than 3 detail items. I suppose you could argue that the specification for the module failed to state “allow more than three items” – but there was a gap in the screen to allow more data, there was the data model and there was the wider concept of the application. In a second example, the same PL/SQL package was used to populate a screen in several modes. Depending on the mode, certain fields were populated or not. The application however would fail if the variables for these unused fields were null. Or it would fail if they were populated. The decision for each one depended on the day that bit of the module had been written, it would seem. *sigh*

The situation was made worse by the team manager being a skilled political animal, whose first reaction was always to try to shift any blame to any and all other teams. In the above examples he immediately tried to lay the blame with my colleague and then with the specification, but my colleague had managed to interpret the spec fine (he did the outrageous thing of asking questions when he was not sure, or checking the data model). Further, this manager did not seem to like his people asking us questions, as he felt it would make it look like they did not know what they were doing. Oddly enough, they did NOT know what they were doing. Anyway, as a consequence of the manager’s hostile attitude, the opportunity to actually teach the poor staff was strictly limited.

That was really the root of the problem, the manager. It was not the fault of the team members that they could not do the job – they had not had proper training, were unpracticed in the skills, siloed into their team, not encouraged to think beyond the single task in front of them, and there was no one available to show them any better. The issue was that they were being made to do work they were not able to do. The problem, to my mind, was with the manager and with the culture of that part of the organisation that did not deal with that manager. He obviously did not believe that rule one of a good manager is to look after the best interests of your team; his rule one was to protect his own backside.

But the bottom line was that this team was so bad that anything they were involved in was a disaster, and no one wants to be part of a disaster. If you worked with them, you were part of the disaster. So we took the pragmatic approach: when they had the spec wrong, if we could alter our code to cope, we would alter our code. And document that. It gave us a lot of work and we ended up having a lot of “bugs” allocated to our team, but it got the app out almost on time. On-going maintenance could be a bit of an issue, but we did what we could on our side to spell out the oddities.

I still know my friend from above and he still can’t talk about it in the pub without getting really quite agitated :-)

I want to be a simplifier – thanks Steve Jobs

Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.

- Albert Einstein

I just turned up the short blog/article below on Steve Jobs. Steve Jobs has always been an inspiration for me, even if I was a bit jealous of Apple at times. Apple never interested me as a workplace because, for one, I didn’t feel I had the experience to stand out there. Even at 21 years old, when I graduated in 1988, four years after the Macintosh came out, I felt Apple was already too big for me to make a difference, especially when it attracted the best and the brightest designers. I wasn’t a designer. I wasn’t a code geek either. I wasn’t even a computer scientist for that matter. I never knew what I was. But now I know – I’m a simplifier, or at least that’s what I want to be. I want to make things easy to understand and I want to make things that engage the user in understanding. Thanks to Steve Jobs, who crystallized this insight about simplification, for the years of inspiration.

 

John Sculley on Steve Jobs:

Steve said, “If I asked someone who had only used a personal calculator what a Macintosh should be like, they couldn’t have told me. There was no way to do consumer research on it, so I had to go and create it, and then show it to people, and say now what do you think?”

A friend of mine was at meetings at Apple and Microsoft on the same day. … . He went into the Apple meeting (he’s a vendor for Apple), and as soon as the designers walked in the room, everyone stopped talking, because the designers are the most respected people in the organization. Everyone knows the designers speak for Steve because they have direct reporting to him. It is only at Apple where design reports directly to the CEO.

Later in the day he was at Microsoft. When he went into the Microsoft meeting, everybody was talking and then the meeting starts and no designers ever walk into the room. All the technical people are sitting there trying to add their ideas of what ought to be in the design. That’s a recipe for disaster.

He’s a minimalist and constantly reducing things to their simplest level. It’s not simplistic. It’s simplified. Steve is a systems designer. He simplifies complexity.

- John Sculley

 


“If I’d asked my customers what they wanted, they’d have said a faster horse.” – Henry Ford*
* thanks Robyn, from: We’ll miss you Steve

 

 

Finding non-default configuration settings in SQL Server

In our Remote DBA practice, we frequently perform comprehensive system reviews for our customers on their database services. Among the things we always check for are non-default settings for the database software. We want to validate that any non-default setting is set that way for a good reason, and that any setting that is default really should be that way.

In Oracle, this is easy.  The gv$parameter view has a column, ISDEFAULT, that allows a simple SQL query to show which settings are set to non-default values.
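
For example, a query along the following lines (a minimal sketch) lists every parameter that has been changed from its default:

select name, value
from   gv$parameter
where  isdefault = 'FALSE'
order  by name;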

It’s not so easy in SQL Server. There is a view, master.sys.configurations, but it has no column that indicates whether a setting is at its default or has been modified.

That’s why I was pleased to find that Michael Campbell came up with a good solution that hard codes known default values into a script that works for SQL Server 2008 and up.

The style of insert used in the above script doesn’t work for SQL Server versions lower than 2008, so I made slight changes to allow it to work on 2005 and lower.  Here it is, with full attribution to Michael Campbell for developing the underlying script and technique. I can’t guarantee all the default values are valid for earlier versions, but the script runs and shows results for SQL Server 2005.

-- Server Configuration (find any non-standard settings).
--   Default values are for SQL Server 2008; this version also runs on 2005.
DECLARE @config_defaults TABLE (
    name nvarchar(35),
    default_value sql_variant
)

INSERT INTO @config_defaults (name, default_value) VALUES ('access check cache bucket count',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('access check cache quota',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('Ad Hoc Distributed Queries',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity I/O mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity64 I/O mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity64 mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('Agent XPs',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('allow updates',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('awe enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('backup compression default',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('blocked process threshold (s)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('c2 audit mode',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('clr enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('common criteria compliance enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('cost threshold for parallelism',5)
INSERT INTO @config_defaults (name, default_value) VALUES ('cross db ownership chaining',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('cursor threshold',-1)
INSERT INTO @config_defaults (name, default_value) VALUES ('Database Mail XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('default full-text language',1033)
INSERT INTO @config_defaults (name, default_value) VALUES ('default language',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('default trace enabled',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('disallow results from triggers',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('EKM provider enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('filestream access level',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('fill factor (%)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft crawl bandwidth (max)',100)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft crawl bandwidth (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft notify bandwidth (max)',100)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft notify bandwidth (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('index create memory (KB)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('in-doubt xact resolution',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('lightweight pooling',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('locks',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('max degree of parallelism',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('max full-text crawl range',4)
INSERT INTO @config_defaults (name, default_value) VALUES ('max server memory (MB)',2147483647)
INSERT INTO @config_defaults (name, default_value) VALUES ('max text repl size (B)',65536)
INSERT INTO @config_defaults (name, default_value) VALUES ('max worker threads',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('media retention',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('min memory per query (KB)',1024)
-- NOTE: SQL Server may change the min server
--   memory value 'in flight' in some environments
--    so it may commonly show up as being 'non default'
INSERT INTO @config_defaults (name, default_value) VALUES ('min server memory (MB)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('nested triggers',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('network packet size (B)',4096)
INSERT INTO @config_defaults (name, default_value) VALUES ('Ole Automation Procedures',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('open objects',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('optimize for ad hoc workloads',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('PH timeout (s)',60)
INSERT INTO @config_defaults (name, default_value) VALUES ('precompute rank',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('priority boost',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('query governor cost limit',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('query wait (s)',-1)
INSERT INTO @config_defaults (name, default_value) VALUES ('recovery interval (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote access',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote admin connections',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote login timeout (s)',20)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote proc trans',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote query timeout (s)',600)
INSERT INTO @config_defaults (name, default_value) VALUES ('Replication XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('scan for startup procs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('server trigger recursion',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('set working set size',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('show advanced options',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('SMO and DMO XPs',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('SQL Mail XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('transform noise words',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('two digit year cutoff',2049)
INSERT INTO @config_defaults (name, default_value) VALUES ('user connections',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('user options',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('xp_cmdshell',0)

SELECT c.name, value, value_in_use, d.default_value
from sys.configurations c
INNER JOIN @config_defaults d ON c.name = d.name
where
    c.value != c.value_in_use
    OR c.value_in_use != d.default_value
go
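
To sanity-check the script, you can deliberately change a setting and run the query again; a hypothetical example using the standard sp_configure interface:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'optimize for ad hoc workloads', 1
RECONFIGURE
GO
-- the query above should now report 'optimize for ad hoc workloads'
-- with a value_in_use of 1 against a default of 0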

Related posts:

  1. Finding thread IDs and names of SQL Server background threads
  2. SQL Server
  3. Deriving Optimal Configuration Values Using 11g Database Replay

We'll miss you Steve

As Steve Jobs resigns his role at Apple, perhaps the most appropriate send-off is remembering another innovator/inventor who changed our lives, while we recognize that delivering on an acknowledged need isn’t innovation.
In the words of Henry Ford:
'If I’d asked my customers what they wanted, they’d have said a faster horse.'
If Jobs had asked you what you wanted in Y2K, what would you have told him?

Bitmap Index

A few days ago I got an email about a problem with a system that would use a bitmap index to satisfy a query but wouldn’t use the equivalent B-tree index – and the DBA wanted to make the switch because he wanted to downgrade from Enterprise Edition to Standard Edition.

In outline, there was a table with a mapping column defined as varchar2(N) holding a string of 1s and 0s representing a bit mask. Typically each map value had about 800 rows associated with it, and the users’ queries were all about matching bits using the utl_raw.bit_and() and utl_raw.bit_or() functions.

My response was that the only surprise was that Oracle was using the bitmap index, not that it wasn’t using the B-tree index, as it seemed that the only way the index could be used was with an index fast full scan. I was curious, so I said I’d take a look at the query, the object definitions, the plan, and the 10053 trace file if the DBA cared to send them to me.

It turned out that I was correct – the index fast full scan was the plan used by the bitmap index because the queries were of the form:

select  count(*)
from    tableX t0
where
        utl_raw.bit_and(t0.mapping, '0000.....001') = '0000.....001'
;

But, as the DBA pointed out, Oracle didn’t even consider this plan when he changed the bitmap to a B-tree. Why not? For the same old reason that Oracle often surprises people by ignoring indexes – the column was not declared as NOT NULL, which means there could be rows in the table that are not in the B-tree index, so the index was not a valid target for comparison. (In this case the human eye can see that this is irrelevant, but the optimizer is blindly following a heuristic – or rule – at this point.)

Key point: Oracle’s standard B-tree indexes do not hold index entries that are completely null. Bitmap indexes (and cluster indexes) do have entries for the nulls.
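
A quick way to see this in action (a sketch using a made-up table, not the original system):

create table t1 (mapping varchar2(10));
insert into t1 values ('0101');
insert into t1 values (null);
commit;

create index t1_i on t1(mapping);
-- the B-tree contains only one entry: the all-null row is not indexed,
-- so a count(*) of the table cannot legally be satisfied from t1_i alone

drop index t1_i;
create bitmap index t1_b on t1(mapping);
-- the bitmap index does have an entry for the null row,
-- so the optimizer can use it to answer select count(*) from t1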

Conan the Barbarian…

Being a former Arnie fan, I wasn’t holding out much hope for the remake of Conan the Barbarian. As it turns out I was pleasantly surprised. It wasn’t that bad really.

Jason Momoa is a decent Conan. He’s not physically as massive as Arnie was in his day, but he sure looks like it would hurt if he punched you. He has a face built for sneering and scowling, which is pretty important in this role. Most of his dialog consists of grunting and throaty sentences that make Barry White sound like a Castrato.

The story is pretty average. It’s never going to win an Oscar, but it is suitable for a film of this genre. Just enough to link together the action scenes, with only a mild sprinkling of cheese here and there. The battlefield caesarean was true comedy, but I think it was meant to be serious.

The action scenes themselves were a little hit and miss. Some worked really well, but some were a little blurry and shaky. Some of the scenes were cut badly, making it hard to follow the thread of the action. On occasion it got a little hard to see what was going on and I found myself losing interest.

Despite everything, it was a nice bit of moronic fun. I must buy the Arnie films on DVD… :)

Cheers

Tim…




Essential tools for Exadata performance experiments

As I said in a previous post, I have started working in the Exadata performance field, which is really exciting, especially once you get things to work really fast!

Finding out what your session is spending its time on is also important if you are just getting started.

I found the following indispensable tools for Exadata performance analysis:

  • Session Snapper from Tanel Poder, available from his website
  • The Tuning and Diagnostics Pack license
  • The Real Time SQL Monitor package
  • A good understanding of DBMS_XPLAN
  • Oracle 10046 traces
  • collectl – I have blogged about it before here and mention it for reference only

There are probably a lot more than those, but these are the bare essentials. Let’s have a look at them in a bit more detail.

Snapper

I don’t think I need to waste a single word on snapper – it’s an established tool for performance diagnostics, taught by Tanel in his advanced troubleshooting course. I use snapper primarily for analysis of smart scans. Below is an example for a session using serial execution to scan a table (best viewed with Firefox):

SQL> select count(1) from order_items;

SQL> @snapper all 5 1 243
Sampling SID 243 with interval 5 seconds, taking 1 snapshots...
setting stats to all due option = all

-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

-------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
243, SOE       , STAT, session logical reads                                     ,     95.02k,        19k,
243, SOE       , STAT, user I/O wait time                                        ,        331,       66.2,
243, SOE       , STAT, non-idle wait time                                        ,        332,       66.4,
243, SOE       , STAT, non-idle wait count                                       ,      1.51k,      302.2,
243, SOE       , STAT, physical read total IO requests                           ,        776,      155.2,
243, SOE       , STAT, physical read total multi block requests                  ,        746,      149.2,
243, SOE       , STAT, physical read requests optimized                          ,         30,          6,
243, SOE       , STAT, physical read total bytes                                 ,    778.67M,    155.73M,
243, SOE       , STAT, cell physical IO interconnect bytes                       ,    778.67M,    155.73M,
243, SOE       , STAT, consistent gets                                           ,     95.06k,     19.01k,
243, SOE       , STAT, consistent gets from cache                                ,          7,        1.4,
243, SOE       , STAT, consistent gets from cache (fastpath)                     ,          7,        1.4,
243, SOE       , STAT, consistent gets direct                                    ,     95.05k,     19.01k,
243, SOE       , STAT, physical reads                                            ,     95.05k,     19.01k,
243, SOE       , STAT, physical reads direct                                     ,     95.05k,     19.01k,
243, SOE       , STAT, physical read IO requests                                 ,        776,      155.2,
243, SOE       , STAT, physical read bytes                                       ,    778.67M,    155.73M,
243, SOE       , STAT, calls to kcmgcs                                           ,          7,        1.4,
243, SOE       , STAT, file io wait time                                         ,        394,       78.8,
243, SOE       , STAT, Number of read IOs issued                                 ,        776,      155.2,
243, SOE       , STAT, no work - consistent read gets                            ,     95.15k,     19.03k,
243, SOE       , STAT, cell flash cache read hits                                ,         30,          6,
243, SOE       , TIME, DB CPU                                                    ,      1.96s,   391.94ms,    39.2%, |@@@@      |
243, SOE       , TIME, sql execute elapsed time                                  ,         6s,       1.2s,   120.0%, |@@@@@@@@@@|
243, SOE       , TIME, DB time                                                   ,         6s,       1.2s,   120.0%, |@@@@@@@@@@|
243, SOE       , WAIT, direct path read                                          ,       3.3s,   660.48ms,    66.0%, |@@@@@@@   |
243, SOE       , WAIT, kfk: async disk IO                                        ,        5ms,        1ms,      .1%, |          |
--  End of Stats snap 1, end=2011-08-23 16:20:03, seconds=5

-----------------------------------------------------------------------
Active% | SQL_ID          | EVENT                     | WAIT_CLASS
-----------------------------------------------------------------------
74% | 1p7y7pvzmxx3y   | direct path read          | User I/O
26% | 1p7y7pvzmxx3y   | ON CPU                    | ON CPU

--  End of ASH snap 1, end=2011-08-23 16:20:03, seconds=5, samples_taken=47

Watch out for the cell smart entries to figure out if a smart scan is happening. In the above example, there wasn’t one.

The same query again, but this time making use of smart scan offloading:

SQL> select /*+full(t)*/ count(1) from order_items t;

SQL> @snapper all 5 1 243
Sampling SID 243 with interval 5 seconds, taking 1 snapshots...
setting stats to all due option = all

-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

-------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
243, SOE       , STAT, session logical reads                                     ,    427.69k,     85.54k,
243, SOE       , STAT, user I/O wait time                                        ,        165,         33,
243, SOE       , STAT, non-idle wait time                                        ,        167,       33.4,
243, SOE       , STAT, non-idle wait count                                       ,      3.52k,        703,
243, SOE       , STAT, enqueue waits                                             ,         12,        2.4,
243, SOE       , STAT, enqueue requests                                          ,         10,          2,
243, SOE       , STAT, enqueue conversions                                       ,         15,          3,
243, SOE       , STAT, enqueue releases                                          ,         10,          2,
243, SOE       , STAT, global enqueue gets sync                                  ,         25,          5,
243, SOE       , STAT, global enqueue releases                                   ,         10,          2,
243, SOE       , STAT, physical read total IO requests                           ,      4.56k,      911.6,
243, SOE       , STAT, physical read total multi block requests                  ,      4.31k,      862.2,
243, SOE       , STAT, physical read total bytes                                 ,       3.5G,     700.6M,
243, SOE       , STAT, cell physical IO interconnect bytes                       ,      1.52G,    303.95M,
243, SOE       , STAT, ges messages sent                                         ,         12,        2.4,
243, SOE       , STAT, consistent gets                                           ,    427.54k,     85.51k,
243, SOE       , STAT, consistent gets from cache                                ,         75,         15,
243, SOE       , STAT, consistent gets from cache (fastpath)                     ,         75,         15,
243, SOE       , STAT, consistent gets direct                                    ,    427.47k,     85.49k,
243, SOE       , STAT, physical reads                                            ,    427.47k,     85.49k,
243, SOE       , STAT, physical reads direct                                     ,    427.47k,     85.49k,
243, SOE       , STAT, physical read IO requests                                 ,      4.56k,        911,
243, SOE       , STAT, physical read bytes                                       ,       3.5G,    700.36M,
243, SOE       , STAT, calls to kcmgcs                                           ,         75,         15,
243, SOE       , STAT, file io wait time                                         ,     17.45k,      3.49k,
243, SOE       , STAT, cell physical IO bytes eligible for predicate offload     ,      3.49G,       699M,
243, SOE       , STAT, cell smart IO session cache lookups                       ,          5,          1,
243, SOE       , STAT, cell smart IO session cache hits                          ,          5,          1,
243, SOE       , STAT, cell physical IO interconnect bytes returned by smart scan,      1.52G,    303.25M,
243, SOE       , STAT, cell session smart scan efficiency                        ,         -3,        -.6,
243, SOE       , STAT, table scans (long tables)                                 ,          5,          1,
243, SOE       , STAT, table scans (direct read)                                 ,          5,          1,
243, SOE       , STAT, table scan rows gotten                                    ,    129.59M,     25.92M,
243, SOE       , STAT, table scan blocks gotten                                  ,    422.53k,     84.51k,
243, SOE       , STAT, cell scans                                                ,          5,          1,
243, SOE       , STAT, cell blocks processed by cache layer                      ,    501.97k,    100.39k,
243, SOE       , STAT, cell blocks processed by txn layer                        ,    501.97k,    100.39k,
243, SOE       , STAT, cell blocks processed by data layer                       ,     426.9k,     85.38k,
243, SOE       , STAT, cell blocks helped by minscn optimization                 ,    501.91k,    100.38k,
243, SOE       , STAT, cell simulated session smart scan efficiency              ,       3.5G,    699.17M,
243, SOE       , STAT, cell IO uncompressed bytes                                ,       3.5G,    699.17M,
243, SOE       , TIME, DB CPU                                                    ,      3.98s,   796.68ms,    79.7%, |@@@@@@@@  |
243, SOE       , TIME, sql execute elapsed time                                  ,      6.31s,      1.26s,   126.2%, |@@@@@@@@@@|
243, SOE       , TIME, DB time                                                   ,      6.31s,      1.26s,   126.2%, |@@@@@@@@@@|
243, SOE       , WAIT, enq: KO - fast object checkpoint                          ,     2.01ms,      402us,      .0%, |          |
243, SOE       , WAIT, cell smart table scan                                     ,      1.65s,   329.33ms,    32.9%, |@@@@      |
243, SOE       , WAIT, events in waitclass Other                                 ,     5.69ms,     1.14ms,      .1%, |          |
--  End of Stats snap 1, end=2011-08-23 16:21:16, seconds=5

-----------------------------------------------------------------------
Active% | SQL_ID          | EVENT                     | WAIT_CLASS
-----------------------------------------------------------------------
66% | adhm3mbjfzysd   | ON CPU                    | ON CPU
34% | adhm3mbjfzysd   | cell smart table scan     | User I/O

--  End of ASH snap 1, end=2011-08-23 16:21:16, seconds=5, samples_taken=44

The “enq: KO - fast object checkpoint” and “cell smart table scan” waits are giveaways for a smart scan.

DBMS_XPLAN

Another proven, useful tool in the arsenal of the performance analyst. For Exadata you might want to use the display_cursor function. It can take a sql_id, a child number and a format parameter. I like the format ‘ALL’ best, as it gives me the most information about a statement.

Note that even though you might get the word “storage” in the execution plan, it doesn’t mean you actually see a smart scan in the end! Always check the session wait events and session counters for the word “smart” to make sure one is occurring.

SQL> select * from table(dbms_xplan.display_cursor('3xjhbw9m5u9qu',format=>'ALL'))
2  /

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
SQL_ID  3xjhbw9m5u9qu, child number 0
-------------------------------------
select count(*) from order_items where product_id > 10

Plan hash value: 2209137760

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |       |       |   249K(100)|          |
|   1 |  SORT AGGREGATE               |                 |     1 |     4 |            |          |
|*  2 |   INDEX STORAGE FAST FULL SCAN| ITEM_PRODUCT_IX |   338M|  1293M|   249K  (2)| 00:50:00 |
-------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
2 - SEL$1 / ORDER_ITEMS@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("PRODUCT_ID">10)
filter("PRODUCT_ID">10)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - (#keys=0) COUNT(*)[22]

31 rows selected.
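
The plan above mentions “storage”, but to confirm that offloading actually took place you can look for non-zero smart scan counters in the session’s statistics; a minimal sketch, reusing SID 243 from the snapper examples:

select sn.name, ss.value
from   v$statname sn, v$sesstat ss
where  sn.statistic# = ss.statistic#
and    ss.sid = 243
and    sn.name like 'cell%smart%'
and    ss.value > 0;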

This leads me to the next tool, SQL Monitoring, and it rocks.

Real Time SQL Monitoring

This is yet another awesome tool in your toolkit, if you have the license. It allows you to check what’s happening during the execution of a SQL statement, which is more than useful in larger data warehouse style queries.

The easiest and most accessible way is to use OEM Grid Control or database console to view the report. I personally like to run the report in a putty session on a large screen. To make it easier to run the report based on a SQL ID I created the below script “report.sql”:

set trim on long 2000000 longchunksize 2000000
set trimspool on  pagesize 0 echo off timing off linesize 1000
select DBMS_SQLTUNE.REPORT_SQL_MONITOR(
sql_id => '&1',
report_level=>'ALL')
from dual;

With this at hand you can query v$session for a SQL_ID and feed it into the code snippet.
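
A hypothetical run, using the SQL_ID from the display_cursor example above:

SQL> select sid, sql_id from v$session where status = 'ACTIVE' and sql_id is not null;
SQL> @report.sql 3xjhbw9m5u9qu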

Unfortunately I can’t upload text files to wordpress for security reasons, and posting the output here isn’t possible since the report is rather wide. I decided to run the report with report type “EM” instead and post the screenshots. The command I used is shown below:

SQL> select dbms_sqltune.report_sql_monitor(sql_id=>'&1', type=>'EM', report_level=>'+histogram') from dual;

The output of the script would be far too wide to post, so I have attached the EM report output here; it was taken for an insert /*+ append */ into table … select … statement.

You can even use this to look at statistics gathering: in the two examples below I executed dbms_stats, and the report was created for a stats gathering session with a DOP of 16.

What I like best about the monitoring report is that it points out where you are in the execution plan, and it provides an estimate of how much work has already been done. More intelligent people have written lots about using these tools; off the top of my head I’d check Greg Rahn’s and Doug Burns’s weblogs for more information.

Oracle 10046 trace

Cary Millsap has written the standard work on extended trace files, and I strongly recommend reading his book and papers on the subject.

Whilst you can run queries in isolation and easily identify them during testing, the matter gets considerably more difficult as soon as you have to trace a session coming from a connection pool or other third-party application. Most of them are not instrumented, which makes tracing an individual session difficult.

Of course you could use the MRTools to enable tracing on a wider scale and then trawl through the trace files with their help, but it might be difficult to get permission to do so on a busy production system.

The lesson learned for me was how to enable tracing in parallel query: the MOS note “Master Note: How to Get a 10046 trace for a Parallel Query [ID 1102801.1]” has the answer. It becomes more difficult if your PQ spans instances, but there are ways around this. You should also consider using DBMS_MONITOR to enable session trace if you can’t use “alter session set events” for some reason. Be sure to read all of the note, especially the bit at the bottom which explains the new 11g interface using the “alter session set sql_trace” syntax.

I personally liked setting the tracefile identifier to something more easily identifiable. ADRCI has a nice option to show trace files: if your ADR is consolidated in a cluster file system you could simply run “adrci> show tracefile -t” to list them all and pick the ones with your custom tracefile identifier.
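
For a serial session, the combination looks something like this (a sketch; the SID and serial# in the DBMS_MONITOR calls are made-up values):

-- in your own session:
alter session set tracefile_identifier = 'exa_perf_test';
alter session set events '10046 trace name context forever, level 8';
-- ... run the statement under test ...
alter session set events '10046 trace name context off';

-- or for another (serial) session, via DBMS_MONITOR:
exec dbms_monitor.session_trace_enable(session_id => 243, serial_num => 1234, waits => true, binds => false)
exec dbms_monitor.session_trace_disable(session_id => 243, serial_num => 1234)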

What about command line utilities?

In the past I liked to use strace to see what a process was doing while “on cpu”. Unfortunately this isn’t really an option with Exadata, as most of the calls are purely network related. After all, RDS is a network protocol. Consider this output from strace during a 10 second query (taken on a database node):

#  strace -cp 29031
Process 29031 attached - interrupt to quit
Process 29031 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 88.85    0.033064           2     14635         5 poll
  5.47    0.002035           0     14754      7377 setsockopt
  2.68    0.000999         999         1           munmap
  2.43    0.000906           0     22075      7321 recvmsg
  0.56    0.000209           0      7399           sendmsg
  0.00    0.000000           0         8         6 read
  0.00    0.000000           0         2           write
  0.00    0.000000           0         3           mmap
  0.00    0.000000           0        60           rt_sigprocmask
  0.00    0.000000           0        15         5 rt_sigreturn
  0.00    0.000000           0        30           setitimer
  0.00    0.000000           0         1           semctl
  0.00    0.000000           0       262           getrusage
  0.00    0.000000           0       112           times
  0.00    0.000000           0         1           io_setup
  0.00    0.000000           0         1           io_destroy
  0.00    0.000000           0        23           semtimedop
------ ----------- ----------- --------- --------- ----------------
100.00    0.037213                 59382     14714 total

Not too enlightening: all the I/O happens in the cells.

Summary

These are the tools I found very useful in getting started with Exadata performance analysis, and they are very good for the job at hand. Note that snapper can be used to get information about parallel query, and SQL Monitor is a blessing when it comes to looking at huge parallel queries involving lots of tables.

Next presentations

I’ve got a couple of presentations coming up.

The first is at the UKOUG Management and Infrastructure SIG on Tuesday 20th September. I chair this SIG and it is all about how to manage Oracle when you have lots of databases, very big databases or a particular need to know a bit more about the rest of the IT Infrastructure. You can find the schedule here. The aim is to talk at a technical level but not the deep-dive of some of the RDBMS or HA presentations as we want to cover a wider brief.

As I say, one thing we do is look at the hardware your Oracle databases and application servers need to run on. This meeting we have Steve Shaw from Intel talking about getting the most out of your Xeon-based servers, but the general concepts apply to other platforms. If you are old enough, you will remember how you used to set up “HIMEM.SYS” and “EMM386.SYS” and try to keep as much of the first 640K of your memory free. You might even have got down and dirty with your BIOS settings. We did it because the performance boosts were significant. Well, we don’t do that sort of thing anymore, and Steve’s talk will probably make you want to! It still is a free way to get more out of your hardware.

Piet de Visser is also coming along and I always really enjoy his presentations. This time he is covering something of interest/concern to many of us – Outsourcing. I think that will be a pretty lively session.

I’m presenting as well, with Neil Chandler, on the topic of how deep you should dive when solving technical issues: to 10046 trace or not.

We meet in Oracle’s city office, so it is handy for anyone in or around London, or for anyone coming in from north of London (the office is a 5 minute walk from Liverpool Street Station and three stops along the underground from King’s Cross St Pancras). We’ve still got to finalise one or two agenda slots, but they will be real-world talks about Enterprise Manager/Grid Control. One fixed item on the agenda is that those who wish can retire to a pub afterwards to continue the discussions.

You may have noticed the little logo for the UKOUG TEBS conference appearing at the right of this blog. The agenda is not quite public yet so I cannot say too much, but I will be presenting at the event, on Index Organized Tables. I’ll be showing demonstrations of some of the things I have been blogging about and expanding on areas, joining it all up into one session. I might also be presenting on database design, but that talk is being discussed and finalised at present. The UKOUG team have a lot of things to pull together and organise just for the presentations, never mind the rest of the conference such as the exhibition hall, catering, registration etc. I’ve been involved in helping with the agenda organisation this year, in a minor way, so I’ve seen it all from the inside.

The TEBS conference is, for me, the meeting highlight of the year. I’ve been to Oracle Open World and, interesting though it is, with some fabulous presentations and events, it is just too big, too corporate and too on-message for me. Oh, I’d go again when the opportunity arises, but for me the UKOUG TEBS conference has a much better feel to it. You still get a load of great talks from Oracle about the latest and greatest, but you also get loads and loads of talks by many of the best in the field, and those talks are independent – no pressure to be upbeat about Oracle or avoid any negative messages. In fact, if you honestly feel something in Oracle is worth avoiding, you are free to present and say “Don’t Do This!” :-)

I had planned to go to more of the European conferences this year, but I did not get myself organised. For me, as an independent consultant, I need to present to justify the trip, and I keep failing to get the submissions in on time.