Oakies Blog Aggregator

I want to be a simplifier – thanks Steve Jobs

Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.

                                             Albert Einstein

I just turned up the short blog/article below on Steve Jobs. Steve Jobs has always been an inspiration for me, even if I was a bit jealous of Apple at times. Apple never interested me as a workplace: for one, I didn’t feel I had the experience to stand out there, and even at 21 years old, when I graduated in 1988, four years after the Macintosh came out, I felt Apple was already too big for me to make a difference, especially when they attracted the best and brightest designers. I wasn’t a designer. I wasn’t a code geek either. I wasn’t even a computer scientist, for that matter. I never knew what I was. But now I know: I’m a simplifier, or at least that’s what I want to be. I want to make things easy to understand, and I want to make things that engage the user in understanding. Thanks to Steve Jobs, who crystallized this insight about simplification, and for years of inspiration.

 

John Sculley on Steve Jobs:

Steve said, “If I asked someone who had only used a personal calculator what a Macintosh should be like, they couldn’t have told me. There was no way to do consumer research on it, so I had to go and create it, and then show it to people, and say now what do you think?”

A friend of mine was at meetings at Apple and Microsoft on the same day. … . He went into the Apple meeting (he’s a vendor for Apple), and as soon as the designers walked in the room, everyone stopped talking, because the designers are the most respected people in the organization. Everyone knows the designers speak for Steve because they have direct reporting to him. It is only at Apple where design reports directly to the CEO.

Later in the day he was at Microsoft. When he went into the Microsoft meeting, everybody was talking and then the meeting starts and no designers ever walk into the room. All the technical people are sitting there trying to add their ideas of what ought to be in the design. That’s a recipe for disaster.

He’s a minimalist and constantly reducing things to their simplest level. It’s not simplistic. It’s simplified. Steve is a systems designer. He simplifies complexity.

- John Sculley

 


“If I’d asked my customers what they wanted, they’d have said a faster horse.” – Henry Ford*
* thanks Robyn, from: We’ll miss you Steve

 

 

Finding non-default configuration settings in SQL Server

In our Remote DBA practice, we frequently perform comprehensive system reviews for our customers on their database services. Among the things we always check for are non-default settings for the database software. We want to validate that any non-default setting is set that way for a good reason, and that any setting that is default really should be that way.

In Oracle, this is easy.  The gv$parameter view has a column, ISDEFAULT, that allows a simple SQL query to show which settings are set to non-default values.
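A minimal sketch of such a query (assuming the standard gv$parameter columns; adjust the select list to taste):

```sql
-- Show parameters changed from their defaults, across all instances
select inst_id, name, value
from   gv$parameter
where  isdefault = 'FALSE'
order  by inst_id, name;
```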

It’s not so easy in SQL Server.  There is a view, master.sys.configurations, but it has no column to tell you whether a setting is still at its default or has been modified.
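You can still list the current values easily enough; the view just can’t tell you which ones were changed. For reference:

```sql
-- Current server configuration values; note there is no "is default" column
SELECT name, value, value_in_use, description
FROM master.sys.configurations
ORDER BY name;
```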

That’s why I was pleased to find that Michael Campbell came up with a good solution that hard codes known default values into a script that works for SQL Server 2008 and up.

The style of insert used in the above script doesn’t work for SQL Server versions lower than 2008, so I made slight changes to allow it to work on 2005 and lower.  Here it is, with full attribution to Michael Campbell for developing the underlying script and technique. I can’t guarantee all the default values are valid for earlier versions, but the script runs and shows results for SQL Server 2005.

-- Server Configuration (find any non-standard settings)
--        for SQL Server 2008.
DECLARE @config_defaults TABLE (
    name nvarchar(35),
    default_value sql_variant
)

INSERT INTO @config_defaults (name, default_value) VALUES ('access check cache bucket count',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('access check cache quota',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('Ad Hoc Distributed Queries',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity I/O mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity64 I/O mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('affinity64 mask',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('Agent XPs',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('allow updates',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('awe enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('backup compression default',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('blocked process threshold (s)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('c2 audit mode',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('clr enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('common criteria compliance enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('cost threshold for parallelism',5)
INSERT INTO @config_defaults (name, default_value) VALUES ('cross db ownership chaining',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('cursor threshold',-1)
INSERT INTO @config_defaults (name, default_value) VALUES ('Database Mail XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('default full-text language',1033)
INSERT INTO @config_defaults (name, default_value) VALUES ('default language',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('default trace enabled',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('disallow results from triggers',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('EKM provider enabled',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('filestream access level',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('fill factor (%)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft crawl bandwidth (max)',100)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft crawl bandwidth (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft notify bandwidth (max)',100)
INSERT INTO @config_defaults (name, default_value) VALUES ('ft notify bandwidth (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('index create memory (KB)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('in-doubt xact resolution',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('lightweight pooling',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('locks',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('max degree of parallelism',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('max full-text crawl range',4)
INSERT INTO @config_defaults (name, default_value) VALUES ('max server memory (MB)',2147483647)
INSERT INTO @config_defaults (name, default_value) VALUES ('max text repl size (B)',65536)
INSERT INTO @config_defaults (name, default_value) VALUES ('max worker threads',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('media retention',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('min memory per query (KB)',1024)
-- NOTE: SQL Server may change the min server
--   memory value 'in flight' in some environments
--    so it may commonly show up as being 'non default'
INSERT INTO @config_defaults (name, default_value) VALUES ('min server memory (MB)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('nested triggers',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('network packet size (B)',4096)
INSERT INTO @config_defaults (name, default_value) VALUES ('Ole Automation Procedures',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('open objects',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('optimize for ad hoc workloads',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('PH timeout (s)',60)
INSERT INTO @config_defaults (name, default_value) VALUES ('precompute rank',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('priority boost',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('query governor cost limit',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('query wait (s)',-1)
INSERT INTO @config_defaults (name, default_value) VALUES ('recovery interval (min)',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote access',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote admin connections',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote login timeout (s)',20)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote proc trans',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('remote query timeout (s)',600)
INSERT INTO @config_defaults (name, default_value) VALUES ('Replication XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('scan for startup procs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('server trigger recursion',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('set working set size',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('show advanced options',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('SMO and DMO XPs',1)
INSERT INTO @config_defaults (name, default_value) VALUES ('SQL Mail XPs',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('transform noise words',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('two digit year cutoff',2049)
INSERT INTO @config_defaults (name, default_value) VALUES ('user connections',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('user options',0)
INSERT INTO @config_defaults (name, default_value) VALUES ('xp_cmdshell',0)

SELECT c.name, value, value_in_use, d.default_value
from sys.configurations c
INNER JOIN @config_defaults d ON c.name = d.name
where
    c.value != c.value_in_use
    OR c.value_in_use != d.default_value
go


We'll miss you Steve

As Steve Jobs resigns his role at Apple, perhaps the most appropriate send-off is remembering another innovator and inventor who changed our lives, while we recognize that delivering on an acknowledged need isn't innovation.
In the words of Henry Ford:
'If I’d asked my customers what they wanted, they’d have said a faster horse.'
If Jobs had asked you what you wanted in Y2K, what would you have told him?

Bitmap Index

A few days ago I got an email about a problem with a system that would use a bitmap index to satisfy a query but wouldn’t use the equivalent B-tree index – and the DBA wanted to make the switch because he wanted to downgrade from Enterprise Edition to Standard Edition.

In outline, there was a table with a mapping column defined as varchar2(N) holding a string of 1s and 0s representing a bit mask. Typically each map value had about 800 rows associated with it, and the users’ queries were all about matching bits using the utl_raw.bit_and() and utl_raw.bit_or() functions.

My response was that the only surprise was that Oracle was using the bitmap index, not that it wasn’t using the B-tree index, since it seemed that the only way either index could be used was with an index fast full scan. I was curious, so I said I’d take a look at the query, the object definitions, the plan, and the 10053 trace file if the DBA cared to send them to me.

It turned out that I was correct – the index fast full scan was the plan used by the bitmap index because the queries were of the form:

select  count(*)
from    tableX t0
where
        utl_raw.bit_and(t0.mapping, '0000.....001') = '0000.....001'
;

But, as the DBA pointed out, Oracle didn’t even consider this plan when he changed the bitmap index to a B-tree. Why not? For the same old reason that Oracle often surprises people by ignoring indexes: the column was not declared NOT NULL, which means there could be rows in the table that are not in the B-tree index, so the index was not a valid target for comparison. (In this case the human eye can see that this is irrelevant, but the optimizer is blindly following a heuristic – or rule – at this point.)

Key point: Oracle’s standard B-tree indexes do not hold index entries that are completely null. Bitmap indexes (and cluster indexes) do have entries for the nulls.
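For completeness, a sketch of two standard ways to make the B-tree index a valid target again; the table and column names (tableX, mapping) are the stand-ins from the outline above:

```sql
-- Option 1: declare the column NOT NULL, so every row must appear in the index
alter table tableX modify (mapping not null);

-- Option 2: add a constant as a second index column; the constant is never
-- null, so every row gets an index entry even when mapping is null
create index tablex_mapping_ix on tableX (mapping, 0);
```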

Conan the Barbarian…

Being a former Arnie fan, I wasn’t holding out much hope for the remake of Conan the Barbarian. As it turns out I was pleasantly surprised. It wasn’t that bad really.

Jason Momoa is a decent Conan. He’s not physically as massive as Arnie was in his day, but he sure looks like it would hurt if he punched you. He has a face built for sneering and scowling, which is pretty important in this role. Most of his dialog consists of grunting and throaty sentences that make Barry White sound like a Castrato.

The story is pretty average. It’s never going to win an Oscar, but it is suitable for a film of this genre: just enough to link together the action scenes, with only a mild sprinkling of cheese here and there. The battlefield Caesarean was true comedy, but I think it was meant to be serious.

The action scenes themselves were a little hit and miss. Some worked really well, but some were a little blurry and shaky. Some of the scenes were cut badly, making it hard to follow the thread of the action. On occasion it got a little hard to see what was going on and I found myself losing interest.

Despite everything, it was a nice bit of moronic fun. I must buy the Arnie films on DVD… :)

Cheers

Tim…




Essential tools for Exadata performance experiments

Like I said in a previous post, I have started working in the Exadata performance field, which is really exciting, especially once you get it to go really fast!

Also, finding out what your session is spending time on is important if you are just getting started.

I found the following indispensable tools for Exadata performance analysis:

  • Session Snapper from Tanel Poder, available from his website
  • The Tuning and Diagnostic Pack license
  • The Real Time SQL Monitor package
  • A good understanding of DBMS_XPLAN
  • Oracle 10046 traces
  • collectl – I have blogged about it before here and mention it for reference only

There are probably a lot more than those, but these are the bare essentials. Let’s have a look at them in a bit more detail.

Snapper

I don’t think I have to lose a single word about snapper: it’s an established tool for performance diagnostics, taught by Tanel in his advanced troubleshooting course. I use snapper primarily for analysis of smart scans. Below is an example for a session using serial execution to scan a table (best viewed with Firefox):

SQL> select count(1) from order_items;

SQL> @snapper all 5 1 243
Sampling SID 243 with interval 5 seconds, taking 1 snapshots...
setting stats to all due option = all

-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

-------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
243, SOE       , STAT, session logical reads                                     ,     95.02k,        19k,
243, SOE       , STAT, user I/O wait time                                        ,        331,       66.2,
243, SOE       , STAT, non-idle wait time                                        ,        332,       66.4,
243, SOE       , STAT, non-idle wait count                                       ,      1.51k,      302.2,
243, SOE       , STAT, physical read total IO requests                           ,        776,      155.2,
243, SOE       , STAT, physical read total multi block requests                  ,        746,      149.2,
243, SOE       , STAT, physical read requests optimized                          ,         30,          6,
243, SOE       , STAT, physical read total bytes                                 ,    778.67M,    155.73M,
243, SOE       , STAT, cell physical IO interconnect bytes                       ,    778.67M,    155.73M,
243, SOE       , STAT, consistent gets                                           ,     95.06k,     19.01k,
243, SOE       , STAT, consistent gets from cache                                ,          7,        1.4,
243, SOE       , STAT, consistent gets from cache (fastpath)                     ,          7,        1.4,
243, SOE       , STAT, consistent gets direct                                    ,     95.05k,     19.01k,
243, SOE       , STAT, physical reads                                            ,     95.05k,     19.01k,
243, SOE       , STAT, physical reads direct                                     ,     95.05k,     19.01k,
243, SOE       , STAT, physical read IO requests                                 ,        776,      155.2,
243, SOE       , STAT, physical read bytes                                       ,    778.67M,    155.73M,
243, SOE       , STAT, calls to kcmgcs                                           ,          7,        1.4,
243, SOE       , STAT, file io wait time                                         ,        394,       78.8,
243, SOE       , STAT, Number of read IOs issued                                 ,        776,      155.2,
243, SOE       , STAT, no work - consistent read gets                            ,     95.15k,     19.03k,
243, SOE       , STAT, cell flash cache read hits                                ,         30,          6,
243, SOE       , TIME, DB CPU                                                    ,      1.96s,   391.94ms,    39.2%, |@@@@      |
243, SOE       , TIME, sql execute elapsed time                                  ,         6s,       1.2s,   120.0%, |@@@@@@@@@@|
243, SOE       , TIME, DB time                                                   ,         6s,       1.2s,   120.0%, |@@@@@@@@@@|
243, SOE       , WAIT, direct path read                                          ,       3.3s,   660.48ms,    66.0%, |@@@@@@@   |
243, SOE       , WAIT, kfk: async disk IO                                        ,        5ms,        1ms,      .1%, |          |
--  End of Stats snap 1, end=2011-08-23 16:20:03, seconds=5

-----------------------------------------------------------------------
Active% | SQL_ID          | EVENT                     | WAIT_CLASS
-----------------------------------------------------------------------
74% | 1p7y7pvzmxx3y   | direct path read          | User I/O
26% | 1p7y7pvzmxx3y   | ON CPU                    | ON CPU

--  End of ASH snap 1, end=2011-08-23 16:20:03, seconds=5, samples_taken=47

Watch out for the cell smart entries to figure out if a smart scan is happening. In the above example, there wasn’t one.

The same query again, but this time making use of smart scan offloading:

SQL> select /*+full(t)*/ count(1) from order_items t;

SQL> @snapper all 5 1 243
Sampling SID 243 with interval 5 seconds, taking 1 snapshots...
setting stats to all due option = all

-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

-------------------------------------------------------------------------------------------------------------------------------------
SID, USERNAME  , TYPE, STATISTIC                                                 ,     HDELTA, HDELTA/SEC,    %TIME, GRAPH
-------------------------------------------------------------------------------------------------------------------------------------
243, SOE       , STAT, session logical reads                                     ,    427.69k,     85.54k,
243, SOE       , STAT, user I/O wait time                                        ,        165,         33,
243, SOE       , STAT, non-idle wait time                                        ,        167,       33.4,
243, SOE       , STAT, non-idle wait count                                       ,      3.52k,        703,
243, SOE       , STAT, enqueue waits                                             ,         12,        2.4,
243, SOE       , STAT, enqueue requests                                          ,         10,          2,
243, SOE       , STAT, enqueue conversions                                       ,         15,          3,
243, SOE       , STAT, enqueue releases                                          ,         10,          2,
243, SOE       , STAT, global enqueue gets sync                                  ,         25,          5,
243, SOE       , STAT, global enqueue releases                                   ,         10,          2,
243, SOE       , STAT, physical read total IO requests                           ,      4.56k,      911.6,
243, SOE       , STAT, physical read total multi block requests                  ,      4.31k,      862.2,
243, SOE       , STAT, physical read total bytes                                 ,       3.5G,     700.6M,
243, SOE       , STAT, cell physical IO interconnect bytes                       ,      1.52G,    303.95M,
243, SOE       , STAT, ges messages sent                                         ,         12,        2.4,
243, SOE       , STAT, consistent gets                                           ,    427.54k,     85.51k,
243, SOE       , STAT, consistent gets from cache                                ,         75,         15,
243, SOE       , STAT, consistent gets from cache (fastpath)                     ,         75,         15,
243, SOE       , STAT, consistent gets direct                                    ,    427.47k,     85.49k,
243, SOE       , STAT, physical reads                                            ,    427.47k,     85.49k,
243, SOE       , STAT, physical reads direct                                     ,    427.47k,     85.49k,
243, SOE       , STAT, physical read IO requests                                 ,      4.56k,        911,
243, SOE       , STAT, physical read bytes                                       ,       3.5G,    700.36M,
243, SOE       , STAT, calls to kcmgcs                                           ,         75,         15,
243, SOE       , STAT, file io wait time                                         ,     17.45k,      3.49k,
243, SOE       , STAT, cell physical IO bytes eligible for predicate offload     ,      3.49G,       699M,
243, SOE       , STAT, cell smart IO session cache lookups                       ,          5,          1,
243, SOE       , STAT, cell smart IO session cache hits                          ,          5,          1,
243, SOE       , STAT, cell physical IO interconnect bytes returned by smart scan,      1.52G,    303.25M,
243, SOE       , STAT, cell session smart scan efficiency                        ,         -3,        -.6,
243, SOE       , STAT, table scans (long tables)                                 ,          5,          1,
243, SOE       , STAT, table scans (direct read)                                 ,          5,          1,
243, SOE       , STAT, table scan rows gotten                                    ,    129.59M,     25.92M,
243, SOE       , STAT, table scan blocks gotten                                  ,    422.53k,     84.51k,
243, SOE       , STAT, cell scans                                                ,          5,          1,
243, SOE       , STAT, cell blocks processed by cache layer                      ,    501.97k,    100.39k,
243, SOE       , STAT, cell blocks processed by txn layer                        ,    501.97k,    100.39k,
243, SOE       , STAT, cell blocks processed by data layer                       ,     426.9k,     85.38k,
243, SOE       , STAT, cell blocks helped by minscn optimization                 ,    501.91k,    100.38k,
243, SOE       , STAT, cell simulated session smart scan efficiency              ,       3.5G,    699.17M,
243, SOE       , STAT, cell IO uncompressed bytes                                ,       3.5G,    699.17M,
243, SOE       , TIME, DB CPU                                                    ,      3.98s,   796.68ms,    79.7%, |@@@@@@@@  |
243, SOE       , TIME, sql execute elapsed time                                  ,      6.31s,      1.26s,   126.2%, |@@@@@@@@@@|
243, SOE       , TIME, DB time                                                   ,      6.31s,      1.26s,   126.2%, |@@@@@@@@@@|
243, SOE       , WAIT, enq: KO - fast object checkpoint                          ,     2.01ms,      402us,      .0%, |          |
243, SOE       , WAIT, cell smart table scan                                     ,      1.65s,   329.33ms,    32.9%, |@@@@      |
243, SOE       , WAIT, events in waitclass Other                                 ,     5.69ms,     1.14ms,      .1%, |          |
--  End of Stats snap 1, end=2011-08-23 16:21:16, seconds=5

-----------------------------------------------------------------------
Active% | SQL_ID          | EVENT                     | WAIT_CLASS
-----------------------------------------------------------------------
66% | adhm3mbjfzysd   | ON CPU                    | ON CPU
34% | adhm3mbjfzysd   | cell smart table scan     | User I/O

--  End of ASH snap 1, end=2011-08-23 16:21:16, seconds=5, samples_taken=44

The “enq: KO - fast object checkpoint” and “cell smart table scan” waits are giveaways for a smart scan.

DBMS_XPLAN

Another proven, useful tool in the arsenal of the performance analyst. For Exadata you might want to use the display_cursor function. It can take a sql_id, a child number and a format parameter. I like the format ‘ALL’ best, as it gives me the most information about a statement.

Note that even though you might get the word “storage” in the execution plan, it doesn’t mean you actually see a smart scan in the end! Always check the session wait events and session counters for the word “smart” to make sure one is occurring.

SQL> select * from table(dbms_xplan.display_cursor('3xjhbw9m5u9qu',format=>'ALL'))
2  /

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
SQL_ID  3xjhbw9m5u9qu, child number 0
-------------------------------------
select count(*) from order_items where product_id > 10

Plan hash value: 2209137760

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |       |       |   249K(100)|          |
|   1 |  SORT AGGREGATE               |                 |     1 |     4 |            |          |
|*  2 |   INDEX STORAGE FAST FULL SCAN| ITEM_PRODUCT_IX |   338M|  1293M|   249K  (2)| 00:50:00 |
-------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
2 - SEL$1 / ORDER_ITEMS@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("PRODUCT_ID">10)
filter("PRODUCT_ID">10)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - (#keys=0) COUNT(*)[22]

31 rows selected.

This leads me to the next tool, SQL Monitoring, and it rocks.

Real Time SQL Monitoring

This is yet another awesome tool at your disposal if you have the license. It allows you to check what’s happening during the execution of a SQL statement, which is more than useful for larger data warehouse style queries.

The easiest and most accessible way to view the report is to use OEM Grid Control or the database console. I personally like to run the report in a PuTTY session on a large screen. To make it easier to run the report for a given SQL ID, I created the script “report.sql” below:

set trim on long 2000000 longchunksize 2000000
set trimspool on  pagesize 0 echo off timing off linesize 1000
select DBMS_SQLTUNE.REPORT_SQL_MONITOR(
sql_id => '&1',
report_level=>'ALL')
from dual;

With this at hand you can query v$session for a SQL_ID and feed it into the script.
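For example (the SOE username is just an illustration; filter however suits your system):

```sql
-- Find the SQL_ID of the statement a session is currently running
select sid, serial#, username, sql_id
from   v$session
where  username = 'SOE'
and    sql_id is not null;

-- then: SQL> @report <sql_id>
```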

Unfortunately I can’t upload text files to WordPress for security reasons, and posting the output here isn’t possible since the report is rather wide. So I ran the report with the “EM” output type instead and posted the screenshots. The command I used is shown below:

SQL> select dbms_sqltune.report_sql_monitor(sql_id=>'&1', type=>'EM', report_level=>'+histogram') from dual;

The output of the script would be far too wide to post inline, so I moved the EM report type output into the attachment here; it was taken for an insert /*+ append */ into table select … statement.

You can even use this to look at statistics gathering: in the two examples below I executed dbms_stats, and the report was created for a stats gathering session with a DOP of 16.

What I like best about the monitoring report is that it points out where you are in the execution plan, and it provides an estimate of how much work has already been done. More intelligent people have written lots about using these tools; off the top of my head, I’d check Greg Rahn’s and Doug Burns’ weblogs for more information.

Oracle 10046 trace

Cary Millsap has written the standard work on extended trace files, and I strongly recommend reading his book and papers on the subject.

Whilst you can run queries in isolation and easily identify them during testing, the matter gets considerably more difficult as soon as you have to trace a session coming from a connection pool or other third party application. Most of these applications are not instrumented, which makes tracing a specific session difficult.

Of course you could use the MRTools to enable tracing on a wider scale and then trawl through the trace files with their help, but it might be difficult to get permission to do so on a busy production system.

The lesson I learned was how to enable tracing for parallel query: the MOS note “Master Note: How to Get a 10046 trace for a Parallel Query [ID 1102801.1]” has the answer. It becomes more difficult if your PQ spans instances, but there are ways around this. You should also consider using DBMS_MONITOR to enable session trace if you can’t use “alter session set events” for some reason. Be sure to read all of the note, especially the bit at the bottom which explains the new 11g interface using the “alter session set sql_trace” syntax.
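A sketch of the DBMS_MONITOR variant for the current session (leaving session_id and serial_num at their NULL defaults targets your own session):

```sql
-- Tag the trace file so it is easy to find later
alter session set tracefile_identifier = 'pq_test';

-- Enable extended trace (waits, no binds) for the current session
exec dbms_monitor.session_trace_enable(session_id => null, serial_num => null, waits => true, binds => false)

-- ... run the statement to be traced, then switch tracing off again
exec dbms_monitor.session_trace_disable(session_id => null, serial_num => null)
```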

I personally like setting the tracefile identifier to something more easily recognizable. ADRCI has a nice option to show trace files: if your ADR is consolidated on a cluster file system, you can simply run “show tracefile -t” from the adrci prompt to list them all and pick out the ones with your custom tracefile identifier.

What about command line utilities?

In the past I liked to use strace to see what a process was doing while “on cpu”. Unfortunately this isn’t really an option with Exadata, as most of the calls are purely network related. After all, RDS is a network protocol. Consider this output from strace during a 10 second query (taken on a database node):

#  strace -cp 29031
Process 29031 attached - interrupt to quit
Process 29031 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
88.85    0.033064           2     14635         5 poll
5.47    0.002035           0     14754      7377 setsockopt
2.68    0.000999         999         1           munmap
2.43    0.000906           0     22075      7321 recvmsg
0.56    0.000209           0      7399           sendmsg
0.00    0.000000           0         8         6 read
0.00    0.000000           0         2           write
0.00    0.000000           0         3           mmap
0.00    0.000000           0        60           rt_sigprocmask
0.00    0.000000           0        15         5 rt_sigreturn
0.00    0.000000           0        30           setitimer
0.00    0.000000           0         1           semctl
0.00    0.000000           0       262           getrusage
0.00    0.000000           0       112           times
0.00    0.000000           0         1           io_setup
0.00    0.000000           0         1           io_destroy
0.00    0.000000           0        23           semtimedop
------ ----------- ----------- --------- --------- ----------------
100.00    0.037213                 59382     14714 total

Not too enlightening: all the I/O happens in the cells.
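If all you have is a raw strace log rather than the -c summary above, a short awk pipeline gives a similar picture. This is a rough sketch with a hypothetical function name; it only counts calls and does not sum time like the real -c summary:

```shell
# Poor man's "strace -c": count syscalls by name in a raw strace log,
# where lines look like "poll([...], 1, 0) = 1". Counts only, no timings.
summarize_syscalls() {
  awk -F'(' '/^[a-z_0-9]+\(/ { count[$1]++ }
       END { for (c in count) print count[c], c }' "$1" \
    | sort -rn
}
```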

Summary

These are the tools I found very useful in getting started with Exadata performance analysis, and they are very good for the job at hand. Note that snapper can be used for getting information about parallel query and SQL Monitor is a blessing when it comes to looking at huge parallel queries involving lots of tables.

Next presentations

I’ve got a couple of presentations coming up.

The first is at the UKOUG Management and Infrastructure SIG on Tuesday 20th September. I chair this SIG and it is all about how to manage Oracle when you have lots of databases, very big databases or a particular need to know a bit more about the rest of the IT Infrastructure. You can find the schedule here. The aim is to talk at a technical level but not the deep-dive of some of the RDBMS or HA presentations as we want to cover a wider brief.

As I say, one thing we do is look at the hardware your Oracle databases and application servers need to run on. This meeting we have Steve Shaw from Intel talking about getting the most out of your Xeon-based servers, but the general concepts apply to other platforms. If you are old enough, you will remember how you used to set up “HIMEM.SYS” and “EMM386.EXE” and try to keep as much of the first 640K of your memory free. You might even have got down and dirty with your BIOS settings. We did it because the performance boosts were significant. Well, we don’t do that sort of thing anymore, but Steve’s talk will probably make you want to! It still is a free way to get more out of your hardware.

Piet de Visser is also coming along and I always really enjoy his presentations. This time he is covering something of interest/concern to many of us – Outsourcing. I think that will be a pretty lively session.

I’m presenting as well, with Neil Chandler on the topic of how deep you should dive when solving technical issues. To 10046 trace or not.

We meet in Oracle’s city office, so handy for anyone in or around London or for anyone coming in from North of London (the office is 5 minutes walk from Liverpool Street Station and three stops along the underground from King’s Cross St Pancras). We’ve still got to finalise one or two agenda slots but they will be real-world talks about Enterprise Manager/GRID control. One fixed item on the agenda is that those who wish can retire to a pub afterwards to continue the discussions.

You may have noticed the little logo for the UKOUG TEBS conference appearing at the right of this blog. The agenda is not quite public yet so I cannot say too much, but I will be presenting at the event, on Index Organized Tables. I’ll be showing demonstrations of some of the things I have been blogging about and expanding on areas, joining it all up into one session. I might also be presenting on database design but that talk is being discussed and finalised at present. The UKOUG team have a lot of things that they have to pull together and organise just for the presentations, never mind the rest of the conference such as the exhibition hall, catering, registration etc. I’ve been involved in helping with the agenda organisation this year, in a minor way, so I’ve seen it all from the inside.

The TEBS conference is, for me, the meeting highlight of the year. I’ve been to Oracle Open World and, interesting though it is and with some fabulous presentations and events, it is just too big, corporate and company line for me. Oh, I’d go again when the opportunity arises, but for me the UKOUG TEBS conference has a much better feel to it, you still get a load of great talks from Oracle about the latest/greatest, but you also get loads and loads of talks by many of the best in the field and the talks are independent – no pressure to be upbeat about Oracle or avoid any negative messages. In fact, if you honestly feel something in Oracle is worth avoiding, you are free to present and say “Don’t Do This!” :-)

I had planned to go to more of the European conferences this year but I did not get myself organised. For me, as an independent consultant, I need to present to justify the trip and I keep failing to get the submissions in on time.

VirtualBox 4.1.2 Released…

VirtualBox 4.1.2 was released while I was out in Australia. I’ve just been playing catch-up and noticed it reported here.

The downloads and changelogs are in the usual places.

Cheers

Tim…




Book Review: Beginning Oracle SQL

August 23, 2011 The Ideal Guide for Beginning to Intermediate Oracle SQL: Providing the Logical Reasoning Behind the Approach to Understanding and Solving Data Access Challenges http://www.amazon.com/Beginning-Oracle-SQL-Experts-Voice/dp/1430271973/ The “Beginning Oracle SQL” book is an updated version of the book “Mastering Oracle SQL and SQL*Plus” which covered Oracle release versions through 10.1.0.2.  The original version, while [...]

V8 Bundled Exec call – and Oracle Program Interface (OPI) calls

So, what the hell is that V8 Bundled Exec call which shows up in various Oracle 11g monitoring reports?!

It’s yet another piece of instrumentation which can be useful for diagnosing non-trivial performance problems. Oracle ASH has long allowed us to measure the top wait event or top SQL ID, but now it’s also possible to take a step back and see what type of operation the database session is servicing.

I am talking about Oracle Program Interface (OPI) calls. Basically, for each OCI call on the client side (like OCIStmtExecute, OCIStmtFetch, etc.) there’s a corresponding server-side OPI function (like opiexe(), opifch2(), etc.).

It has been possible to trace all the OPI calls with event 10051 as I’ve explained here, but since Oracle 11g this data is also conveniently accessible from ASH views (the various monitoring reports, including SQL Monitoring report also use ASH data for some features).

So, I can write a simple query against ASH which doesn’t group the samples by wait event or SQL ID, but by the general OPI call type (the TOP_LEVEL_CALL_NAME column) and by the SQL command type (using V$SQLCOMMAND in 11.2):

SQL> SELECT
  2      a.top_level_call#
  3    , a.top_level_call_name
  4    , a.top_level_sql_opcode
  5    , s.command_name
  6    , COUNT(*)
  7  FROM
  8      v$active_session_history a
  9    , v$sqlcommand s
 10  WHERE
 11      a.top_level_sql_opcode = s.command_type
 12  GROUP BY
 13      a.top_level_call#
 14    , a.top_level_call_name
 15    , a.top_level_sql_opcode
 16    , s.command_name
 17  ORDER BY
 18*     COUNT(*) DESC
 19  /

TOP_LEVEL_CALL# TOP_LEVEL_CALL_NAME     top_op# COMMAND_NAME                     COUNT(*)
--------------- -------------------- ---------- ------------------------------ ----------
             94 V8 Bundled Exec               7 DELETE                              10505
              0                               0                                      4041
             59 VERSION2                      0                                       579
             59 VERSION2                     47 PL/SQL EXECUTE                        377
             59 VERSION2                      3 SELECT                                191
             96 LOB/FILE operations         170 CALL METHOD                            67
             59 VERSION2                    170 CALL METHOD                            66
             94 V8 Bundled Exec               6 UPDATE                                 52
             59 VERSION2                      6 UPDATE                                 41
             59 VERSION2                      7 DELETE                                 36
             94 V8 Bundled Exec               3 SELECT                                 24
             96 LOB/FILE operations          47 PL/SQL EXECUTE                         18
             59 VERSION2                      2 INSERT                                  8
             94 V8 Bundled Exec               1 CREATE TABLE                            5
              0                               3 SELECT                                  3
             59 VERSION2                     15 ALTER TABLE                             1
             96 LOB/FILE operations           0                                         1
             59 VERSION2                     12 DROP TABLE                              1
              5 FETCH                         3 SELECT                                  1
             94 V8 Bundled Exec              12 DROP TABLE                              1

20 rows selected.

 

As you see above, most of the ASH samples in my test database have been created by a DELETE type SQL statement, executed via a V8 Bundled Exec type of OPI call.

So, what is this call about? Let’s explain its name by looking at the other call types in the output above. At the bottom you see a FETCH call (fetching from a SELECT type statement). There are also a few LOB/FILE operations calls, which are used exclusively for accessing LOB data (via a LOB locator, bypassing the usual SQL processing layer).

In Oracle 7 you would also see PARSE and EXECUTE calls, but starting from Oracle 8 you don’t anymore. This is because, starting from Oracle 8, the OPI layer on the database side can accept bundled OCI calls from the client, to reduce the number of network round-trips. So, instead of sending the PARSE and EXEC requests in separate SQL*Net round-trips (increasing latency), the OCI client libraries can bundle these requests together and send them to the database in one SQL*Net payload. The database server side understands this and is able to extract the separate OPI requests from the bundled packet (in the right order) and execute the corresponding OPI function for each separate call.

Note that this is why you frequently see the kpoal8() function close to the top of Oracle server process stack traces (it’s where I usually start reading them from): it is the function which processes all the OPI requests sent to it in a bundled package. So, whenever there’s an OCIStmtExecute() call extracted from the bundle, the opiexe() function is called in the Oracle kernel with the appropriate arguments extracted from the same bundle; whenever an OCIStmtFetch2() call is extracted from the bundle, the corresponding opifch2() function is called in the kernel.

Hopefully this explains why there is such a call as “V8 Bundled Exec” in Oracle. It just reduces client/server communication latency by allowing multiple database requests to be bundled together into a single SQL*Net payload. In other words, it’s just how Oracle works, and it’s perfectly normal to see V8 Bundled Exec as the top OPI call type in performance reports. If you see this OPI call as the top one, then you’d need to drill down into which SQL ID consumes most of the response time (and into further breakdowns, like which wait event and execution plan step take the most time).

But the ability to break down database response time by OPI call becomes much more useful when troubleshooting somewhat more exotic performance problems, like LOB access times (where there’s no SQL statement associated with the database call) or other direct OPI calls which are executed without parsing and running a SQL cursor.

For example, have you noticed that the behavior of ROLLBACK command in sqlplus is different from the shorter ROLL command?

When you issue a ROLLBACK, Oracle will actually send the string “ROLLBACK” to the database as a regular SQL statement (using V8 Bundled Exec); it will be parsed there (with all the latching and shared pool overhead) like regular SQL, and only then does Oracle realize that the command is a rollback, which is then performed.

But if you issue a ROLL command, then sqlplus understands it and doesn’t send it to the database for parsing like a regular SQL statement. Instead it sends an OCITransRollback() call, which invokes the corresponding OPI function directly, bypassing the SQL processing layer completely. So instead of the usual “V8 Bundled Exec” bundle call you would see a “Transaction Commit/Rollback” OPI call, as it was called directly, without any SQL statement processing involved. This is also why you sometimes see WAIT #0 lines in SQL Trace, where the waits seem to be associated with a non-existent cursor #0: whenever a wait happens while the database session is servicing an OPI call which bypasses the SQL processing layer (the kks/kkx modules), SQL Trace just shows cursor #0 as the wait’s “owner”.
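Those cursor #0 waits are easy to pull out of a trace file. A sketch, with a hypothetical helper name, assuming the usual 11g WAIT line format (nam='...' ela= n):

```shell
# Sum elapsed microseconds per wait event for WAIT #0 lines in a 10046
# trace file, i.e. waits not associated with any SQL cursor.
cursor0_waits() {
  awk -F\' '/^WAIT #0:/ {
    event = $2                              # text between the first quotes
    if (match($0, /ela= *[0-9]+/)) {
      ela = substr($0, RSTART, RLENGTH)
      sub(/ela= */, "", ela)
      total[event] += ela
    }
  } END { for (e in total) printf "%s %d\n", e, total[e] }' "$1"
}
```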

The same happens when using things like connection.commit() in JDBC: the client does not send a SQL statement with the text “commit” to the database, but rather calls the OPI commit function directly.

So, how many different OPI calls are there? Well, a lot, as you can see from v$toplevelcall (or its underlying x$orafn) in Oracle 11.2:

 

SQL> SELECT * FROM v$toplevelcall;

TOP_LEVEL_CALL# TOP_LEVEL_CALL_NAME
--------------- ----------------------------------------------------------------
              0
              2 OPEN
              3 PARSE
              4 EXECUTE
              5 FETCH
              8 CLOSE
              9 LOGOFF
             10 DESCRIBE
             11 DEFINE
             12 COMMIT ON
             13 COMMIT OFF
             14 COMMIT
             15 ROLLBACK
             16 SET OPTIONS
             17 RESUME
             18 VERSION
             20 CANCEL
             21 GET ERR MSG
             23 SPECIAL
             24 ABORT
             25 DEQ ROW
             26 FETCH LONG
             31 HOW MANY
             32 INIT
             33 CHANGE USER
             34 BIND REF POS
             35 GET BIND VAR
             36 GET INTO VAR
             37 BINDV REF
             38 BINDN REF
             39 PARSE EXE
             40 PARSE SYNTAX
             41 PARSE SYNSDI
             42 CONTINUE
             43 ARRAY DESC
             44 INIT PARS
             45 FIN PARS
             46 PUT PAR
             48 START ORACLE
             49 STOP ORACLE
             50 RUN IND PROC
             52 ARCHIVE OP
             53 MED REC STRT
             54 MED REC TABS
             55 MED REC GETS
             56 MED REC RECL
             57 MED REC CANC
             58 LOGON
             59 VERSION2
             60 INIT
             62 EVERYTHING
             65 DIRECT LOAD
             66 UL BUFFER XMIT
             67 DISTRIB XACTION
             68 DESCRIBE INDEXES
             69 SESSION OPS
             70 EXEC w/SCN
             71 FAST UPI
             72 FETCH LONG
             74 V7 PARSE
             76 PL/SQL RPC
             78 EXEC & FCH
             79 XA OPS
             80 KGL OP
             81 LOGON
             82 LOGON
             83 Streaming op
             84 SES OPS (71)
             85 XA OPS (71)
             86 DEBUG
             87 DEBUGS
             88 XA XA Start
             89 XA XA Commit
             90 XA XA Prepare
             91 x/import
             92 KOD OP
             93 RPI Callback with ctxdef
             94 V8 Bundled Exec
             95 Streaming op
             96 LOB/FILE operations
             97 FILE Create
             98 V8 Describe Query
             99 Connect
            100 OPEN Recursive
            101 Bundled KPR
            102 Bundled PL/SQL
            103 Transaction Start/End
            104 Transaction Commit/Rollback
            105 Cursor close all
            106 Failover session info
            107 SES OPS (80)
            108 Do Dummy Defines
            109 INIT V8 PARS
            110 FIN V8 PARS
            111 PUT V8 PAR
            112 TERM V8 PARS
            114 INIT UNTR CB
            115 OAUTH
            116 Failover get info
            117 Commit Remote Sites
            118 OSESSKEY
            119 V8 Describe Any
            120 Cancel All
            121 Enqueue
            122 Dequeue pre 8.1
            123 Object Transfer
            124 RFS op
            125 Notification
            126 Listen
            127 Commit Remote Sites >= V813
            128 DirPathPrepare
            129 DirPathLoadStream
            130 DirPathMiscOps
            131 MEMORY STATS
            132 AQ Prop Status
            134 remote Fetch Archive Log (FAL)
            135 Client ID propagation
            136 DR Server CNX Process
            138 SPFILE parameter put
            139 KPFC exchange
            140 V82 Object Transfer
            141 Push transaction
            142 Pop transaction
            143 KFN Operation
            144 DirPathUnloadStream
            145 AQ batch enqueue/dequeue
            146 File transfer
            147 PING
            148 TSM
            150 Begin TSM
            151 End TSM
            152 Set schema
            153 Fetch from suspended result-set
            154 Key value pair
            155 XS Create Session Op
            156 XS Session RoundtripOp
            157 XS Piggyback Oper.
            158 KSRPC Execution
            159 Streams combined capture/apply
            160 AQ replay information
            161 SSCR
            162 OSESSGET
            163 OSESSRLS
            165 workload replay data
            166 replay statistic data
            167 Query Cache Stats
            168 Query Cache IDs
            169 RPC Test Stream
            170 replay plsql rpc
            171 XStream Out
            172 Golden Gate RPC

151 rows selected.

 

A lot of calls, including special stuff (like DESCRIBE) which bypasses the SQL layer completely (but may in turn invoke further recursive SQL statements through the Recursive Program Interface, RPI).

OK, time to stop. If you want to learn more, enable SQL Trace with waits and binds plus event 10051 at level 1 in your test database, and try describing a table or reading some LOB columns, for example!