Oakies Blog Aggregator

Less slamming V$SQL

It’s a holiday here today in Perth, so a very brief blog post because I’ve been busy doing other things today.


When I’m contacted by customers or friends and asked to help out with tuning their applications, probably the two most common issues are one or two poorly performing SQL statements, or the server being overly stressed with CPU load where no single SQL statement is the obvious cause.

In either case, a common plan of attack is to probe V$SQL to gauge which statements are the potential trouble makers. Thanks to the myriad of statistical metrics stored in V$SQL, it is a very flexible view for discovering more about potential problems just by adjusting the predicates. For example:

High I/O?

order by DISK_READS desc

High CPU?

order by BUFFER_GETS desc

Poor parsing applications?

order by PARSE_CALLS / EXECUTIONS

Memory hogs?

order by SHARABLE_MEM desc

each of which can be tweaked to home in on more specific requirements. But there is a risk to doing this. In particular, if you are diagnosing a system that is under duress, then hammering away at V$SQL may do more damage than good. Don’t forget that a system under duress is probably having its shared memory structures contended for just as heavily. And by aggressively querying V$SQL, you may be adding to that workload and/or contention.

You might be thinking this leaves us in a Catch-22 situation – how can we diagnose a struggling system if diagnosing it will make it struggle more? Fortunately, a simple remedy is at hand.

Instead of querying V$SQL, try querying V$SQLSTATS instead. From the documentation:

V$SQLSTATS displays basic performance statistics for SQL cursors and contains one row per SQL statement (that is, one row per unique value of SQL_ID). The column definitions for columns in V$SQLSTATS are identical to those in the V$SQL and V$SQLAREA views. However, the V$SQLSTATS view differs from V$SQL and V$SQLAREA in that it is faster, more scalable, and has a greater data retention (the statistics may still appear in this view, even after the cursor has been aged out of the shared pool). Note that V$SQLSTATS contains a subset of columns that appear in V$SQL and V$SQLAREA.

Obviously this isn’t a leave pass to smash away at this performance view, but by avoiding V$SQL you are steering clear of a critical shared memory structure which any session wishing to parse/execute SQL will be competing for. V$SQLSTATS has been present since 10g and should be a drop-in replacement for your existing queries that use V$SQL.
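For example, a first-pass “top 20 by logical I/O” query against V$SQLSTATS might look like the following. This is just a sketch: the 20-row cut-off, the column list and the NULLIF guard against zero executions are my own choices, not a prescription.

select *
from  (
        select  sql_id, plan_hash_value, executions,
                buffer_gets, disk_reads, cpu_time, elapsed_time,
                round(buffer_gets / nullif(executions,0)) gets_per_exec
        from    v$sqlstats
        order by buffer_gets desc
      )
where rownum <= 20;

Swap the ORDER BY column (DISK_READS, CPU_TIME, SHARABLE_MEM, and so on) to match whichever of the symptoms above you are chasing.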

Hash Optimisation-

Franck Pachot did an interesting presentation at the OBUG (Belgium user group) Tech Days showing how to use one of the O/S debug/trace tools to step through the function calls that Oracle made during different types of joins. This prompted me to ask him a question about a possible optimisation of hash joins as follows:

The hash join operation creates an in-memory hash table from the rowsource produced by its first child operation then probes the hash table with rows from the row source produced by the second child operation; but if there are no rows in the first row source then there’s no need to acquire rows from the second row source, so Oracle doesn’t call the second child operation.

Does the hash join adopt a “symmetrical” strategy – starting the second child operation prematurely to find the first couple of rows before executing the first child operation and building the in-memory hash table ?

Franck got back to me before the end of the event with the answer – which was “no”. (This makes sense, of course, because if there’s no data in the second rowsource – which is supposed to be the “big” rowsource – you’ve probably done a lot of work which you may not have needed to do if you’d checked the first rowsource properly first. (Thought – might Oracle look for the first row from the first row source, then check the second row source for a first row, then finish the first rowsource and build? How difficult do you want to make your life?))

So Franck got me an answer by looking at function calls – but I’d only asked him because I thought he might have done the experiment already and might be able to give me an answer immediately rather than having to waste his valuable Belgian Beer Drinking time to do some tests for me. (And I’ve never got around to playing with strace, ptrace, dtrace, truss, et. al. so I didn’t fancy doing the tracing for myself, of course). But this particular question doesn’t need any examination of function calls to get an answer: an extended trace file would be sufficient. Here’s how to start:


rem
rem     Script:         hash_join_opt.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2019
rem
rem     Last tested
rem             11.2.0.4
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,20)                  n20,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table t2
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,10)                  n10,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1'
        );
end;
/

set serveroutput off

alter system flush buffer_cache;
execute dbms_lock.sleep(2)

alter session set statistics_level = all;
alter session set events '10046 trace name context forever, level 8';

select
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n20 = 19
and     t2.id = t1.id
and     t2.n10 = 7.5
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set events '10046 trace name context off';
alter session set statistics_level = typical;

I’ve created a pair of tables with similar volumes of data and then executed a query that should do a hash join between the tables. The t2 table will return no rows since the predicate on n10 is asking for a value that doesn’t exist but the optimizer will think it will supply 10% of the table since I’ve avoided creating a histogram on the column and the num_distinct for the n10 column is 10. You’ll notice that I’ve flushed the buffer_cache (with a couple of seconds pause, just in case) before executing the query. Here’s the execution plan with rowsource execution stats:


SQL_ID  bscu1r7cu36ur, child number 0
-------------------------------------
select  t1.v1, t2.v1 from  t1, t2 where  t1.n20 = 19 and t2.id = t1.id
and t2.n10 = 7.5

Plan hash value: 1838229974

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      0 |00:00:00.04 |     352 |    348 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    500 |      0 |00:00:00.04 |     352 |    348 |  1519K|  1519K| 1487K (0)|
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    500 |    500 |00:00:00.03 |     176 |    174 |       |       |          |
|*  3 |   TABLE ACCESS FULL| T2   |      1 |   1000 |      0 |00:00:00.01 |     176 |    174 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."ID"="T1"."ID")
   2 - filter("T1"."N20"=19)
   3 - filter("T2"."N10"=7.5)

As you can see, the tablescan of t2 returned no rows so the hash join returned no rows, but the stats show 174 blocks read from both t1 and t2. The fact that we read 174 blocks from t2 doesn’t mean we didn’t do a pre-emptive check, of course, as those reads could (by the hypothesis) have been the reads required by the pre-emptive check. The fact that we did a complete scan of both tables, though, tells us that the reads on t2 (probably) weren’t from a pre-emptive check (if they were then Oracle shouldn’t have scanned the whole of t1).

We can get extra corroborative evidence though when we check the trace file – which shows us the following waits:

PARSING IN CURSOR #139927916187824 len=88 dep=0 uid=62 oct=3 lid=62 tim=1551445144496821 hv=3651246935 ad='97353e10' sqlid='bscu1r7cu36ur'
select
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n20 = 19
and     t2.id = t1.id
and     t2.n10 = 7.5
END OF STMT

PARSE #139927916187824:c=3000,e=2760,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1838229974,tim=1551445144496811
EXEC #139927916187824:c=0,e=78,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1838229974,tim=1551445144497155
WAIT #139927916187824: nam='SQL*Net message to client' ela= 8 driver id=1650815232 #bytes=1 p3=0 obj#=351199 tim=1551445144497224

WAIT #139927916187824: nam='db file sequential read' ela= 634 file#=5 block#=130 blocks=1 obj#=351198 tim=1551445144498386
WAIT #139927916187824: nam='db file scattered read' ela= 397 file#=5 block#=131 blocks=5 obj#=351198 tim=1551445144498971
WAIT #139927916187824: nam='db file scattered read' ela= 1752 file#=5 block#=136 blocks=8 obj#=351198 tim=1551445144502411
WAIT #139927916187824: nam='db file scattered read' ela= 426 file#=5 block#=145 blocks=7 obj#=351198 tim=1551445144504056
WAIT #139927916187824: nam='db file scattered read' ela= 590 file#=5 block#=152 blocks=8 obj#=351198 tim=1551445144505655
WAIT #139927916187824: nam='db file scattered read' ela= 8063 file#=5 block#=161 blocks=7 obj#=351198 tim=1551445144514574
WAIT #139927916187824: nam='db file scattered read' ela= 402 file#=5 block#=168 blocks=8 obj#=351198 tim=1551445144515741
WAIT #139927916187824: nam='db file scattered read' ela= 803 file#=5 block#=177 blocks=7 obj#=351198 tim=1551445144517495
WAIT #139927916187824: nam='db file scattered read' ela= 423 file#=5 block#=184 blocks=8 obj#=351198 tim=1551445144518743
WAIT #139927916187824: nam='db file scattered read' ela= 422 file#=5 block#=193 blocks=7 obj#=351198 tim=1551445144520696
WAIT #139927916187824: nam='db file scattered read' ela= 266 file#=5 block#=200 blocks=8 obj#=351198 tim=1551445144521460
WAIT #139927916187824: nam='db file scattered read' ela= 307 file#=5 block#=209 blocks=7 obj#=351198 tim=1551445144522249
WAIT #139927916187824: nam='db file scattered read' ela= 313 file#=5 block#=216 blocks=8 obj#=351198 tim=1551445144523166
WAIT #139927916187824: nam='db file scattered read' ela= 247 file#=5 block#=225 blocks=7 obj#=351198 tim=1551445144523927
WAIT #139927916187824: nam='db file scattered read' ela= 315 file#=5 block#=232 blocks=8 obj#=351198 tim=1551445144524646
WAIT #139927916187824: nam='db file scattered read' ela= 326 file#=5 block#=241 blocks=7 obj#=351198 tim=1551445144525565
WAIT #139927916187824: nam='db file scattered read' ela= 186 file#=5 block#=248 blocks=8 obj#=351198 tim=1551445144526255
WAIT #139927916187824: nam='db file scattered read' ela= 341 file#=5 block#=258 blocks=55 obj#=351198 tim=1551445144527363
----------------
WAIT #139927916187824: nam='db file sequential read' ela= 201 file#=5 block#=386 blocks=1 obj#=351199 tim=1551445144531165
WAIT #139927916187824: nam='db file scattered read' ela= 221 file#=5 block#=387 blocks=5 obj#=351199 tim=1551445144531492
WAIT #139927916187824: nam='db file scattered read' ela= 194 file#=5 block#=392 blocks=8 obj#=351199 tim=1551445144531750
WAIT #139927916187824: nam='db file scattered read' ela= 301 file#=5 block#=401 blocks=7 obj#=351199 tim=1551445144532271
WAIT #139927916187824: nam='db file scattered read' ela= 272 file#=5 block#=408 blocks=8 obj#=351199 tim=1551445144532756
WAIT #139927916187824: nam='db file scattered read' ela= 258 file#=5 block#=417 blocks=7 obj#=351199 tim=1551445144533218
WAIT #139927916187824: nam='db file scattered read' ela= 242 file#=5 block#=424 blocks=8 obj#=351199 tim=1551445144533704
WAIT #139927916187824: nam='db file scattered read' ela= 232 file#=5 block#=433 blocks=7 obj#=351199 tim=1551445144534125
WAIT #139927916187824: nam='db file scattered read' ela= 213 file#=5 block#=440 blocks=8 obj#=351199 tim=1551445144534506
WAIT #139927916187824: nam='db file scattered read' ela= 241 file#=5 block#=449 blocks=7 obj#=351199 tim=1551445144534914
WAIT #139927916187824: nam='db file scattered read' ela= 221 file#=5 block#=456 blocks=8 obj#=351199 tim=1551445144535258
WAIT #139927916187824: nam='db file scattered read' ela= 311 file#=5 block#=465 blocks=7 obj#=351199 tim=1551445144536518
WAIT #139927916187824: nam='db file scattered read' ela= 215 file#=5 block#=472 blocks=8 obj#=351199 tim=1551445144536906
WAIT #139927916187824: nam='db file scattered read' ela= 164 file#=5 block#=481 blocks=7 obj#=351199 tim=1551445144537343
WAIT #139927916187824: nam='db file scattered read' ela= 160 file#=5 block#=488 blocks=8 obj#=351199 tim=1551445144537624
WAIT #139927916187824: nam='db file scattered read' ela= 253 file#=5 block#=497 blocks=7 obj#=351199 tim=1551445144538041
WAIT #139927916187824: nam='db file scattered read' ela= 252 file#=5 block#=504 blocks=8 obj#=351199 tim=1551445144538456
WAIT #139927916187824: nam='db file scattered read' ela= 947 file#=5 block#=514 blocks=55 obj#=351199 tim=1551445144539690

FETCH #139927916187824:c=25996,e=43227,p=348,cr=352,cu=0,mis=0,r=0,dep=0,og=1,plh=1838229974,tim=1551445144540502

I’ve inserted a few gaps into this section of the trace file but haven’t deleted any lines. As you can see there’s a series of reads for obj# 351198 (table t1), followed by a series of reads of obj# 351199. We don’t do anything subtle like peeking at t1, switching to peek at t2, then continuing with t1: it’s a simple end to end brute force scan of each table in turn.

But there’s more …

Before I created the test above I decided to check whether I had already done the test some time in the past and written about it. A search through my blog didn’t turn up any notes about this question, but it did uncover the following footnote to an earlier posting about hash joins:

taking a closer look at the set of trace files generated in the broadcast test I discovered that the first set of slaves start their parallel tablescan of t1 but stop after just one read from each slave, then the second set of slaves scans and builds the hash table from t2 before calling for further data from t1.

Naturally I included a link to the footnote in the draft notes for this blog entry and promptly forgot about writing the blog note until a couple of days ago when Chinar Aliyev produced a note on the hash join buffered operation in which he had included an execution plan similar to the broadcast plan from my earlier blog note – so I thought I’d take another look at it, because it looks like some versions of the parallel hash join can do exactly the pre-emptive test that the serial execution plan doesn’t.

So, here’s a query to run against the same data set, and the resulting execution plan – pulled from memory after executing the query:


select
        /*+
                parallel(2)
                leading(t1 t2)
                use_hash(t2)
                pq_distribute(t2 none broadcast)
        */
        t1.v1, t2.v1
--      t1.v1, count(*)
from
        t1, t2
where
        t1.n20 = 19
and     t2.id = t1.id
and     t2.n10 = 7
;

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |        |      |            |      0 |00:00:00.22 |      10 |      2 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |        |      |            |      0 |00:00:00.22 |      10 |      2 |       |       |          |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |      0 |    500 |  Q1,01 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN             |          |      1 |    500 |  Q1,01 | PCWP |            |      0 |00:00:00.12 |     128 |    173 |  1483K|  1483K|     2/0/0|
|   4 |     PX BLOCK ITERATOR    |          |      2 |    500 |  Q1,01 | PCWC |            |    500 |00:00:00.03 |     248 |    173 |       |       |          |
|*  5 |      TABLE ACCESS FULL   | T1       |     25 |    500 |  Q1,01 | PCWP |            |    500 |00:00:00.02 |     248 |    173 |       |       |          |
|   6 |     BUFFER SORT          |          |      2 |        |  Q1,01 | PCWC |            |   2000 |00:00:00.07 |       0 |      0 |   108K|   108K|     2/0/0|
|   7 |      PX RECEIVE          |          |      2 |   1000 |  Q1,01 | PCWP |            |   2000 |00:00:00.06 |       0 |      0 |       |       |          |
|   8 |       PX SEND BROADCAST  | :TQ10000 |      0 |   1000 |  Q1,00 | P->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   9 |        PX BLOCK ITERATOR |          |      2 |   1000 |  Q1,00 | PCWC |            |   1000 |00:00:00.05 |     248 |    173 |       |       |          |
|* 10 |         TABLE ACCESS FULL| T2       |     25 |   1000 |  Q1,00 | PCWP |            |   1000 |00:00:00.04 |     248 |    173 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

Now the rule for reading a DFO tree in a parallel plan is to follow the table queues (a.k.a. DFO numbers / TQ numbers) – but in this case we have a bit of a mixture as the table scan of t1 isn’t used as the input to a distribution so it’s hard to know whether the first physical activity is supposed to be the tablescan of t2 (feeding the zeroth table queue) or the tablescan of t1 which doesn’t feed a table queue until after the hash join.

Given that t2 is being broadcast we should probably expect to see the first set of parallel query slaves starting first with a scan of “random” sections of t1 to build an in-memory hash table, and then see the second set of parallel query slaves scanning t2 and broadcasting the results to the first set of slaves (every slave gets a copy of every row) to do the probe. We can try to confirm this by looking at the trace files. First we extract “db file scattered read” lines from the four trace files and then we sort them by timestamp.

test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 677 file#=5 block#=220 blocks=4 obj#=351218 tim=1551450113700076
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 214 file#=5 block#=225 blocks=3 obj#=351218 tim=1551450113701131
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 238 file#=5 block#=161 blocks=7 obj#=351218 tim=1551450113702026
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 318 file#=5 block#=177 blocks=6 obj#=351218 tim=1551450113703464
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 371 file#=5 block#=190 blocks=2 obj#=351218 tim=1551450113705168
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 227 file#=5 block#=193 blocks=5 obj#=351218 tim=1551450113705690
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 225 file#=5 block#=138 blocks=6 obj#=351218 tim=1551450113706850
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 487 file#=5 block#=131 blocks=5 obj#=351218 tim=1551450113708512
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 228 file#=5 block#=136 blocks=2 obj#=351218 tim=1551450113709161
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 219 file#=5 block#=294 blocks=7 obj#=351218 tim=1551450113710099
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 150 file#=5 block#=184 blocks=6 obj#=351218 tim=1551450113711398
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 229 file#=5 block#=273 blocks=7 obj#=351218 tim=1551450113712456
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 223 file#=5 block#=280 blocks=7 obj#=351218 tim=1551450113713829
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 253 file#=5 block#=168 blocks=7 obj#=351218 tim=1551450113715124
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 1014 file#=5 block#=308 blocks=5 obj#=351218 tim=1551450113717046
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 266 file#=5 block#=235 blocks=5 obj#=351218 tim=1551450113718684
test_p000_5963.trc:WAIT #140673176773096: nam='db file scattered read' ela= 208 file#=5 block#=241 blocks=2 obj#=351218 tim=1551450113719321
----
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 326 file#=5 block#=287 blocks=7 obj#=351218 tim=1551450113700003
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 147 file#=5 block#=198 blocks=2 obj#=351218 tim=1551450113701394
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 72 file#=5 block#=200 blocks=5 obj#=351218 tim=1551450113701840
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 459 file#=5 block#=153 blocks=7 obj#=351218 tim=1551450113703442
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 308 file#=5 block#=259 blocks=7 obj#=351218 tim=1551450113705132
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 313 file#=5 block#=266 blocks=7 obj#=351218 tim=1551450113706540
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 304 file#=5 block#=243 blocks=5 obj#=351218 tim=1551450113707925
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 219 file#=5 block#=248 blocks=2 obj#=351218 tim=1551450113708505
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 590 file#=5 block#=228 blocks=4 obj#=351218 tim=1551450113709705
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 178 file#=5 block#=232 blocks=3 obj#=351218 tim=1551450113710270
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 142 file#=5 block#=205 blocks=3 obj#=351218 tim=1551450113711046
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 98 file#=5 block#=209 blocks=4 obj#=351218 tim=1551450113711457
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 179 file#=5 block#=213 blocks=3 obj#=351218 tim=1551450113712308
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 296 file#=5 block#=216 blocks=4 obj#=351218 tim=1551450113712967
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 188 file#=5 block#=301 blocks=7 obj#=351218 tim=1551450113714238
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 207 file#=5 block#=250 blocks=6 obj#=351218 tim=1551450113715393
test_p001_5967.trc:WAIT #140483692405224: nam='db file scattered read' ela= 567 file#=5 block#=146 blocks=6 obj#=351218 tim=1551450113717686
----
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 260 file#=5 block#=529 blocks=7 obj#=351219 tim=1551450113698677
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 196 file#=5 block#=522 blocks=7 obj#=351219 tim=1551450113722013
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 145 file#=5 block#=564 blocks=5 obj#=351219 tim=1551450113723239
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 317 file#=5 block#=491 blocks=5 obj#=351219 tim=1551450113724578
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 128 file#=5 block#=497 blocks=2 obj#=351219 tim=1551450113725217
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 495 file#=5 block#=402 blocks=6 obj#=351219 tim=1551450113726250
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 168 file#=5 block#=394 blocks=6 obj#=351219 tim=1551450113727928
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 249 file#=5 block#=424 blocks=7 obj#=351219 tim=1551450113730384
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 120 file#=5 block#=476 blocks=4 obj#=351219 tim=1551450113731225
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 485 file#=5 block#=481 blocks=3 obj#=351219 tim=1551450113732064
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 594 file#=5 block#=387 blocks=5 obj#=351219 tim=1551450113734168
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 323 file#=5 block#=392 blocks=2 obj#=351219 tim=1551450113735040
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 121 file#=5 block#=506 blocks=6 obj#=351219 tim=1551450113736253
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 285 file#=5 block#=440 blocks=6 obj#=351219 tim=1551450113739084
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 345 file#=5 block#=433 blocks=6 obj#=351219 tim=1551450113740940
----
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 261 file#=5 block#=550 blocks=7 obj#=351219 tim=1551450113698687
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 155 file#=5 block#=484 blocks=4 obj#=351219 tim=1551450113721738
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 131 file#=5 block#=488 blocks=3 obj#=351219 tim=1551450113722260
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 143 file#=5 block#=557 blocks=7 obj#=351219 tim=1551450113723245
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 138 file#=5 block#=454 blocks=2 obj#=351219 tim=1551450113724545
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 124 file#=5 block#=456 blocks=5 obj#=351219 tim=1551450113724954
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 243 file#=5 block#=469 blocks=3 obj#=351219 tim=1551450113725947
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 523 file#=5 block#=472 blocks=4 obj#=351219 tim=1551450113726745
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 178 file#=5 block#=409 blocks=7 obj#=351219 tim=1551450113727781
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 230 file#=5 block#=543 blocks=7 obj#=351219 tim=1551450113729470
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 488 file#=5 block#=515 blocks=7 obj#=351219 tim=1551450113731618
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 356 file#=5 block#=461 blocks=3 obj#=351219 tim=1551450113733840
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 185 file#=5 block#=465 blocks=4 obj#=351219 tim=1551450113735195
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 284 file#=5 block#=536 blocks=7 obj#=351219 tim=1551450113736172
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 204 file#=5 block#=499 blocks=5 obj#=351219 tim=1551450113737957
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 306 file#=5 block#=504 blocks=2 obj#=351219 tim=1551450113738871
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 281 file#=5 block#=446 blocks=2 obj#=351219 tim=1551450113739710
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 158 file#=5 block#=449 blocks=5 obj#=351219 tim=1551450113740159
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 99 file#=5 block#=417 blocks=7 obj#=351219 tim=1551450113741709


tim=1551450113698677 obj#=351219
tim=1551450113698687 obj#=351219
----
tim=1551450113700003 obj#=351218
tim=1551450113700076 obj#=351218
tim=1551450113701131 obj#=351218
tim=1551450113701394 obj#=351218
tim=1551450113701840 obj#=351218
tim=1551450113702026 obj#=351218
tim=1551450113703442 obj#=351218
tim=1551450113703464 obj#=351218
tim=1551450113705132 obj#=351218
tim=1551450113705168 obj#=351218
tim=1551450113705690 obj#=351218
tim=1551450113706540 obj#=351218
tim=1551450113706850 obj#=351218
tim=1551450113707925 obj#=351218
tim=1551450113708505 obj#=351218
tim=1551450113708512 obj#=351218
tim=1551450113709161 obj#=351218
tim=1551450113709705 obj#=351218
tim=1551450113710099 obj#=351218
tim=1551450113710270 obj#=351218
tim=1551450113711046 obj#=351218
tim=1551450113711398 obj#=351218
tim=1551450113711457 obj#=351218
tim=1551450113712308 obj#=351218
tim=1551450113712456 obj#=351218
tim=1551450113712967 obj#=351218
tim=1551450113713829 obj#=351218
tim=1551450113714238 obj#=351218
tim=1551450113715124 obj#=351218
tim=1551450113715393 obj#=351218
tim=1551450113717046 obj#=351218
tim=1551450113717686 obj#=351218
tim=1551450113718684 obj#=351218
tim=1551450113719321 obj#=351218
--
tim=1551450113721738 obj#=351219
tim=1551450113722013 obj#=351219
tim=1551450113722260 obj#=351219
tim=1551450113723239 obj#=351219
tim=1551450113723245 obj#=351219
tim=1551450113724545 obj#=351219
tim=1551450113724578 obj#=351219
tim=1551450113724954 obj#=351219
tim=1551450113725217 obj#=351219
tim=1551450113725947 obj#=351219
tim=1551450113726250 obj#=351219
tim=1551450113726745 obj#=351219
tim=1551450113727781 obj#=351219
tim=1551450113727928 obj#=351219
tim=1551450113729470 obj#=351219
tim=1551450113730384 obj#=351219
tim=1551450113731225 obj#=351219
tim=1551450113731618 obj#=351219
tim=1551450113732064 obj#=351219
tim=1551450113733840 obj#=351219
tim=1551450113734168 obj#=351219
tim=1551450113735040 obj#=351219
tim=1551450113735195 obj#=351219
tim=1551450113736172 obj#=351219
tim=1551450113736253 obj#=351219
tim=1551450113737957 obj#=351219
tim=1551450113738871 obj#=351219
tim=1551450113739084 obj#=351219
tim=1551450113739710 obj#=351219
tim=1551450113740159 obj#=351219
tim=1551450113740940 obj#=351219
tim=1551450113741709 obj#=351219

I’ve got four slave processes p000 through to p003, and in the top part of the output you can see that p000/p001 scan object 351218 (after several experiments the object_id for t1 has climbed a bit), and p002/p003 scan object 351219 (t2). Sorting by the timestamps (tim=), though, we can see that there are two reads for t2 (351219) followed by many reads of t1 (351218), finishing with a load of reads of t2. Cross-checking the timestamps on the sorted output with the original extract we can see that the two initial reads of t2 are the first reads by p002 and p003 respectively.

tim=1551450113698677 obj#=351219
test_p002_5971.trc:WAIT #139968162111976: nam='db file scattered read' ela= 260 file#=5 block#=529 blocks=7 obj#=351219 tim=1551450113698677

tim=1551450113698687 obj#=351219
test_p003_5975.trc:WAIT #140601219817960: nam='db file scattered read' ela= 261 file#=5 block#=550 blocks=7 obj#=351219 tim=1551450113698687

It really does look as if at run time Oracle does the scan and build for t1 first but starts with a quick peek at t2 to check that it’s going to return some data. We can be a little more fussy with this test – let’s change some of the data in t2 so that Oracle doesn’t find any rows in the first two scattered reads and see if anything changes.

This is going to take a little fiddling but in my case I can see that the first reads by slaves p002/p003 are looking at file 5, blocks 529 – 535 and blocks 550 – 556; and broadly speaking the scattered reads are working backwards down the table – so I’m going to update a load of data that appears in the second half of the table on the next run of the test with a simple command to put the n10 column value out of range:


update t2 set n10 = 0 where n10 = 7 and id > 750;
commit;

-- test the query

update t2 set n10 = 0 where n10 = 7;
commit;

-- test the query

After the first update the pattern of reads (in time order) changed to:

  • 22 scattered reads of t2
  • 34 scattered reads of t1
  • 12 scattered reads of t2

This really does look like Oracle checking whether t2 will have any data before doing the full scan and build from t1.

Unfortunately when I did the final update that eliminated all the interesting data from t2 the pattern (in time order) turned into:

  • 34 scattered reads of t2
  • 34 scattered reads of t1

In other words – having discovered NO data in t2 that could join, Oracle still scanned t1 and built the hash table from it. So maybe there’s a reason other than a sanity check for peeking at the second table, or maybe there’s a bug in the extreme case where the second table produces no data at all. (For reference you might also want to see an older note on how a parallel hash join doesn’t skip the probe table when the build table is empty.)

Just as a quick check that I wasn’t chasing an old bug at this point I repeated the tests on 18.3.0.0 – the same patterns emerged.

 

/proc/meminfo formatted for humans

Here is a small awk script I use to format memory information on Linux:

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$0;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print k;for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|(HugePage)[^:]*" #awk #meminfo

This reads /proc/meminfo and formats it to display the size in GB in the first column. Most of the statistics are in kB (formatted into the ‘v’ array in the awk script), but Huge Pages are counted in number of pages, so for those we must multiply by the Hugepagesize (handled in the ‘h’ array in the awk script). Then I sort it by size, and color the ‘HugePage’ pattern with grep.

Here is an example of the output on a ‘VM.DenseIO2.24’ compute shape in the Oracle Cloud (320GB — showing 314 MemTotal here).

I have allocated 102400 Huge Pages (200GB) with the following line in /etc/sysctl.conf

vm.nr_hugepages=102400

Remember that this can be allocated dynamically (sysctl -p) but be careful to leave enough small pages (here is an example where the system cannot boot because of invalid settings: https://blog.dbi-services.com/kernel-panic-not-syncing-out-of-memory-and-no-killable-processes/)

In this example, 64 GB of those Huge Pages are used (136 GB free within the 200GB total). They were allocated by two Oracle Database instances having a 32GB System Global Area each (a 32GB SGA needs 16384 pages of 2MB, plus one extra page on top, which matches the 16385 pages below). This is visible from the alert.log. When the first instance started, the 102400 pages were free and 16385 were allocated:

Supported system pagesize(s):
PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
      4K       Configured              12               12
   2048K           102400           16385            16385

When the second instance started, only 102400–16385=86015 were free and another 16385 were allocated:

Supported system pagesize(s):
PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
      4K       Configured              12               12
   2048K            86015           16385            16385

So, this leaves 120 GB of free small pages, approximately counted by MemAvailable, but I recommend Frits Hoogland’s post for a better calculation:

Linux memory usage

The current post follows the awk snippet I posted in a tweet; follow me on Twitter if you like this kind of post…

Added 3-MAR-2019

I’ve added the calculation of used Huge Pages as the difference between Total and Free (the ‘HugePages Used (Total-Free)’ line in the code above):

    32767 GB VmallocTotal:   34359738367 kB
    32510 GB VmallocChunk:   34089686416 kB
      503 GB MemTotal:       528074912 kB
      483 GB DirectMap1G:    506462208 kB
      457 GB HugePages_Total:   234000
      436 GB HugePages_Free:    223470

       39 GB CommitLimit:    41181260 kB
       33 GB MemAvailable:   35024844 kB
       31 GB Cached:         33027992 kB
       30 GB DirectMap2M:    31846400 kB
       20 GB HugePages Used (Total-Free)
       18 GB Inactive:       19130736 kB
       17 GB Inactive(file): 18231100 kB
       17 GB Active:         18713072 kB
       15 GB SwapTotal:      16759804 kB

Introducing I.T. to an Elderly Relative

Introducing an older person to the connected world can be a challenge. So I thought I would describe my recent experiences in introducing my elderly mother to I.T and the internet. Each such situation will be different of course, depending on the prior experience of the person and the skills you believe they have. I’m going to run through what I think are the main initial considerations. I knew from the start it was going to be a particular challenge with my mother, so I think she is a good example. Hopefully, for many the task will be a little easier…

Firstly, why are we doing this?

Not everyone has to be on the internet and I knew it was going to be stressful for everyone involved, so the first question to ask is “Is it in the best interest of Uncle Bob to go through this?”

For years my mother has shown very little interest in computers or the internet, and at times she has been quite “Oh, those damn things!” about it all. But over the last 2 or 3 years Mum’s started showing an interest. This has nothing to do with the fact that her youngest son’s whole working life has been in I.T. I think she’s simply started to feel she is missing out as there are so many references on TV programs and in the newspaper to things on the internet. “Just go to blingy bong for more information!”. And to her, it really is “blingy bong”.

I think it is vital that the person wants to get online – and this is not a one-week wonder.

Before now my mum had mentioned getting online but then lost interest when the one thing she was interested in disappeared, such as checking the state of play in the Vuelta cycling race as it was not on her TV. Setting someone up on the internet is not cheap and I knew she would insist on paying. You have to organise broadband to the property, buy a device and then spend time in training them. If mum lost interest after a couple of days of trying, it would all be a waste of effort. But she had been constant in mentioning this for a couple of months.

Another reason to get Mum online is so she can stay in touch more easily {am I really sure I want this?!?}. Her hearing is not as good as it was and phone calls are a ‘dedicated, binary activity’. What do I mean by that? Well, when you are on the phone, you have to keep the conversation going and you are doing nothing else, this is your only chance to communicate – dedicated. And when you are not on the phone you are not in contact – Binary (all or nothing).

I think those of us in the technology industry or who grew up in the last… 4 decades maybe take this for granted, but with email, texts, messenger, whatsapp etc you can throw a message or two at people when the need occurs to you, and leave them for the person to pick up. It is a more relaxed way of communicating and, in many ways, more reliable. At present if mum needs me to come over and change light bulbs she needs to call me in the evening. She won’t call me during the day; she is convinced nothing short of death is important enough to call during the day! So she also needs to remember to call and mum is getting worse for that. If she is online she can send me a message when she notices the bulb in the hall has blown.

The next step is to assess the capabilities of the person you are helping.


I’ve introduced a few other people (mother-in-law, brother to some degree, relatives of friends) to computers and the internet over the years and the size of the challenge is very much dictated by their skills. I think you need to be honest about how much and how soon people can learn, especially if they are older or have learning needs. It’s great to be surprised by them doing better than you expected, but if they do worse then it can be demoralising for both parties.

My mother-in-law was a retired science teacher, interested in a dozen things, confident, and self-motivated. When she asked me to help her get on the internet I knew it was not going to be too hard. But something I did not consider was that she had never typed at all (which surprised me, but there you go), so the keyboard was an initial, surprise challenge to the task. Just think about it, you have to explain the “enter” key, the “delete” key, “shift” key, special symbols… But the Mother-in-law was used to using equipment and took to it well. It did mean that the first session was almost totally about introducing her to the keyboard and just a few basics on turning the machine on and off and using email. After that I went on in later sessions to show her the basics of Windows, email, web browsing and she was soon teaching herself. She got a couple of “computers for dummies” books and went through them.

Learning skills deteriorate as you age – but each individual is different. Be realistic.

My mother had also never used a typewriter – but she is also not good with technology. Getting her to understand how to use a video player was a task way back when.  It is not that she is no good with mechanical things or controlling them, she was a sewing machinist all her career – but she never moved from a simple sewing machine with just a dozen manually selected stitch patterns to ones which you can program or that have a lot of controls. This might be mean to say, but she struggled with an electronic cat-flap when we installed one for her! {Well, we installed it for the cats to be honest, we do not make Mum enter and exit the house on her hands and knees through a small hole in the door}. My mum has also never had (or wanted) a mobile phone, let alone a smart phone. Apps, widgets, icons, touch screens are all things she has never used.  We were going to have to keep it very, very simple. Mum also lacks focus and retention of details. Lots of repetition would be needed to learn, and only a few things at a time.

Third Question – What hardware?

This is a major consideration. A few years ago if you wanted internet access and email the choice was simply “Mac or PC” and probably came down to what you personally preferred and felt most comfortable supporting.

I realised from the very start that my mum would never cope with a Windows PC or a Mac. I know some people are so Mac-fanboy that they will insist it is “so easy anyone could use them” but no, Macs can have issues and there is a lot of stuff to initially learn to get going. And, like PCs, they DO go wrong and have issues.


Choice made – will it be the correct one?

I did initially investigate if I could make a Windows PC work for my mum. I can sort out most issues on a PC and so it would be easier for me to support her. You can set Windows up to be simpler for an older person. I was more than happy setting up other older people with a PC in the past, as I’ve mentioned. Another big advantage with a PC would be I could set it up so I could remote access it and help. I live 2.5 hours from Mum, so remote access would be a major boon. In another situation I think I would go down that route, set up a Windows laptop, reduce what was available on it, put on the things I felt they would want initially and ensure I had full access to the machine. I could then do interactive “show and tell” sessions. Of course, you have to consider privacy if you have full access to someone’s machine. But I felt I was trying to come up with a solution that was easier for me rather than easier for the person I was helping.

My final factor in my decision on what to go for was “the internet”. There is bad stuff on the internet (I don’t mean content so much, what my Mum looks at is up to her and I am under no illusion that when someone gets old they become a child to protect. I don’t understand why some people seem to think old people are sweet and innocent! Old people used to be young, wild, risk-taking and randy. They’ve lived a life and learnt about the world and they know what they do and do not like). What bothers me about the internet is viruses, spyware, downloads that screw your system over. No matter how much I would explain to my mum, there was a good chance she would end up clicking on something and downloading some crap that messed up the system or stole her details. Machines that are not Windows PCs suffer from this a lot less.

For a while my mum said she wanted an Alexa or something similar. Something she could ask about Lonnie Donegan’s greatest hits (this is a totally true example). But talking to her she also wanted email and BBC news and sport. Also, I’ve seen people using an Alexa and getting it to understand & do what you want is pretty hit & miss; I could see that really frustrating my Mum. Also I don’t like the damned, nasty, spying, uncontrolled bloody things – they listen all the time and I don’t think it is at all clear what gets sent back to the manufacturer, how it is processed, how they use it for sales & marketing.

So, for my mum a tablet was the way to go. It is simpler, much more like using a phone (you know, the mobile phone she has never had!) and has no complication of separate components. Plus it is smaller. I decided on an iPad because:

    • The three people she is most likely to be in contact with already have an iPad mini or iPhone,
    • They are simple. Simple-ish. Well, not too complicated.
    • I felt it was big enough for her to see things on it but not so big as to be in the way.
    • The interface is pretty well designed and swish.
    • They are relatively unaffected by viruses and malware (not impervious though)
    • It will survive being dropped on the carpeted floor of her house many, many, many times.
    • You can’t harm them by just typing things and running apps. {Hmm, I’ll come back to that in a later post…}
    • If she really hated it, I could make use of a new iPad.

19c Auto Index: the dictionary views

The abbreviation AI may be misleading, but here it has nothing to do with Artificial Intelligence. And you may have been surprised that the ‘A’ means ‘Automatic’ rather than ‘Autonomous’, as the latter is constantly used to tag any new feature in the database since 18c. But this difference is really important: ‘Autonomous’ supposes that you don’t have anything to do and don’t even need to be notified about what happened. By contrast, ‘Automatic’ means that some things are done without your intervention, in order to help you, but you are still in charge of managing them. And you need to look at the dictionary views to be aware of the findings, recommendations, and implementations. Automatic Indexing is an evolution of the Advisors introduced in 10g and, in the same way, it provides many dictionary views to understand its activity.

This post presents those views, mainly defined in the catproc script $ORACLE_HOME/rdbms/admin/cataivw.sql, grouped into the following areas:

  • Configuration
  • Activity log
  • Indexes created
  • SQL statements

Automatic Indexing — Configuration Parameters

DBA_AUTO_INDEX_CONFIG

The configuration parameters are displayed with this view.

Here I’ve set AUTO_INDEX_MODE to run and automatically implement its findings:

exec dbms_auto_index.configure('auto_index_mode','implement');

The other parameters are the default:

select * from dba_auto_index_config order by 1;
PARAMETER_NAME                    PARAMETER_VALUE   LAST_MODIFIED
AUTO_INDEX_DEFAULT_TABLESPACE
AUTO_INDEX_MODE                   IMPLEMENT         14:20:12
AUTO_INDEX_REPORT_RETENTION       31
AUTO_INDEX_RETENTION_FOR_AUTO     373
AUTO_INDEX_RETENTION_FOR_MANUAL
AUTO_INDEX_SCHEMA
AUTO_INDEX_SPACE_BUDGET           50
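Any of these parameters can be changed with the same CONFIGURE procedure. A minimal sketch (TBS_AI is just a placeholder tablespace name):

exec dbms_auto_index.configure('auto_index_default_tablespace','TBS_AI');
exec dbms_auto_index.configure('auto_index_space_budget','20');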

Actually, the table behind this view is SMB$CONFIG which is from the SQL Management Base. SMB stores what the optimizer needs to persist about SQL statements: SQL Profiles, SQL Plan Baselines, SQL Patches and in 19c SQL Quarantine. And Automatic Indexing is heavily linked with them to control the scope of its implementations.

SMB$CONFIG

SMB$CONFIG shows additional underscore parameters.

select * from sys.smb$config where parameter_name like '%AUTO_INDEX%' order by 1;

Here I’ve set _AUTO_INDEX_TRACE to the value 2 to get more tracing in the job trace. I’ve set it with:

exec sys.dbms_auto_index_internal.configure( '_AUTO_INDEX_TRACE', 2, allow_internal=>true);


DBA_ADVISOR_TASKS

Automatic Indexing is based on the Optimizer Advisor Framework. Here are the new Advisor tasks:

SQL> select * from dba_advisor_tasks where owner='SYS' order by task_id;
TASK_ID TASK_NAME                          ADVISOR_NAME
      2 SYS_AUTO_SPM_EVOLVE_TASK           SPM Evolve Advisor
      3 SYS_AI_SPM_EVOLVE_TASK             SPM Evolve Advisor
      4 SYS_AI_VERIFY_TASK                 SQL Performance Analyzer
      5 SYS_AUTO_INDEX_TASK                SQL Access Advisor
      6 AUTO_STATS_ADVISOR_TASK            Statistics Advisor
      7 INDIVIDUAL_STATS_ADVISOR_TASK      Statistics Advisor


Automatic Indexing — Activity Log

DBA_ADVISOR_EXECUTIONS

The standard Advisor views can give information about the Auto Indexing activity, under TASK_ID=5 and TASK_NAME=SYS_AUTO_INDEX_TASK:

select * from dba_advisor_executions where task_name='SYS_AUTO_INDEX_TASK' order by execution_id;


DBA_AUTO_INDEX_EXECUTIONS

You don’t need to go to those views because Auto Indexing provides specific ones, based on the same WRI$_ADV_TASKS and WRI$_ADV_EXECUTIONS tables.

select * from dba_auto_index_executions order by execution_start;


DBA_ADVISOR_OBJECTS

The Advisor Framework stores additional information as objects in WRI$_ADV_OBJECTS. The Automatic Indexing ones are the log information from its activity, with TYPE_ID=31 and TYPE=’AUTO INDEX INFORMATION’ (these types are defined in X$KEAOBJT)

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='AUTO INDEX INFORMATION' order by object_id;


dbms_auto_index_internal.finding_name(attr7)

Those Advisor views show general attributes that have a different meaning for each object type, and for AUTO INDEX INFORMATION the ATTR7 identifies the finding. Those findings, as far as I know, are not visible from a table but hardcoded in the DBMS_AUTO_INDEX_INTERNAL function. Here are all possible values:

SQL> select attr7,sys.dbms_auto_index_internal.finding_name(attr7) from (select rownum attr7 from xmltable('1 to 51')) order by 1;
  ATTR7 SYS.DBMS_AUTO_INDEX_INTERNAL.FINDING_NAME(ATTR7)
1 Compiled statements
2 Statements using auto index in compilation verification(final)
3 Statements discarded (misestimate or high selectivity of indexes)
4 New index candidate
5 Candidate indexes
6 Index for rebuild
7 Rebuilt indexes
8 Redundant index
9 Redundant indexes
10 Misestimate in sql_id
11 Pruned indexes
12 SPM begin
13 Statements considered by SPM
14 Indexes in first verification
15 Indexes in second verification
16 No significant improvement with index
17 Ineffective indexes
18 Significant improvement with index
19 Effective indexes
20 Error for statement
21 Timeout for statement
22 No buffer gets for statement
23 Statement regressed or no significant improvement
24 Regressed statements
25 Statement produced same plan
26 Statement has same performance
27 Unchanged statements
28 Statement improved
29 Improved statements
30 Index created
31 Index dropped
32 Index rebuilt
33 Index marked unusable
34 Index marked visible
35 Index marked invisible
36
37 Auto index clean up work done
38 Execution validation for mis-estimated statements done
39 Auto index action based on performance validation done
40 Auto index compilation verification done
41 Statements using auto index in compilation verification
42 SPM end
43 Max space budget reached
44 Report mode, performance validated but index stays invisible
45 Out-of-space during rebuild
46 Statements in STS
47 Auto index execution start
48 Resuming auto index execution
49 Skipping table from auto index creation
50 Auto index execution end
51 Implement validated auto index

“_auto_index_log”

Oracle has an internal view to display the advisor objects as a log of its activity:

select * from sys."_auto_index_log" order by log_id;

Unfortunately, there’s no public dictionary view on it except an aggregated one to sum the statistics.

DBA_AUTO_INDEX_STATISTICS

Based on “_auto_index_log” we have some summary counters

select * from dba_auto_index_statistics where value>0 order by 1;


Automatic Indexing — New Indexes

DBA_ADVISOR_OBJECTS

The goal of Auto Indexing activity is to create (and drop) indexes and this index information is stored as objects with TYPE_ID=2 and TYPE=’INDEX’ in the Advisor Framework objects

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='INDEX' order by object_id;


“_auto_index_ind_objects”

This internal view decodes the attributes in the context of Automatic Indexes to display more information about the created indexes

select * from sys."_auto_index_ind_objects" order by object_id;


DBA_INDEXES

The full metadata is available from the dictionary views about indexes. The names of those indexes start with ‘SYS_AI_’ and they are flagged with the new AUTO column.

The DDL as generated by DBMS_METADATA has this AUTO attribute, but we cannot use it ourselves. The tablespace here is SYSTEM because I’ve not set the AUTO_INDEX_DEFAULT_TABLESPACE configuration parameter. It seems that 12cR2 ADVANCED LOW compression is used. I’ll update this post when I have more information about the licensing consequences. [Update: I raised the point and a new parameter should come in 19.3]
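As a minimal sketch (assuming at least one auto index has already been created), the auto indexes and their attributes can be listed with:

select owner, index_name, auto, visibility, compression, tablespace_name
from   dba_indexes
where  auto='YES'
order by owner, index_name;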

DBA_AUTO_INDEX_IND_ACTIONS

A public view is available to see the commands that were executed to create this index, which shows the different steps:

  • created as unusable to analyze the execution plans
  • rebuilt (online) but invisible to control which statements will verify it
  • made visible (which does not lock the table since 12c) when accepted
select * from dba_auto_index_ind_actions order by action_id;

Those actions, with start/end timestamp, give more detail about the operations that we have seen in “_auto_index_log” (New index candidate, Index rebuilt, Index marked visible).

Automatic Indexing — SQL Statements

Automatic Indexing basically works by capturing a SQL Tuning Set on which it runs the SQL Access Advisor. The tuning set is visible as SYS_AUTO_STS (here created at the same time as my first ‘Statements in STS’ entry in “_auto_index_log”).
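As a minimal sketch (assuming the tuning set is owned by SYS), it can be queried through the standard SQL Tuning Set views:

select owner, name, created, statement_count
from   dba_sqlset
where  name='SYS_AUTO_STS';

select sql_id, plan_hash_value, executions, buffer_gets
from   dba_sqlset_statements
where  sqlset_name='SYS_AUTO_STS'
order by buffer_gets desc;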


DBA_ADVISOR_OBJECTS

Automatic Indexing goes further than the SQL Access Advisor. The statements are continuously verified to detect improvements and regressions. They are stored with some flags as TYPE_ID=7 and TYPE=’SQL’:

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='SQL' order by object_id;


“_auto_index_sql_objects”

The internal view decodes attr2 as the Plan Hash Value and attr7 as some flags about the verification (my guess from a quick test is that the flag is set to 1 when the SQL was improved and to 2 when a regression has been seen):

select * from sys."_auto_index_sql_objects" order by object_id;


DBA_AUTO_INDEX_SQL_ACTIONS

The goal of those verifications is to prevent regressions by blacklisting the use of the new index for some queries. We can see that in the SQL actions:

select * from dba_auto_index_sql_actions;

This is an example where Automatic Indexing has loaded the previous plan as accepted in a SQL Plan Baseline by calling DBMS_SPM_INTERNAL.LOAD_PLANS_FROM_SQLSET.

DBA_SQL_PLAN_BASELINES

This SQL Plan Baseline is identified with the ‘EVOLVE-AUTO-INDEX-LOAD’ origin.

This is quite surprising, because fixing the previous plan not only prevents the usage of the new AUTO index, but also of any new one that I create manually (at least until the baseline is automatically evolved).
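To find those baselines, you can filter on the ORIGIN column, using the value mentioned above:

select sql_handle, plan_name, origin, enabled, accepted, created
from dba_sql_plan_baselines
where origin='EVOLVE-AUTO-INDEX-LOAD'
order by created;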

DBA_SQL_PATCHES

I expected the regressions to be locked down by SQL Patches rather than SQL Plan Baselines. In this first test, I see no SQL Patch created, but this is another dictionary view to look at when trying to understand Automatic Indexing.
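If you want to keep an eye on it anyway, this is the kind of query I would run there (it returns nothing in this first test, of course):

select name, status, created, description
from dba_sql_patches
order by created;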

19c Easy Connect

When TCP/IP became the protocol used pretty much everywhere, Oracle introduced the EZCONNECT naming method to avoid long connection strings with parentheses everywhere. They said that it was a way to avoid tnsnames.ora, but that’s not completely true:

  • you can use full connection strings without a tnsnames.ora
  • you still need a tnsnames.ora for more complex connection strings

But the idea was to replace:

(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=APP.service.net))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=2483)))

by:

localhost:2483/APP.service.net

and even reduce it to a simple hostname when using the default port and default service name.

I use it a lot when connecting manually, but as soon as there’s a need to add multiple hosts for transparent failover or load balancing, or to add some additional parameters, the full connection string is required.

In 19c, Oracle has extended the Easy Connect syntax to allow this in EZCONNECT naming and again telling us that it is a way to avoid a tnsnames.ora:

https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/configuring-naming-methods.html#GUID-B0437826-43C1-49EC-A94D-B650B6A4A6EE

I still don’t agree. You probably still prefer to deploy a common tnsnames.ora to be sure that all applications connect with the correct string, or to use an LDAP directory. However, they are not wrong... I am. I use EZCONNECT as syntactic sugar, but for Oracle it is a naming method as defined in NAMES.DIRECTORY_PATH.
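For example, it shows up in sqlnet.ora alongside the other naming methods (just an illustration of where the naming method is declared, not a recommended configuration):

NAMES.DIRECTORY_PATH=(TNSNAMES, EZCONNECT)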

However, if you have a centralized way to store the JDBC URL, then EZCONNECT will avoid any local configuration in 19c as mentioned in the documentation. Let’s see what is possible.

I’m using tnsping to resolve the EZCONNECT string to a full description, which I indent here for better readability.

This is the simplest form, and not a new one, using the default port 1521:

tnsping //geneva/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

And the most complex form before 19c mentions the server type (dedicated, shared or DRCP) and the instance name:

tnsping //geneva:1521/PDB1:pooled/CDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
(INSTANCE_NAME=CDB1)
(SERVER=pooled)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

Now in 19c we can mention multiple hosts, which will be converted to a Load Balancing address list:

tnsping //geneva,lausanne/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1521)
)
)

With the same port for all hosts:

tnsping //geneva,lausanne:1522/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1522)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

or with a different port for each host:

tnsping //geneva:1521,lausanne:1522/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

and even add parameters in URL style (starting with ? and separated by &):

tnsping //geneva:1521,lausanne:1522/PDB1?transport_connect_timeout=5&retry_count=2
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(transport_connect_timeout=5)
(retry_count=2)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

By the way, lowering the default TCP timeout is a good idea most of the time, because your network probably answers in less than 60 seconds when it is available, and you don’t want to wait one minute before trying another address. More about those settings:

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT - Blog dbi services

Note that there is no validation of the EZCONNECT string. You can mention non-existing parameters:

tnsping //geneva:1521,lausanne:1522/PDB1?make_application_fast=on
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(make_application_fast=on)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

and even try to disable the automatically added Load Balancing, which probably results in something you don’t want (both on and off end up in the description):

tnsping //geneva:1521,lausanne:1522/PDB1?LOAD_BALANCE=off
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=off)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

(I opened an SR for this one and will update this post about it)

There’s something I like to do when quickly connecting to an unmounted instance (undocumented and even more unrecommended in this way):

tnsping //geneva/CDB1)(UR=A
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=CDB1)
(UR=A)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

So it is still possible here. Do not put the UR=A as a parameter (after ?), because those parameters go with the address list rather than with CONNECT_DATA.
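Following the pattern of the previous examples, a UR=A added after the ? would presumably land at the description level, next to the addresses, which is not what you want (hypothetical output, extrapolated from the examples above rather than tested):

tnsping //geneva/CDB1?UR=A
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=CDB1)
)
(UR=A)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)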

Note that SQLcl, the SQL Developer command line, accepts all of that (tested in 18.4) except when mentioning an instance name in addition to a parameter:

But this combination is probably not very useful in most cases.

In summary, if you have a centralized way to store the connection strings, then EZCONNECT is a good idea. Developers do not like the SQL*Net descriptions with all those parentheses (they were born with XML or JSON, or even YAML for the younger ones), and this URL-like syntax will be better accepted by everyone.

Pricks Upon the Soul of Women’s Ambition

There were two trends in topics around the women in attendance that repeatedly came up at RMOUG and that I felt deserved a deeper investment of time. As I spend a significant amount of time in a more progressive atmosphere, I find I’m even more aware of them than previously. I hope that my words of support and any advice I offered were helpful to those who sought me out this week.

“Be Happy With What You Have” or “You’re Never Satisfied” Trend

Numerous conversations touched on the common thread of discomfort with women’s ambition. I honestly believe that most individuals aren’t even aware of the discomfort, but depending on its level, their responses range from what they believe to be supportive, even though it squashes that ambition, to openly hostile. It’s astounding how often we push the idea that women should be happy with what we’ve achieved, not with what others have achieved or with what our ambition hopes we’ll achieve.

This comment, in all its myriad forms, strikes me as odd:

  1. It’s often offered to us with no ill intent, but rather with sympathy in hopes of cheering us up, resulting in more damage in the long run.
  2. I’ve never once heard this type of response used with my male peers.

One woman talked about how she was told that she should be happy with how much she’d accomplished and not waste time on the ‘what ifs’. That caused me to stop and ask for details about the situation, and we quickly recognized that the ‘what ifs’ were actually goals she wanted to achieve and had set timelines and milestones for. As the discussion progressed, this incredibly skilled woman realized that most of the goals had been set in her yearly review, but as she took them on, her manager undermined her objectives, second-guessed her every move and often micro-managed her.

I asked her to replay the experience, but to replace herself in the situation with a male peer. She was incredibly surprised at how different she felt about the interaction by simply removing herself from the scenario and experiencing the situation through someone who wouldn’t be subject to the subtle layers of bias.

A second woman was telling me how frustrated she was after she had been turned down for a management position. Her manager told her that, with the personal demands on her, she should be relieved that they had awarded the promotion to her male peer, but she was anything but satisfied with it. I asked her some questions about the situation and we quickly surmised that she was more senior, had more experience and had been central to a larger percentage of high-profile projects.

My friend quickly realized why she was frustrated. She knew in her gut that she was the better qualified candidate and had been passed over for an earned promotion. Her manager, hoping to soothe her, bypassed this obvious lack of consideration and instead told her she must be relieved, which was simply salt in the wound.

The one thing that comes out of these examples is that bias is very inconspicuous. It’s rarely, if ever, direct, and it’s so ingrained in society that we may miss it until we have an honest discussion about it.

Women are going to have ambitions in their careers, the same as their male peers, and I only see this desire to succeed increasing with upcoming generations. Old biases need to be pointed out, with a continued focus on education over persecution, to change what is happening around us in everyday life that impacts the rise of our female peers in the workplace.

Telling Women to “Think Globally and Do More”

This was another conversation where the person suggesting this had no ill intentions. He simply mentioned the challenges for women’s rights in the world and said that we women should be stepping up to help change the world. The problem is, most women are already overwhelmed by demands and challenges that impact our ability to empower ourselves and those around us every day. To ask women to do more at a global level, in all honesty, isn’t helpful. I remember trying to make a global difference for just my technical arena, and it often left me frustrated and deflated. I discovered that if I focused on making a difference in just one person’s life at a time, then asked them to pay it forward, the ripples this made on a larger scale created more success.

It left me less responsible for the larger scope, only needing to worry about my responsibilities to the individuals I was mentoring and sponsoring, while the reach became more sustainable by leveraging a larger group of individuals rather than just me as an army of one.

Stop asking women to change the world. Let women (and men) lift as we rise and, in turn, make a sustainable, global change through smaller initiatives. For those that simply mention the challenges that other women face in order to dismiss a woman’s challenges in the here and now: just STOP. It’s a version of gaslighting that I have a personal pet peeve about. One woman’s challenges and setbacks are not made lesser by the crimes inflicted on another.

Peace out.



Copyright © DBAKevlar [Pricks Upon the Soul of Women’s Ambition], All Right Reserved. 2019.

Worth the wait

Yes, I know it’s been a while Smile

Yes, I know people have been angry at the delay Smile

But, can we put that behind us, and rejoice in the fact…that YES

It’s here!

Yes, 18c XE for Windows is now available.

https://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html

Most developers in your organization are probably running a Windows PC. Now every single one of them can have a fully functioning Oracle database for developing, prototyping, testing, deploying, learning, exploring, educating, … the list goes on… on their own PC for absolutely zero cost.

19c DG Broker export/import configuration

This is something I have wanted for a long time: to be able to save a broker configuration so that I can re-configure it if needed. What I usually do is maintain a script with all the commands. What I dreamed of was being able to export the configuration as a script. What we have now, in 19c, is the ability to export/import the configuration as an .xml file.

Actually, the configuration is already stored as XML in the broker configuration files (the .dat ones):

SQLcl: Release 18.4 Production on Tue Feb 19 13:40:27 2019
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.2.0.0.0
SQL> show parameter broker
NAME                   TYPE    VALUE
---------------------- ------- -------------------------------------
dg_broker_config_file1 string  ?/dbs/dr1CDB1A.dat
dg_broker_config_file2 string  ?/dbs/dr2CDB1A.dat
dg_broker_start        boolean TRUE

Those are binary files, but we can look at the content with ‘strings’. And since 12cR2 the content shows some XML. In all versions, we can see a message from the authors, ‘fine folks at NEDC’. Here is where this was developed:

NEDC Facility Information

So here are the strings in my broker file:

[oracle@db192 ~]$ strings /u01/app/oracle/product/DB192/dbs/dr1CDB1A.dat
}|{z
cdb1a
cdb1a
Brought to you by the fine folks at NEDC.
<?xml version="1.0" encoding="UTF-8"?>

ONLINE
1934436017
ONLINE
1673904225
1673904226
1673904227
cdb1a

PRIMARY
PRIMARY
...
cdb1b
cdb1a
0
1000647508


0
513

If you want to know more about those, my ex-colleague William Sescu at dbi-services explained everything:

Oracle 12c - How to Recover lost DataGuard Broker Configuration Files - Blog dbi services

Now in 19c, no need to parse that as we can export this metadata in a well-formatted XML:

[oracle@db192 ~]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 19 13:46:05 2019
Version 19.2.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "CDB1B"
Connected as SYSDG.
DGMGRL> export configuration to MYCONFIG;
Succeeded.
DGMGRL>

You cannot mention a full path, as the file goes to the diag trace directory (the one called ‘udump’ by the seasoned DBAs). If you are not sure, the broker trace shows it:

The Data Guard broker metadata is exported to /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig

Here you see that the name was transformed to lower-case, as usual in DGMGRL, but you can also quote it to make it case sensitive. What this doesn’t show is that when you do not provide an extension, ‘.log’ will be added (which is a funny default for an XML file…)

[oracle@db192 trace]$ ls -alrt $ORACLE_BASE/diag/rdbms/*/$ORACLE_SID/trace/*myconfig*
-rw-r--r--. 1 oracle oinstall 4992 Feb 19 13:46 /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig.log

If you compare it with the .dat you will see that the XML content is exactly the same:

[oracle@db192 trace]$ cat /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig.log
<?xml version="1.0" encoding="UTF-8"?>

ONLINE
1934436017
ONLINE
1673904225
1673904226
1673904227
cdb1a

PRIMARY
PRIMARY
...
cdb1b
cdb1a
0
1000647508


0
513

Once I have the export, I test it by removing the configuration and importing it again:

DGMGRL> import configuration from myconfig;
Succeeded. Run ENABLE CONFIGURATION to enable the imported configuration.
DGMGRL> enable configuration
Enabled.

This import must be done from the primary or you get an error.

Actually, the XML file is cool because it has more information than just the configuration commands. Here is the XML formatted in https://countwordsfree.com/xmlviewer:

Here I have the history of the last 10 switchovers, which can be very useful. And then you want to know: what is this timestamp unit?

In the XML file you will find many timestamps.

Most of them are EPOCH, the number of seconds since 01-JAN-1970:


Success
0
1550611393

This is 19-FEB-2019 21:23:13, which is exactly when I exported this file.
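A quick way to double-check that in SQL (a simple conversion, ignoring any timezone considerations):

select to_char(date '1970-01-01' + 1550611393/86400,'DD-MON-YYYY HH24:MI:SS') from dual;
-- 19-FEB-2019 21:23:13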

But look at the switchover history:


PhysicalSwitchover
cdb1a
cdb1b
0
1000674973


PhysicalSwitchover
cdb1b
cdb1a
0
1000675048


PhysicalSwitchover
cdb1a
cdb1b
0
1000675267

Those are the number of seconds since 06-JUN-1987 and are actually the times when the switchover commands started: 19.02.19 at 21:16:13, 21:17:28, and 21:21:07.
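The same quick check works with the 06-JUN-1987 origin (again ignoring timezone considerations):

select to_char(date '1987-06-06' + 1000674973/86400,'DD-MON-YYYY HH24:MI:SS') from dual;
-- 19-FEB-2019 21:16:13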

Unique Indexes Force Hints To Be “Ignored” Part II (One Of The Few)

In Part I, I showed a demo of how the introduction of a Unique Index appears to force a hint to be “ignored”. This is a classic case of what difference a Unique Index can make in the CBO deliberations. So what’s going on here? When I run the first, un-hinted query: we notice something a […]