Oakies Blog Aggregator

Controlling The Presentation Monster (Preparing to Present)

As I covered before, nerves before a presentation are not a bad thing. In fact, many excellent presenters/performers recognise that those butterflies of anxiety are necessary to make your presentation really sing. But you need to control the Presentation Monster. You want to take it down from lion-taming to annoyed-but-fundamentally-not-evil-cat-training.

[Image: https://mwidlake.files.wordpress.com/2019/11/monster.jpg]

Presentation Monster Gonna Get You

Embrace the Emotion

As the linked-to post above describes, nerves before a performance (and presenting is a kind of performance) are normal. So the first thing to do is accept not only that you will be nervous/anxious/really wanting the toilet very badly, but also that if you weren't, your talk would probably be poor.

Just accepting that, and knowing that the people you see presenting apparently in an island of calm are mostly faking it, helps. If they can fake it, so can you. Some of the below will help you turn down the anxiety dial or, if there is a need, even turn it up a little to get you buzzing.

Practice, practice…. practice.

I know it sounds obvious, but this is so true. You need to run through your presentation several times and in the right way. And people often don’t do it well.

When I prepare a new presentation, once it is written, I want to run through it from start to finish, in real time, 3 times. This is where most people go wrong and they make one of the following mistakes:

  • They will spot a problem on a slide, for example some text is garbled or an image is too small. And they stop to fix it, and then continue the practice session. Well, you just stopped the flow of it all and any timings you do will be broken. Don't do this – if you are still correcting big mistakes then your talk is not ready for the practising step; small mistakes you can go back to once you finish.
  • As each slide flicks up they go “yeah, OK, I know what I am going to say” – and move on. Don’t. Say it. Imagine the audience, talk to them, include any anecdotes or side points you plan (or suddenly think of), and speak slowly. It is way better to be faster for the real thing than slower as most presentations lead up to a Big Point or the Most Interesting Bits at the end, and if you run out of time…
  • They never time it. How do you know this takes 45 minutes unless you have done it in 45 minutes?

Practice any anecdotes or jokes. Ideally you want them to sound spontaneous but, like sincerity, you can fake spontaneity.

Video : Oracle REST Data Services (ORDS) : OAuth Authorization Code

In today’s video we look at the OAuth Authorization Code flow for Oracle REST Data Services.

This goes together with a previous video about first-party authentication here.

Both videos are based on parts of this article.

There are loads of other ORDS articles here.

The star of today's video is Atul Kumar, who has done a bunch of videos on his YouTube channel.

Cheers

Tim…



Parse Time

This is a note I started drafting in October 2012. It's a case study from an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what the optimizer was doing that fooled it into spending a huge amount of time optimising a statement that ultimately executed very quickly.

Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.

I was prompted a couple of days ago to publish the notes so far because I was reminded, in a conversation with members of the Oak Table Network, of an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace. If some optimisation step takes "too long" (1 second, by default) then the optimizer will write a "TIMER:" line into the trace file telling you what the operation was, how long it took to complete and how much CPU time it used. The default for "too long" can be adjusted by setting a "fix control". This makes it a lot easier to find out where the time went if you see a very long parse time.

But let’s get back to the original trace file and drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty  performance issue.

=============================================================================================

 

What do you do about a parse time of 46 seconds ? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144  
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)
       ... and lots more lines of plan

According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.

Things were actually worse than they seemed at first sight; when we saw more of the tkprof output the following also showed up:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE 
  NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') 
  NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"), 
  NVL(SUM(C2),:"SYS_B_01") 
FROM
 (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002") 
  NO_PARALLEL_INDEX("VAL_000002") */ :"SYS_B_02" AS C1, 
  CASE WHEN
    ...
  END AS C2 FROM "BISWEBB"."RECORDTEXTVALUE" 
  SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002" 
  WHERE ... 
 ) SAMPLESUB
 
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)

This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.

This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.

The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.

Since it’s not easy to process 200MB of trace the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.

My aim is to investigate a very long parse time and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I’ve also noticed that the dynamic sampling code ran five times – so I’m expecting to see some critical stage of the optimisation run 5 times (although I don’t know why it should).

Step 1: Use grep (or find if you're on Windows) to do a quick check for the number of join orders considered. I'm just searching for the text "Join order[" appearing at the start of a line and then counting how many times I find it:

[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc  | wc -l
6266

That’s 6,266 join orders considered – let’s take a slightly closer look:

[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):

Number of join permutations tried: 581

So a quick grep for "join permutations" might be a good idea. (In the absence of this line I'd have got to the same result by directing the earlier grep for "^Join order\[" to a file and playing around with the contents of the file.)

[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1
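
If you don't want to add up the five long searches by hand, a one-liner along these lines (my own shortcut, not part of the original note) does the arithmetic, treating anything over 100 permutations as one of the "long" searches:

# total the "long" searches (anything over 100 permutations); for this
# trace the answer should come back as 2873
grep "join permutations" orcl_ora_25306.trc |
        awk '$NF > 100 {sum += $NF} END {print sum}'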

The key thing we see here is that there are five sections of long searches, and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 “tables”. This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for “^Join order[“ – so where do the extra join orders come from ? Let’s take a closer look at the file where we dumped all the Join order lines – the last 10 lines look like this:

4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again, we could zoom in a little closer, does the text near the start of the two occurrences of join order 581 give us any clues ? We see the following just before the second one:

****** Recost for ORDER BY (using join row order) *******

The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:

[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******

So we've done a huge amount of recosting. Let's check the arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (and we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).

We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.

[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989

Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or get very close to completing, over the whole 14 tables. (The optimizer remembers “partial results” from previous join order calculations, so doesn’t have to do 13 “Now joining” steps on every single join order.)

We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:

       Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************

As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two above. Here’s a suitable script, followed by a call to use it:

{
        # save this as cost.awk: whenever the current line contains "Best so far"
        # print the current line number followed by the line from two lines back
        # (m2), which holds the accumulated cost for that join order
        if (index($0,"Best so far") != 0) {print NR m2}
        m2 = m1; m1 = $0;
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt

There was a bit of a mess in the output – there are a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that cause irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets” that suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.

Side bar: I have an old note from OCIS (the precursor, or the precursor of the precursor, of MOS) that the optimizer will stop searching when the number of join orders tested * the number of "non-single-row" tables (according to the single table access path) * 0.3 is greater than the best cost so far.  I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula.  The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed – the model didn't seem to work when I ran it against 19.3 – but the principle probably still holds true.
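
Purely as an illustration of how that formula would play out here (my arithmetic, assuming the old formula still applied and that all 14 tables counted as "non-single-row"):

    cut-off (join orders) = best cost so far / (14 * 0.3)
                          = 206,984 / 4.2   (roughly 49,000)
                or        =   1,500 / 4.2   (roughly 360)

so the cut-off could never be reached while the best cost was still around 206,000, but once the cost dropped sharply it would be hit within a few hundred more permutations, which is at least in the same ballpark as the 374 to 674 permutations reported for the long searches.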

At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order, and what it was about the better join order that the optimizer had missed. This might allow us to recognise some error in the statistics for either the "bad" starting order or the "good" starting order and allow us to solve the problem by (e.g.) creating a column group or gathering some specific statistics. We might simply decide that we'll take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or simply take the entire outline and attach it to the query through a faked SQL Profile (or embedded set of hints).

However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.

The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” so I wondered if this might be relevant. So I did a quick check near the start of the main body of the trace file for anything that might be a clue, and found the following just after the “QUERY BLOCK SIGNATURE”.

QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:   Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)

As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”

[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)

There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”


16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374

622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673

1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674

2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571

4458608:Number of join permutations tried: 581

At this point my notes end and I don’t know where I was going with the investigation. I know that I suggested to the OP that the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy). I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.

The final plan did involve all 3 pushed predicates (which looks like it might have been from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.

Version 12c and beyond

That was then, and this is now. And something completely different might have appeared in 12c (or 19c) – but the one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a “TIMER:” line in the trace file, making it very easy to run grep -n “TIMER:” (or -nT as I recently discovered) against the trace file to pinpoint the issue very quickly.
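
A minimal sketch of capturing such a trace with the TIMER threshold lowered might look like this (my own sketch, not from the original note; the statement is just a placeholder, so substitute the query you are investigating):

sqlplus / as sysdba <<'EOF'
alter session set tracefile_identifier = 'parse_timer';
alter session set "_fix_control" = '16923858:4';
alter session set events '10053 trace name context forever, level 1';
explain plan for select * from dual;   -- substitute the problem statement here
alter session set events '10053 trace name context off';
EOF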

Here's an example from my "killer_parse.sql" query after setting "_fix_control"='16923858:4' (1e4 microseconds = 1/100th second) in an instance of 19c:


$ grep -nT TIMER or19_ora_21051.trc

16426  :TIMER:      bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758 :TIMER:     costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460 :TIMER:      bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584 :TIMER: CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974 :TIMER: Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716 :TIMER: Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036 :TIMER: Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207 :TIMER: Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208 :TIMER: Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948 :TIMER:      bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935 :TIMER: Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092 :TIMER: SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec

The closing 21.63 seconds of SQL Optimization (line 633092) is largely the 7.756 seconds of Access Path Analysis (line 632935) plus the 13.666 seconds of Cost-Based Transformation time (line 318208), and that 13.666 seconds is mostly the 1.475 seconds of Complex View Merging (line 313974) plus the 10.508 seconds of CBQT OR expansion (line 312584) – so let's try disabling OR expansion (alter session set "_no_or_expansion"=true;) and try again:


$ grep -nT TIMER or19_ora_22205.trc
14884  :TIMER:      bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228  :TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234  :TIMER: SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec

Not only was optimisation faster, the runtime was quicker too.

Warning – it’s not always that easy.

 

Library Cache Stats

In response to a comment that one of my notes references a call to a package "snap_libcache", I've posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn't report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem
rem Script: snap_11_libcache.sql
rem Author: Jonathan Lewis
rem Dated: March 2001 (updated for 11g)
rem Purpose: Package to get snapshot start and delta of library cache stats
rem
rem Notes
rem Lots of changes needed by 11.2.x.x where x$kglst holds
rem two types - TYPE (107) and NAMESPACE (84) - but no
rem longer needs a complex decode.
rem
rem Has to be run by SYS to create the package
rem
rem Usage:
rem set serveroutput on size 1000000 format wrapped
rem set linesize 144
rem set trimspool on
rem execute snap_libcache.start_snap
rem -- do something
rem execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
procedure start_snap;
procedure end_snap;
end;
/

create or replace package body snap_libcache as

cursor c1 is
select
indx,
kglsttyp lib_type,
kglstdsc name,
kglstget gets,
kglstght get_hits,
kglstpin pins,
kglstpht pin_hits,
kglstrld reloads,
kglstinv invalidations,
kglstlrq dlm_lock_requests,
kglstprq dlm_pin_requests,
— kglstprl dlm_pin_releases,
— kglstirq dlm_invalidation_requests,
kglstmiv dlm_invalidations
from x$kglst
;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1 w_type1;
w_empty_list w_type1;

m_start_time date;
m_start_flag char(1);
m_end_time date;

procedure start_snap is
begin

m_start_time := sysdate;
m_start_flag := 'U';
w_list1 := w_empty_list;

for r in c1 loop
w_list1(r.indx).gets := r.gets;
w_list1(r.indx).get_hits := r.get_hits;
w_list1(r.indx).pins := r.pins;
w_list1(r.indx).pin_hits := r.pin_hits;
w_list1(r.indx).reloads := r.reloads;
w_list1(r.indx).invalidations := r.invalidations;
end loop;

end start_snap;

procedure end_snap is
begin

m_end_time := sysdate;

dbms_output.put_line('---------------------------------');
dbms_output.put_line('Library Cache - ' ||
to_char(m_end_time,'dd-Mon hh24:mi:ss')
);

if m_start_flag = 'U' then
dbms_output.put_line('Interval:- ' ||
trunc(86400 * (m_end_time - m_start_time)) ||
' seconds'
);
else
dbms_output.put_line('Since Startup:- ' ||
to_char(m_start_time,'dd-Mon hh24:mi:ss')
);
end if;

dbms_output.put_line('---------------------------------');

dbms_output.put_line(
rpad('Type',10) ||
rpad('Description',41) ||
lpad('Gets',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Pins',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Invalidations',14) ||
lpad('Reloads',10)
);

dbms_output.put_line(
rpad('-----',10) ||
rpad('-----',41) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('-------------',14) ||
lpad('-------',10)
);

for r in c1 loop
if (not w_list1.exists(r.indx)) then
w_list1(r.indx).gets := 0;
w_list1(r.indx).get_hits := 0;
w_list1(r.indx).pins := 0;
w_list1(r.indx).pin_hits := 0;
w_list1(r.indx).invalidations := 0;
w_list1(r.indx).reloads := 0;
end if;

if (
(w_list1(r.indx).gets != r.gets)
or (w_list1(r.indx).get_hits != r.get_hits)
or (w_list1(r.indx).pins != r.pins)
or (w_list1(r.indx).pin_hits != r.pin_hits)
or (w_list1(r.indx).invalidations != r.invalidations)
or (w_list1(r.indx).reloads != r.reloads)
) then

dbms_output.put(rpad(substr(r.lib_type,1,10),10));
dbms_output.put(rpad(substr(r.name,1,41),41));
dbms_output.put(to_char(
r.gets - w_list1(r.indx).gets,
'999,999,990')
);
dbms_output.put(to_char(
r.get_hits - w_list1(r.indx).get_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.get_hits - w_list1(r.indx).get_hits)/
greatest(
r.gets - w_list1(r.indx).gets,
1
),
'999.0'));
dbms_output.put(to_char(
r.pins - w_list1(r.indx).pins,
'999,999,990')
);
dbms_output.put(to_char(
r.pin_hits - w_list1(r.indx).pin_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.pin_hits - w_list1(r.indx).pin_hits)/
greatest(
r.pins - w_list1(r.indx).pins,
1
),
'999.0'));
dbms_output.put(to_char(
r.invalidations - w_list1(r.indx).invalidations,
'9,999,999,990')
);
dbms_output.put(to_char(
r.reloads - w_list1(r.indx).reloads,
'9,999,990')
);
dbms_output.new_line;
end if;

end loop;

end end_snap;

begin
select
startup_time, 'S'
into
m_start_time, m_start_flag
from
v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;
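
A typical run, following the header comments above (a sketch wrapped in a sqlplus call; the credentials are just placeholders, and any account will do once the public synonym and grant are in place):

sqlplus test_user/test_password <<'EOF'
set serveroutput on size 1000000 format wrapped
set linesize 144
set trimspool on
execute snap_libcache.start_snap
-- run the workload you want to measure here
execute snap_libcache.end_snap
EOF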

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.

Friday Philosophy – Jerks, Plain & Simple

A couple of weeks ago I saw a post on social media by someone who had just encountered a jerk. You know, someone who is an arrogant, bullying, self-important cockwomble (*).

The offended party had tweeted a link to the thread where the abuse had happened and I went and took a look. It’s not really that important in what manner the jerk was being a jerk, though in this case they were asking for free help that the original poster was giving, and when they did not get as much help as they felt entitled to, they became abusive to the person helping. Sadly, nothing new there.

But what did make me pause was that the original tweet that linked to this tale of the toss-pot ended with the line “Welcome to my world as a woman in I.T.” And I paused – but not for the obvious reason.

I went back and checked the thread to make sure I was right, that I had not missed anything. I hadn’t, not that I could see on the thread anyway.

At no point in this woman's dealings with this jerk had anything been said about sex, gender, male, female, anything. This person asking for help was undoubtedly not a nice person; the speed with which they swapped from "please help me" through "You have to do more for me" and then on to a tirade of abuse was rapid. And it was nasty abuse – but none of it was sexist.

The woman had made a point that they received this sort of treatment because of their gender – but there was no sign of gender being part of this at all.  And that bothered me. It bothered me for three reasons.

  • I keep coming across people who immediately assume that if someone treats them badly, is offensive, does not like them, or is in any way a jerk towards them, it is because of whatever minority group they are part of. Sometimes sadly that is the case. However, in others it is not – but the offended person has stopped being able to discern that difference. At times I’ve suffered from bullying and being ignored in my own little way but I realized that I was being over sensitive and sometimes I was being ignored just because the person ignoring me ignored pretty much everyone, or was derogatory to everyone. It was nothing to do with my height.
  • Maybe because of that first point, where any issue is perceived as being due to an “..ism”, some people have developed an attitude that all people not in their minority group are against them. For example, I regularly come across the attitude of “all men are….”. I’ve been told to my face that all men think that women are not as good at programming as men. Well, you’re wrong. Not all men think like that. I told the person in question I did not hold that opinion and the reply was something like “well you’re about the only exception then!”. They were determined to hold a point of view in which it was not that there are still some sexist men about – but that all men were sexist, and rabidly so. That’s pretty annoying and it risks making people not want to help fight your corner.
  • I've had people say to me "I can't say anything about X doing this wrong as I'll only get accused of …ism" – and it's occasionally been a worry for me. This can lead to a backlash where people get away with poor work or bad behaviour as no one wants to be labelled with an "…ism".

What worries me about this "cry wolf" scenario and the attitudes of "they are all going to be out to get you" is that it actually perpetuates the very thing we need to stand against. When I come across someone who is constantly, always citing all their interpersonal problems as being due to the minority they associate themselves with, I confess I think to myself "well, perhaps I'll be a little wary of you, you seem to have issues here". It's like a chap I met who was adamant that every boss he had ever had was "a moron". He'd had a lot of bosses and he could not accept that maybe, just maybe, at times perhaps the boss was not the problem.

Don’t get me wrong, we absolutely should challenge behaviour that treats a group of people as inferior, just for being part of that group. We should not condone any “..ism”. We should all ask ourselves if we are being unconsciously prejudiced and, if so, work to stop that. But we should be wary of citing bigotry as a knee-jerk reaction or falling into the error of challenging sexism, racism etc with exactly the same attitude but just from the other side of the fence.

And I think part of this is recognising that sometimes, often, jerks are just jerks. There is no …ism. Let’s just call them out for being jerks. And if there is an …ism, we call them out for being absolute, utter jerks.

 

(*) cockwomble is a term that seems to be becoming more universally recognised. It just means someone who is a bit of an idiot, someone you don't want to spend any more time with than you need. A Womble is a creature from UK kids' TV in the 70s and 80s. They are made-up animals that wander around Wimbledon Common (a large park in London), tidying it up and making things out of the rubbish they find. Sue made this cockwomble out of a beany-bag womble and a beany-bag cockerel.

Delphix XPP explained

This article was originally posted on the Delphix Support blog on 15-Nov 2015, but with the deprecation of the XPP feature with the new 6.0 release and higher, it was decided best to remove this article.

So, I have saved it and posted it here instead, almost 4 years to the day after it was originally posted…

The topic of converting Oracle databases from one of the proprietary
UNIX platforms (i.e. Solaris, AIX, or HP-UX) to Linux seems at first pretty
esoteric and far-fetched.

Meh. Right?

Plus, a lot of folks who have used Solaris, AIX, and HP-UX over the years can
argue that those operating systems have far more capabilities and
technical advantages than Linux, and they may be absolutely correct.  Who wants to get caught up in a religious debate?

Meh again.  Right?

But interestingly enough, the big issue here is not the relative technical merit.

The Cloud is about the commoditization of servers and services.  Unless you are using the hosting services or Cloud services offered by the operating system vendor themselves (i.e. Oracle for Solaris, IBM for AIX, or HP for HP-UX), then the only operating systems being offered by hosting or Cloud vendors are either Linux or Windows on Intel x86 or x86_64.

So, converting an Oracle database from UNIX to Linux is a prerequisite to moving it to the Cloud.  Betcha didn’t think of it that way, eh?

This is why Delphix uses the term modernization to describe the capability of cross-platform provisioning (i.e. XPP), also known as UNIX-to-Linux (i.e. U2L).  It is not because Linux on x86/x86_64 platforms is any more modern than Solaris on SPARC, AIX on Power, or HP-UX on PA-RISC/Itanium, but because modernization involves moving the data center to hosting or the Cloud.

I’m not willing to engage in an argument about which platform has more technical merit than one another.  However, I will assert that if you’re not running Oracle on Linux, then you are not positioned for the future, just based on the economics.  If you really think technical merit beats economics, then I have a nice VAX running VMS for you.

So, Delphix XPP a.k.a. U2L is the on-ramp to the Cloud.  I prefer the term “XPP” rather than “U2L”, so I’m going to use that going forward…

Now let’s be clear on a few things.  Only Oracle can write the software to migrate an existing Oracle database from one operating system to another, from one “endian” chipset to another.  So Delphix has not, and could not, write the basic processes to perform conversion, because Oracle datafile formats are themselves proprietary.  But what Delphix has done is take the processes for conversion created by Oracle and actually made them feasible.

Here’s how…

The capability of cross-platform migration was introduced to Oracle RMAN in Oracle10g database release 1 (i.e. 10.1).  This was the CONVERT DATABASE functionality, and it was only capable of converting same-endian chipsets.

The terms “big endian” and “little endian” were invented in Jonathan Swift’s satirical novel “Gulliver’s Travels”, first published in 1726.

The term refers to a civil war in the novel between the people of Lilliput, some of whom ate their soft-boiled eggs by cracking the blunt or “big” end of an egg, others who ate their soft-boiled eggs by cracking the pointy or “little” end of an egg.  This wickedly sharp satire accurately portrays controversy and conflicts over the choices of ordering of bytes within words by processor manufacturers in the 1960s and 1970s, which have subsequently been propagated forward to this day.

-Jonathan Swift “Gulliver’s Travels”, 1726

All three of the proprietary UNIX variants (i.e. Solaris on SPARC, AIX on Power, and HP-UX on both PA-RISC and Itanium) are big-endian.  All x86 and x86_64 chips are little-endian, so XPP involves converting numerics from big-endian to little-endian.

In Oracle10 release 2 (i.e. 10.2), Oracle added the CONVERT TABLESPACE and CONVERT DATAFILE functionality which permit conversion between endian platforms (i.e. big to little or little to big).
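
Purely as an illustration (a sketch with made-up file names, not something from the original article), a conversion using that functionality, run on the Linux destination against a datafile copied from the UNIX source, looks roughly like this:

rman target / <<'EOF'
# the platform name must match an entry in v$transportable_platform
CONVERT DATAFILE '/stage/from_unix/users01.dbf'
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  DB_FILE_NAME_CONVERT '/stage/from_unix/', '/u01/oradata/LNXDB/';
EOF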

So, at this point in time, XPP is functionality that has existed in Oracle for about 10 years now, or as long as Oracle10gR2 has been available.

But XPP hasn’t been a frequently-used feature over that time, as you are no doubt aware.  I know lots of people have known about it, but very few have actually done it.  And even fewer have done it outside of playgrounds, out in real-life databases.

There are two reasons for this: 1) the trend toward commoditization of x86/x86_64 is only now accelerating with the maturation of the Cloud and 2) there are substantial obstacles.

The most substantial obstacle is the fact that the functionality involving endian conversion (i.e. CONVERT DATAFILE/TABLESPACE) also includes Oracle’s transportable tablespace (a.k.a. TTS) functionality.  TTS requires that the source database be in read-only state during any tablespace transport operations.

Now, if you’re trying to convert a Linux copy of a UNIX production database, you don’t want to have to interrupt service in the production database by making it read-only.  In the IT biz, we call that down time and it is bad.  Instead, what you’re going to have to do is create a full copy of the UNIX production database, and then make that copy read-only.

This sounds innocuous enough, but let’s think that through?  If our production database on UNIX is several terabytes or larger, and we wish to simply test the conversion to Linux using CONVERT DATAFILE / TABLESPACE, then we need enough space for two additional copies of the production database:  one for the source UNIX copy (to be set to read-only) and one for the target converted Linux copy.  To paraphrase an old saying, “A couple terabytes here and a couple terabytes there, and pretty soon you’re talking real storage“.  And of course, it is not just about allocating twice the volume of the production storage for this test, there is also the time involved in copying two complete sets of the production database.  Needless to say, it is not easy to enter into this exercise lightly.  Nor repeatedly.  And so, this 10 year old functionality remains generally a big mystery.

Enter Delphix data virtualization.

Delphix can create a virtual database (VDB) for the read-only UNIX copy of the production database, requiring only a few minutes to completely provision regardless of the size of the database.  Of course, practically no additional storage is required for this virtual database which is set to read-only almost immediately and stays read-only throughout its brief lifetime.

Even better, Delphix can also create a VDB for the converted Linux copy of the database, and because the Delphix file-system is able to transparently deduplicate (or mooch) blocks that contain the same contents but in different endian format, even the converted Linux copy of the database consumes very little storage as a result of the conversion.

To use numbers from actual experience, converting a 5 TB production UNIX database without Delphix requires the allocation and copying of 10 TB of additional storage.  Converting the same 5 TB production UNIX database using Delphix requires less than 12 GB of additional storage.

Please note the change from “TB” to “GB” in that last sentence? That is an order of magnitude difference.

Admittedly, while the storage savings are consistent, the amount of time needed to perform XPP potentially includes one or two long-running operations, which is one fewer long-running operation than when not using Delphix.  The candidate long-running operations are…

  1. Copying the production UNIX database to create the non-production UNIX database copy
  2. Copying database metadata for TTS operations from the non-production UNIX database copy to the converted Linux database
  3. Using the RMAN CONVERT DATAFILE command to convert all datafile blocks from UNIX to Linux

Item #1 is a long-running and expensive step only without Delphix XPP; with XPP, it is fast and inexpensive.

Item #2 is only an issue for certain databases supporting enormous applications like Peoplesoft which contain millions of objects like tables and indexes.  For most applications, there are only several thousand objects, so the single-threaded export/import of database metadata is not a concern.

Item #3 might be an issue if the database is very large, because the RMAN CONVERT DATAFILE operation is converting every single block in the database.

Both items #2 and #3 could be issues regardless of whether Delphix XPP is used or not.

So please understand from the beginning that XPP might be a long-running process because of the one or two long-running steps.  At least, with Delphix XPP there are only the one or two potentially long-running steps, because without Delphix XPP there is always one more long-running step.

Also, please understand that without the orchestration provided by Delphix, there are about a dozen non-trivial smaller steps to be performed manually in addition to the 2-3 long-running steps mentioned above.  For example, after the RMAN CONVERT DATAFILE processes are complete, there are several small post-processing tasks, such as recompiling all PL/SQL objects in the database, and so on.  Doing these steps manually as documented by Oracle is prone to human error, and the additional time and effort of automating these steps is strongly recommended without Delphix.

From a cookbook standpoint, there are two major phases of a Delphix XPP operation:  1) validation and 2) conversion, as documented online here.

The validation step, or validating an Oracle dSource for XPP, is documented online here.  It automates the creation of a UNIX database copy to be set to read-only so that validation procedures like the CHECK_DB and CHECK_EXTERNAL procedures in the DBMS_TDB package and the TRANSPORT_SET_CHECK procedure in the DBMS_TTS package can be executed automatically.  These procedures will fail if the database is not in read-only mode, and will verify whether the database contains any of the many restrictions listed in Oracle documentation for conversions online here and for transporting tablespaces online here.  Validation with Delphix XPP is fast and easy, so it can be performed over and over, to ensure that any restrictions encountered have been addressed and eliminated, one way or the other.  Delphix XPP also allows the upload of a SQL script to fix issues identified during validation, if it is not feasible to fix the problem in the source production database itself.  The creation and upload of scripts to fix identified XPP issues is documented online here.
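
For reference, the checks that this step automates look roughly like the following when run by hand (a sketch with illustrative tablespace names; as noted above, the DBMS_TDB checks only report usefully while the database copy is open read-only):

sqlplus / as sysdba <<'EOF'
set serveroutput on
declare
  ok boolean;
begin
  -- can this database be transported to the target platform?
  ok := dbms_tdb.check_db('Linux x86 64-bit', dbms_tdb.skip_none);
  -- are there directories, BFILEs or external tables to deal with?
  ok := dbms_tdb.check_external;
end;
/
-- are the nominated tablespaces self-contained?
execute dbms_tts.transport_set_check('USERS,APP_DATA', true)
select * from transport_set_violations;
EOF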

The conversion step, or provisioning a Linux VDB from a UNIX dSource, is fully automated and is documented online here.  Conversion is possible only after validation is successfully completed.  If a SQL script was uploaded to address restrictions encountered during validation, then that script should continue to be used during conversion.  During conversion, Delphix automates the many small and large steps, including the provisioning of a read-only UNIX database copy, the export/import of database metadata, and the RMAN CONVERT DATAFILE processes.

So you can see that a lot of blood, sweat, and tears have been put into this feature.

XPP is another example of the enabling power of data virtualization.  Without it, Oracle cross-platform migration is a complicated, expensive, and slow process with many manual steps.  With Delphix data virtualization, XPP is simple and inexpensive.  It is still not a fast process due to the long-running steps described earlier, but even so it may be fast or easy enough to become feasible as your organization moves toward migration into the Cloud.

Do you work in I.T?

If you do, then you know that only 20% of your I.T work is spent at the office. And no, I'm not talking about telecommuting or working remotely here. I'm talking about the 80% of our time we spend handling all those I.T calls from friends and family.

So I was motivated to put together this little ode – dedicated to all of us that have to go the extra mile to keep our loved ones' I.T devices working as they should.

Enjoy and share with all those I.T people in your life.

Getting started with #Exasol on #AWS

It’s amazingly easy to run an Exasol Cluster on Amazon Web Services (AWS).

Subscribe Exasol in AWS marketplace

After having registered and having logged in to your AWS account, go to the AWS marketplace and search for Exasol:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws01.png]

Click on the Exasol Single Node and Cluster BYOL link and then on Continue to Subscribe:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws02.png]

After having reviewed the T&C, click on Accept Terms. It shows this message afterwards:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws03.png]

Create Key Pair

Now login to the AWS Management Console, select a region close to your location and open the EC2 Dashboard. Click on Key Pairs:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws04.png]

Click on Create Key Pair now and enter a name for the new Key Pair, then click on Create:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws05.png]
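
As an aside (not part of the original walkthrough), the same key pair can also be created with the AWS CLI; the key name here is just an example:

# create the key pair and save the private key locally
aws ec2 create-key-pair --key-name exasol-demo \
    --query 'KeyMaterial' --output text > exasol-demo.pem
chmod 400 exasol-demo.pem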

Now you are ready to use the Exasol Cloud Deployment Wizard. Stay logged in with AWS Management Console as you will be routed back there by the Deployment Wizard soon.

Using the Cloud Deployment Wizard

Put this URL into your browser: https://cloudtools.exasol.com/ and click on AWS then:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws06.png]

Select a region close to your location and click on Continue:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws07.png]

Click on Advanced Configuration and specify

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws08.png]

License Model: Bring-your-own-license, System Type: Enterprise Cluster, Instance Family: Memory Optimized, Instance Type: r5, Instance Model: r5 large, Number of DB Node: 1, then click Continue.

BYOL works without a license file, with a limit of 20 GB of memory for the database. That means no costs are charged by Exasol (but by Amazon) for this environment.

Select create new VPC and click on Launch Stack on this page now:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws09.png]

Using CloudFormation

This takes you to the Quick create stack page of CloudFormation in AWS Management Console:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws10.png]

Enter these details on the page:

  • Stack name
  • Key Pair (select the key pair created previously)
  • SYS User Password
  • ADMIN User Password
  • Public IPs (true)

Tick the acknowledge box and click on Create stack.
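If you want the deployment to be repeatable, the stack can also be created from code instead of the Quick create page. The sketch below uses boto3; the template URL and the parameter key names are assumptions for illustration only, the real ones come from the Launch Stack link of the Deployment Wizard and may be named differently:

import boto3

cfn = boto3.client("cloudformation", region_name="eu-central-1")

# Parameter keys and template URL below are hypothetical placeholders;
# check the actual Exasol CloudFormation template for the correct names.
cfn.create_stack(
    StackName="exasol-demo",
    TemplateURL="https://example.com/exasol-template.yaml",
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "exasol-demo-key"},
        {"ParameterKey": "SysUserPassword", "ParameterValue": "********"},
        {"ParameterKey": "AdminUserPassword", "ParameterValue": "********"},
        {"ParameterKey": "PublicIPs", "ParameterValue": "true"},
    ],
    # Corresponds to ticking the acknowledge box in the console
    Capabilities=["CAPABILITY_IAM"],
)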

Now go to the EC2 details page and copy the Public IP of the management node:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws11.png]
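You can also look the address up from a script rather than the console. CloudFormation tags every instance it creates with the stack name, so a boto3 sketch like the one below lists the stack's instances together with their public IPs (assuming the stack name used earlier); which one is the management node you can tell from its name in the EC2 console:

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# List all instances created by the stack (stack name as used above)
resp = ec2.describe_instances(
    Filters=[{"Name": "tag:aws:cloudformation:stack-name",
              "Values": ["exasol-demo"]}]
)
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("PublicIpAddress"))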

Put that IP with an https:// prefix into a browser and, when the browser warns about the site’s certificate, click on Advanced to proceed:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws12.png]

Then you should see a progress bar like this:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws13.png]

After about 30 minutes, that screen changes to the EXAoperation login screen.

Log in as user admin with the password you specified previously on the CloudFormation Quick create stack page. There should be a database running:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws14.png]

As you can see, you now have a database, a remote archive volume using an Amazon S3 bucket (ready for backup & restore), and a log service to monitor your system.

This database is limited to 20 GB of memory unless a license file is uploaded to the license server, aka the management node. For educational purposes, I don’t need more.

Use Elastic IPs

The public IPs of your data nodes will change upon every restart, which is probably not convenient.
Therefore, click on Elastic IPs in the EC2 dashboard, then click on Allocate new address:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws15.png]

Select Amazon pool then click on Allocate:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws16.png]

Click on the IP on the following screen:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws17.png]

Select the action Associate address on the next screen:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws18.png]

Select the data node from the Select instance pull down menu and click on Associate:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws19.png]

Close the next screen and go to the EC2 instance page. You should see the elastic IP assigned to the data node there:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws20.png]
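The same two steps (allocate an address, then associate it with the data node) can be scripted with boto3 as well. A minimal sketch, with the instance ID as a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Allocate a new Elastic IP from Amazon's pool
allocation = ec2.allocate_address(Domain="vpc")
print("Allocated", allocation["PublicIp"])

# Associate it with the data node (instance ID is a placeholder)
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)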

Connect with a SQL Client to your Exasol database on AWS

This is how that looks with DbVisualizer:

[Screenshot: https://uhesse.files.wordpress.com/2019/11/aws21.png]
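A programmatic connection works just as well as a GUI client. The sketch below assumes the pyexasol package and uses a placeholder address; 8563 is Exasol's default database port:

import pyexasol

# Connect to the data node via its Elastic IP (placeholder address)
conn = pyexasol.connect(
    dsn="203.0.113.10:8563",
    user="sys",
    password="your_sys_password",
)

stmt = conn.execute("SELECT CURRENT_TIMESTAMP")
print(stmt.fetchall())
conn.close()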

And that’s it: Now you have an Exasol 1+0 cluster running on AWS. That’s not the same as a single node system, because this 1+0 cluster can be enlarged with more data nodes. I will show how to do that in future posts.

A word about costs: instead of using our corporate AWS account, I registered myself to see how much it would cost. It came to less than 80 Euro for a 2+1 cluster environment that I used for about one month, shutting down the EC2 instances whenever I didn’t need them for testing and creating courseware. It should be well below 10 Euro per day with the very moderate resource consumption configured for the environment described in these posts.

Stay tuned for some more to come about Exasol on AWS.

WordPress 5.3 “Kirk”

WordPress 5.3 “Kirk” has been released.

I guess many of the people out there are running on hosted solutions, so your upgrade will happen when your hosting company decides to apply it. For those that self-host, you’re just a button press away.

This morning I’ve updated 5 separate blogs, all hosted independently, and they all went through fine. At the moment the upgrades have to be manually initiated, but I’m guessing in a few days they’ll just run through automatically.

Since it was introduced, the Site Health feature (Tools > Site Health) has been improved with each release. It’s worth running this to check if there are any recommended security or performance settings. The new release picked up some things the previous release didn’t. My website was bounced a couple of times while I made changes to sort them.

If I’m honest, I can barely notice a difference. I’m not a “power user” of WordPress. I just write blog posts with it. There are allegedly improvements to the block editing, but I really can’t tell. A couple of the popup controls look a little different, but from the way I use it, that’s about all. I’m sure people with different usage patterns will have a different reaction to this release.

I was interested to see what the Twenty Twenty theme would look like. Not my cup of tea!

UKOUG TechFest19 Survival Guide

Brighton, December 1st-4th 2019, Grand Hotel, Techfest2019. This is the big Technology event for the UKOUG this year, one of the largest Oracle Tech events in Europe.

The UKOUG conference is traditionally the one to finish the European year of conferencing and it is always incredibly popular with both delegates and presenters. There are two things that are not traditional about this year’s UKOUG December conference:

  • It is Technology Focused. We asked our members when they wanted their annual conference and there was a strong split between Business Applications and Technology members, with many Business Apps members finding December a bad time to be out of the office and most of them preferring May/June, so we split the conference and the big Business Apps conference will be in June 2020. However, our Tech members wanted to stick to December.
  • The conference is in the South of England. Birmingham was our conference home for many years and we have been to Manchester & Liverpool, so time to try the South.

I’m really pleased we are in Brighton as it is a lively, fascinating place. Also, being that little bit further south, it might be less cold. Slightly!

Why Come?

Because there will be fantastic talks, round tables, Q&A sessions, experts to quiz, people with the same technical challenges as you, Partners able to offer services and, last but not least, it will be fun!

Technical Content

The UKOUG conferences are very popular with presenters. On average we get 5 submissions per presenting slot, more for some streams. We could fill the conference with talks from Oracle ACEs, Oracle Certified Masters, and the best Oracle Corp has to offer. What we actually do is have stream-specific teams that select not just known speakers but also hot topics and new presenters, while avoiding repeated content. It’s damned hard work but we aim to give you:

  • Independent experts who will tell you exactly how it is, like Richard Foote on indexes (all the way from Aus, so a rare chance to see him), Franck Pachot from CERN, security guru Pete Finnigan, Abigail Giles-Haigh, Craig Shallahamer, Jonathan Lewis, Zahid Anwar, Lonneke Dikmans…
  • Oracle giving you the latest information “from the horse’s mouth” and, just as important, the chance to meet product managers and other experts. People like Maria Colgan, Mike Dietrich, Jeff Smith, Nigel Bayliss, Susan Duncan
  • 9 or more concurrent streams across Development, Analytics & Data Science, Database, Systems & Infrastructure, and APEX. No matter what your interest in the Oracle Tech world, we hope your problem will not be “is there a session of interest?” but “which session of interest do I go to now?”
  • Roundtable discussions, panels, keynotes, presentations – and the chance to meet the experts around the conference and at the socials

Fun

Learning stuff at conference is the name of the game, but so is having some fun. The more enjoyable the conference and the social times after are, the more you will get out of the content. I know from personal experience that if a conference is just information and being serious, after a few hours my brain shuts off.

Also, it’s when you are more relaxed that the magic thing about attending an event in person happens – you meet people and get to know them better. This opens doors to industry experts, you find people dealing with the same PIA technical issues as you, you exchange war stories. You make friends. I get just as much (if not more) from the people I meet at conference than the official presentations.

Monday evening there will be networking drinks, Tuesday will be the big party (and I’ve been promised No Loud Music!!!). If you are a UKOUG volunteer or speaker, there is a drinks reception Sunday night. (I know of a couple of other events being put on by other companies too, such as Rittman Mead).

We will be having the retro games consoles scattered around the venue again.

And, we are in Brighton! Of course as the UKOUG President I would never encourage you to leave the conference hotel… But as a human being I would say go and look around Brighton, have a bit of fun! You might want to do what I am doing and be in Brighton a day or two before the event (or after) and really enjoy what the town has to offer.  Mrs Widlake is coming with me on Saturday so we can have a mini break.

One other fun thing – Mark Rittman is organising a gentle cycle ride on Sunday morning. Details can be found {here}; it will be a couple of hours via a cafe, prior to Super Sunday starting. I plan to take part.

Now, the practical stuff:

Getting There

Train

Basically, if you can get to London OK, you can get to Brighton just fine. Trains go from Victoria in under an hour, from St Pancras (very convenient if you come to London on Eurostar), London Bridge (both about 90 mins) and, if you live near Cambridge, you can get a direct train through London to Brighton. There is a direct service from Gatwick Airport taking about half an hour.

I’d strongly advise booking *now*. If you come down on Saturday or Sunday, it can cost as little as £15-20 from London, £40 from Birmingham, Bristol or Leeds.

If you don’t often travel by train just be aware that “open” tickets and booking only a few days ahead can be eye-wateringly expensive. Plan ahead, decide when you are travelling, and book ASAP.

Plane

The best international airport to fly to for Brighton is Gatwick, as there is a fast (1/2 hour) train service direct to Brighton for as little as £10. A taxi will take 40-50 minutes and cost that many pounds.

Heathrow is also sort-of on the same side of London as Brighton, but you will either have to go into London to Victoria on the slow Tube line and then out on the normal train services to Brighton, or take the Heathrow Express (15 mins, about £15 each way) to London Paddington and take the Circle line around to Victoria.

If you come in to Stansted, basically get into London (Stansted Express) and work it out from there!

For Luton (and Stansted, sort of) Niall Litchfield says

If you are flying into Luton, don’t go into London and change. Take the shuttle bus to Luton Airport Parkway station (10 minutes) and take the direct train to Brighton. If you are going to Stansted then you should consider your life choices!

 

Automobile

UPDATE – see the comments by Niall Litchfield (again, a helpful chap), a local who says not to drive into Brighton as parking is so bad. He is 20 mins away and will take the local train. Your best bet, if you must drive, is Park and Ride.

It’s relatively simple to drive to Brighton. You go around the M25 to the M23 and down that, and keep going when it turns into the A23. I’m not so sure about coming along the more coastal road (A27) – I have bad memories of it taking ages to get anywhere.

But parking can be expensive. If you are not being provided parking by a hotel you are using, or you plan to come in and go home each day, then you might like to look at https://www.visitbrighton.com/plan-your-visit/travel-information/parking or similar. I’m no expert on parking in Brighton (I last did it 30 years ago) but I’ll ask someone local and update this accordingly. My one hint would be to avoid NCP car parks – they are usually very expensive and, as a company, they are terrible. Ask anyone who commutes by train into London or any other major city and they probably hate NCP with a passion.

Walking/Cycling

Don’t be daft, unless you are local, in which case you know more than I do!

 

Where to Stay

I’m afraid you missed the special deal to stay at the Grand (the location of the conference), but you might still be able to book there. However, at the time of writing (see image), there are many, many hotels available around Brighton and you might want to look at Airbnb for something cheaper.

I personally use Trivago to find accommodation, but other websites are available. They should all allow you to do what I do, which is choose the lowest “comfort” level you want and the price range. I then use the map view as it makes things a lot easier than a list of hotels with no idea where they actually are!

I’m actually staying at the conference venue – as President I have a lot of duties, so it makes sense for me to be on-site. I also know that there are a lot of presenters etc staying at the hotel, so it should add to the vibe. But sometimes I specifically choose to stay a 5 or 10 minute walk from a conference, so I can get away from it all if I should wish. I find a 10 minute stroll before a conference wakes me up, and doing so after gives my brain a chance to turn off a little.

Coffee, Refreshments etc.

It’s been a problem for years at UKOUG conferences. Getting coffee (or tea or whatever) has been a real challenge as the venues always wanted a fortune to provide catering all day. Catering! Just hot drinks and maybe some biscuits! This year, tea & coffee will be available throughout the conference! I’m not guaranteeing it will be good tea and coffee, I’m not daft, but Brighton has a big coffee culture so I have hopes.

Water should always be available.

If you are a coffee snob (looking at one person in particular here) then, look, we are IN BRIGHTON! Go out of the hotel and walk 2 minutes; you will soon find a hipster cafe and can get your double espresso skinny latte with raw cane sugar there. And in fact, yeah, do it! Pop out of the venue for 10 mins and go to a local cafe. Or get an ice cream. Or, if you are so inclined, a glass of wine and a cake. Cafe culture is all around you.

If you don’t like the provided coffee at the conference, don’t tell me. Tell me about other things that are right or wrong but, honestly, the quality of the coffee is not something I want to hear anything more about. This is the UK and it is an I.T. conference; the coffee is supposed to be bad!

You will have been asked when you registered for the event if you have dietary requirements, and these should be catered for. Vegetarian options should be provided at all meals as a matter of course. Any issues, ask the UKOUG staff and they will sort it out for you.

At the social events there will be soft drinks as well as alcoholic ones. Some people like alcohol, some do not, it really is not that important if you drink or not. BUT if you find there are no soft options then let the UKOUG staff know immediately – we had a problem one year where the caterers only provided beer & wine and no one mentioned it for ages. They just got angry and slagged us off after the event.

There will be no secret whisky tasting this year. There never has been. It’s just a rumour. If whisky is not your thing then feel free to not bring a different thing to share at the non-existing tasting.

Chocolate. I’ve also not heard rumours about a chocolate tasting happening…

Other Hints

Go to at least one talk you know nothing about, that is not your core work area. You will probably learn something unexpectedly useful! You might even get a peek at a shift in your career.

Speak to the famous people. They are human, they are *just like you* (only, of course, much much smarter…). Honestly, just say “hi” or “isn’t it a shame about the Rugby World Cup final” or “what bread do you like to bake?” (this is surprisingly likely to get an interested response from a growing number of speakers). Have a little chat. But also, please do not stalk. If you find yourself hanging about after a session to chat to the same person you have chatted to three times already, you have become a scary stalker and need to stop.

If you don’t know many people at the conference, go to a panel session or a round table. If you can build up the courage, when you see a circle of half a dozen people chatting and you recognise some of them as “in your area”, go and join in. (And, if you are one of those people in a circle of mates, chatting, keep an eye out for people hanging about nearby looking nervous. I wish we did not stand in these circles, backs to each other, but I can’t think of a good way to break the circle.)

Take breaks. If you do 7 sessions in succession I am willing to bet nothing is going into the brain anymore. If you happen to find yourself talking with people just before a session starts and you are enjoying the conversation, maybe keep it going and have a coffee/water. I really do believe that the contacts you make/develop at conferences and the ad-hoc things you learn are just as valuable as listening to Connor McDonald bang on about SQL in his boring monotone again. He does rubbish slides.