Oakies Blog Aggregator

Impressions from #ukoug_tech14

ACC Liverpool

The Oracle circus went to Liverpool this year for the annual conference of the UK Oracle User Group, and it was a fantastic event! Top speakers and a very knowledgeable audience too – I was really impressed by the quality we experienced. Together with my friends and colleagues Iloon and Joel, I was waving the flag for Oracle University again – and it was really fun to do so :-)

The 3 of us

One little obstacle was that I did quite a few presentations and roundtables myself, which left less time to listen to the high quality talks of the other speakers…

Joel and I hosted three roundtables:

About Exadata, where amongst others Dan Norris (Member of the Platform Integration MAA Team, Oracle) and Jason Arneil (Solutions Architect, e-DBA) contributed

Exadata Roundtable, Jason and Dan on my left side, Joel and Iloon on my right

About Grid Infrastructure & RAC, where Ian Cookson (Product Manager Clusterware, Oracle) took many questions from the audience. We could have had Markus Michalewicz as well, if only I had told him about it the day before during the party night – I’m still embarrassed about that.

About Data Guard, where Larry Carpenter (Master Product Manager Data Guard and Maximum Availability Architecture, Oracle) took all the questions as usual. AND he hit me for the article about the Active Data Guard underscore parameter, so I think I will remove it…

Iloon delivered her presentation about Apex for DBA Audience, which was very much appreciated and attracted a big crowd again, same as in Nürnberg before.

Joel had two talks on Sunday already: Managing Sequences in a RAC Environment (This is actually a more complex topic than you may think!) and Oracle Automatic Parallel Execution (Obviously complex stuff)

I did two presentations as well: The Data Guard Broker – Why it is recommended and Data Guard 12c New Features in Action

Both times, the UKOUG was so kind as to give me very large rooms, and I can say that they didn’t look empty even though I faced tough competition from other interesting talks. This is from the first presentation:

Uwe Hesse

A big THANK YOU goes out to all the friendly people of UKOUG who made this possible and maintained the great event Tech14 was! And also to the bright bunch of Oracle colleagues and Oracle techies (speakers and attendees included) who gave me good company there: You guys are the best! Looking forward to seeing you at the next conference :-)

Tagged: #ukoug_tech14

Review of Synology DS415+, How to Add Memory to the DS415+, Web Pages Not Databases

December 19, 2014 As frequent readers of this blog likely know, I have not posted very many articles recently.  It seems that lately I struggle to find time to post interesting items to the blog, which is mostly focused on Oracle Database notes, so this is a three in one off-topic post.  I have been busy […]

Oracle University Expert Summit 2015 in Dubai

Oracle University Expert Summit Dubai 2015

Together with Craig Shallahamer, Lucas Jellema, Mark Rittman, Martin Bach, Pete Finnigan and Tim Fox, I will be presenting in Dubai. My topic is Minimizing Downtime with Rolling Upgrade using Data Guard

Click on the picture for details, please!

Hope to see you there :-)

Day 2 of UKOUG Tech 14

Day 2 of the UKOUG Tech 14 conference for me started with the second session (I spent the first session typing up the blog post on Day 1 as there wasn’t a presentation I was interested in). Tammy Bednar was presenting on “Creating a Test / Dev Database in Minutes Using the Oracle Database Appliance”. Tammy is another old timer at Oracle. Like myself and Maria Colgan, she has been an Oracle employee for 20 years. I’d been discussing with Maria on day 1 that people either seem to move on from Oracle within a couple of years, or they stick around for a long time like us. For me, that’s because I’ve been able to move around and do different things that have kept my interest levels fresh.

But I digress. :) I wasn’t aware of what was happening in the ODA space with this topic. As is often the case at Oracle, there are multiple groups developing similar sorts of functionality. So we have the ODA snapshotting capability, Snap Clone on the EM side, and the ability to create a snapshot copy of a pluggable database in Oracle Database 12c. As I’ve been working on the Snap Clone functionality for some time, I was quite interested to see what the ODA folks had been up to as well. Tammy’s quite an accomplished presenter, so I really enjoyed her presentation.

Next up I went to Tim Gorman and Ron Ekins’ presentation, titled “Dueling Duplications – Physical vs Virtual Data Cloning”. Tim did most of the presenting, and of course, he’s a great presenter as well. The only issue I really had with the presentation was that they were comparing apples with oranges: they were comparing the time taken to duplicate an entire database against the time taken to snapshot a thin clone, so of course the snapshotting was faster. :) Still, it was interesting to see the interface they were using, which I hadn’t seen in action before.

The next session was my first UKOUG presentation, which was on Snap Clone. I had heard from some other presenters that there had been some issues with getting the slides presenting properly, so I made sure that the room had one of the technicians ready. Unfortunately we had major dramas. Eventually even two technicians and myself had to admit defeat. Every time we tried to put PowerPoint into presenting mode, it would present on my laptop screen, rather than the big screen the attendees could see. The strange thing was when I connected to the projector it would display my desktop background but even when I tried to force the display onto the second monitor it would only come up on my laptop. The technicians finally got the slides onto the main display, but only in edit mode – if I tried to present from that configuration, PowerPoint would restart. So I presented the entire presentation in edit mode, losing all my build slides and so on, so it was incredibly frustrating. Adding to that frustration, both my slots were on in the rooms off the exhibition hall, which meant there was a lot of noise both from the hall itself and the adjoining rooms as I mentioned in yesterday’s blog. As I mentioned in that blog, I would certainly recommend those rooms NOT be used for presenting from again!

While the next session was on, I spent some time with the organizers of the conference and they borrowed a different laptop from the conference centre, so my second presentation worked very smoothly. A huge thanks to Faye Wood for managing to get that organized. My second session, titled “Best Practices for Deployment in a Private Cloud – Fast Track to DBaaS”, was the session I was asked to do just last Thursday, so it wasn’t listed in the printed agenda. It was on the agenda boards in the conference centre and the mobile app for the conference, so a few people realized it was on and turned up thankfully (I have heard of two presenters so far that had zero attendees, which must be incredibly disheartening for the presenters!)

That was the end of the day as far as sessions were concerned. There were drinks on in the exhibition hall again, and I spent some time catching up with Eric van der Sluis and Chris Lawless from DBvisit. It’s funny that I’d seen both of them just a couple of weeks ago on the other side of the world at the NZOUG Conference. :)

The conference finished on Tuesday with the Tech14 Social, but I took the opportunity instead to catch up with quite a number of my fellow Oak Table members over a few drinks and then dinner at the Smugglers Cove, a bar / restaurant just next door to the conference. A very enjoyable way to end the day!

Parse Time

Here’s a little query I wrote some time ago to see where my time went while running a query. It’s nothing sophisticated, just one of those simple things you can do with v$active_session_history (or dba_hist_active_sess_history, if you don’t get to the crime scene in time).

set null CPU

select
        sql_exec_id, in_parse, in_hard_parse, event, count(*)
from
        v$active_session_history
where
        sql_id = '{your choice here}'
group by
        sql_exec_id, in_parse, in_hard_parse, event
order by
        sql_exec_id, in_parse, in_hard_parse, count(*)
;

SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
   40341649 N N db file sequential read                                68
   40341649 N N CPU                                                    21
   40341649 N N db file scattered read                                  9
            Y Y CPU                                                     7

I had run the query that I was tracking exactly once, but my ASH query allows for, and separates, multiple executions of the same query by grouping on sql_exec_id (the thing that the SQL Monitor also uses). The last row looks a little odd, though: it doesn’t have a value for sql_exec_id; that’s because those are ASH samples taken while the query was being optimised, not being executed – note that I’ve reported the columns in_parse and in_hard_parse – and both are set to “Y” for that row.

So (statistically speaking) it’s probably taken about 7 CPU seconds for Oracle to optimise the statement, and from the rest of the results you can see that it’s taken about 21 CPU seconds to run, with 68 seconds spent on random I/Os and 9 seconds spent on multiblock reads for a total of 103 seconds elapsed.
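Since ASH samples active sessions roughly once per second, each sample count approximates a second of time. Here’s a quick sketch (in Python, using the counts from the listing above) of turning that output into a time profile – the “parse”/“execute” labels are just my own shorthand for the in_parse flag:

```python
# ASH sample counts from the listing above; one sample ~ one second.
profile = {
    ("execute", "db file sequential read"): 68,
    ("execute", "CPU"): 21,
    ("execute", "db file scattered read"): 9,
    ("parse", "CPU"): 7,
}

total = sum(profile.values())
for (phase, event), secs in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{phase:8s} {event:28s} {secs:3d}s  {secs / total:5.1%}")
```

Being a one-second sample, the totals only approximate the true elapsed time, which is why the sample total doesn’t have to match the measured elapsed time exactly.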

Seven seconds sounds like quite a lot of time for parsing – but it was a fairly complex statement. However, the reason I’d been running the statement on a test system (a fairly good clone of production) was that I’d been seeing something stranger on production and I needed to get a baseline on the test system before I starting trying to fix the problem. Here’s the equivalent ASH reports for the same statement when it had run on production at a time that allowed me to capture its ASH samples.

SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  514257929 N N CPU                                                    21
  514257929 Y Y latch: row cache objects                                1
            Y Y CPU                                                   119

Note the 119 CPU seconds spent parsing to run a 22 second query! But that wasn’t the worst of it – sometimes the results looked more like this:

SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  523748347 N N db file sequential read                                 1
  523748347 N N CPU                                                    32
            Y Y resmgr:cpu quantum                                     11
            Y Y latch: row cache objects                               38
            Y Y CPU                                                   415

That’s a pretty catastrophic optimisation time – especially since the statement can be optimised in seven seconds in another environment. You might note the resource manager kicking in there – the session is exceeding the CPU limit set for its resource group, though not very often given how infrequently it seems to be waiting on “resmgr:cpu quantum”. But there’s another important wrinkle to this report – which you can see when you compare v$active_session_history with v$sql.

  1  select
  2     sql_id,
  3     round(cpu_time/1000000,2) cpu,
  4     round(elapsed_time/1000000,2) ela from v$sql
  5  where
  6     sql_text like '{some identifying text}'
  7* and        sql_text not like '%v$sql%'
SQL> /

SQL_ID               CPU        ELA
------------- ---------- ----------
2atyuc3vtpswy     285.81     516.13

The figures from v$sql don’t match very well with the summed results from ASH, which has a total sample of 497 seconds and a CPU sample of 447 seconds. I think I can live with a statistical error of 4% ((516-497)/516) in a random sample for total time, but how do you explain the 36% error in the CPU time?

The samples reporting “resmgr:cpu quantum” are a clue: the machine is overloaded; it’s trying to use far more CPU time than is available. As a result a process that gets pushed off the CPU by the operating system scheduler while it’s running can spend a long time in the run queue waiting to start running again. And if it’s an Oracle process that got pre-empted it doesn’t “know” that it’s not running, it didn’t put itself into a wait state so all it “knows” is that it’s not in a wait state.

So how do ASH and v$sql differ? The code that derives the cpu_time for v$sql issues a call to the O/S asking “how much CPU have I used?”. The code that takes an ASH sample says: “is this session active? If so, is it in a wait state? If it’s not in a wait state then it’s either on the CPU or in the run queue waiting for the CPU”. So when we compare v$sql with ASH, the difference in CPU is (statistically speaking) time spent in the run queue. So of our 447 seconds of CPU recorded by ASH, we spent 161 seconds in the run queue waiting for the CPU.
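As a quick sanity check on those figures – a sketch in Python, just plugging in the numbers quoted above:

```python
# ASH vs v$sql arithmetic, using the figures reported above.
# ASH samples roughly once per second, so sample counts approximate seconds.

ash_total = 497      # total ASH samples for the statement
ash_cpu = 447        # ASH samples "on CPU" (really: on CPU or in the run queue)
vsql_ela = 516.13    # elapsed_time from v$sql, in seconds
vsql_cpu = 285.81    # cpu_time from v$sql, in seconds

# Sampling error on total elapsed time: about 4%
print(f"{(vsql_ela - ash_total) / vsql_ela:.1%}")   # prints "3.7%"

# The apparent "error" in CPU time is really run-queue time:
run_queue = ash_cpu - vsql_cpu
print(f"{run_queue:.0f} seconds")                   # prints "161 seconds"
print(f"{run_queue / ash_cpu:.1%}")                 # prints "36.1%"
```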

We still have to account for the difference between the 7 CPU seconds on a test system and the variation between 119 CPU seconds and 415 CPU seconds in optimisation on production. In a word – concurrency. Apart from everything else going on at the time there were, in the worst case, 16 slightly different versions of the same statement being called at the same time (on a machine with 24 cores) – all 16 statements were competing violently for the same resources at the same time, and the escalating conflict as more sessions joined in produced an exponential growth in time spent competing for resources rather than doing the job. (I managed to demonstrate the effect quite nicely during the evening by limiting the batch to 4 concurrent executions – and got a typical parse time of 40 CPU seconds).

I’ve often warned people about the problems of concurrency and encouraged them to think about how much time is being spent in competition rather than doing the job; I think this is the most extreme case I’ve seen in a production system. Given how dramatic the variation is, I can’t help wondering if the problem has been exaggerated by some corner case of sessions spinning for mutexes or latches; perhaps even an error in the code that allows resource management to put a session into “resmgr:cpu quantum” while it’s holding a latch or mutex. (I wasn’t able to emulate such an extreme display of the problem on a slightly newer version of Oracle, but I was able to construct a test that demonstrated the effect with a much smaller wastage of CPU.)

The solution (almost certainly): the statements are extremely similar, varying in just one predicate that is using a literal constant. It ought to be a relatively safe and simple change to make the query use a bind variable in that predicate. If the solution is adopted I’d expect to see the (once only) parse time on production drop back to about 7 seconds. Of course, if any other session tries to call the same statement at the same time it ought to end up reporting 7 seconds waiting on “cursor: pin S wait on X” before it starts executing – but that 7 seconds wait is a lot better than an extra 493 CPU seconds trying to optimise the “same” statement at the same time.


Running a minor variation on my ASH query to report the sql_plan_hash_value along with the sql_exec_id, I found that the “fix-up” code that updates older ASH rows with information that only becomes available later (e.g. plan_hash_value when optimising, or long time_waited values for a single wait event) only goes back 255 rows – so when I queried ASH for the statement that took 500 seconds to optimise, only 255 of the in_parse rows showed the final sql_plan_hash_value.


UKOUG Tech14 slides – Exadata Security Best Practices

I think 2 years is long enough to wait between posts!

Today I delivered a session about Oracle Exadata Database Machine Best Practices and promised to post the slides for it (though no one asked about them :). I’ve also posted them to the Tech14 agenda as well.

Direct download: UKOUG Tech14 Exadata Security slides

Day 1 of UKOUG Tech 14

Monday was the first day of the main part of the UKOUG Tech 14 conference, after SuperSunday’s additional content. I had a great night’s sleep and woke at 7 am, much to the disgust of Richard Foote who had been up for four hours already and didn’t seem to appreciate my #NoJetLagHere hashtag replying to his lack of sleep tweet. :) I managed to get to the conference relatively dry, but realized yet again that you don’t come to Liverpool in December for the weather. :)

The conference itself is at the Arena and Convention Centre in Liverpool. The conference and exhibition are spread over three floors of the conference centre, with the exhibition on floor 0 (it sounds better than the basement, I guess!) and most of the presentations and the speaker’s lounge on the 2nd floor. Some of the presentations are also held in rooms off the exhibition hall. The first session timeslot didn’t hold a lot to interest me specifically, so I went to the speaker lounge and caught up on a bit of email and the like. The second session included a presentation by Joze Senegacnik, a fellow Oak Table member that I had finally met in person the previous night at the Oracle ACE dinner. His presentation was titled “The Evolvement of Oracle Performance Troubleshooting”. When we met at the ACE dinner, he described his paper as a tour of performance since Oracle 2, so that sounded pretty interesting. Not many people have been around to use a lot of those early versions, so it’s always interesting to hear from someone who experienced just how limited some of that early stuff was!


Unfortunately, Joze’s presentation was in one of the rooms off the exhibition hall and was separated from another one of those rooms by a wall that didn’t reach the roof. The end result is that neither presenter could use a microphone (as they would have drowned out the presenter in the other room) and it was quite difficult to hear, particularly as my hearing isn’t the best anyway. Still, it gave me a heads-up of what it will be like for my two papers today, as both of them are in that same room. I think I’ll have a bit of an advantage over Joze as he is a fairly quietly spoken guy, whereas I’m much more of a “basso profundo”! :)

After Joze’s presentation I finally got to catch up with my lovely colleague Tania Le Voi. Tania and I had worked together in the group I was with in the Enterprise Manager product management team before I moved to the DBaaS team, so it was great to actually spend some time together before sharing lunch with her, James Kao and Jeff Barber (also fellow product managers in the EM team). I also caught up with a customer who has been a reader of my blog, so that’s always fun! Thanks for the positive feedback, Jon! :)

Straight after lunch we had the EM Roundtable slot, which is particularly valuable to us as product managers, as these sessions are where we get to hear directly from customers about what they’re doing with the EM product suite, what pain points they might still have and so on. Again, this session was in one of the rooms off the exhibition hall, and if there’s one piece of feedback I would offer the UKOUG Tech 14 organizers, it would be to definitely NOT use these rooms for roundtables, as I had a lot of difficulty hearing some of the issues customers were raising. In fact, it would be better to not use these rooms at all. Those of us who have a hearing loss will realize just how difficult it can be to hear in rooms with competing presenters! The other times I had problems hearing were in sessions on the second floor. From time to time during the day, there were quite heavy periods of rainfall and even hail. I don’t know what sort of material the roof of the convention centre is made from, but the noise level when it was hailing was horrendous!

Unfortunately, I had to spend most of the rest of the afternoon working on my day job, which is one of the downsides of being at a conference. I managed to be finished in time for the exhibition drinks slot, which was held in the exhibition hall. Once again, I enjoyed catching up with a few people over drinks that I hadn’t had the chance to meet in person before. I had to leave after that, so I missed the community drinks session while I went off for dinner with some customers from Barclays, an event organized by Ian Carney, a colleague from my days back in the late 90’s / early 2000’s with Oracle in the US.

All in all, a fairly enjoyable day at UKOUG Tech 14. Looking forward to tomorrow when I have two presentations to do myself, as well as more time catching up with good friends old and new!

Cardinality Change

Here’s an entertaining little change across versions of Oracle, brought to my attention by Tony Hasler during UKOUG Tech 14. It’s a join cardinality estimate, so here are a couple of tables to demonstrate the issue – the only columns needed are the alpha_06 columns, but I reused some code from other demonstrations to create my test case, so there are lots of irrelevant columns in the create table script:

create table t1 nologging as
with generator as (
        select rownum id
        from dual
        connect by rownum <= 1000
)
select
        rownum                                          id,
        mod(rownum-1,200)                               mod_200,
        trunc(dbms_random.value(0,300))                 rand_300,
        mod(rownum-1,10000)                             mod_10000,
        trunc(sysdate) +
                trunc(dbms_random.value(0,1000))        date_1000,
        dbms_random.string('l',6)                       alpha_06,
        dbms_random.string('l',20)                      alpha_20
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

create table t2 nologging as select * from t1;
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 1')

I’m going to join t1 to t2 with a predicate based on the alpha_06 columns – using a LIKE predicate. Before I do so I’ll point out that there are 1,000,000 rows in the table, and (checking the column stats) 985,920 distinct values for alpha_06. Here’s my query, with the execution plan I got from the earlier version:

select
        count(*)
from
        t1, t2
where
        t2.alpha_06 like t1.alpha_06
;

| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1122M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |    50G|   651G|  1122M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1123   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   | 50000 |   341K|  1122   (6)| 00:00:06 |

Predicate Information (identified by operation id):
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The 50,000 cardinality estimate for t2 looks like the standard 5% guess for “column >= {unknown value}”, following which the join cardinality of 50G is the same 5% guess applied to the Cartesian join between t1 and t2 (1M * 1M * 0.05). It’s not a good estimate in my case because the right answer happens to be close to 1M rows – specifically 1,003,176. So let’s upgrade and see what we get instead:

| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1050M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |  2014K|    26M|  1050M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1051   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   |     2 |    14 |  1050   (6)| 00:00:06 |

Predicate Information (identified by operation id):
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The estimate has dropped from 50 billion rows down to 2 million – a factor of about 25,000: possibly an indicator that the algorithm has changed, and that a few people might find execution plans changing as they upgrade to a newer version of Oracle. The change is revealed by fix control 9303766, which has the description: “use 1/NDV+1/NROWS for col1 LIKE col2 selectivities”.

Just as a quick check on the arithmetic: there are 1 million rows in table t2, with (as noted above) 985,920 distinct values in the column, so the selectivity should be: 1/1000000 + 1/985920 = 2.014281e-6. Multiply the selectivity by 1e6 and you get 2, the cardinality estimate for t2; multiply the selectivity by 1M*1M (the Cartesian join) and you get 2,014,281, the cardinality estimate of the join. QED.
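The same check can be done mechanically – a quick sketch in Python, using the row count and NDV quoted above:

```python
# New "col1 LIKE col2" selectivity (fix control 9303766): 1/NDV + 1/NROWS
nrows = 1_000_000   # rows in each of t1 and t2
ndv = 985_920       # distinct values of alpha_06

sel = 1 / ndv + 1 / nrows
print(f"{sel:.6e}")                  # prints "2.014281e-06"

# Cardinality estimate for each probe of t2:
print(round(sel * nrows))            # prints "2"

# Join cardinality: selectivity applied to the Cartesian product:
print(round(sel * nrows * nrows))    # prints "2014281"

# For contrast, the old 5% guess gave 50,000 for t2 and 50G for the join:
print(nrows // 20, nrows * nrows // 20)   # prints "50000 50000000000"
```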

There are workarounds, of course. One would be to reverse out the fix control, either as an initialisation parameter or in a session logon trigger, another might be to modify the SQL – I think the following would be equivalent:

select
        count(*)
from    t1, t2
where
        t2.alpha_06 like substr(t1.alpha_06,1,length(t1.alpha_06))||'%'
and     t1.alpha_06 is not null
and     t2.alpha_06 is not null
;

This changes the critical predicate from the form “col1 like col2” to “col1 like {unknown value from function}” i.e. back to a case where the optimizer uses the 5% guess, and the cardinality estimates go back to the original values.

Days -1 and 0 of UKOUG Tech 14

The UKOUG user group conference is one conference I have wanted to get to for a long time, but never managed to before. This year, with the change in job roles to being a database architect in the DBaaS team within the Enterprise Manager product management group, I finally got the go ahead to submit some papers. I was notified some time back that one of them had been accepted (Snap Clone) and one shortlisted (DBaaS), and I was asked by one of my colleagues to help out with the Enterprise Manager round table. Just before I came over to the UK, the UKOUG folks asked me to also present the DBaaS presentation, so now I have something on each of the 3 days the conference is scheduled for.

Day -1

This blog post, however, is about getting to the conference, and the SuperSunday session before the main conference starts (hence the Day -1 and 0 part of the title). As you can imagine, the only part of the conference I really wasn’t looking forward to was getting here. Being from Canberra in Australia, the trip was supposed to be about an hour Canberra to Sydney, 15 hours or so Sydney to Dubai, and nearly another 8 from Dubai to Manchester.

The first leg wasn’t too bad. I had about an hour in Qantas Club before getting on the plane to Sydney, so I managed a couple of beers there. :) I must say I was quite surprised by the flight to Sydney. As is often the case around this time of year, afternoon storms were sweeping through so I was expecting either delays or a bumpy ride. However, the plane left on time and it was a pretty smooth flight, so full credit to the Qantas pilots for getting me there relatively comfortably. The plane was unfortunately a Dash 8. The downside of that is I had to check my laptop bag as premium luggage (that is, leave it at the foot of the stairs) and carry my laptop and iPad on board. That’s a bit more problematic at the moment as I’m on crutches with my ongoing knee and back problems, so that slowed down a lot of things.

Once I got the laptop bag back again in Sydney, it was a bit easier going on the crutches, as I could put everything in the laptop bag and hobble around – slow but I get there. Transferred across to the international terminal at Sydney, and hit Qantas Club again. Thankfully I asked what gate the plane to Dubai was from – it was about as far from Qantas Club as you could get! So I asked them to organize a chair to get me there. The plane was delayed by about half an hour because of the storms, so I had time for a couple more beers. :) A lovely young lady arrived with a wheelchair for me, and I was glad she did! I might still be walking there if I had to get there under my own speed!

Once I got on the plane, I found out (as did a lot of other people!) that the plane wasn’t going to Dubai – at least not directly. We flew to Bangkok first. This time it was pretty bumpy most of the way, because of the storms and presumably the back end of the Philippines hurricane. We lost another half hour along the way, so I suppose they were finding their way around the storms as well as they could. I still managed to get quite a bit of sleep along the way (no doubt the mixture of the beers and painkillers helped ;) ).

When we got to Bangkok, we were told we had about an hour stopover, and given the option of either getting off the plane or staying on board. If you left, you not only had to take all your bags with you but you also had to check through Security again. Since I had a good bottle of Australian red in my luggage to share with Lothar Flatz, I decided not to risk losing it at Security (as has happened before) and elected to stay on the plane. A lot of people got off, so there was plenty of room to stretch out on the plane as well. I also managed a reasonable amount of sleep on the Bangkok to Dubai leg. It turned out that the stopover in Bangkok worked fairly well for me. With both back and knee problems raising their ugly heads, two 8 hour flights (well, three if you count Dubai to Manchester as well!) is a much better option than 15 hours plus in one go which it would have been if we’d flown direct to Dubai.

The downside of the two delays getting to Dubai, though, was I now had 24 minutes to get off the plane and get to the gate for the next leg. It’s quite a hike through Dubai airport if you’ve never been there – up two escalators, a fairly circuitous walk and then down two escalators again – but I made it! At a fairly fast trot on crutches, which wasn’t great for either my back or knee, so the first thing I did when I got on the plane was down some more drugs (no time for beers in Qantas Club this time!)

The flight from Dubai to Manchester was probably the worst leg of the lot. I was seated in the pointy end of an A380, which was quite comfortable, but the rows beside me and behind me both had parents with one child each who seemed to have no clue whatsoever how to keep their babies quiet. As a father of three kids, I know how hard that can be but these people didn’t even seem to try to keep the kids quiet! The mother directly behind me was so loud talking to her child that I could hear her (and the little boy) even when I had ear plugs in, headphones on and the music turned up fairly loud. So much for any chance of sleep on that leg!

As I expected when I got to Manchester, my name was called out and I was directed to the baggage counter. I had made the connecting flight but of course the bag hadn’t. Still, 24 minutes isn’t much time to get bags from one plane to another, so I wasn’t exactly surprised. The lady who took all my details was very pleasant, which must be a hard thing to achieve when you probably deal mostly with quite irate people (some of the other people that were having their details taken were much less polite about it than I was!)

Anyway, I wandered out to be met by a good friend who had volunteered to pick me up and take me from Manchester to Liverpool (about an hour’s drive west from Manchester, depending on traffic). It was great to catch up with Phil, and find out what a number of people I hadn’t seen for a while on the Oracle scene in the UK were now up to. He dropped me off fairly close to the hotel as it turned out (the hotel I’m at is in Liverpool One, a pedestrian only region). I think all the rooms are actually apartments, so it’s quite comfortable and more roomy than the normal hotel room you get. I had booked it through the UKOUG web site as it was the closest one (in terms of physical distance) offered, so I was quite annoyed when I got out and about to find there were other hotels MUCH closer still. Normally a 10 minute walk from the hotel doesn’t bother me much, but it’s all a bit harder when you’re on crutches. Of course, being the time of year it is, the weather in Liverpool isn’t fantastic. As I walked around the place today, there were times when I wasn’t sure if it was snowing, hailing, raining, or all three and the damn wind is nearly too strong to walk into. In Australia, we’d call it a lazy wind – it cuts right through you because it’s too darn lazy to go around you. :)

When I got here, the hotel was still getting my apartment ready, so I left my laptop bag with them and wandered off to see where the conference was (always good to know before you have to be there in earnest). It wasn’t open, so I went down to Albert Dock and had a look round there for a while. Nice place, as you can see from the pictures:






And of course, being Liverpool, you’ve got to have a mention of the Beatles. :) Here’s a picture of them made out of jelly beans.


While I was at Albert Dock, I stopped for lunch – fish and chips, mushy peas and beer. How English is that! :)



After some more touristing around Albert Dock, it was back to the hotel, and time to catch up on some sleep. Here endeth Day -1. :)

Day 0

Being a good Catholic lad, day 0 started out with breakfast while I googled for the nearest Catholic church with mass at a reasonable time, which today meant sometime in the morning as SuperSunday at the conference started at 12:30. I found there was a mass at 8:30 at the Liverpool Metropolitan Cathedral, supposedly a 20-minute walk from the hotel. I left 45 minutes before mass, knowing full well I would get totally lost and that I was gonna be slow on the crutches anyway. And of course, both were true, and I ended up finally getting into the cathedral 10 minutes after mass started. I must say the cathedral itself is quite magnificent. Unfortunately, it was a bit dark inside, so my photos didn’t turn out very well (you can see plenty on Google Images), but here’s one from outside:


When I got back to the hotel from the cathedral, my bag had arrived from the airport, complete with my gloves, scarf and hat that I could have done with on the walk back! I unpacked the suitcase, only to find my suit had a hole in it! Not sure if that was there before (the suit is getting a bit old now ;) ) or from the trip over. Anyway, I went back over to the conference to register and attend SuperSunday. SuperSunday is advertised as an additional day of free technical content for Tech 14 attendees. There were a couple of sessions I wanted to see, one on Profiling the Database Writer and Log Writer by Frits Hoogland and the other on Advanced Diagnostics Revisited by Julian Dyke, both co-Oak Table Members with me. I haven’t had a chance to see either present before, though I knew both of them were really good technical people with a deep knowledge of Oracle Database internals.

Frits was first cab off the rank at 12:30, and he warned us at the start that he had a lot of material to cover and had only been able to cover the Log Writer at DOAG a few weeks back. This time he at least managed to get a start on the Database Writer material, but still didn’t have enough time to finish it. Given the level of interest from attendees, I think he could easily have broken the presentation in two to cover the lot. Pity he only had one slot. :) However, what he did manage to cover was covered very well, and it was great to see someone present and back up everything they covered with facts (in this case largely from trace files and strace output). Far too often, you see people presenting “facts” with no backing – you’re just expected to take the presenter’s word for what they say.

Unfortunately, I was a few minutes late when I went to see Julian do his presentation. When I stuck my head in the back of the room, I realized that was all there was room for – my head! I’m almost certain the presentation would have been closed down by fire wardens in the US as there were certainly more people in the room than it was supposed to cater for. So I still haven’t seen Julian speak – oh well, maybe another time.

However, the best value of the day was not in the presentations, interesting though they undoubtedly were. The best part, as is often the case, was the networking you can do. I caught up with Richard Foote, David Kurtz, Marcin Przepiórowski and Bjoern Rost before Frits’s presentation, then spent some time talking with James Morle and Frits before people started streaming out from the second presentation for lunch. I also had time before lunch to say a quick hi to Jared Still and Mark Bobak before they headed into a presentation. While I was having lunch I managed to have a chat with Joel Goodman, Marc Fielding and a few others. Joel and I chatted about his plans for rolling some of the Oracle Expert Summits he’s been organizing throughout the UK and Europe into the Asia Pacific region, so look for some of those to start up sometime in the near future.

After that I tried to get into Julian’s talk, but Michael Abbey and I both decided there wasn’t enough room left, so we chatted for some time. It was the first time I’d met Michael in person, but both of us knew of each other even though we hadn’t met before. One thing I hadn’t realized was Michael has been working for Pythian for the past 10 years or so. They certainly have some impressive people at Pythian! Just before I left, Debra Lilley (my Irish wife ;) ) arrived, so I stopped and chatted to her and James Haslam (UKOUG CEO) for a while. James is from a hospitality background rather than an Oracle one, but given his position hospitality is more important anyway!

There were no presentations in the final session at SuperSunday that I wanted to see, so I decided to head back to the hotel to stretch out my back. On the way, I stopped off at a couple of stores – one to get a lock for the laptop as I didn’t have a spare one to bring with me, and the other to get a replacement suit so I can look suitably dashing for my presentations. ;) Then it was off to the Spice Lounge for the Oracle ACE Dinner that Debra Lilley had kindly invited me to. Again, this was a great chance to catch up with some people I knew but hadn’t had the chance to physically meet yet, as well as meet some people that I wasn’t familiar with.

At this stage, it’s probably worth raising an issue that many of us who present regularly would be familiar with. A lot of people know us by name, because they’ve seen us presenting at some conference or another, or at some training event that those of us that have been instructors have run. The end result is that many people can come up to me and greet me by name. If you’re lucky, I’ll remember your face, but a lot of the time I’ll have no clue that I’ve interacted with you before. It’s a problem a lot of us face – I discussed it tonight with Maria Colgan and she hits exactly the same issue. If we’re lucky, people at conferences have their name tags on so we can surreptitiously glance at that to come up with a name. But if we don’t greet you by name, please don’t take it as a personal affront. It’s not meant that way by any means. So if you come up to someone you’ve seen present before, it’s probably worthwhile saying “Hi, I’m Joe Bloggs, I saw you present at XYZ conference in 20xx” so at least we have a means to identify where we met before. Otherwise, I might resort to the generic “Hi, how are you?” sort of greeting (not that there’s anything wrong with that!)

So that was Day -1 and day 0 of UKOUG TECH 14. Tomorrow, there’ll be a lot of interesting presentations to get to, and again, lots of interesting people to meet. I’m looking forward to it, so if you’re at the event, come up and say hi!

Announcing XtremIO Performance Engineering Lab Report: Facts About Redo Logging And NAND Flash.

I invite you to read this report.

NAND Flash is good for a lot of things, but it is not naturally good with write-intensive workloads – unless, that is, skillful engineering is involved to mitigate the intrinsic weaknesses of NAND Flash in this regard. I assert that the EMC XtremIO architecture fits the bill.

Regardless of your current or future plans for adopting non-mechanical storage, I hope this lab report will show some of the science behind determining the suitability of non-mechanical storage – and NAND Flash specifically – where Oracle Database redo logging is concerned.
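The report itself details the measurement methodology; purely as an illustration of the I/O pattern in question (this sketch is mine, not taken from the report), here is a minimal Python timing of sequential, synchronous writes. It mimics the shape of log writer I/O – small-to-medium sequential writes, each forced to stable storage before the next – so you can get a rough feel for a device’s synchronous write latency:

```python
import os
import time
import tempfile


def time_sequential_writes(path, block_size=1024 * 1024, count=32):
    """Write `count` blocks sequentially, fsync'ing after each one
    (a redo-log-like pattern: sequential writes, each forced to media),
    and return the per-write latencies in seconds."""
    latencies = []
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(count):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # wait until the write reaches stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    lats = time_sequential_writes(path)
    os.unlink(path)
    print("avg write+fsync latency: %.3f ms" % (1000 * sum(lats) / len(lats)))
```

This is only a crude stand-in for proper benchmarking tools (and for the controlled testing in the report) – filesystem caching, queue depth and concurrency all matter in real redo workloads – but sustained fsync latency is the metric that redo logging lives or dies by.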

Please note: Not all lab tests are aimed at achieving maximum theoretical limits in all categories. This particular testing required sequestering precious lab gear for a 104-hour sustained run.

The goal of the testing was not to show limits but, quite to the contrary, to show a specific lack of limits in the area of Oracle Database redo logging. For a more general performance-focused paper, please download this paper (click here). With that caveat aside, please see the following link for the redo-logging-related lab report:

Link to XtremIO Performance Engineering Lab Report (click here).



Filed under: oracle