Having the summer off. It’s something that quite a few IT contractors and some consultants say they intend to do…one year. It’s incredibly appealing, of course, to take time off from your usual work to do some other things. Asking around, though, not many of us self-employed types who could in theory do it actually have. I think that is because, although the theory is nice, the reality is a period of not earning a living, plus the background worry of “if I take a break, will I be able to step straight back into gainful employment afterwards?”
Remember, this was just the beginner's session. We will have intermediate and advanced ones in the near future. Stay tuned through the AIOUG site.
A few weeks back one of the Vertica developers put up a blog post on counting triangles in an undirected graph with reciprocal edges. The author compared the size of the data and the elapsed times needed to run this calculation on Hadoop and Vertica, put the work up on GitHub, and encouraged others to “do try this at home.” So I did.
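To make the task concrete, here is a minimal sketch of the triangle count as a three-way self-join; the EDGES(src, dst) table and column names are my assumptions, not necessarily the exact schema used in the original post. Because reciprocal edges mean every undirected edge is stored in both directions, the ordering predicates make sure each triangle is counted exactly once.

    -- Sketch only: EDGES(src, dst) is a hypothetical table with one row per
    -- directed edge; each undirected edge appears in both directions.
    -- The src ordering predicates count each triangle exactly once.
    SELECT COUNT(*) AS triangle_count
    FROM   edges e1
           JOIN edges e2 ON e2.src = e1.dst
           JOIN edges e3 ON e3.src = e2.dst
                        AND e3.dst = e1.src
    WHERE  e1.src < e2.src
    AND    e2.src < e3.src;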
Vertica draws attention to the fact that their compression brought the 86,220,856 tuples down to 560MB, from a flat file size of 1,263,234,543 bytes, giving a compression ratio of roughly 2.25X. My first task was to load the data and see how Oracle’s Hybrid Columnar Compression would compare. Below is a graph of the sizes.
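For reference, loading the data into an HCC-compressed table can be done at creation time. The table and source names below are assumptions for illustration, and Hybrid Columnar Compression itself requires supported storage (Exadata, for example):

    -- Hypothetical names: edges_ext is an external table over the flat file.
    -- QUERY HIGH is one of the HCC levels; QUERY LOW, ARCHIVE LOW and
    -- ARCHIVE HIGH are the others.
    CREATE TABLE edges_hcc
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM edges_ext;

    -- Check the resulting segment size in MB.
    SELECT bytes/1024/1024 AS size_mb
    FROM   user_segments
    WHERE  segment_name = 'EDGES_HCC';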
For the next couple of weeks I'll be picking up various random notes I've made during the sessions that I've attended at OOW. This particular topic also came up recently as a problem at one of my clients, so it's certainly worth publishing here.
In one of the optimizer-related sessions it was mentioned that for highly volatile data - for example, data often found in Global Temporary Tables (GTTs) - it's recommended to use Dynamic Sampling rather than attempting to gather statistics. Gathering statistics on GTTs is particularly problematic because the statistics are stored globally and shared across all sessions, yet each session can have a completely different data volume and distribution in its copy of the GTT, so sharing the statistics doesn't make sense in such scenarios.
So using Dynamic Sampling sounds like reasonable advice, and it probably is in many such cases.
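As a minimal sketch of what that looks like in practice (the table name and sampling level are illustrative, not taken from the session), the DYNAMIC_SAMPLING hint asks the optimizer to sample the GTT at parse time instead of relying on stored, shared statistics:

    -- Hypothetical GTT holding session-private data; no statistics gathered.
    CREATE GLOBAL TEMPORARY TABLE gtt_orders (
      order_id NUMBER,
      status   VARCHAR2(10)
    ) ON COMMIT PRESERVE ROWS;

    -- Sample the table at hard parse time; level 4 is just an example,
    -- valid levels range from 0 to 10.
    SELECT /*+ dynamic_sampling(g 4) */ COUNT(*)
    FROM   gtt_orders g
    WHERE  status = 'OPEN';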
I have made a little mistake creating a RAC database for the OEM 12c repository - I now need a rather more lightweight solution, especially since I’m going to do some fancy failover testing with this cluster soon! A single instance database without ASM, that’s what I’ll have!
Now how to move the repository database? I have to admit I haven’t done this before, so the plan I came up with is:
Sounds simple enough, and it actually was! To add a little fun to it I decided to use an NFS volume to back up to. My new database host is called oem12db, and it’s running 64-bit Oracle on Oracle Linux 6.1 with UEK. I created the NFS export using the following entry in /etc/exports:
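The exact entry isn't reproduced here; a minimal sketch of such an export (the directory /exports/oembackup and the wildcard client specification are assumptions, not the values actually used) would look something like this:

    # Hypothetical /etc/exports entry: share /exports/oembackup read/write,
    # with synchronous writes and no root squashing so the oracle user can
    # write RMAN backup pieces to it. The client spec is purely illustrative.
    /exports/oembackup  *(rw,sync,no_root_squash)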
As far as the humans are concerned, Hugh Jackman is ok. The kid who plays his son is a little annoying, but to be fair, so are most of the kids in films. There are quite a few cheesy moments, but they are spread out so they aren’t like fingernails down a chalkboard.
I think the biggest problem with the film is the robots have no personalities. It’s just a giant and very expensive version of Rock’em Sock’em Robots. It’s hard to engage with a chunk of metal when it has no outward signs of personality. They are nothing like Transformers, which are totally real.
Having said that, it’s an OK bit of mindless fun. I tried to listen to other people talking on the way out to gauge the general reaction. It seemed to vary from “Awesome!” to “What a complete pile of xxxx!” I guess I stand somewhere in the middle.
Was emailing with my esteemed colleague John Beresniewicz at Oracle in the OEM group. John and I worked together on OEM 10g, and thank goodness he is still there, as he is generally behind any good quantitative visualizations you might see in the product. Here is one cool example he sent me:
The database load, measured in Average Active Sessions (AAS), can be selected over a time range, and from that selection a load map is shown - in this case showing which objects are creating the most I/O load, grouped by type of I/O. Super cool. Congrats JB, and I look forward to exploring more in OEM 12c “cloud control”.
Alternative title “The lady from Patient Admin – she says YEEESSSS!!!!!!”
What must you always achieve for an IT system to be a success?
There is only one thing that an IT system must always achieve to be a success: the users have to accept it.
For an individual system other considerations may well be very important, but user acceptance is, I think, non-negotiable.
I was chatting with the lady doing OCP Lounge registrations at OOW11. During this chat I mentioned I hadn’t received a certificate for the SQL Expert certification. It never crossed my mind to re-request it, since my certifications are visible on certview.oracle.com anyway. Yesterday, a DHL man delivered the missing certificate, which prompted me to look through my certifications and scan this image.
First, check out the card on the bottom right. I was unaware the “Expert” certifications had a different colour card.
Second, notice anything funny about the 9i DBA OCP certification?
It’s hard to believe it’s over 12 years since I first completed one of these certifications…