Despite the title, this is actually a technical post about Oracle, disk I/O, and the performance of Exadata and the Oracle In-Memory Database Option. Read on :)
If a car dealer tells you that this fancy new car on display goes 10 times (or 100 or 1000) faster than any of your previous ones, then either the salesman is lying or this new car is doing something radically different from all the old ones. You don’t just get orders of magnitude performance improvements by making small changes.
Perhaps the car bends space around it instead of moving – or perhaps it has a jet engine built onto it :-)
I started looking into In-Memory on RAC this week. Data can be distributed across RAC nodes in a couple of different ways. The default is to spread it across the available nodes in the cluster, so if you had a 2-node cluster, roughly 50% of the data in your table or partition would be loaded into the column store of each of the two instances.
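As a rough sketch of what controlling that placement looks like (the table name sales is a made-up example), the DISTRIBUTE and DUPLICATE subclauses of the INMEMORY attribute are the knobs to reach for:

    -- Let Oracle decide how to spread the data across instances (the default)
    ALTER TABLE sales INMEMORY DISTRIBUTE AUTO;

    -- Keep each partition's data wholly within a single instance's column store
    ALTER TABLE sales INMEMORY DISTRIBUTE BY PARTITION;

    -- Mirror the data into every instance's column store
    ALTER TABLE sales INMEMORY DUPLICATE ALL;

Note that DUPLICATE ALL only actually duplicates on engineered systems; on commodity hardware the clause is accepted but ignored.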
On the Maximum Availability Architecture website, there’s a paper on...
This post looks like I am jumping on the IT orchestration bandwagon, like a lot of people are doing. Maybe I should say ‘except for (die-hard) Oracle DBAs’. Or maybe not; it’s up to you to decide.
Most people who are interested in IT in general will have noticed that IT orchestration has been getting a lot of attention, especially in the form of Puppet and/or Chef. I _think_ IT orchestration became important with the rise of “web scale” (scaling applications up and down by adding virtual machines to horizontally scale resource-intensive tasks), because the newly added machines need to be provisioned and configured without manual intervention, and people are now starting to pick it up for more tasks than just provisioning virtual machines for web applications.
Some more 12c articles have trickled out over the last few days.
In preparation for our upcoming 12c In-Memory Webcast, @CaryMillsap, @TanelPoder, and I solicited questions from members of the universe at large on the interweb. We got a question about how In-Memory works with the 12c multitenant option, and it got me thinking, so I gave it a quick try. As it turns out, it works about as you would expect. The basic idea is to turn it on for the container DB (which is where the memory is actually allocated, à la the other main shared memory regions) and then decide which PDBs are allowed to use it (and, if so, how much of it each may use). First, here are the steps necessary to allocate the memory in the container DB.
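Something along these lines, as a minimal sketch – the 4G and 1G sizes and the PDB name pdb1 are made-up example values:

    -- In the root container: size the In-Memory column store.
    -- INMEMORY_SIZE is not dynamic at the CDB level, so a restart is needed.
    ALTER SESSION SET CONTAINER = CDB$ROOT;
    ALTER SYSTEM SET inmemory_size = 4G SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP

    -- In a PDB: cap how much of the shared column store this PDB may use ...
    ALTER SESSION SET CONTAINER = pdb1;
    ALTER SYSTEM SET inmemory_size = 1G;

    -- ... or set it to 0 to keep this PDB out of the column store entirely.
    ALTER SYSTEM SET inmemory_size = 0;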
Prompted by a question on OTN, I came up with a strategy for producing an ORA-00060 deadlock that DIDN’T produce a deadlock graph (because there isn’t one) and didn’t get reported in the alert log (at least, not in the version I tested it on). It’s a situation that shouldn’t arise in a production system, because it involves the sorts of things that you shouldn’t do in a production system; but if you’re trying to do some maintenance or upgrades while keeping the system live, it could possibly happen. Here’s the starting code:
Yesterday my colleague Alex and I had the pleasure of giving some extra presentations during AMIS’s Oracle OpenWorld preview evening. While still not getting around...
Although 4.3.14 worked fine on OS X and Linux, I skipped it on Windows because it just wouldn’t start a VM. That issue is allegedly fixed in this release. The changelog includes the following entry.
“Windows hosts: fixed startup problems on certain Windows hosts due to conflicts with anti-virus software (4.3.14 regression, bug #13187)”
I’ll see you there!