This post may look like I am jumping on the IT orchestration bandwagon, as a lot of people are doing. Maybe I should say ‘except for (die-hard) Oracle DBAs’. Or maybe not; it’s up to you to decide.
Most people with an interest in IT will have noticed that IT orchestration has been getting attention, especially in the form of Puppet and/or Chef. I _think_ IT orchestration became important with the rise of “web scale” (scaling applications up and down by adding virtual machines to horizontally scale resource-intensive tasks), in order to provision and configure the newly added machines without manual intervention, and people are now starting to use it for more tasks than provisioning virtual machines for web applications.
Some more 12c articles have trickled out over the last few days.
In preparation for our upcoming 12c In-Memory Webcast, @CaryMillsap, @TanelPoder, and I solicited questions from members of the universe at large on the interweb. We got a question about how In-Memory works with the 12c multi-tenant option, and it got me thinking, so I gave it a quick try. As it turns out, it works about as you would expect. The basic idea is to turn it on for the container DB (which is where the memory is actually allocated, à la the other main shared memory regions) and then decide which PDBs are allowed to use it (and, if so, how much of it). First, here are the steps necessary to allocate the memory in the container DB.
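The original post walks through the actual commands; as a rough sketch of the idea (sizes and the PDB/table names here are made up for illustration), enabling In-Memory at the CDB level and then limiting a PDB might look something like this:

    -- In the container DB (CDB$ROOT): carve out the In-Memory column store.
    -- INMEMORY_SIZE is a static parameter, so a restart is required.
    alter system set inmemory_size = 1G scope=spfile;
    shutdown immediate
    startup

    -- In a PDB: optionally cap how much of the shared area this PDB may use
    -- (setting it to 0 disables In-Memory for the PDB entirely).
    alter session set container = pdb1;
    alter system set inmemory_size = 512M;

    -- Then mark individual objects for population, e.g.:
    alter table sales inmemory;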
Prompted by a question on OTN I came up with a strategy for producing an ORA-00060 deadlock that DIDN’T produce a deadlock graph (because there isn’t one) and didn’t get reported in the alert log (at least, not in the version I tested). It’s a situation that shouldn’t arise in a production system because it involves doing the sorts of things that you shouldn’t do in a production system: but possibly, if you’re trying to do some maintenance or upgrades while keeping the system live, it could happen. Here’s the starting code:
Yesterday my colleague Alex and I had the pleasure of giving some extra presentations during AMIS’s Oracle OpenWorld preview evening. While still not getting around...
Although 4.3.14 worked fine on OS X and Linux, I skipped it on Windows because it just wouldn’t start a VM. That issue is allegedly fixed in this release. The changelog includes the following entry:
“Windows hosts: fixed startup problems on certain Windows hosts due to conflicts with anti-virus software (4.3.14 regression, bug #13187)”
I’ll see you there!
I have a table with several indexes on it, and I have two versions of a query that I might run against that table. Examine them carefully, then come up with some plausible reason why it’s possible (with no intervening DDL, DML, stats collection, parameter fiddling etc., etc., etc.) for the second form of the query to be inherently more efficient than the first.
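The two queries from the quiz aren’t reproduced here, but to give a flavour of the class of effect being asked about, here is one well-known Oracle example (a hypothetical illustration of my own, not necessarily the quiz answer): rewriting a combined MIN/MAX as two scalar subqueries.

    -- Hypothetical illustration only -- not necessarily the quiz answer.
    -- Assume table t has an index on id.

    -- Version 1: asking for MIN and MAX together typically costs a
    -- full scan of the index (INDEX FULL SCAN).
    select min(id), max(id) from t;

    -- Version 2: two scalar subqueries allow two separate
    -- INDEX FULL SCAN (MIN/MAX) operations, each touching only a
    -- few blocks at one end of the index.
    select (select min(id) from t) as min_id,
           (select max(id) from t) as max_id
    from   dual;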
Here’s one that started off with a tweet from Kevin Closson, heading towards a finish that shows some interesting effects when you truncate large objects that are using ASSM. To demonstrate the problem I’ve set up a tablespace using system allocation of extents and automatic segment space management (ASSM). It’s the ASSM that causes the problem, but it requires a mixture of circumstances to create a little surprise.
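The exact test case is in the original post; the starting point, a tablespace using system allocation of extents with ASSM, would be created along these lines (file name and sizes are illustrative):

    -- Locally managed tablespace, system (auto) extent allocation,
    -- automatic segment space management (ASSM).
    create tablespace demo_assm
        datafile '/u01/oradata/orcl/demo_assm01.dbf' size 1g
        extent management local autoallocate
        segment space management auto;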
(The details below were investigated on, and are specific to, Oracle’s database implementation on Linux x86_64.)
Exadata IO: this event is not used with Exadata storage; ‘cell single block physical read’ is used instead.
Despite p3 listing the number of blocks, I have never seen a db file sequential read event read more than one block. Of course, this could change in a newer release.
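For reference, the meaning of the p1/p2/p3 parameters for this event can be checked in v$event_name:

    -- What do p1/p2/p3 mean for this wait event?
    select name, parameter1, parameter2, parameter3
    from   v$event_name
    where  name = 'db file sequential read';

    -- NAME                     PARAMETER1  PARAMETER2  PARAMETER3
    -- db file sequential read  file#       block#      blocks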