When: Mon-Tue, 29-30 September, 08:30 – 17:00 PDT
Ever since I was asked, back in December 1993, to improve the throughput of an actual general ledger posting job running on Oracle, on hardware where solid state disk (SSD) was available (at high cost relative to “spinning rust”, i.e. hard disk drives [HDD]), I have been trying to explain the overall advantage of selectively placing different types of Oracle storage on SSD.
When FLASH SSD arrived on the scene, studies quickly showed that writing to FLASH SSD is often not as fast as writing to disk drives dedicated to receiving those writes.
Today I’ll try to explain why I don’t care.
While my tests (which were against RAM-based SSD on a VAX) showed some advantage to writing to SSD, the write speed to the online REDO was not a significant part of the advantage of placing online REDO on SSD.
A recent question on the OTN database forum raised the topic of returning free space in a tablespace to the operating system: rebuild objects so they fill the gaps near the start of the files, leaving the empty space at the ends of the files so that the files can be resized downwards.
This isn’t a process that you’re likely to need frequently, but I have written a couple of notes about it, including a sample query to produce a map of the free and used space in a tablespace. While reading the thread, though, it crossed my mind that recent versions of Oracle have introduced a feature that can reduce the amount of work needed to get the job done, so I thought I’d demonstrate the point here.
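The basic mechanics are straightforward: rebuild a segment that sits near the high end of a file so it lands in free space lower down, then shrink the file. A minimal SQL sketch (all object and file names here are illustrative, not from the original post):

```sql
-- Rebuild a table that currently occupies extents near the end of the file;
-- Oracle will allocate new extents from free space lower in the tablespace.
alter table t1 move;

-- Moving a table marks its indexes UNUSABLE, so rebuild them afterwards.
alter index t1_pk rebuild;

-- With the high-water usage gone from the end of the file,
-- the datafile can be resized downwards.
alter database datafile '/u01/oradata/users01.dbf' resize 100m;
```

If the resize fails with ORA-03297 there is still a used extent beyond the target size, and the free/used-space map query mentioned above is the tool for finding which segment owns it.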
Yet again the monster that is Oracle Open World is about to take over San Francisco. I won’t be making it this year, but considering we had something like 60,000 attendees last year I’d hate to see what the numbers are gonna be this year!
To try and make it a little easier for you to find all the Private Cloud and Lifecycle Management with EM12c material, here are the sessions I know about. If you know of any I’ve missed, feel free to add them in the comments field below! Note that this is only the Private Cloud and Lifecycle Management material – the complete master list of all EM material can be found in the master Focus On EM12c document.
First up was Martin Widlake speaking about clustering data to improve performance. The cool and scary thing about Oracle is you often go into a session like this thinking it’s all going to be stuff you already know, then you realise how much you either didn’t know in the first place, or had forgotten. A couple of times Martin asked questions of the audience and I felt myself shrinking back in my seat and chanting the mantra, “Don’t pick me!”, in my head.
Despite the title, this is actually a technical post about Oracle, disk I/O and Exadata & Oracle In-Memory Database Option performance. Read on :)
If a car dealer tells you that this fancy new car on display goes 10 times (or 100 or 1000) faster than any of your previous ones, then either the salesman is lying or this new car is doing something radically different from all the old ones. You don’t just get orders of magnitude performance improvements by making small changes.
Perhaps the car bends space around it instead of moving – or perhaps it has a jet engine built on it (like the one below :-) :
I started looking into In-Memory on RAC this week. Data can be distributed across RAC nodes in a couple of different ways. The default is to spread it across the available nodes in the cluster. So if you had a 2 node cluster, roughly 50% of the data in your table or partition would be loaded into the column store in each of the 2 instances.
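How the data is spread is controlled by the `DISTRIBUTE` subclause of the `INMEMORY` attribute, and you can check the result per instance afterwards. A quick sketch (table name is illustrative; `AUTO` is the default behaviour described above):

```sql
-- Let Oracle decide how to spread the column store data across RAC nodes
-- (the default; on a 2-node cluster roughly half lands in each instance).
alter table sales inmemory distribute auto;

-- Alternatively, distribute by rowid range (other options exist,
-- e.g. by partition or subpartition for partitioned tables).
alter table sales inmemory distribute by rowid range;

-- See what each instance has actually populated in its column store.
select inst_id, segment_name, bytes_not_populated
from   gv$im_segments;
```

Querying `GV$IM_SEGMENTS` (rather than the single-instance `V$IM_SEGMENTS`) is the easy way to confirm that the pieces really did land where you expected across the cluster.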
On the Maximum Availability Architecture website, there’s a paper on