Compression Units – 2

When I arrived in Edinburgh for the UKOUG Scotland conference a little while ago Thomas Presslie, one of the organisers and chairman of the committee, asked me if I’d sign up on the “unconference” timetable to give a ten-minute talk on something. So I decided to use Hybrid Columnar Compression to make a general point about choosing and testing features. For those of you who missed this excellent conference, here’s a brief note of what I said.

Rename Oracle Managed File (OMF) datafiles in ASM

(Version edited after comments -> rman backup as copy)
(Version edited to include delete leftover datafile from rman)

Recently I was asked to rename a tablespace. The environment was Oracle version 11.2.0.3 (both database and clusterware/ASM).

This is the test case I built to understand how it works:
(I couldn't find a clean, straightforward description of how to do this, which is why I'm blogging it here)

I created an empty tablespace 'test1' for test purposes:

SYS@v11203 AS SYSDBA> create bigfile tablespace test1 datafile size 10m;

(I use bigfile tablespaces only with ASM. Adding datafiles is labour-intensive work; bigfile tablespaces eliminate that, provided autoextend is set correctly.)

A tablespace can be easily renamed with the alter tablespace rename command:
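Continuing the test case above, a minimal sketch (the new name test2 is just an illustration):

SYS@v11203 AS SYSDBA> alter tablespace test1 rename to test2;

Note that renaming the tablespace does not rename the underlying OMF datafile in ASM; that is what the rman backup as copy step mentioned in the edit notes above takes care of.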

Book Review: Oracle Database 11gR2 Performance Tuning Cookbook (Part 2)

July 23, 2012
Oracle Database Performance Tuning Test Cases without Many “Why”, “When”, and “How Much” Filler Details
http://www.amazon.com/Oracle-Database-Performance-Tuning-Cookbook/dp/184... (Back to the Previous Post in the Series)

This blog article contains the review for the second half of the “Oracle Database 11gR2 Performance Tuning Cookbook” book. It has been nearly five months since I posted [...]

Free Webinar

In a couple of days, on Wednesday, 1st of August, I'll be presenting another free webinar hosted at AllThingsOracle.com.

Although it is called "Oracle Cost-Based Optimizer Advanced Session", don't be misled by the title.

It is not a truly "advanced" session, but rather I'll try to delve into various topics that I could only mention briefly or had to omit completely during the first webinar on the Cost-Based Optimizer.

In principle it's going to be a selection of the most recurring issues that I come across during my consultancy work:

- I'm going to spend some time on statistics, and histograms in particular, covering what I believe are the most important aspects to understand about them (a short sketch follows below)
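As a point of reference for that topic, here is a minimal sketch of controlling histogram creation with DBMS_STATS (the table name T1 and column SKEWED_COL are hypothetical):

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'T1',
    -- size 1 suppresses histograms in general; size 254 requests one on the skewed column
    method_opt => 'for all columns size 1 for columns skewed_col size 254');
end;
/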

Karl Arao's Blog

Just another weblog about Oracle, Linux, Troubleshooting, Performance, etc.

SLOB Is Not An Unrealistic Platform Performance Measurement Tool – Part III. Calibrate, What?

Parts I and II of this series can be found here and here.

My previous installments in this series delved into how Orion does not interact with the operating system in any way similar to a database instance. This is a quick blog entry about CALIBRATE in that same vein.
On a system with 32 logical CPUs (e.g., 2s8c16t) we see CALIBRATE spawn 32 background processes called ora_cs*, as captured in the following ps(1) output:
[screenshot: ps(1) output showing the 32 ora_cs* calibration processes]
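For context, assuming the CALIBRATE in question is DBMS_RESOURCE_MANAGER.CALIBRATE_IO (the procedure that spawns those calibration slaves), here is a minimal sketch of invoking it; the num_physical_disks and max_latency values are placeholders to adjust for your storage:

set serveroutput on
declare
  l_max_iops  pls_integer;
  l_max_mbps  pls_integer;
  l_latency   pls_integer;
begin
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 56,   -- placeholder: number of physical disks
    max_latency        => 20,   -- placeholder: tolerated latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  dbms_output.put_line('max_iops='||l_max_iops||
                       ' max_mbps='||l_max_mbps||
                       ' actual_latency='||l_latency);
end;
/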

Compression Units

If you’re starting to work with Exadata you need to work out how much impact you can have by doing the right (or wrong, or unlucky) sorts of things with the technology. Fortunately there are only a few special features that really need investigation: Hybrid Columnar Compression (HCC), Offloading (and related Smart Scan stuff) and Storage Indexes. These are the features that will have the biggest impact on the space taken up by your data, the volume of data that will move through your system as you load and query it, and the amount of CPU it takes to do something with your data.

There are other features that are important, of course, such as the features for affecting parallel execution, and the options for resource management, but the three I’ve listed above are the core of what’s (currently) unique to Exadata. In this note I’m just going to make a few comments about how Oracle implements HCC, and what side-effects this may have.
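To give a flavour of the syntax, HCC is declared at the segment level with the compress for clause; a minimal sketch, assuming an Exadata environment (the table name sales is hypothetical):

create table sales_hcc
compress for query high   -- other HCC levels: query low, archive low, archive high
as
select * from sales;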

Gathering Aggregated Cost-Based Optimiser Statistics on Partitioned Objects

Recently, I have been looking into how to gather cost-based optimizer statistics on composite partitioned objects.  Collecting global statistics on a large partitioned object can be a time-consuming and resource-intensive business as Oracle samples all the physical partitions or sub-partitions. Briefly, if you do not collect global statistics on a partitioned table, Oracle will aggregate the statistics on the physical partitions or sub-partitions to calculate statistics on the logical table and partition segments.
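As an illustration of the two approaches, a minimal sketch using the DBMS_STATS granularity parameter (the table name FACT_TAB is hypothetical):

-- gather partition-level statistics only; table-level statistics
-- are then derived by aggregating across the partitions
begin
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'FACT_TAB',
    granularity => 'PARTITION');
end;
/

-- gather true global (table-level) statistics as well as partition statistics
begin
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'FACT_TAB',
    granularity => 'GLOBAL AND PARTITION');
end;
/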

Oracle 10g makes a number of mistakes in its calculation of these aggregated statistics.  In particular, the number of distinct values on the columns by which the table is partitioned comes out impossibly low. This can affect cardinality calculations and so lead the optimizer to choose the wrong execution plan.