Parallelism

Headline – if you don’t want to read the note – the /*+ parallel(N) */ hint doesn’t mean a query will use parallel execution, even if there are enough parallel execution server processes to make it possible. The parallel(N) hint tells the optimizer to consider the cost of using parallel execution for each path that it examines, but ultimately the optimizer will still take the lowest cost path (bar the odd few special cases) and that path could turn out to be a serial path.

The likelihood of parallelism appearing for a given query changes across versions of Oracle, so you can be fooled into thinking you’re seeing bugs as you test new versions, but it’s (almost certainly) the same old rule being applied in different circumstances. Here’s an example – which I’ll start off on 11.2.0.4:
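The original test script isn’t included in this excerpt, so here’s a minimal sketch of the kind of check involved (the table name t1 and the degree of 4 are my assumptions, not the author’s script): hint the statement, then look at the plan to see whether the optimizer actually chose a parallel path.

explain plan for
select /*+ parallel(4) */ count(*) from t1;

select * from table(dbms_xplan.display);

-- If the serial plan is cheaper, the output will show no PX COORDINATOR /
-- PX SEND / PX RECEIVE operations even though the hint is legal and accepted.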

255 Again!

There are so many things that can go wrong when you start using tables with more than 255 columns. Here’s one I discovered partly because I was thinking about a client requirement and partly because I had a vague memory of a change in behaviour in 12c; when I asked the Oak Table if anyone remembered seeing the relevant note, Stefan Koehler pointed me to a blog note by Sayan Malakshinov. Enough of the roundabout route – I’m going to start with a bit of code to create a table, stick a row in it, then update that row:
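The excerpt stops before the script, so here’s a minimal sketch of the sort of code being described (the table name t_wide, the 300-column layout and the column names are my own assumptions, not the original code):

declare
        m_sql   varchar2(32767) := 'create table t_wide(col001 number';
begin
        for i in 2 .. 300 loop
                m_sql := m_sql || ', col' || to_char(i,'fm000') || ' number';
        end loop;
        m_sql := m_sql || ')';
        execute immediate m_sql;
end;
/

insert into t_wide (col001) values (1);
commit;

-- update a column near the end of the row so that the stored row now holds
-- more than 255 columns and has to be split into multiple row pieces
update t_wide set col300 = 1 where col001 = 1;
commit;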

255 columns

This is one of my “black hole” articles – I drafted it six months ago, but forgot to publish it.

Insider View of a Delphix VDB Rewind

I love questions – they give me something to write about that I don’t have to come up with from my own experience or challenges… :)

So in my last post, Paul asked:

Quantum Space

Here’s a not very serious note that makes a serious point.  I’ve got a small tablespace made up of 4 files, and here’s a little report I can run against the data dictionary for that tablespace:


select 'File space' What, nvl(sum(user_bytes)/1048576,0) MB from dba_data_files where tablespace_name = 'LOB_TEST'
union all
select 'Free space',      nvl(sum(bytes/1048576),0)         from dba_free_space where tablespace_name = 'LOB_TEST'
union all
select 'Extents',         nvl(sum(bytes/1048576),0)         from dba_extents    where tablespace_name = 'LOB_TEST'
union all
select 'Segments',        nvl(sum(bytes/1048576),0)         from dba_segments   where tablespace_name = 'LOB_TEST'
;

The name of the tablespace isn’t significant – it happens to be a tablespace I created to do some tests relating to space allocation with securefile LOBs, and it’s been hanging around ever since.

Announcing SLOB 2.4! Integrated Short Scans and Cloud (DBaaS) Support, and More.

This post is to announce the release of SLOB 2.4!

VERSION

SLOB 2.4.0. Release notes (PDF): Click Here.

WHERE TO GET THE BITS

As always, please visit the SLOB Resources page. Click Here.

What is in a transportable tablespace dumpfile?

On the 31st of May in Düsseldorf, at DOAG Datenbank, I’ll talk about transportable tablespaces and pluggable databases. Both methods transport the data physically; the difference is in how the metadata is transported, which can be more flexible when done logically, as with TTS, but faster when done physically, as with PDBs. I have a lot of demos showing transportable tablespaces with RMAN and the different cloning features available in 12cR2. If I have time I’ll show what is inside the dumpfile when Data Pump is used to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.
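The commands themselves are cut off in this excerpt, so here is a minimal sketch of the usual sequence (the tablespace name USERS, the credentials and the directory object DATA_PUMP_DIR are assumptions): the tablespace set is made read only, then Data Pump exports its metadata only.

alter tablespace USERS read only;

expdp system/manager directory=DATA_PUMP_DIR dumpfile=tts.dmp logfile=tts.log transport_tablespaces=USERS

The dump file contains only the metadata for the tablespace set; the datafiles themselves are copied separately at the file (or RMAN) level.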

AWS Database Blog – Added To My Blog Roll

This is just a brief blog post to share that I’ve added the AWS Database Blog to my blogroll.  I recommend you do the same! Let’s follow what’s going on over there.

Some of my favorite categories under the AWS Database Blog are:

Readers: I do intend to eventually get proper credentials to make some posts on that blog. All in proper time and with proper training and clearance.

Aliases

Here’s a performance problem that came up on OTN recently. The following query (reformatted) takes “ages” to run – how do you address the problem:

SELECT
	COUNT(*) 
FROM
	smp_dbuser2.workflow_step_report
WHERE
	report_ID IN (
		SELECT	report_id
		FROM	smp_dbuser2.workflow_report
		WHERE	trunc(start_time) = '28-Apr-2017'
		AND	user_id = 'nbi_ssc'
	)
;
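(An aside of my own, prompted by the title of the note rather than by anything in the original thread: with every column qualified by a table alias there is no doubt about which table each column – in particular report_id – belongs to. A sketch of the same query written that way:)

SELECT
	COUNT(*)
FROM
	smp_dbuser2.workflow_step_report	wsr
WHERE
	wsr.report_id IN (
		SELECT	wr.report_id
		FROM	smp_dbuser2.workflow_report	wr
		WHERE	trunc(wr.start_time) = '28-Apr-2017'
		AND	wr.user_id = 'nbi_ssc'
	)
;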


Various pieces of relevant information were supplied (the workflow_report table holds 1.4M rows, the workflow_step_report table holds 740M rows, and some indexes were described), but most significantly we were given the execution plan: