Oakies Blog Aggregator


Oracle Database Security Best Practices Webcast Series
I’ll be kicking off a best practices webcast series this February 1st at 10 AM Pacific. Join me as we discuss the threats every IT, database and security administrator needs to be aware of, as well as best practices for securing your databases. We’ll cover how best to guard against SQL injection attacks, encrypt sensitive data, enforce least privilege and separation of duties, audit database activity, and mask non-production data.
Register here
http://event.on24.com/r.htm?e=390461&s=1&k=5F30645E675CBF55C8BDED9F3D28AE69&partnerref=blog1_sec_dbsecmulti

Display only the active archive log destinations

One thing I find annoying is when I want to see the archive log destinations in an Oracle database: I usually want to see only those that are enabled and have a non-null value for the destination.

show parameter log_archive_dest shows more than I care to look at.

Try this:


-- the old-style log_archive_dest parameter, if it has been set
select name, value
from v$parameter
where name = 'log_archive_dest'
and value is not null
union all
-- the log_archive_dest_n parameters whose matching _state_n parameter is 'enable'
select p.name, p.value
from v$parameter p
where p.name like 'log_archive_dest%'
and p.name not like '%state%'
and p.value is not null
and 'enable' = (
   select lower(p2.value)
   from v$parameter p2
   where p2.name = substr(p.name,1,instr(p.name,'_',-1)) || 'state' || substr(p.name,instr(p.name,'_',-1))
)
union all
-- the log_archive_dest_state_n parameters whose matching destination is set
select p.name, p.value
from v$parameter p
where p.name like 'log_archive_dest_stat%'
and lower(p.value) = 'enable'
and (
   select p2.value
   from v$parameter p2
   where p2.name = substr(p.name,1,16) || substr(p.name,instr(p.name,'_',-1))
) is not null
/
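For what it’s worth, v$archive_dest exposes much the same information with an explicit status column. Assuming you have the privileges to query it, something like this (just a sketch) gives a similar list:

select dest_name, destination, status
from v$archive_dest
where destination is not null
and status = 'VALID'
order by dest_name
/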

I wish

A few days ago I published an example of the optimizer failing to handle an updateable join view because it didn’t recognise that a particular type of aggregate subquery would guarantee key-preservation.  Here’s another example where the human eye can see key-preservation, but the optimizer can’t (even in 11.2.0.2). As usual we’ll start with some sample data – in this case two tables since I want to update from one table to the other.

create table t1 (
	id1	number,
	id2	number,
	val	number,
	constraint t1_pk primary key (id1, id2)
);

insert into t1 values (1,1,99);
commit;

create table t2 (
	id1	number,
	id2	number,
	id3	number,
	val	number,
	constraint t2_pk primary key (id1, id2, id3)
);

insert into t2 values (1,1,1,200);
insert into t2 values (1,1,2,200);
commit;

Note, particularly, that t1 has a two-part key, and t2 has a three-part key; and it’s perfectly reasonable to write a query like the following – and then I might use the query to define a view:

select
	t1.val	v1,
	t2.val	v2
from
	t1,t2
where
	t1.id1 = t2.id1
and	t1.id2 = t2.id2
and	t2.id3 = 1
;

You’ll note that table t2 is key-preserved. If I pick a row from t2, I use the primary key of table t1 to find a match – so any row that gets picked from t2 can appear at most once in the result set.

However, although the join itself doesn’t include all the columns in the primary key of t2, table t1 is also key-preserved in the view. If I pick a row from t1 the join condition may find several rows in t2 that match – but once the predicate t2.id3 = 1 is applied this will reduce the possible matches to at most one – so each row from t1 can appear at most once in the result set.

So what happens when you try these two updates?

update
	(
	select
		t1.val	v1,
		t2.val	v2
	from
		t1,t2
	where
		t1.id1 = t2.id1
	and	t1.id2 = t2.id2
	and	t2.id3 = 1
)	iv
set
	iv.v2 = iv.v1
;

update
	(
	select
		t1.val	v1,
		t2.val	v2
	from
		t1,t2
	where
		t1.id1 = t2.id1
	and	t1.id2 = t2.id2
	and	t2.id3 = 1
)	iv
set
	iv.v1 = iv.v2
;

The first one works: you can update table t2 through the view. The second one fails with Oracle error “ORA-01779: cannot modify a column which maps to a non key-preserved table”.

You might want to try something clever with function-based indexes – after all, if we only want to do this update for the special case where id3 = 1 (or perhaps a limited number of special cases) we can create a unique index to help:

create unique index t2_fbi on t2(
	case when id3 = 1 then id1 end,
	case when id3 = 1 then id2 end
);

update
	(
	select
		t1.val	v1,
		t2.val	v2
	from
		t1,t2
	where
		case when t2.id3 = 1 then t2.id1 end = t1.id1
	and	case when t2.id3 = 1 then t2.id2 end = t1.id2
	)	iv
set
	iv.v1 = iv.v2
;

Even though the execution plan for the underlying query shows Oracle doing a unique scan on a unique index in a nested loop join, the update still fails with Oracle error ORA-01779.

I have to say that I’m not as disappointed with this example as I was with the aggregate subquery example. The aggregate example looks like a reasonable requirement, while this one looks like an application design flaw (essentially, it has the flavour of an application that has stuck several entities into a single table), so I’ve not worried about it too much in the past.

Recently, though, I’ve seen an increasing number of people thinking about keeping old copies of data in the same table as the current copy – and one strategy for this is to have a flag that marks the current copy, using the FBI trick I’ve just shown to enforce uniqueness. The side effect may cause problems for a few people.
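To make that concrete, here’s a minimal sketch of the pattern (the table, column and index names are mine, purely for illustration): a current_flag column marks the live row, and a function-based unique index ensures there is only ever one live row per business key.

create table accounts (
	account_id	number		not null,	-- the business key
	current_flag	varchar2(1)	not null,	-- 'Y' for the live copy, 'N' for history
	val		number,
	constraint accounts_flag_chk check (current_flag in ('Y','N'))
);

-- only rows with current_flag = 'Y' contribute a key to the index,
-- so at most one 'current' row can exist per account_id
create unique index accounts_current_uk on accounts(
	case when current_flag = 'Y' then account_id end
);

As the update example above shows, though, a view that joins through that case expression won’t be treated as key-preserved, so updates through such a view can still raise ORA-01779.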

Should SQL have a compulsory ORDER BY clause?

How many times have you heard someone say, “The rows always came back in the correct order, but since the *event* it’s now wrong”? The *event* may be an upgrade or some maintenance task on the table etc.

Oracle clearly say,

“Use the ORDER BY clause to order rows returned by the statement. Without an ORDER BY clause, no guarantee exists that the same query executed more than once will retrieve rows in the same order.”

This also applies to the GROUP BY clause, which haunted people during their 10g upgrades.

“The GROUP BY clause groups rows but does not guarantee the order of the result set. To order the groupings, use the ORDER BY clause.”

Both these statements can be found in the docs for the SELECT statement.
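To see why the GROUP BY warning bites, consider a query like the following (using the emp table from the examples below, and assuming a deptno column – this is purely illustrative). Before 10g a SORT GROUP BY tended to return the groups in order as a side effect; with the 10g HASH GROUP BY the order is essentially arbitrary, so the ordering has to be requested explicitly:

SELECT deptno, COUNT(*)
FROM emp
GROUP BY deptno
ORDER BY deptno;  -- without this line the order of the groups is not guaranteed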

So why not make the ORDER BY clause compulsory? The statement could include an explicit NOORDER, like this.

SELECT * FROM emp e ORDER BY e.empname

SELECT * FROM emp e ORDER BY NOORDER

Of course, if that happened I would probably be in the group of people complaining about how ridiculous it is to force us to specify a clause we don’t need, but it would make people think… :)

Cheers

Tim…

Update: There seems to be a little confusion about what I am saying. I am not suggesting ordering should be mandatory. I am suggesting the ORDER BY clause could be mandatory, so you would have to say explicitly whether the output from the query is ordered or not. So these two statements would be functionally identical.

SELECT * FROM emp

SELECT * FROM emp ORDER BY NOORDER

The Most Brilliant Science Graphic I Have Ever Seen

The link below takes you to an absolutely fantastic interactive demonstration of the relative size of everything. Everything. Stop reading this and go look at it; when it finishes loading, move the blue blob at the bottom of the screen left and right.

The Relative_scale_of_everything

The raw web link is:

http://www.primaxstudio.com/stuff/scale_of_universe/scale-of-universe-v1...

The web page says scale_of_the_universe but it should be relative_scale_of_everything_in_the_universe. Did you go look at it? NO!?! If it’s because you have seen it before then fair enough – otherwise stop reading this stupid blog and Look At It! NOW! GO ON!!!

Yes, I do think it is good.

I have to thank Neil Chandler for his tweet about this web page, which led me to look at it. Neil and I talked about relative sizes of things in the pub towards the end of last year, in one of the Oracle London Beers sessions. I think it was Neil himself who suggested we should convert MB, GB and TB into time to get a real feel for the size of data we are talking about, you know, when we chuck the phrases GB and TB around with abandon. Think of 1KB as a second. A small amount of time for what is now regarded as a small amount of data – this blog so far is around 1.2KB of letters. Given this scale:

1KB = 1 second. About the time it takes to blink 5, possibly 6 times, as fast as you can.
1MB = Just under 17 minutes. Time enough to cook fish fingers and chips from scratch.
1GB = 11 and a half days. 1KB->1GB is 1 second -> 1.5 weeks.
1TB = Just under 32 years. Yes, from birth to old enough to see your first returning computer fad.
1PB = pretty much all of known human history, cave paintings and Egyptian pyramids excepted, as the Phoenicians invented writing around 1150BC.

The wonderful thing about the web page this blog is about is that you can scan in and out and see the relative sizes of things, step by step, nice and slowly. Like how small our sun is compared to proper big ones and how the Earth is maybe not quite as small compared to Saturn as you thought. At the other end of the scale, how small a HIV virus is and how it compares to the pits in a CD and the tiniest of transistors on a silicon chip. I’m particularly struck by the size of DNA compared to a human red blood cell, as in how relatively large DNA is. Red blood cells are pretty big cells and yet all human cells (except, ahem, red blood cells) have 3.2 billion letters of DNA in each and every one of them. That’s some packaging, as cells have a lot of other stuff in there too.
{NB, do remember that the zooming in and out is logarithmic and not linear, so things that are close to each other in the graphic are more different than first appears, especially when the image becomes large and in effect covers a wide part of the screen}

Down at the sub-atomic scale there are a fair number of gaps, where one graphic is pretty much off the scale before the next one resolves from a dot to anything discernible, but that is what it’s like down that end of things. Besides, it’s so small it’s hard to “look around”, as there is nothing small enough (like, lightwaves went by several orders of magnitude ago) to look around with.

My one criticism? It’s a shame Blue Whale did not make it into the show :-)

I actually had flashbacks looking at this web page. I remember, back in the mid-70s I think, going to the cinema. Back then, you still had ‘B’ shows: a short film, cartoon or something before the main event. I no longer have a clue what the main event was, but the ‘B’ movie fascinated me. I think it started with a boy fishing next to a pond and it zoomed in to a mosquito on his arm, then into the skin and through the layers of tissue to blood vessels, to a blood cell… you get the idea, eventually to an atom. Some of the “zooming in” where it swapped between real footage was poor, but it was 1970 or so and we knew no better. It then quickly zoomed back out to the boy, then to an aerial view of the field, out to a birds-eye view… satellite-like… the earth… solar system… I think it stopped at the Milky Way. I wish I knew what that documentary was called or how to find it on the web…

{Update, see comments. Someone links to the film. I know I looked for this film a few years back and I did have a quick look again before I posted this message. I did not immediately find it but someone else did, in 10 seconds via Google. Shows how rubbish I am at using web searches…}

Happy (belated) New Year!

Another year, another list of 'resolutions'.  Welcome to twenty-twelve :)

Adding Datafiles

Yesterday I talked about some of the things to consider when adding space to a tablespace using auto-extend for datafiles. This of course isn’t the only way to add space to a tablespace. You could simply add more datafiles. Indeed nearly a decade ago (scary!) I wrote The final reason for multiple datafiles (to spread […]
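Just as a reminder of what that looks like, here’s a sketch – the tablespace name and file name are made up, and I’ve spelled out autoextend off for contrast with the previous post:

-- add a fixed-size datafile to an existing tablespace (names are illustrative)
alter tablespace users
	add datafile '/u02/oradata/ORCL/users02.dbf' size 4g autoextend off;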

Mission: Impossible – Ghost Protocol

Mission: Impossible – Ghost Protocol is OK for a Tom Cruise action vehicle. It is more or less the same as all the others in the franchise, which is not a bad thing. It just doesn’t bring anything new to the table. I knew it would be like this, which is why it took me some time to work myself up to seeing it.

If you like the others go to see it. If you have not been won over to the franchise yet, I don’t think this one will do it. :)

Cheers

Tim…




Curious Case Of The Ever Increasing Index Solution (A Big Hurt)

Based on the excellent comments in the Quiz post, we have some clever cookies out there. I guess the first thing to point out is that, based on the basic scenario provided, the index shouldn’t ordinarily be continually growing in this fashion. Although the index values are monotonically increasing, the deletions are leaving behind fully emptied leaf blocks which [...]

Index size bug

Here’s a follow-up to a post I did some time ago about estimating the size of an index before you create it. The note describes the dbms_stats.create_index_cost() procedure and how it depends on the results of a call to explain plan. A recent question on the OTN database forum highlighted a bug in explain plan, however, which I can demonstrate very easily. I’ll start with a small amount of data to demonstrate the basic content that is used to calculate the index cost.

create table t1
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		level <= 10000
)
select
	rownum							n1,
	case when mod(rownum,2) = 0 then rownum end		n2,
	lpad(rownum,10,'0')					v1,
	case when mod(rownum,2) = 0 then rpad('x',10) end	v2,
	rpad('x',100)						padding
from
	generator	v1,
	generator	v2
where
	rownum <= 100000
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

explain plan for
create index t1_v1 on t1(v1);

select * from table(dbms_xplan.display);

You’re likely to get some variation on the results below, particularly in the note about the estimated index size, but here’s the result I got last time I ran the test:

Explained.

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
Plan hash value: 2170349088

----------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost  |
----------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT |       |   100K|  1074K|   286 |
|   1 |  INDEX BUILD NON UNIQUE| T1_V1 |       |       |       |
|   2 |   SORT CREATE INDEX    |       |   100K|  1074K|       |
|   3 |    TABLE ACCESS FULL   | T1    |   100K|  1074K|   278 |
----------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)
   - estimated index size: 3145K bytes

15 rows selected.

As you can see, I’ve got an accurate estimate of the rows that will go into the index, with a reasonably accurate estimate of the total volume of data in the rows (column lengths only) that will go into the index. The estimated index size then allows for things like rowids, row overhead, pctfree, space management blocks, and extent sizes, resulting in a larger figure (significantly larger in this case given the relatively small length of the column used in the index).

Let’s repeat the experiment a few times with increasing numbers of rows in the table, though. Rather than give you the full output for each test, I’ll just reproduce the Rows and Bytes figures and the estimated size – the last three sets of figures are a little clue about what’s gone wrong:

 Rows      Bytes     Est. Size
 ------    ------    ---------
     1M      10MB         24MB
    10M     104MB        243MB
   100M    1049MB       2415MB
  1000M      10GB       2483MB
 10000M     102GB       2818MB
 ------    ------    ---------
   176M    1846MB       4227MB
   177M    1856MB           0B
   180M    1888MB         67MB
 ------    ------    ---------

It looks like the estimated size is captured as a 32-bit number, so it rolls over to zero at roughly 4.3 billion bytes (2^32 = 4,294,967,296). The bug is still there in 11.2.0.2; I haven’t yet checked 10.2.0.5 or 11.2.0.3.

Footnote: Just in case you’re wondering, I didn’t actually create a table with 10 billion rows in it; I just used dbms_stats.set_table_stats() to tell Oracle that the table has 10 billion rows.
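For anyone who wants to reproduce the larger tests without generating the data, this is roughly the sort of call I mean. The block count and average row length below are invented purely for illustration, so adjust them to taste:

begin
	dbms_stats.set_table_stats(
		ownname  => user,
		tabname  => 'T1',
		numrows  => 1e10,	-- pretend the table has 10 billion rows
		numblks  => 2e8,	-- invented block count, just for illustration
		avgrlen  => 120		-- invented average row length
	);
end;
/

explain plan for
create index t1_v1 on t1(v1);

select * from table(dbms_xplan.display);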