My current favourite iPad apps

Okay, okay, I admit it: I was one of those guys who bought himself an iPad during OOW, and I thought it might be nice to share some thoughts on the apps that are currently on it, now that things have settled down a bit after the endless cycle of downloading, installing, trying out and deleting most of those apps (a zillion times) again and again. Which of those apps are still on my pad and maybe worth a try-out? Some I have bought, some are free. Let’s have a look, and I hope you enjoy the line-up…

Current Setup

Currently I have my screens set up in multiple sections:

  • Main Screen
  • Social apps
  • Music and …
  • Games
  • Utilities
  • iWorks and alike

You could debate whether they really belong in those sections, but for now these screens feel right to me, and that’s what counts.

Main Screen

Not that much exciting stuff here. Worth mentioning is the WordPress app, which I use to update, write and check the WordPress blog sites I post to. Twitterrific is a Twitter app that I installed because my default one, the one I like most, TweetDeck, initially crashed too many times. Nowadays TweetDeck is stable, but it lacks some functionality that I have on my desktop machines. Speaking of lacking: the WordPress app doesn’t have the statistics tab that its Windows or Android counterparts have, because that page makes use of Flash, and in an Apple environment that kind of stuff doesn’t work…

“Timezones” isn’t actually an app but an HTML5 site saved to the “desktop”. It is a really useful site for quickly checking time zones, and it keeps me from calling people in the States just after they have gone to sleep.

WeatherPro HD is a very extensive weather app. Whether I really need it, I don’t know, but it is really beautifully crafted. You will have to pay for it, though.

Screenshots

iPad WordPress App

WordPress App

Twitterrific

Everytimezone URL

WeatherPro HD

Social

Most apps on this screen are for reading news, manuals and Twitter, and for file handling. Most of them you probably won’t use, because they refer to or deal with Dutch content. The Dutch content driven NU app is very neatly crafted, as is the Telegraaf HD app. They aren’t really my kind of info sources, especially the “Televaag”, but as said, I can appreciate the way the apps are set up: clear and efficient, without too much annoying commercial stuff in there.

I use IMDb to look up, from time to time, some info on movies and television series. The current version of TweetDeck, as said, is really stable, and because I love the look and feel, it is my main Twitter app. Flipboard is a fun application: it creates a sort of newspaper look and feel from your Twitter accounts or groups, and it would really be a great way to read up on Twitter messages and other info if it also accepted RSS feeds of, for example, blog aggregator sites. Anyway, have a try; it’s a free download from the App Store. Pulse is maybe a bit like Flipboard, but I can’t really get it to work for my info needs (it doesn’t refresh my sites), so it is probably one of the next apps to be deleted if I need some room.

The “Instapaper” app is pretty smart. It adds functionality to Safari and other apps to link or save web pages to “read later”, or just save them in a folder. So, for example, if I see a website with interesting content, I can mark it to be read later, and the page is saved in the Instapaper app, where I can sort it into the appropriate folder and/or share it, for example via email.

GoodReader is more than just an eReader app. You can read PDFs, for example, but you can also move, manage, protect and upload documents between GoodReader and a server, and view media content like pictures and movies. You can also create password-protected folders or protect content via a password. You can download content directly from multiple sources and, if needed, pass it on to the Apple iBooks app.

Apple’s Remote is just a fun app. If you have an iMac, for example, you can control its iTunes music library remotely via WiFi, for example picking the next song or remotely increasing the volume on your iMac. Nothing more, nothing less.

Screenshots

TweetDeck

Flipboard Main Page

Flipboard – Generated article based on a twitterfeed

The Dutch NU app

Instapaper

Apple Remote

Pulse

Music and …

As far as I remember, “Virtuoso” is a free app; KORG’s iELECTRIBE and Pianist Pro aren’t. iELECTRIBE is a serious drum machine app, especially cool if you’re into creating dance, house and/or drum patterns. It is a very good port of the music machine to the iPad and definitely worth its money if you are into this kind of stuff. Pianist Pro is one of the more decent piano apps, with good default sounds for piano, organ and others. Its keys are also not that small, so you can actually play the thing without continuously hitting the wrong keys.

I downloaded VLC, a multi-format media player, as an alternative to the Apple defaults: when I am out of luck with the formats Apple supports, VLC will probably play the file. Until now, GoodReader has also done a good job of playing media files.

Tactilis is one of those small gems in the Apple store. It is a pad that can be used to draw pictures and sketches. It offers blank paper, ruled paper so you can write on it, and a squared background “paper” for doing your numbers. Mainly it is used for sketching. You don’t get any colors, only “graphite” pencils. Straight lines and circles are very easily created via some magic moves, and it supports zoom for very fine adjustments. It’s probably best to watch its YouTube demonstration to see how amazing this app really is.

Screenshots

KORG’s iELECTRIBE – Main interface

KORG’s iELECTRIBE – Some menu options

Pianist Pro – Single view keys (two key rows are also an option)

Pianist Pro – A lot of extras like MIDI and drum & ARP patterns

Tactilis – best to see the demo on YouTube

Games

Wouldn’t the iPad be a great environment for games? So why are there so few…? The demo app “Epic Citadel” is a really good showcase of what a great gaming environment it can be; until now, I am still waiting for the first such game to arrive. It’s a shame that the better iPad games, so far, are mainly ports of very old PC games. The better ones, like LucasArts’ “Monkey Island” (the humor is still great after all these years) and the Broken Sword adventure games, are beautifully restored for the iPad environment, but where are the new ones…?

I tried some games, like the ones still on the screen called Nathalie Brooks and Isabella, but their puzzles are very easy, and it took me only 2 to 4 hours to play through each game. Both of them had repeating elements, which got boring towards the end. There are also very bad ports of old PC games, like “Dragon’s Lair”. I should have known it wasn’t a good game; it wasn’t one on the PC, and it’s still horrible on the iPad. They didn’t invest any time in improving it or making the game more interesting.

So while waiting for the real killer game on the iPad, which “Epic Citadel” proves can be done, we have to make do with decent ones like Settlers, a decent port, or Dungeon, which doesn’t really work for me because of its poorly working controls - and those are vital if you’re into this kind of role-playing, Dungeons & Dragons kind of stuff.

I have two racing games on the iPad. One, NFSU (Need for Speed), is actually an iPhone/Touch app but also delivers on the iPad, although the graphics are not optimal. Asphalt 5 is a good racing game for the iPad. I am not much of a racing game guy, so I am not sure whether they are really good ones from the perspective of a racing game nerd. Because you “steer” via the gyro sensors built into the iPad, these kinds of games can become rather tiresome: you steer your car by using the iPad as your “driving wheel”…

Angry Birds and Cut the Rope are the current top games in the Apple store, and both deserve to be. They are simple but well-designed games. I think I like “Cut the Rope” more, because it is more of a brain teaser, while Angry Birds is more about luck, or at least there is a luck factor involved. Cut the Rope also reminds me of “The Incredible Machine”, an old PC game which made this kind of game popular in the 1990s. Sometimes “clean and simple” really works. Glow Hockey is a good example of this and great fun when playing it with two people.

I have downloaded and deleted an enormous number of games and am still waiting for that one newly crafted game for the iPad… It’s almost X-Mas, so where are those cool, decent games…?

Screenshots

Monkey Island 2 – Intro

Monkey Island 2 – Game interface

Broken Sword – Movie intro

Broken Sword – In between cartoon story movies

Broken Sword – Game interface

Epic Citadel – Main page

Epic Citadel – Realtime moving environment with incredible detail

Epic Citadel – Outside the castle

Epic Citadel – Realtime flying birds, changing skies, you name it. It can be done…

Angry Birds – Main menu

Angry Birds – Game interface

Cut the rope – Main menu

Cut the Rope – Game interface

Glow Hockey HD – Game interface

Settlers – Game interface

Asphalt 5 HD – Game Interface

Asphalt 5 HD – Oops, I crashed my car…

Utilities

These are the more serious, work-related utilities. VNC, Telnet and RDP are free apps and are fairly decent for working on Linux/Unix environments and/or on Windows via VNC. iSSH was the only app that I really bought, hoping it would be my new all-round “PuTTY” app for the iPad, and it comes really close. The iPad still isn’t a multitasking environment, so working with these apps can be tiresome: iSSH, for example, can only open one VNC connection at a time, so switching between connections to get some work done isn’t that easy. iSSH is highly configurable and combines the three protocols mentioned in one decent app, so it was worth those 10 bucks.

Screenshots

iSSH – SSH, Telnet or VNC configuration menu

iSSH – Terminal look and Feel. The keyboard disappears if you have a Bluetooth keyboard

iWorks and alike

I bought the Apple “Pages”, “Keynote” and “Numbers” apps so I would have a supported way of dealing with documents and/or MS Office-related material. All in all they feel a little short on functionality, given that you have to pay 10 bucks for each of them.

Air Video is just a great app, and I have had a lot of fun for those, what was it, 2 or 3 bucks. Air Video can be used to stream movies and media directly to your iPad. It can stream with real-time conversion to an audio and video format your iPad can handle, or convert the file first and then stream/store it on your iPad as needed. Being able to stream content live without storing it also means that you don’t need the biggest, most expensive iPad to watch movies. It also has a (currently still beta) option that securely streams your movies from your PC or iMac to your iPad via the internet. I really like this one: it lets me watch movies in bed or somewhere in the house on the couch. The server-side software for Windows or iMac can be downloaded for free. The Air Video app itself is available for iPad, iPod Touch and iPhone and can be downloaded via the Apple store.

Screenshots

Air Video – Preview screen and part of the menu (of course full screen is also an option – default btw)

Air Video – Menu options where you can change, for example, the audio track or subtitles, if available (for mp4 or mkv movies, for instance)

Book Review: Oracle Tuning: The Definitive Reference Second Edition

November 7, 2010 (Updated November 9, 2010, November 28, 2010) 1,100 Page True-False Quiz, “Definitive” is Left as an Exercise for the Student http://www.amazon.com/Oracle-Tuning-Definitive-Reference-Focus/dp/097979... (Edit November 28, 2010: Amazon.com permitted me to submit a review for the book, but only for the first 85 or so pages.  Much more detail has been added to this [...]

Automatic Range Partition Dropping Tool

You have probably used partitions on your tables, especially period-based range partitions on date columns. In some applications older partitions become useless after a specified period of time and have to be dropped - but you have to drop them manually; there is no automatic drop utility. Oracle offers interval partitioning for the automatic creation of range partitions, but there is no counterpart for the drop operation. It can be a chore, especially when the partitions are defined on different time intervals on different tables.

I have developed a tool to make this process automatic. The concept is simple: a driver table stores the preferences, such as the partitioning time interval of each table I am interested in and how often it should be purged. The actual tool is a stored procedure that reads this driver table, determines whether the oldest partition satisfies the purge condition and, if so, drops it. You can put this procedure in a Scheduler job and run it every day: it will examine the partitions automatically, decide on dropping the relevant ones, and also update the global indexes.

Disclaimer: this is provided for educational purposes only; use at your own risk. The author assumes absolutely no responsibility for any issues caused by the use of this tool.

First, I will need the driver table:

/* ---------------------------------------------------------------------
Name    : partition_drop_schedule
Purpose : To store the retention period of partitions of
        : individual tables. This data is used by the partition_drop
        : procedure to automatically drop partitions.
----------------------------------------------------------------------*/
create table partition_drop_schedule
(
   OWNER              VARCHAR2(30),
   TABLE_NAME         VARCHAR2(30),
   PART_SCHEME        VARCHAR2(10),
   RETENTION_DURATION NUMBER(5),
   RETENTION_UNIT     VARCHAR2(10)
)
/
comment on table partition_drop_schedule is 'To store the retention period of partitions of individual tables. This data is used by the partition_drop procedure to automatically drop partitions.'
/
comment on column partition_drop_schedule.owner is 'the owner of the table'
/
comment on column partition_drop_schedule.table_name is 'the name of the table'
/
comment on column partition_drop_schedule.part_scheme is 'the partitioning time scheme (DAILY/WEEKLY/MONTHLY/QUARTERLY/HALFYEARLY/YEARLY) of the table'
/
comment on column partition_drop_schedule.RETENTION_DURATION is 'the duration of retention of the partition in number'
/
comment on column partition_drop_schedule.RETENTION_UNIT is 'the unit of duration of retention (DAY/WEEK/MONTH/QUARTER/HALFYEAR/YEAR) of the table'
/

Here are some sample inserts into the table:

insert into partition_drop_schedule values
(
'ARUP','TEST','MONTHLY',12,'MONTH'
)
/
commit
/
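
The PART_SCHEME and RETENTION_UNIT values must be combinations the procedure's CASE expression understands (see below). For instance, a hypothetical audit table on daily partitions, purged after 90 days, would be registered like this:

insert into partition_drop_schedule values
(
'ARUP','AUDIT_TRAIL','DAILY',90,'DAY'
)
/
commit
/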

Next comes the stored procedure:

CREATE OR REPLACE procedure SYS.partition_drop
as
   cursor sched_cur is
      select *
        from partition_drop_schedule;

   l_sched_rec          sched_cur%rowtype;
   l_cutoff_date        date;
   l_hv                 date;
   l_sql                varchar2(2000);
   l_dis_cons_sql       varchar2(2000);
   l_ena_cons_sql       varchar2(2000);
   l_drop_sql           varchar2(2000);
   errm                 varchar2(2000);
   l_out                varchar2(2000);
   l_full               varchar2(32000);
   l_recipients         varchar2(2000) := 'dba.ops@myowncompany.com';
   errc                 number(10);
   l_place              varchar2(2);
   l_cons_disabled      boolean := FALSE;

   type l_varchar2_30_ty is table of varchar2(30);
   l_child_owner        l_varchar2_30_ty;
   l_child_table_name   l_varchar2_30_ty;
   l_constraint_name    l_varchar2_30_ty;

   cursor cons_cur (
      p_owner      in varchar2,
      p_table_name in varchar2
   ) is
      select owner, table_name, constraint_name
        from dba_constraints
       where constraint_type = 'R'
         and status = 'ENABLED'
         and r_constraint_name = (
                select constraint_name
                  from dba_constraints
                 where owner = p_owner
                   and table_name = p_table_name
                   and constraint_type = 'P'
             );
begin
   -- if the email server is not set already, set it here
   execute immediate 'alter session set smtp_out_server = ''smtp.myowncompany.com''';
   l_out :=
      rpad('Owner',10)||
      rpad('Table',30)||
      rpad('Retention',10)||
      rpad('Cutoff Dt',10);
   dbms_output.put_line(l_out);
   l_full := l_out;
   l_out :=
      rpad('-',9,'-')||' '||
      rpad('-',29,'-')||' '||
      rpad('-',9,'-')||' '||
      rpad('-',9,'-');
   l_full := l_full||chr(12)||l_out;
   dbms_output.put_line(l_out);
   open sched_cur;
   loop
      fetch sched_cur into l_sched_rec;
      exit when sched_cur%notfound;
      l_out := rpad(l_sched_rec.owner,10);
      l_out := l_out||rpad(l_sched_rec.table_name,30);
      l_cutoff_date :=
         case l_sched_rec.part_scheme
            when 'MONTHLY' then
               case l_sched_rec.retention_unit
                  when 'MONTH' then
                     add_months(sysdate,-1*l_sched_rec.retention_duration)
                  else
                     null
               end
            when 'DAILY' then
               case l_sched_rec.retention_unit
                  when 'DAY' then
                     sysdate-l_sched_rec.retention_duration
                  else
                     null
               end
            when 'HALFYEARLY' then
               case l_sched_rec.retention_unit
                  when 'HALFYEAR' then
                     add_months(sysdate,-6*l_sched_rec.retention_duration)
                  when 'MONTH' then
                     add_months(sysdate,-1*l_sched_rec.retention_duration)
                  else
                     null
               end
            when 'YEARLY' then
               null
            when 'WEEKLY' then
               null
            when 'QUARTERLY' then
               case l_sched_rec.retention_unit
                  when 'QUARTER' then
                     add_months(sysdate,-3*l_sched_rec.retention_duration)
                  when 'MONTH' then
                     add_months(sysdate,-1*l_sched_rec.retention_duration)
                  else
                     null
               end
         end;
      l_out := l_out||rpad(l_sched_rec.retention_duration||' '||l_sched_rec.retention_unit,10);
      l_out := l_out||rpad(l_cutoff_date,10);
      dbms_output.put_line(l_out);
      l_full := l_full||chr(12)||l_out;
      for part_cur in
      (
         select partition_name, high_value
           from dba_tab_partitions
          where table_owner = l_sched_rec.owner
            and table_name = l_sched_rec.table_name
          order by partition_position
      ) loop
         -- dbms_output.put_line('l_sched_rec.owner='||l_sched_rec.owner);
         -- dbms_output.put_line('l_sched_rec.table_name='||l_sched_rec.table_name);
         if part_cur.high_value != 'MAXVALUE' then
            execute immediate 'select '||part_cur.high_value||
               ' from dual' into l_hv;
            if l_cutoff_date > l_hv then
               l_out := 'Partition '||
                  part_cur.partition_name||
                  ' with high value '||
                  l_hv||
                  ' to be dropped ...';
               dbms_output.put_line(l_out);
               l_full := l_full||chr(12)||l_out;
               l_drop_sql := 'alter table '||l_sched_rec.owner||'.'||
                  l_sched_rec.table_name||
                  ' drop partition '||part_cur.partition_name||
                  ' update global indexes';
               dbms_output.put_line('l_drop_sql='||l_drop_sql);
               begin
                  --
                  -- Disable the FKs
                  --
                  l_cons_disabled := FALSE;
                  open cons_cur (l_sched_rec.owner, l_sched_rec.table_name);
                  fetch cons_cur
                     bulk collect
                     into l_child_owner, l_child_table_name, l_constraint_name;
                  close cons_cur;
                  if nvl(l_child_owner.COUNT,0) > 0 then
                     l_cons_disabled := TRUE;
                     for ctr in l_child_owner.FIRST .. l_child_owner.LAST loop
                        dbms_output.put_line('l_child_owner='||l_child_owner(ctr));
                        dbms_output.put_line('l_child_table_name='||l_child_table_name(ctr));
                        dbms_output.put_line('l_constraint_name='||l_constraint_name(ctr));
                        l_dis_cons_sql := 'alter table '||l_child_owner(ctr)||'.'||
                           l_child_table_name(ctr)||' disable constraint '||
                           l_constraint_name(ctr);
                        dbms_output.put_line('l_dis_cons_sql='||l_dis_cons_sql);
                        l_sql := l_dis_cons_sql;
                        execute immediate l_sql;
                     end loop;
                  end if;
                  l_sql := l_drop_sql;
                  execute immediate l_sql;
                  --
                  -- Enable the FKs
                  --
                  if (l_cons_disabled) then
                     for ctr in l_child_owner.FIRST .. l_child_owner.LAST loop
                        l_dis_cons_sql := 'alter table '||l_child_owner(ctr)||'.'||
                           l_child_table_name(ctr)||' enable novalidate constraint '||
                           l_constraint_name(ctr);
                        dbms_output.put_line('l_dis_cons_sql='||l_dis_cons_sql);
                        l_sql := l_dis_cons_sql;
                        execute immediate l_sql;
                     end loop;
                  end if;
                  l_out := l_out||'DONE';
                  dbms_output.put_line(l_sql);
                  dbms_output.put_line(l_out);
                  l_full := l_full||'DONE';
                  l_full := l_full||chr(12)||l_sql;
               exception
                  when others then
                     errm := SQLERRM;
                     errc := SQLCODE;
                     l_out := l_out||'FAILED';
                     dbms_output.put_line(l_sql);
                     dbms_output.put_line('Place:'||l_place);
                     l_full := l_full||'FAILED';
                     l_full := l_full||chr(12)||chr(12)||l_sql;
               end;
            end if;
         end if;
      end loop;
   end loop;
   close sched_cur;
   dbms_output.put_line(l_full);
   utl_mail.send(
      'oracle@prodb1', l_recipients,
      null, null,
      'Succeeded: PROPRD Partition Drop',
      l_full
   );
exception
   when OTHERS then
      errm := SQLERRM;
      utl_mail.send(
         'oracle@prodb1', l_recipients,
         null, null,
         'Failed: PROPRD Partition Drop',
         l_full
      );
      raise_application_error(-20001,errm);
end;
/
show error

Now that the procedure is complete, you should run a quick test to make sure it works as expected. Update the record in the PARTITION_DROP_SCHEDULE table to reflect a 1-month retention and execute the procedure. It should drop the partition and maintain the global indexes. You can confirm the actions by:

(1) checking DBA_TAB_PARTITIONS to see that the partition is indeed gone
(2) checking DBA_INDEXES to see that the global index is still in the USABLE state
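
A minimal sketch of those two checks, using the owner and table from the sample insert (substitute your own names):

select partition_name, high_value
from dba_tab_partitions
where table_owner = 'ARUP'
and table_name = 'TEST'
order by partition_position
/

select index_name, status
from dba_indexes
where table_owner = 'ARUP'
and table_name = 'TEST'
/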

If all looks well, put it in a DBMS_SCHEDULER job and set it to run every day at a time considered off-peak, e.g. 3 AM.
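
Here is a minimal sketch of such a job; the job name and the exact calendar string are illustrative, not part of the tool:

begin
   dbms_scheduler.create_job (
      job_name        => 'PARTITION_DROP_JOB',   -- illustrative name
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'SYS.PARTITION_DROP',
      start_date      => systimestamp,
      repeat_interval => 'FREQ=DAILY;BYHOUR=3',  -- every day at 3 AM
      enabled         => TRUE
   );
end;
/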

Important Considerations

  1. This is not intended for mass partition drops. For instance, if you are planning to drop 100 partitions of a table, do not use it. The tool updates the global indexes for each partition drop - a process that is highly inefficient for large numbers of partitions. You are much better off making the global indexes unusable, dropping all the partitions and then rebuilding the global indexes; that will be much faster (see the sketch after this list).
  2. At the time of the drop, the session takes an exclusive lock on the table for a fraction of a second, so you will see a slight performance degradation for that instant only.
  3. Never use this tool without adequate testing. A partition drop is a DDL operation - an irreversible process.
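
For reference, the bulk alternative mentioned in point 1 looks roughly like this; the index and partition names are made up for illustration:

alter index arup.test_gidx unusable
/
alter table arup.test drop partition p_2010_01
/
-- ... drop the remaining obsolete partitions, without the UPDATE GLOBAL INDEXES clause ...
alter index arup.test_gidx rebuild
/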

As always, I would appreciate comments and suggestions from you on this and any other blog entries. Thanks for reading and happy tooling.

Public Appearances - Past and Future

The Michigan OakTable Symposium (MOTS) in September was a great event with so many outstanding speakers. It was a pleasure to spend time with my fellow Oakies and to meet many of them in person for the first time. My own two presentations about the Cost Based Optimizer ("Understanding System Statistics" and "Everything you wanted to know about FIRST_ROWS_N") were received pretty well, I think.

My visit to SIOUG 2010 was also very nice, in particular meeting again with Christian Antognini and Joze Senegacnik.

I'll be presenting at DOAG 2010 in mid-November in Nuremberg. This time my presentation will be given in German, which is interesting since I mostly present in English, even here in Germany, because there always seem to be some attendees who don't speak German...

My presentation is scheduled for 4 pm on Wednesday, 17th November, in the room "St. Petersburg", and is called "Advanced Oracle Troubleshooting - Live Session". In this session I'll walk through some common and less common troubleshooting scenarios, in particular some where the usually helpful wait interface / session statistics / ASH / ADDM approach tends to be useless. It is mostly a live demonstration, along with a few slides describing the issue and the techniques applied.

I'll also be presenting at UKOUG 2010, end of November / beginning of December, in Birmingham. I'll again be doing my "Understanding the different modes of System Statistics" and will also give the "Advanced Oracle Troubleshooting - Live Session" presentation; this time, however, it will be split into Part 1 and Part 2, each taking 60 minutes, which allows me to show even more troubleshooting cases and also spend more time on the "basic" troubleshooting skills, which are probably more relevant in everyday life than the interesting but rare "advanced" stuff.

I’ll be speaking at HOTSOS 2011!

Hotsos is an annual conference devoted to Oracle system performance, and the people presenting there are really passionate about their research and highly specialized in their subject areas. Kinda geeky, but hey, performance is never boring… and for me there is still a lot to learn in each of the areas of performance, and these are as follows:

Build your own stretch cluster part V

This post is about the installation of Grid Infrastructure, and this is where it really gets exciting: the third NFS voting disk is going to be presented, and I am going to show you how simple it is to add it to the disk group chosen for OCR and voting disks.

Let’s start with the installation of Grid Infrastructure. This is really simple, and I won’t go into too much detail. Start by downloading the required file from MOS; a simple search for patch 10098816 should bring you to the download page for 11.2.0.2 for Linux - just make sure you select the 64-bit version. The file we need just now is called p10098816_112020_Linux-x86-64_3of7.zip. The file names don’t necessarily relate to their contents; the readme helps in finding out which piece of the puzzle provides which functionality.

I alluded to my software distribution method in one of the earlier posts, and here is the detail. My dom0 exports the /m directory to the 192.168.99.0/24 network, the one accessible to all my domUs. This really simplifies software deployments.

So starting off, the file has been unzipped:

openSUSE-112-64-minimal:/m/download/db11.2/11.2.0.2 # unzip -q p10098816_112020_Linux-x86-64_3of7.zip

This creates the subdirectory “grid”. Switch back to edcnode1 and log in as oracle. As explained earlier, I won’t use different accounts for Grid Infrastructure and the RDBMS in this example.

If you have not already done so, mount the /m directory on the domU (which requires root privileges). Move to the newly unzipped “grid” directory under your mount point and begin to set up user equivalence. On edcnode1 and edcnode2, create RSA and DSA keys for SSH:

[oracle@edcnode1 ~]$ ssh-keygen -t rsa

Any questions can be answered with the return key; it’s important to leave the passphrase empty. Repeat the call to ssh-keygen with the argument “-t dsa”. Navigate to ~/.ssh and create the authorized_keys file as follows:

[oracle@edcnode1 .ssh]$ cat *.pub >> authorized_keys

Then copy the authorized_keys file to edcnode2 and add the public keys:

[oracle@edcnode1 .ssh]$ scp authorized_keys oracle@edcnode2:`pwd`
[oracle@edcnode1 .ssh]$ ssh oracle@edcnode2

If you are prompted, add the host to the ~/.ssh/known_hosts file by typing in “yes”.

[oracle@edcnode2 .ssh]$ cat *.pub >> authorized_keys

Change the permissions on the authorized_keys file to 0400 on both hosts; otherwise it won’t be considered when trying to log in. With all of this done, you can add all the unknown hosts to each node’s known_hosts file. The easiest way is a for loop:

[oracle@edcnode1 ~]$ for i in edcnode1 edcnode2 edcnode1-priv edcnode2-priv; do ssh $i hostname; done

Run this twice on each node, acknowledging the question whether the new address should be added. Important: ensure that there is no banner (/etc/motd, .profile, .bash_profile etc.) writing to stdout or stderr, or you are going to see strange error messages about user equivalence not being set up correctly.

I hear you say: but 11.2 can create user equivalence in OUI now. This is of course correct, but I wanted to run cluvfy next, which requires a working setup.

Cluster Verification

It is good practice to run a check to see if the prerequisites for the Grid Infrastructure installation are met, and keep the output. Change to the NFS mount where the grid directory is exported, and execute runcluvfy.sh as in this example:

[oracle@edcnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n edcnode1,edcnode2 -verbose -fixup 2>&1 | tee /tmp/preCRS.tx

The nice thing is that you can run the fixup script now to fix kernel parameter settings:

[root@edcnode2 ~]# /tmp/CVU_11.2.0.2.0_oracle/runfixup.sh
/usr/bin/id
Response file being used is :/tmp/CVU_11.2.0.2.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.2.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.2.0_oracle/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576

Repeat this on the second node, edcnode2. Obviously you should fix any other problem cluvfy reports before proceeding.

In the previous post I created the /u01 mount point. Double-check that /u01 is actually mounted; otherwise you’d end up writing to your root_vg’s root_lv, not an ideal situation.

You are now ready to start the installer: type in ./runInstaller to begin the installation.

Grid Installation

This is rather mundane, and instead of providing screenshots, I opted for a description of the steps to execute in the OUI session.

  • Screen 01: Skip software updates (I don’t have an Internet connection on my lab)
  • Screen 02: Install and configure Grid Infrastructure for a cluster
  • Screen 03: Advanced Installation
  • Screen 04: Keep defaults or add additional languages
  • Screen 05: Cluster Name: edc, SCAN name edc-scan, SCAN port: 1521, do not configure GNS
  • Screen 06: Ensure that both hosts are listed in this screen. Add/edit as appropriate. Hostnames are edcnode{1,2}.localdomain, VIPs are to be edcnode{1,2}-vip.localdomain. Enter the oracle user’s password and click on next
  • Screen 07: Assign eth0 to public, eth1 to private and eth2 to “do not use”.
  • Screen 08: Select ASM
  • Screen 09: disk group name: OCRVOTE with NORMAL redundancy. Tick the boxes for “ORCL:OCR01FILER01”, “ORCL:OCR01FILER02” and “ORCL:OCR02FILER01”
  • Screen 10: Choose suitable passwords for SYS and ASMSNMP
  • Screen 11: Don’t use IPMI
  • Screen 12: Assign DBA to OSDBA, OSOPER and OSASM. Again, in the real world you should think about role separation and assign different groups
  • Screen 13: ORACLE_BASE: /u01/app/oracle, Software location: /u01/app/11.2.0/grid
  • Screen 14: Oracle inventory: /u01/app/oraInventory
  • Screen 15: Ignore all - there should only be references to swap, cvuqdisk, ASM device checks and NTP. If you have additional warnings, fix them first!
  • Screen 16: Click on install!

The usual installation will now take place. At the end, run the root.sh script on edcnode1 and after it completes, on edcnode2. The output is included here for completeness:

[root@edcnode1 u01]# /u01/app/11.2.0/grid/root.sh 2>&1 | tee /tmp/root.sh.out
Running Oracle 11g root script...

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
 Copying dbhome to /usr/local/bin ...
 Copying oraenv to /usr/local/bin ...
 Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
 root wallet
 root wallet cert
 root cert export
 peer wallet
 profile reader wallet
 pa wallet
 peer wallet keys
 pa wallet keys
 peer cert request
 pa cert request
 peer cert
 pa cert
 peer root cert TP
 profile reader root cert TP
 pa root cert TP
 peer pa cert TP
 pa peer cert TP
 profile reader pa cert TP
 profile reader peer cert TP
 peer user cert
 pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'edcnode1'
CRS-2676: Start of 'ora.mdnsd' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'edcnode1'
CRS-2676: Start of 'ora.gpnpd' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'edcnode1'
CRS-2672: Attempting to start 'ora.gipcd' on 'edcnode1'
CRS-2676: Start of 'ora.gipcd' on 'edcnode1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'edcnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'edcnode1'
CRS-2676: Start of 'ora.diskmon' on 'edcnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'edcnode1' succeeded

ASM created and started successfully.

Disk Group OCRVOTE created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 38f2caf7530c4f67bfe23bb170ed2bfe.
Successful addition of voting disk 9aee80ad14044f22bf6211b81fe6363e.
Successful addition of voting disk 29fde7c3919b4fd6bf626caf4777edaa.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'edcnode1'
CRS-2676: Start of 'ora.asm' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTE.dg' on 'edcnode1'
CRS-2676: Start of 'ora.OCRVOTE.dg' on 'edcnode1' succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start 'ora.registry.acfs' on 'edcnode1'
CRS-2676: Start of 'ora.registry.acfs' on 'edcnode1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@edcnode2 ~]# /u01/app/11.2.0/grid/root.sh 2>&1 | tee /tmp/rootsh.out
Running Oracle 11g root script...

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
 Copying dbhome to /usr/local/bin ...
 Copying oraenv to /usr/local/bin ...
 Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node edcnode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@edcnode2 ~]#

Congratulations! You have a working setup! Check if everything is ok:

[root@edcnode2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.OCRVOTE.dg
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.asm
 ONLINE  ONLINE       edcnode1                 Started
 ONLINE  ONLINE       edcnode2
ora.gsd
 OFFLINE OFFLINE      edcnode1
 OFFLINE OFFLINE      edcnode2
ora.net1.network
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.ons
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.registry.acfs
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       edcnode2
ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       edcnode1
ora.LISTENER_SCAN3.lsnr
 1        ONLINE  ONLINE       edcnode1
ora.cvu
 1        ONLINE  ONLINE       edcnode1
ora.edcnode1.vip
 1        ONLINE  ONLINE       edcnode1
ora.edcnode2.vip
 1        ONLINE  ONLINE       edcnode2
ora.oc4j
 1        ONLINE  ONLINE       edcnode1
ora.scan1.vip
 1        ONLINE  ONLINE       edcnode2
ora.scan2.vip
 1        ONLINE  ONLINE       edcnode1
ora.scan3.vip
 1        ONLINE  ONLINE       edcnode1
[root@edcnode2 ~]#

[root@edcnode1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).

Adding the NFS voting disk

It’s about time to deal with this subject. If you have not done so already, start the domU “filer03”. Log in as openfiler and ensure that the NFS server is started; on the services tab, click on enable next to the NFS server if needed. Next navigate to the shares tab, where you should find the volume group and logical volume created earlier. The volume group I created is called “ocrvotenfs_vg”, and it has one logical volume, “nfsvol_lv”. Click on the name of the LV to create a new share. I named the new share “ocrvote” - enter this in the popup window and click on “create sub folder”.

The new share should now appear underneath nfsvol_lv. Proceed by clicking on “ocrvote” to set the share’s properties. Before you get to enter these, click on “make share”. Scroll down to the host access configuration section in the following screen. In this section you can configure all sorts of technologies - SMB, NFS, WebDAV, FTP and RSYNC. For this example, everything but NFS should be set to “NO”.

For NFS, the story is different: ensure you set the radio button to “RW” for both hosts, then click on Edit for each machine. This is important! The anonymous UID and GID must match the grid owner’s uid and gid. In my scenario I entered “500” for both; you can check your settings using the id command as oracle, which prints the UID and GID plus other information.
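
For example (the output is illustrative; what matters is that the uid and gid match what you enter on the filer):

[oracle@edcnode1 ~]$ id
uid=500(oracle) gid=500(oracle) groups=500(oracle)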

The UID/GID mapping then has to be set to all_squash, the IO mode to sync, and the write delay to wdelay. Leave the default for “requesting origin port”, which was set to “secure < 1024” in my configuration.

I decided to create /ocrvote on both nodes to mount the NFS export:

[root@edcnode2 ~]# mkdir /ocrvote

Edit the /etc/fstab file to make the mount persistent across reboots. I added this line to the file on both nodes:

192.168.101.52:/mnt/ocrvotenfs_vg/nfsvol_lv/ocrvote /ocrvote nfs rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,nfsvers=3,timeo=600,addr=192.168.101.51

The “addr” option instructs Linux to use the storage network to mount the share. Now you are ready to mount the export on all nodes, using the “mount /ocrvote” command.
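
A quick sanity check after mounting might look like this (illustrative):

[root@edcnode1 ~]# mount /ocrvote
[root@edcnode1 ~]# df -h /ocrvote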

I changed the export on the filer to the uid/gid combination of the oracle account (or, on an installation with a separate grid software owner, its uid/gid combination):

[root@filer03 ~]# cd /mnt/ocrvotenfs_vg/nfsvol_lv/
[root@filer03 nfsvol_lv]# ls -l
total 44
-rw-------  1 root    root     6144 Sep 24 15:38 aquota.group
-rw-------  1 root    root     6144 Sep 24 15:38 aquota.user
drwxrwxrwx  2 root    root     4096 Sep 24 15:26 homes
drwx------  2 root    root    16384 Sep 24 15:26 lost+found
drwxrwsrwx  2 ofguest ofguest  4096 Sep 24 15:31 ocrvote
-rw-r--r--  1 root    root      974 Sep 24 15:45 ocrvote.info.xml
[root@filer03 nfsvol_lv]# chown 500:500 ocrvote
[root@filer03 nfsvol_lv]# ls -l
total 44
-rw-------  1 root root  7168 Sep 24 16:09 aquota.group
-rw-------  1 root root  7168 Sep 24 16:09 aquota.user
drwxrwxrwx  2 root root  4096 Sep 24 15:26 homes
drwx------  2 root root 16384 Sep 24 15:26 lost+found
drwxrwsrwx  2  500  500  4096 Sep 24 15:31 ocrvote
-rw-r--r--  1 root root   974 Sep 24 15:45 ocrvote.info.xml
[root@filer03 nfsvol_lv]#

ASM requires zero-padded files as “disks”, so create one:

[root@filer03 nfsvol_lv]# dd if=/dev/zero of=ocrvote/nfsvotedisk01 bs=1G count=2
[root@filer03 nfsvol_lv]# chown 500:500 ocrvote/nfsvotedisk01

Add the third voting disk

Almost there! Before performing any change to the cluster configuration it is always a good idea to take a backup.

[root@edcnode1 ~]# ocrconfig -manualbackup

edcnode1     2010/09/24 17:11:51     /u01/app/11.2.0/grid/cdata/edc/backup_20100924_171151.ocr

You only need to do this on one node. Recall that the current state is:

[oracle@edcnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).

ASM sees it the same way:

SQL> select mount_status,header_status, name,failgroup,library
 2  from v$asm_disk
 3  /

MOUNT_S HEADER_STATU NAME                           FAILGROUP       LIBRARY
------- ------------ ------------------------------ --------------- ------------------------------------------------------------
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR01FILER01                   OCR01FILER01    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR01FILER02                   OCR01FILER02    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR02FILER01                   OCR02FILER01    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)

7 rows selected.

Now here’s the idea: you add the NFS location to the ASM disk string in addition to “ORCL:*” and all is well. But that didn’t work:

SQL> show parameter disk  

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string
asm_diskstring                       string      ORCL:*
SQL> 

SQL> alter system set asm_diskstring = 'ORCL:*, /ocrvote/nfsvotedisk01' scope=memory sid='*';
alter system set asm_diskstring = 'ORCL:*, /ocrvote/nfsvotedisk01' scope=memory sid='*'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path 'ORCL:OCR01FILER01' is not in the discovery set

Regardless of what I tried, the system complained - possibly because I passed both patterns inside a single quoted string; note that asmca later set them as two separate values ('ORCL:*','/ocrvote/nfsvotedisk01'). Grudgingly I used the GUI - asmca.

After starting asmca, click on Disk Groups, then select disk group “OCRVOTE” and right-click to “add disks”. The trick is to click on “change discovery path”: enter “ORCL:*, /ocrvote/nfsvotedisk01” (without the quotes) in the dialog field and close it. Strangely, the NFS disk now appears. Tick two boxes: the one before the disk path, and the quorum box. A click on the OK button starts the magic, and you should be presented with a success message. The ASM instance reports a little more:

ALTER SYSTEM SET asm_diskstring='ORCL:*','/ocrvote/nfsvotedisk01' SCOPE=BOTH SID='*';
2010-09-29 10:54:52.557000 +01:00
SQL> ALTER DISKGROUP OCRVOTE ADD  QUORUM DISK '/ocrvote/nfsvotedisk01' SIZE 500M /* ASMCA */
NOTE: Assigning number (1,3) to disk (/ocrvote/nfsvotedisk01)
NOTE: requesting all-instance membership refresh for group=1
2010-09-29 10:54:54.445000 +01:00
NOTE: initializing header on grp 1 disk OCRVOTE_0003
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0xd032bc02 (OCRVOTE) on local instance.
2010-09-29 10:54:57.154000 +01:00
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0xd032bc02 (OCRVOTE) on local instance.
2010-09-29 10:55:00.718000 +01:00
GMON updating for reconfiguration, group 1 at 5 for pid 27, osid 15253
NOTE: group 1 PST updated.
NOTE: initiating PST update: grp = 1
GMON updating group 1 at 6 for pid 27, osid 15253
2010-09-29 10:55:02.896000 +01:00
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:05.285000 +01:00
GMON querying group 1 at 7 for pid 18, osid 4247
NOTE: cache opening disk 3 of grp 1: OCRVOTE_0003 path:/ocrvote/nfsvotedisk01
GMON querying group 1 at 8 for pid 18, osid 4247
SUCCESS: refreshed membership for 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:06.528000 +01:00
SUCCESS: ALTER DISKGROUP OCRVOTE ADD  QUORUM DISK '/ocrvote/nfsvotedisk01' SIZE 500M /* ASMCA */
2010-09-29 10:55:08.656000 +01:00
NOTE: Attempting voting file refresh on diskgroup OCRVOTE
NOTE: Voting file relocation is required in diskgroup OCRVOTE
NOTE: Attempting voting file relocation on diskgroup OCRVOTE
NOTE: voting file allocation on grp 1 disk OCRVOTE_0003
2010-09-29 10:55:10.047000 +01:00
NOTE: voting file deletion on grp 1 disk OCR02FILER01
NOTE: starting rebalance of group 1/0xd032bc02 (OCRVOTE) at power 1
Starting background process ARB0
ARB0 started with pid=29, OS id=15446
NOTE: assigning ARB0 to group 1/0xd032bc02 (OCRVOTE) with 1 parallel I/O
2010-09-29 10:55:13.178000 +01:00
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
2010-09-29 10:55:15.533000 +01:00
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xd032bc02 (OCRVOTE)
GMON updating for reconfiguration, group 1 at 9 for pid 31, osid 15451
NOTE: group 1 PST updated.
2010-09-29 10:55:17.907000 +01:00
NOTE: membership refresh pending for group 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:20.481000 +01:00
GMON querying group 1 at 10 for pid 18, osid 4247
SUCCESS: refreshed membership for 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:23.490000 +01:00
NOTE: Attempting voting file refresh on diskgroup OCRVOTE
NOTE: Voting file relocation is required in diskgroup OCRVOTE
NOTE: Attempting voting file relocation on diskgroup OCRVOTE

Superb! But did it kick out the correct disk? Yes it did - you now see OCR01FILER01 and OCR01FILER02 plus the NFS disk:

[oracle@edcnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   6107050ad9ba4fd1bfebdf3a029c48be (/ocrvote/nfsvotedisk01) [OCRVOTE]
Located 3 voting disk(s).

Preferred Mirror Read

One of the cool features new in 11.1 allows administrators of a stretch RAC system to instruct ASM to read mirrored extents rather than primary extents. This can speed up data access in cases where data would otherwise have been read from the remote array. Setting this parameter is crucial to many implementations. In preparation for the RDBMS installation (to be detailed in the next post), I created a disk group consisting of 4 ASM disks, two from each filer. The syntax for the disk group creation is as follows:

SQL> create diskgroup data normal redundancy
  2  failgroup sitea disk 'ORCL:ASM01FILER01','ORCL:ASM01FILER02'
  3* failgroup siteb disk 'ORCL:ASM02FILER01','ORCL:ASM02FILER02'
SQL> /

Diskgroup created.

As you can see, all disks for sitea come from filer01 and form one failure group. The other disks, originating from filer02, form the second failure group.

You can see the result in v$asm_disk, as this example shows:

SQL> select name,failgroup from v$asm_disk;

NAME                           FAILGROUP
------------------------------ ------------------------------
ASM01FILER01                   SITEA
ASM01FILER02                   SITEA
ASM02FILER01                   SITEB
ASM02FILER02                   SITEB
OCR01FILER01                   OCR01FILER01
OCR01FILER02                   OCR01FILER02
OCR02FILER01                   OCR02FILER01
OCRVOTE_0003                   OCRVOTE_0003

8 rows selected.

Now all that remains to be done is to instruct each ASM instance to read from the local storage where possible. This is done by setting an instance-specific initialization parameter. I used the following syntax:

SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEB' scope=both sid='+ASM2';

System altered.

SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEA' scope=both sid='+ASM1';

System altered.
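
You can verify the effect in v$asm_disk, which exposes a PREFERRED_READ (Y/N) column; a quick, illustrative check across both instances:

SQL> select inst_id, name, failgroup, preferred_read
  2  from gv$asm_disk
  3  where failgroup in ('SITEA','SITEB');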

So I’m all set for the next step, the installation of the RDBMS software. But that’s for another post…

The Losers, Salt and The A-Team…

I haven’t got to the cinema much recently, but having been on a lot of planes, I got a chance to catch up on a few films I missed at the cinema.

The Losers – A pretty good action flick. There’s some cheese and a liberal dose of action cliché, but all in all a very enjoyable film. While I was watching it I couldn’t help thinking this is what The Expendables should have been like.

Salt – It’s an OK sort of film. Not astoundingly good, but not astoundingly bad either. I wasn’t digging Angelina Jolie as a blonde, but then I guess she probably wouldn’t like the look of me either. :)

The A-Team – I remember seeing the opening credits, and I think I may have seen the final credits, but I can’t remember a thing about the film. I’m guessing I slept through it. Not sure if the sleep was caused by the quality of the film or the long flight. :)

I must admit that I watched the last scene of The Last Airbender about 5 times. I love it when Aang does his thing: the tattoos on his head start to glow and he raises a huge wall of water. I could consider buying the DVD just for this scene. I must buy the cartoon series Avatar: The Last Airbender.

Cheers

Tim…

How Big is a Person?

How big are you in the digital world?

By this, I mean how much space do you (as in, a random person) take up in a database? In a reasonably well-designed OLTP-type database, a person takes up 4K. OK, around 4K.

If your database holds information about people and something about them, then you will have about 4K of combined table and index data per person. So if your database holds 100,000 customers, then it is between 200MB and 800MB, but probably close to 400MB. There are a couple of situations I know of where I am very wrong, but I’ll come to that.

How do I know this? It is an accident of the projects and places I have worked at for 20 years, and of the fact that I became strangely curious about this. My first job was with the NHS, and back then disk was very, very expensive, so knowing how much you needed was important. Back then, it was pretty much 1.5K per patient. This covered personal details (names, addresses, personal characteristics), GP information, stays at hospitals, visits to outpatient clinics etc. It also included the “reference” data, i.e. the information about consultants, wards and departments, lookups etc. If you included the module for lab tests, it went up to just over 2K. You can probably tell that doing this sizing was a job I handled. This was not Oracle; this was a database called MUMPS, and we were pretty efficient in how we held that data.

When I moved to work on Oracle-based hospital systems, I got the job of doing the table sizings again, probably because I had done the data sizing in my previous job and partly because I was junior and lacked any real talent. A laborious job it was too. I did it very conscientiously, getting average lengths for columns, taking into account the length bytes, row overhead, block overhead, indexes etc. When we had built the database I added up the sizes of all the tables and indexes, divided by the number of patients and… it was 2K. This was when I got curious. Had I wasted my time doing the detailed sizings?

In another role I once again got the database sizing job, only this time I wrote a little app for it. This company did utilities systems: water, gas, electricity. My app took into account everything I could think of in respect of data sizing, from the fact that the last extent would on average be 50% empty, down to the tablespace header. It was great. And pointless. Sum up all the tables and indexes on one of the live systems, divide by the number of customers, and it came out at 2-3K per customer - across a lot of systems. It had gone up a little, due to more data being held in your average computer system.

I’ve worked on a few more person-based systems since, and for years I could not help myself: I would check the size of the data against the number of people. The figure is remarkably consistent. It is slowly going up because we hold more and more data, mostly because it is easier to suck up now that all the feeds are electronic and there is no real cost in taking in that data and holding it. Going back to the hospital systems example: back in 1990 you would hold the fact that a lab test had been requested, plus the key results information, like the various cell counts for a blood test, because sometimes you had to enter the results manually. Now the test results come off another computer and you get everything.

I said there were exceptions. There are three main ones:

  • You are holding a very large number of transaction records per person. Telephony systems are among the worst examples of this. Banking, credit cards and other utility systems match the 4K rule.
  • You hold images or other “unstructured” chunks of data for people. In hospital systems this covers x-rays, ultrasound scans etc. But if you drop them out of the equation (and this is easy, as they are often held in separate sub-systems) it remains a few K per person. CVs push it up, as they are often in that wonderfully bloaty Word format.
  • You are holding mostly pointers to another system, in which case it can be a lot less than 4K per person. I had to size a system recently and arrogantly said “4K per person”. It turned out to be less than 1K, but then this system turned out to hold most person data in one key data store, and “my” system only held transaction information. I bet that data store was about 4K per person.

I have to confess that I have not done this little trick of adding up the sizes of all the tables and indexes and dividing by the number of people so often over the last couple of years, but the last few times I checked it was still 3-4K, though a couple of times I had to ignore a table or two holding unstructured data.
{The massive explosion in the size of databases is at least partly down to holding pictures - scanned forms, photos of products, etc. - but when it comes down to the core part of the app for handling people, it seems to have stayed at 4K. The other two main aspects driving up database size seem to me to be the move from regional companies and IT systems to national and international ones, and the fact that people collect and keep each and every piece of information, whether it is any good for anything or not.}
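
If you are curious to try this on one of your own systems, the check is essentially a one-liner against dba_segments; the schema name and the people table in the subquery are placeholders for your own:

select round(sum(bytes)/1024 / (select count(*) from app_owner.customers)) as kb_per_person
from dba_segments
where owner = 'APP_OWNER';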

I’d love to know if your person-based systems come out at around 4K per person, but I doubt many of you would be curious enough to check - I think my affliction is a rare one.

The right tool for the job…

I wrote a post yesterday extolling the virtues of UltraEdit (my favourite editor). Yesterday evening I was at a friend’s house; he was struggling to clean up some data prior to loading it into a system. I showed him the macro functions and column-based editing in UltraEdit, which went some way towards easing his pain. Unfortunately the data was so random that it was not possible to fully automate the clean-up process, but that’s life.

That got me thinking about a blog post I wrote 5 years ago where I referenced a company called Net 2000 Ltd and said, “The thing I like about this company is that they have specific tools for specific jobs. They’re not throwing out the one-size-fits-all type of stuff that everyone else is doing.”

That in turn got me thinking about the guys from Red Gate, who I met at Open World 2010. They’ve been writing SQL Server tools for ages, but have now started moving into the Oracle space. They did a demo of their Schema Compare for Oracle tool, as well as a Data Compare tool that was still in beta. Both tools looked great and did what they claimed to do and did it well. That’s exactly what I like.

So what am I rambling on about? Well, I’m just repeatedly reminded that I like tools that perform a specific task well. I find myself avoiding products with too much bloat and feature creep in favour of simpler alternatives. With the continuing rise of portable devices, apps are getting smaller and more focused again, which I think is a good thing. Hopefully this will have a knock-on effect and maybe we will get back to something a little more sane than 40,000 tabs on a page.

Notice I’ve conveniently forgiven Oracle for their continual feature creep and bloat in the database. It’s my blog and I’ll be as inconsistent as I like. :)

Cheers

Tim…

Applying Exadata Bundle Patch 6 (BP6)

Well, November came around quickly - when I started blogging I promised myself that I would at least try to share my Exadata findings once a month. So it is time to share my experience and findings from applying Bundle Patch 6. If you are new to Exadata patching, you should hopefully find some important points below. The patching took place on a ¼ rack (2 compute nodes). The high-level approach for applying BP6 is:
1. Apply patch 10114705 - Database Bundle Patch 6 for Exadata first