Exadata

Do you like to watch TV? Watch Enkitec.TV from now on! :-)

I’m happy to announce Enkitec TV, a video channel/aggregator of some of the coolest stuff we do here at Enkitec. I have uploaded some of my videos there, including the previously unpublished Oracle parameters infrastructure hacking session, and Kerry’s and Cary’s E4 Exadata interview is available there as well!

E4 Wrap Up – Part II – Cary Millsap Interview

The 2012 Enkitec Extreme Exadata Expo is behind us now. Our video guy (Bob) has been working diligently for the last week or so to get the presentations edited. They will be made available to the attendees shortly. We have already posted a video of the opening session: me interviewing Cary Millsap about his impressions of Exadata. One of the things I have found most interesting about Exadata is how it makes very experienced Oracle performance guys re-think things. It’s fun watching them being exposed to Exadata in an intimate way (not just PowerPoint). The reactions are interesting. There is usually a desire to try to break it, although it’s generally harder than it appears, at least initially.

Exadata vs. IBM P-Series

Earlier this year I participated in a Total Cost of Ownership (TCO) study that was run by a company called the FactPoint Group. The goal was to compare the cost of purchasing and running Oracle on Exadata vs. the cost of purchasing and running Oracle on IBM P-Series hardware. The findings are published here:

Exadata vs. IBM P-Series

Compression Units – 5

The Enkitec Extreme Exadata Expo (E4) event is over, but I still have plenty to say about the technology. The event was a great success, with plenty of interesting speakers and presentations. As I said in a previous note, I was particularly keen to hear Frits Hoogland’s comments on Exadata and OLTP, Richard Foote on indexes, and Maria Colgan’s comments on how Oracle is making changes to the optimizer to understand Exadata a little better.

All three presentations were interesting – but Maria’s was possibly the most important (and entertaining). In particular she told us about two patches for 11.2.0.3, one current and one that is yet to be released (unfortunately I forgot to take note of the patch numbers).

E4 Wrap Up – Part I – OLTP Bashing

Well, the Enkitec Extreme Exadata Expo (E4) is now officially over. I thoroughly enjoyed the event. I personally think Richard Foote stole the show with his clear and concise explanation of why a full table scan is not a straightforward operation on Exadata, and why that makes it so difficult for the optimizer to properly cost it. But Maria Colgan came out with a fiery talk on the optimizer that gave him a good run for his money (she actually had the highest average rating from the attendees who filled out evaluation forms, by the way – so congratulations Maria!). Of course there were many excellent presentations from many very well known Oracle practitioners. Overall it was an excellent conference (in my humble opinion), due in large part to the high quality of the speakers and the effort they put into the presentations.

Using LVM snapshots for backup and restore of filesystems

This post really is about using LVM (Logical Volume Manager, an abstraction layer for disk devices) snapshots. A snapshot is a frozen image of a logical volume, which in practice simply means a frozen image of a filesystem. It’s not really “frozen”, though: LVM2 snapshots are read/write by default. But you can freeze a filesystem in time with an LVM snapshot.

The background of this really is Exadata (compute nodes) and upgrading, but there is nothing Exadata-specific here, so don’t let that bother you. The idea of using LVM snapshots simply popped up when dealing with Exadata compute nodes and upgrades.

First of all: LVM is under active development, which means different Linux versions have different LVM options available to them. I am using the Exadata X2 Linux version: RHEL/OL 5u7 x86_64. OL6 probably has more (and more advanced) LVM features, but with the X2, OL5u7 is what I have to use, so the steps in this blogpost are done with this version. Any comments are welcome!
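To make the idea concrete, here is a minimal sketch (driven from Python purely for illustration) of taking a snapshot of the root logical volume before an upgrade and mounting it read-only for inspection or backup. The names and sizes are assumptions rather than anything from the post: VGExaDb, LVDbSys1, root_snap and the 5G snapshot size are examples only, so check the output of lvs/vgs on your own system first.

```python
#!/usr/bin/env python
# Minimal sketch: snapshot the root logical volume before an upgrade so the
# filesystem can be inspected (or backed up) afterwards. All names below
# (VGExaDb, LVDbSys1, root_snap, the 5G size) are examples only.
import subprocess

def run(cmd):
    """Print and execute a command, raising an error if it fails."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Create a 5G copy-on-write snapshot of the root logical volume.
run(["lvcreate", "--size", "5G", "--snapshot",
     "--name", "root_snap", "/dev/VGExaDb/LVDbSys1"])

# Mount the snapshot read-only so it can be browsed or backed up with tar/rsync.
run(["mkdir", "-p", "/mnt/root_snap"])
run(["mount", "-o", "ro", "/dev/VGExaDb/root_snap", "/mnt/root_snap"])

# Afterwards: umount /mnt/root_snap and lvremove -f /dev/VGExaDb/root_snap,
# or (if your lvm2 version supports it) lvconvert --merge /dev/VGExaDb/root_snap
# to roll the origin volume back to the snapshot.
```

Whether the merge/rollback option is available at all depends on the lvm2 version the distribution ships, which is exactly the kind of difference between OL5 and OL6 mentioned above.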

Friday Philosophy – I Am An Exadata Expert

(Can I feel the angry fuming and dagger looks coming from certain quarters now?)

I am an Exadata Expert.

I must be! – I have logged onto an Exadata quarter rack and selected sysdate from Dual.

The pity is that, judging from some of the email threads and conversations I have had with people over the last 12 months, this is more real-world experience than that of some people I have heard of who are offering consultancy services. It’s also more experience than that of some people I have actually met who have extolled their knowledge of Exadata – knowledge based solely on presentations by Oracle sales people looking at the data sheets from 10,000 feet up and claiming it will solve world hunger.

Heck, hang the modesty – I am actually an Exadata Guru!

Compression Units – 4

Following up a suggestion from Kerry Osborne that I show how I arrived at the observation I made in an earlier posting about the size of a compression unit, here’s a short note to show you what I did. It really isn’t rocket science (that’s just a quick nod to NASA and Curiosity – the latest Mars rover).

Step 1: you can access rows by rowid in Oracle, so what happens when you try to analyze rowids on Exadata for a table using HCC? I created a table with the option “compress for archive high” and then ran the following query:
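A rough sketch along those lines (not necessarily the original query – the connection details and the table name t_archive_high are placeholders) groups rows by the file and block number encoded in the rowid. With HCC, all the rows in a compression unit report the block address at the start of the unit, so the row counts and the gaps between consecutive block numbers hint at how large each unit is.

```python
# Sketch only: connection details and the table name t_archive_high are
# placeholders, and this illustrates the approach rather than reproducing
# the original query. Requires the python-oracledb driver.
import oracledb

conn = oracledb.connect(user="test_user", password="test_pass",
                        dsn="exadata-scan/orcl")
cur = conn.cursor()

# Count rows per (file, block) derived from the rowid. With HCC every row in a
# compression unit reports the same block address, so the counts per block and
# the gaps between block numbers suggest the size of each compression unit.
cur.execute("""
    select dbms_rowid.rowid_relative_fno(rowid) as file_no,
           dbms_rowid.rowid_block_number(rowid) as block_no,
           count(*)                             as row_count
    from   t_archive_high
    group by dbms_rowid.rowid_relative_fno(rowid),
             dbms_rowid.rowid_block_number(rowid)
    order by file_no, block_no
""")

for file_no, block_no, row_count in cur:
    print(file_no, block_no, row_count)
```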

Future Appearances 2012

Here’s the list of public events where I’ll be speaking this year:

Enkitec’s Extreme Exadata Expo in Dallas, TX (13-14 August 2012):

Additionally I’m going to be around to participate in Q&A sessions, panels and random Exadata hacking – and just for fun! :)

 

UKOUG Conference in Birmingham, UK (3-5 December 2012):

I will deliver two Exadata talks (one of them a 2-hour masterclass) and “part 2” of my Troubleshooting Most Complex Performance Issues war stories…

Compression Units – 3

For those who have read the previous posting about how I engineered an Exadata disaster and want to reproduce it, here’s the script I used to generate the data.