Oakies Blog Aggregator

Friday Philosophy – Human Tuning Issues

Oracle tuning is all about technical stuff. It’s perhaps the most detail-focused and technical aspect of Oracle administration there is: explain plans, statistics, the CBO, database design, physical implementation, the impact of initialisation parameters, subquery factoring, SQL profiles, pipelined functions… To really get to grips with things you need to do some work with 10046 and 10053 traces, block dumps, and looking at latching and queueing…

But I realised a good few years ago that there is another, very important aspect and one that is very often overlooked. People and their perception. The longer I am on an individual site, the more significant the People side of my role is likely to become.

Here is a little story for you. You’ll probably recognise it, it’s one that has been told (in many guises) before, by several people – it’s almost an IT Urban Myth.

When I was but a youth, not long out of college, I got a job with Oracle UK (who had a nice, blue logo back then) as a developer on a complex and large hospital system. We used Pyramid hardware if I remember correctly. When the servers were put in place, only half the memory boards and half the CPU boards were initiated. We went live with the system like that. Six months later, the users had seen the system was running quite a bit slower than before and started complaining. An engineer came in and initiated those other CPU boards and Memory boards. Things went faster and all the users were happy. OK, they did not throw a party but they stopped complaining. Some even smiled.

I told you that you would recognise the story. Of course, I’m now going to go on about the dishonest vendor and what was paid for this outrageous “tuning work”. But I’m not. This hobbling of the new system was done on purpose and it was done at the request of “us”, the application developers – not the hardware supplier. It was done because some smart chap knew that as more people used the system and more parts of it were rolled out, things would slow down and people would complain. So some hardware was held in reserve so that the whole system could be given a performance boost once workload had ramped up, and people would be happy. Of course, the system was now only as fast as if it had been using all the hardware from day one – but the key difference was that rather than having unhappy users because things “were slower than 6 months ago”, everything was performing faster than it had done just a week or two ago, and users were happy due to the recent improvement in response time. Same end point from a performance perspective, much happier end point for the users.

Another aspect of this human side of tuning is unstable performance. People get really unhappy about varying response times. You sometimes see this with Parallel Query when you allow Oracle to reduce the number of parallel threads used depending on the workload on the server {there are other causes of the phenomenon, such as clashes with when stats are gathered, or just random variation in data volumes}. So sometimes a report comes back in 30 minutes, sometimes it comes back in 2 hours. If you go from many parallel threads to single-threaded execution it might be 4 hours. That really upsets people. In this situation you probably need to look at whether you can fix the degree of parallelism at a level that gives a response time that is good enough for business reasons and can always be achieved. OK, you might be able to get that report out quicker 2 days out of 5, but you won’t have a user who is happy on 3 days and ecstatic with joy on the 2 days the report is early. You will have a user who is really annoyed on 3 days and grumbling about “what about yesterday!” on the other 2.
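As a sketch of that approach – the table name and the DOP of 8 are purely illustrative, not from any real system – you can pin the degree of parallelism on the object and stop Oracle adaptively downgrading it under load:

```sql
-- Fix the degree of parallelism on the reporting table (names invented)
ALTER TABLE big_sales PARALLEL (DEGREE 8);

-- Stop Oracle silently reducing the number of parallel threads
-- according to the current server workload
ALTER SYSTEM SET parallel_adaptive_multi_user = FALSE;
```

The trade-off is exactly the one above: you give up the occasional faster run in exchange for a response time users can rely on.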

Of course this applies to screens as well. If humans are going to be using what I am tuning and would be aware of changes in performance (i.e. the total run time is above about 0.2 seconds), I try to aim for stable, good performance, not “outright fastest but might vary” performance. Because we are all basically grumpy creatures. We accept what we think cannot be changed, but if we see something could be better, we want it!

People are happiest with consistency. So long as performance is good enough to satisfy the business requirements, generally speaking you just want to strive to maintain that level of performance. {There is one strong counter-argument in that ALL work on the system takes resource, so reducing a very common query or update by 75% frees up general resource to aid the whole system}.

One other aspect of human tuning I’ll mention is one that UI developers tend to be very attuned to. Users want to see something happening. Like a little icon or a message saying “processing”, followed soon after by another saying “verifying” or something like that. It does not matter what the messages are {though spinning hourglasses are no longer acceptable}, they just like to see that stuff is happening. So, if a screen can’t be made to come back in less than a small number of seconds, stick up a message or two as it progresses. Better still, give the users some information up front whilst the system scrapes the rest together. It won’t be faster, it might even be slower overall, but if the users are happier, that is fine. Of course, the Oracle CBO implements this sort of idea when you specify “first_rows_n” as the optimizer goal as opposed to “all_rows”. You want to get some data onto an interactive screen as soon as possible, for the users to look at, rather than aim for the fastest overall response time.
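For example, the goal can be set for the session or hinted per statement (the query and object names below are made up for illustration):

```sql
-- Optimise for the time to return the first 10 rows, not the whole set
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Or per statement, for a hypothetical orders screen
SELECT /*+ FIRST_ROWS(10) */ order_id, status, order_date
FROM   orders
WHERE  customer_id = :cust
ORDER  BY order_date DESC;
```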

After all, the defining criterion of IT system success is that the users “are happy”, i.e. accept the system.

This has an interesting impact on my technical work as a tuning “expert”. I might not tune up a troublesome report or SQL statement as much as I possibly can. I had a recent example of this where I had to make some batch work run faster. I identified 3 or 4 things I could try, and using 2 of them I got it to run comfortably in the window it had to run in {I’m being slightly inaccurate; it was now not the slowest step, and upper management focused elsewhere}. There was a third change I was pretty sure would also help. It would have taken a little more testing and implementing, and it was not needed right now. I documented it and let the client know that there was more that could be got – but hold it in reserve, because you have other things to do and, heck, it’s fast enough. {I should make it clear that the system as a whole was not stressed at all, so we did not need to reduce system load to aid everything else running}. In six months the step in the batch might not be fast enough or, more significantly, might once more be the slowest step and the target of a random management demand for improvement – in which case, take the time to test and implement item 3. (For the curious, it was to replace a single merge statement with an insert and an update, each of which could use a different index.)
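The shape of that change, with hypothetical table and column names (the original statement and indexes weren’t published), would be something like:

```sql
-- Original: one MERGE that has to satisfy both branches with one plan
MERGE INTO target t
USING  stage s
ON     (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- Replacement: two statements, each free to use its own index and plan.
-- The update runs first so that it only touches pre-existing rows.
UPDATE target t
SET    t.val = (SELECT s.val FROM stage s WHERE s.id = t.id)
WHERE  EXISTS (SELECT NULL FROM stage s WHERE s.id = t.id);

INSERT INTO target (id, val)
SELECT s.id, s.val
FROM   stage s
WHERE  NOT EXISTS (SELECT NULL FROM target t WHERE t.id = s.id);
```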

I said it earlier. Often you do not want absolute performance. You want good-enough, stable performance. That makes people happy.

SIOUG 2011 Conference

Sandwiched in the middle of a busy month or so for me - Cary Millsap's seminar the other week (more on that later) and Openworld from the end of next week - was my long-awaited trip to the Slovenian Oracle User Group Conference in Portoroz. I've been promising Joze Senegacnik I would go to Slovenia for several years now but, with it being so close to Openworld, it means quite a few days away from client work. I finally made it and I'm delighted I did. It's a shame that I couldn't make it for the weekend before and maybe have flown in Joze's plane, but the conference venue, atmosphere and attendees made up for it.

I arrived fairly late on Monday evening via a car from Trieste airport that was laid on by the user group (and very much appreciated), and the minor hiccup of being delivered to the wrong hotel was soon sorted by the concierge, who ran me over to the correct one, where the food might have finished but the wine certainly hadn't! I met and had fun with some Bulgarian visitors, including the soon-to-be-married Svetoslav Gyurov (@sgyurov), whom I'd recently met on Twitter, and the Finns, who always seem to end up everywhere ;-)

The next day I'd planned to give the "Statistics on Partitioned Objects" presentation over two 45-minute sessions and had also agreed to help out by standing in for a late cancellation in the slot following those two, which meant I'd have to talk for a couple of hours in the afternoon. On the basis that people would probably be bored to tears of me by then, I picked "How I Learned To Love Pictures", which is always fun.

The morning was spent finalising the demos, but when I tried getting the 'How I Learned To Love Pictures' ones to run, they were far too temperamental and I decided it might be safer to make a late change to a Real Time SQL Monitoring presentation that I've given a few times and which is much easier to do because it only requires a few Active reports. Best of all, it was still about performance and pictures, so I didn't feel I would be short-changing people, as long as I made it clear what I was going to do!

I think the presentations went pretty well. Statistics is a pretty dry subject to listen to someone talk about for an hour and a half straight after lunch, but the fact that the majority seemed to come back after the coffee break was encouraging! Invitations to speak from both the Bulgarian and Serbian User Groups were probably another good sign.

One slight disappointment was that, because I switched to SQL Monitoring at the last moment and didn't tell Joze, I covered a small part of the material for his next presentation on "How to get the best from the Cost-Based Optimiser". I don't think it affected his presentation too much, though, as he was covering a wide variety of subjects, and I thought it was a great reminder of some of the new features to consider, including SQL Plan Management, which is what I'll be talking about at Openworld.

By now I was really tired so had a whale of a time with a small cold beer and a couple of smokes on the balcony of my room, watching an amazing sunset. Which put me right in the mood for dinner and some partying but, whilst the company may have been excellent again, their partying skills were shabby as we handed over something like 12 unused free drinks vouchers to another lucky attendee and then everyone retired to their rooms to finish off their presentations over bottles of water! Sad, really sad.

It was such a flying visit that the next morning I only really had time to catch up on some work before Joze drove Debra and me to Trieste airport for our Ryanair flight home (which wasn't too bad really).

I had a great time, was treated extremely well and look forward to going back - Portoroz is a lovely place and the weather was beautiful, which made such a difference from the likes of London or Birmingham! I think I may have also agreed to speak at the upcoming Bulgarian conference too ;-)

Flash Cache

Have you ever heard the suggestion that if you see time lost on event write complete waits you need to get some faster discs?
So what’s the next move when you’ve got 96GB of flash cache plugged into your server (check the parameters below) and see time lost on event write complete waits: flash cache?

db_flash_cache_file           /flash/oracle/flash.dat
db_flash_cache_size           96636764160

Here’s an extract from a standard 11.2.0.2 AWR report:

                           Total
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
write complete waits: flas    32                     3.1              21.9  75.0


                           Waits
                           64ms
Event                      to 2s <32ms <64ms <1/8s <1/4s <1/2s   <1s   <2s  >=2s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
write complete waits: flas    11   3.1   3.1   3.1   3.1   6.3   6.3  12.5  62.5


                           Waits
                            4s
Event                      to 2m   <2s   <4s   <8s  <16s  <32s  < 1m  < 2m  >=2m
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
write complete waits: flas    20  37.5  18.8   9.4  12.5   9.4   9.4   3.1

It’s interesting to see the figures for single and multiblock reads from flash cache. The hardware is pretty good at single block reads – but there’s a strange pattern to the multiblock read times. The first set of figures is from the Top N section of the AWR, the second set is from the event histogram sections (the 11.2 versions are more informative than the 11.1 and 10.2 – even though the arithmetic seems a little odd at the edges). Given the number of reads from flash cache in the hour, the tiny number of write waits isn’t something I’m going to worry about just yet – my plan is to get rid of a couple of million flash reads first. (Most of the read by other session waits are waiting on the flash cache reads as well – so I’ll be aiming at two birds with one stone.)

                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db flash cache single block ph    3,675,650       5,398      1   35.2 User I/O
DB CPU                                            4,446          29.0
read by other session             1,092,573       1,407      1    9.2 User I/O
direct path read                      6,841       1,371    200    8.9 User I/O
db file sequential read             457,099       1,046      2    6.8 User I/O




                                                    % of Waits
                                 -----------------------------------------------
                           Total
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
db flash cache multiblock  22.1K   7.6  12.3   7.7   6.0   8.8  24.7  32.8
db flash cache single bloc 3683K  66.6  22.6   3.6   2.4   3.9    .8    .0



                           Waits
                           64ms
Event                      to 2s <32ms <64ms <1/8s <1/4s <1/2s   <1s   <2s  >=2s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
db flash cache multiblock   7255  67.2  21.5  10.4    .9    .0
db flash cache single bloc  1587 100.0    .0    .0    .0    .0    .0


Answers to the following question on a postcard, please: why do we get a “double hump” in the distribution of multiblock reads?
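If you want to look at the same distribution without waiting for an AWR snapshot, the cumulative figures (since instance startup) are in v$event_histogram:

```sql
-- Cumulative wait-time distribution for the flash cache read events
SELECT event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event LIKE 'db flash cache%block%'
ORDER  BY event, wait_time_milli;
```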

Oracle Database Appliance–Bringing Exadata To The Masses. And, No More Patching!

I just googled ‘Oracle Database Appliance’ +Exadata and got offered 446,000 goodies to click on. There are only two problems with that:

1. Exadata is not an appliance.

2. Oracle Database Appliance has no Exadata software in it.

Get Out Of Jail Free Card
In this Computerworld article, Mark Hurd is quoted as saying the Oracle Database Appliance brings “the benefits of Exadata to entry-level systems.” So I googled ‘this brings the benefits of Exadata to entry level systems’ and was offered 36,300 nuggets of wisdom to read.

I have only one thing to say about this big news. There is a huge difference between a pre-configured system and an appliance.

I’ve never had to apply a patch to my toaster.  The Oracle Database Appliance is not an appliance, it is a pre-configured Real Application Clusters system.

SMB (Small/Medium Business) + Real Application Clusters? Who is handing out the get out of jail free cards?  Who briefed Oracle’s Executives on what this thing actually is before they started talking about it?

Filed under: oracle

Oracle XMLDB XQuery Update in Database Release 11.2.0.3.0

I just made use of the very cool OTN Virtual Developer Day Database site. In this environment you can follow OTN Developer Day sessions at home, for example, while making use of all the material available on that site plus the downloadable Virtualbox OTN Developer Day Appliance. Although you can choose tracks like Java, .Net and APEX, there is also a database section which handles (as you might expect, given my interests) Oracle XMLDB functionality.

There is a 1-hour webcast available from Mark Drake, Oracle Product Manager, that takes you through the basics / a general overview (basic because the subject is too extensive to cover in only an hour) of the possibilities of Oracle XMLDB functionality. For convenience there is also a PDF document that has most of the slide info from the webcast. It doesn’t contain the demos, of course, or the first-steps info for the Virtualbox OTN Developer Day Appliance and what to do to reset the training XMLDB environment in this appliance.

This PDF and the webcast start with the Oracle safeguard (legal disclaimer): the general product outline slide saying, among other things, that Oracle is not responsible for, or obliged to, actually build in features mentioned, in that release or in that form… etc… so in that context it looks like…

Sorry for the distortion; I couldn’t get a better screenshot of the Flash-based webcast.

The next release, Oracle 11.2.0.3.0, will contain the XQuery Update Facility. With this in place the XQuery language, built into the Oracle database since version 10.2.0.1.0, comes to its full extent, supporting the whole range: not only SELECT functionality but also REPLACE, UPDATE and DELETE functionality. In a SQL update statement you can make use of this via the XQuery XMLQUERY operator. Of course it doesn’t have to be SQL; the normal Oracle 11g XQuery operators can also be used from JDBC/Java or other flavoured environments, and PL/SQL can now make use of this too. Other means of XML interfacing would be using the Native Database Web Service support, amongst others based on PL/SQL input, or via XQuery for Java (XQJ).

This also means that you can now use XQuery to its full potential, following the standard (or should I say “W3C Recommendation” – I am never sure if standard = recommendation, but anyway…), and no longer have to use SQL/XML operators like updateXML(), insertChildXML() and others like them: you can go “full XQuery”.
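As a hedged sketch of what that looks like (the table, column and element names here are invented, not taken from the webcast), an XQuery Update transform expression inside XMLQUERY might be used along these lines:

```sql
-- Update one element value inside a stored XML document using the
-- XQuery Update Facility transform (copy/modify/return) expression
UPDATE purchaseorders p
SET    p.doc =
       XMLQUERY('copy $d := $doc
                 modify (replace value of node
                         $d/PurchaseOrder/Status with "SHIPPED")
                 return $d'
                PASSING p.doc AS "doc"
                RETURNING CONTENT)
WHERE  p.po_id = 42;
```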

Regarding recommendations… I really can recommend the OTN Virtual Developer Day Database site. If you want to brush up your knowledge or start on a new topic, this is a really good way to begin. I know there are also OTN Developer Day sessions you can actually visit somewhere in your “neighborhood”, or otherwise during the Hands-on Labs at the upcoming Oracle Open World in San Francisco.

HTH

Marco

RAC and HA SIG meeting, Royal Institute of British Architects, September 2011

I have been looking forward to the RAC & HA SIG for quite some time. Unfortunately I wasn’t able to make the spring meeting, which must have been fantastic. For those who haven’t heard about it, this was the last time the SIG met under its current name - as Dave Burnham, the chair, pointed out in his welcome note.

The RAC & HA SIG is going to merge with the Management & Infrastructure SIG to form the Availability, Management and Infrastructure SIG, potentially reducing the number of meetings to 3 for the combined SIG. This is hopefully going to increase the number of attendees and also offer a larger range of topics. I am looking forward to the new format and its hopefully wider appeal.

Partly down to the transport problems that hit London today (the Victoria Line was severely delayed and apparently overground services were impacted as well), the number of attendees was lower than expected.

The following are notes I have taken during the sessions, and as I’m not the best multi-tasking person in the world there may be some grammatical errors and typos in this post for which I apologise in advance.

Support Update-Phil Davies

The first presentation was Phil Davies’s support update, which provided the usual good overview of what is currently relevant in Oracle support. My personal highlight was the fact that you can limit the number of child cursors per statement via an underscore parameter. This worked well for his customer, who had to use CURSOR_SHARING set to FORCE.
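The post doesn’t name the parameter; from memory it is probably _cursor_obsolete_threshold, which obsoletes a parent cursor once it has accumulated that many children - treat the name as an assumption on my part, and only ever set hidden parameters under the direction of Oracle Support:

```sql
-- Assumed parameter name; hidden parameters should only be changed
-- on advice from Oracle Support
ALTER SYSTEM SET "_cursor_obsolete_threshold" = 100 SCOPE = SPFILE;
```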

Also, there is an interesting problem related to Data Guard, the RFS process and the overwriting of arbitrary files on the standby.

Plugging in the Database Machine-Joel Goodman

Joel delivered a very good presentation about monitoring the Exadata Database Machine. What’s great about Joel is his depth of knowledge and his ability to enrich a presentation with anecdotes from both the classroom and real life. If you haven’t bookmarked his blog yet, it’s well worth doing so: http://dbatrain.wordpress.com/

I had personally seen this presentation before, internally at a customer site, but still learned new things - especially about the SNMP traps being routed back into the MS process on the cells, which can then be checked via cellcli.

Every so often the current metrics are flushed to disk, moving from metriccurrent to metrichistory. The metric history is kept for 7 days by default, and I think I’ll look at extending my monitoring solution to load them into a statspack-like schema in the database.
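A hedged sketch of pulling that history from a cell - the metric name CL_CPUT (cell CPU utilisation) is just an example:

```
CellCLI> LIST METRICHISTORY WHERE name = 'CL_CPUT' ATTRIBUTES name, metricValue, collectionTime
```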

Another interesting fact to know is that ADR is also available on the cells, including the adrci command with all its options.

Plugins for OEM 11.1 include

  • Infiniband plug-in
  • Cisco plug-in
  • ILOM (only for database nodes)
  • Exadata plug-in
  • PDU plug-in
  • KVM plug-in

Of course for these to work you have to install the agents on the database servers (only!). Once the agent is deployed, the plug-ins need to be deployed to the Grid Control infrastructure first before they are passed on to the agents.

The Exadata plug-in requires the database server’s agent software owner to use passwordless authentication to the cells’ cellmonitor accounts. Also, the cells must be configured to send SNMP traps to Grid Control. I guess a thorough read of the plug-in installation documentation might be needed.

I personally regarded the other plug-ins as less important and decided not to record them here - I’m sure there is a white paper on Oracle’s website somewhere.

An interesting side note on the KVM (which is missing in the X2-8) is that you could still access it if the internal Cisco switch failed. This is simply because the KVM does NOT go through the Cisco Ethernet switch, but rather connects directly to the corporate network.

High availability for an agent monitoring a target is described in MOS note 1110675.1 - a sure candidate for further investigation.

After Joel’s presentation we had a great discussion with Sally and Jason about disk failures in Exadata and the quarantine. In certain situations, if multiple disks fail only high redundancy can prevent complete disaster.

Also, one should really be careful not to have negative numbers in V$ASM_DISKGROUP.USABLE_FILE_MB. If you do, it’s not an immediate problem, but it is an imminent danger as soon as a failgroup goes offline - there simply isn’t enough space for an ASM rebalance operation. Summary: you should not run your ASM mirrored disk group at full capacity. Oh yes, and you should have at least 3 failgroups in a normal redundancy diskgroup.
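A quick check for this condition might look like:

```sql
-- Disk groups with negative usable_file_mb cannot tolerate the loss
-- of a failgroup without running out of space for the rebalance
SELECT name, type, total_mb, free_mb, usable_file_mb
FROM   v$asm_diskgroup
WHERE  usable_file_mb < 0;
```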

I suggest you read Joel’s blog entry “mirror mirror on the Exadata” for a more thorough discussion of ASM mirroring in Exadata.

Exadata Storage and Administration-Corrado Mascioli

Corrado is a colleague of mine working in engineering on the same site. He has got great experience in patching Exadata and automating the process.

The cells are shipped with the software pre-installed, based on Oracle Linux. The most important accounts available are

  • root
  • celladmin
  • cellmonitor

These have various degrees of power, listed here in descending order.

Cellcli is the main interface to the storage cell allowing the user to perform administrative tasks.

The main cell processes are:

  • CELLSRV: mainly uses iDB to communicate with the RDBMS nodes and satisfies the I/O requests.
  • Management Server – MS
  • Restart Server – RS

Flash storage is something I blogged about earlier, see here:

Flash disks can be used either as Exadata Smart Flash Cache or as grid disks, i.e. “ASM disks”. I haven’t created flash grid disks yet, but I suppose you would want to group the grid disks per cell to create failure groups.

David Burnham raised an interesting question about differentiating the flash cache in Exadata from the one available to mere mortals, available with a patch or 11.2.0.x on Linux and Solaris.

The PCI cards you put into a database server are like another level of buffer cache, whereas the Exadata Smart Flash Cache is a) unique to Exadata and b)

Next Corrado explained the link between physical disks, LUNs, cell disks and grid disks. In particular, the 30G taken away from the first 2 cell disks causes an interesting dilemma when it comes to the allocation of space for the DBFS disk group (formerly SYSTEMDG). For each cell, cell disks 3-12 reserve the last 30G, on the innermost tracks of the disk, for DBFS_DG.

The DATA diskgroup will by default use the fastest, outermost tracks of the disks, +RECO will take the middle of the disk, whereas DBFS_DG uses the innermost, as I just said.

DBFS_DG is mostly used for the database file system but also for the OCR and the voting files. DBFS looks like a normal file system to the end user.

I wonder if you could create a grid disk on a specific set of cell disks? I’d have to check the create griddisk command in cellcli…
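From memory the cellcli syntax does allow naming the cell disk explicitly, so something along these lines (the names and size here are invented) should work - treat it as an unverified sketch:

```
CellCLI> CREATE GRIDDISK data_CD_03_cell01 CELLDISK=CD_03_cell01, size=100G
```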

All the settings are easily accessible with the cellcli commands list {lun,physicaldisk,celldisk,griddisk}.

The grid disks are visible to ASM through the CELL library (V$ASM_DISK.LIBRARY) and use a path of the form o/<cell IP address>/<grid disk name>. By the nature of the technology all 14 x 12 disks are visible in V$ASM_DISK.

Each cell is its own failure group - which makes sense, given that all its disks share a single point of failure. Also worth remembering that there is no storage array mirroring, hence we resort to ASM redundancy.

Corrado shared lots of practical advice about creating grid disks and reconfiguring a storage cell using cellcli.

Panel Session-all speakers and The private cloud-Martin Bach

Well that’s me in the middle of the action-I hope someone else covers these.

Managing ASM redundancy-Julian Dyke

Julian started the RAC SIG in summer 2004, so it was only right that he had the honour of the last slot under the current designation.

His opening theme was a comparison of single-threaded CPU performance for different architectures, including Intel, AMD, SPARC and IBM Power. Contact him personally if you are interested in the actual results; suffice it to say that the 5600 Xeons are the fastest. I wonder if anyone with a recent Itanium processor is willing to run the benchmark.

There was an interesting twist about a two-node cluster and ASM tablespace creation with different allocation units: setting the AU size to 32M reduced the creation time for a 20G tablespace to a few seconds. There seems to be a lot of inter-ASM-instance message exchange.
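The AU size has to be chosen when the disk group is created; a sketch (the disk paths are placeholders) of setting it:

```sql
-- 32M allocation units are fixed at disk group creation time and
-- cannot be changed afterwards
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/disk1'
  FAILGROUP fg2 DISK '/dev/mapper/disk2'
  ATTRIBUTE 'au_size' = '32M',
            'compatible.asm' = '11.2';
```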

Following this Julian continued with the discussion of the ASM utility kfed to dump disk group metadata followed by a graphic visualisation about extent allocation and maintenance during ASM rebalance operations.

The nugget for today was learning why the ACD uses 42 entries and why ASM_POWER_LIMIT used to max out at 11. 42 is self-explanatory. The power limit of 11 is a true classic - it’s one faster (louder), in honour of Spinal Tap.

There are a number of new features in 11.2 Julian mentioned which I covered in Chapter 8 of “Pro Oracle Database 11g RAC on Linux”. Of particular interest was the location of the 3rd voting file in stretched and non-stretched RAC if you used two different SANs for the first 2 failgroups. Oracle now supports the iSCSI/NFS approach for the third voting disks previously recommended for stretched RAC in “normal” RAC as well.

One of the features Julian didn’t mention was the location of the snapshot controlfile, which since 11.2.0.2 also has to reside on shared storage - there is a note on MOS for this.

Oh yes, and then there was the demo about ASM normal redundancy which continued the discussion started at the last RAC SIG.

Summary

I enjoyed today a lot, met lots of interesting people and had many technical discussions about all sorts of things. One thing I’m looking forward to is the change of the SIG format, especially hoping for more attendees to make it even more attractive.

Unfortunately, due to the change of date for the Management and Infrastructure SIG, which I will now miss, I couldn’t see Piet de Visser, whom I haven’t seen since Birmingham last year. Maybe I need to work on the same site as he does for a few weeks to catch up properly.

Favorite Ted Talks (and other online lectures)

It’s not about technology. It’s about people and stories.  - Dean Kamen

A friend recently asked what the best Ted Talks are, and I thought “what an awesome question.” Why? Because Ted talks are some of the most engaging, cutting-edge and insightful lectures I know of, and they are free. The talks dive into new data that is changing the way we perceive and interact with our world, from technology to morality. For example, some talks show how the changes happening in biotech will eclipse the revolution of the computer, and other talks show how the way we make decisions is flawed, from choosing dating partners to investing money. There are also talks on topics which may sound trite or hackneyed but are actually profound, such as how most religions are based upon the golden rule and not upon converting the world. Speaking of religion and spirituality, these talks are my idea of a perfect Sunday morning sermon, because there are so many deep and profound ideas that could change our world for the better.

But if these talks are so good, where should one start? There are over 600 talks! I’m not the first person to wonder. Someone actually tried to rate the talks; the results are in a spreadsheet here.

Another place to start is of course the Ted talks web site, here.

How did I start? I started with a few recommendations from friends, then I downloaded all the audio recordings I could and listened to them on the drive to work. Here is a list of just the audio tracks.

Of the talks I’ve listened to, the following pop into my mind as my favorites (followed by the rating # from the spreadsheet above):

Religion , Morality  and Humanity

How we think, reason, see

Biotech

UI

Marketing and Product design

Optical

A few words about the favorite talks listed above.

Section 1, Religion, Morality and Humanity: The first talk I’ve listed is the most important. Though science and technology may be the most exciting, the majority of the world is religious, and thus the way religions are organized, operate and ask us to think may be the most important area to examine in humanity’s daily lives. The first talk, by Karen Armstrong, beautifully clarifies that the world’s major religions are founded upon one teaching: for us to love one another. The second talk, from Richard Dawkins on “militant atheism”, might sound a bit confrontational but I highly recommend it. The lecture is an erudite, sharp, scathing and funny treatise which puts religion in perspective. The third talk is wonderful on audio - it has some fun turns of perspective and is a rollick through human perspectives that makes humanity easier to love and encourages tolerance. Finally, the fourth talk, on the roots of liberals and conservatives, again encourages tolerance and shows the benefits of both sides: liberalism, which fights for justice and independence, and conservatism, which builds community and safety. (Speaking of safety and security, there is a good talk on that as well.)

Section 2, How we think, reason, see: The first 7 talks give surprising evidence that the way we reason is often not in our best interest, with specific ramifications that are detrimental to the way business and Wall Street are run and that lie at the roots of the recent financial crisis. The last two, Jill Bolte Taylor and Sebastian Seung, share fascinating insights into the way our brain works.

Section 3, Biotech: Biotech is quickly becoming the revolution to change all revolutions, and Angela Belcher’s talk gives us an inkling why.

—————————————————————————————————

While on the subject of awesome audio lectures, here are some other good sources:

Stanford University’s Entrepreneurship Corner

These lectures are geared to startups and entrepreneurship. One of my favorites, which dovetails with the Ted biotech talks, is

NPR’s This American Life

These talks give a personal view of experiences with humanity, America in particular. For a place to start, a couple of good ones that dovetail with the above Ted talks on religion are

One good trick to know with This American Life is that the mp3s are available at

http://audio.thisamericanlife.org/jomamashouse/ismymamashouse/[episode #].mp3

like

http://audio.thisamericanlife.org/jomamashouse/ismymamashouse/290.mp3
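The URL pattern above is easy to script. Here is a minimal sketch in Python that builds the episode URL and fetches it; it assumes the path scheme stays exactly as shown (it may change over time), and the function names are just mine:

```python
# Build and fetch This American Life episode mp3s using the
# URL pattern mentioned above (the path scheme may change over time).
import urllib.request

BASE = "http://audio.thisamericanlife.org/jomamashouse/ismymamashouse"

def episode_url(episode: int) -> str:
    """Return the mp3 URL for a given episode number."""
    return f"{BASE}/{episode}.mp3"

def download_episode(episode: int, dest: str) -> None:
    """Download the episode mp3 and save it to dest."""
    urllib.request.urlretrieve(episode_url(episode), dest)

# For example, episode 290 from above:
print(episode_url(290))
```

Calling `download_episode(290, "290.mp3")` would pull the file down for the commute.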

Santa Fe Institute

A good source of interesting intellectual lectures. A good place to start with the Santa Fe Institute is

Another interesting resource, which I’ve yet to check out, from Google: http://www.zeitgeistminds.com/


Oracle Database Appliance (ODA) Installation / Configuration

Earlier, Oracle announced the Oracle Database Appliance, which is a really cool RAC-in-a-box. And here at the Enkitec office we are very lucky to get our hands dirty and play with this new beast ;) In the photo below you will see the Oracle Database Appliance.

Andy Colvin has some detailed reviews of the Oracle Database Appliance. Check out these links if you want to see the internals of the machine:
http://blog.oracle-ninja.com/2011/09/inside-the-oracle-database-appliance-part-1/
http://blog.oracle-ninja.com/2011/09/oracle-announces-oracle-database-appliance/

But this post will walk you through the installation and configuration of the Oracle Database Appliance. All I can say is that at the end of the ODA installation, all I had was 8 screenshots, and that’s it. A complete install of the Grid Infrastructure, ASM diskgroups, RDBMS software, and a fully functional clustered database in just two hours. By comparison, a similar recent RAC project required about 80 screenshots for the installation alone, took a couple of days, and involved multiple teams in the IT group. That’s the wonder of the super-simplified installation using the Oracle Appliance Manager, or the OAK Configurator, which you will see below:

And again, all of that took just 2 hours to have a 2-node RAC ;)

Hope I’ve shared some good stuff with you ;) Stay posted for more ODA at the Enkitec blogs!





Oracle Database Appliance – (Baby Exadata?)

Oracle today announced a new database appliance product. It’s called Oracle Database Appliance (ODA). I’m not crazy about the name, but I really like the product. Here’s a picture of the one in Enkitec’s lab:
The project was code named “Comet” – thus the yellow sticky note. ;)

I really like that name better than ODA, so I think I will just stick with Comet.

Enkitec participated in the beta test program for the product and we were very impressed, particularly with the speed at which the product could be deployed and configured. There is a new tool called the “OAK Configurator” that is sort of like the Exadata OneCommand for configuring the system. Keep an eye out for Karl Arao‘s upcoming post with screen shots of the tool in action.

I’m sure there will be plenty of people talking about the specs so I won’t get carried away with that. But I will tell you that it’s basically 4 terabytes of usable storage, 2-node RAC with up to 24 cores, and an SSD component that is used for improving redo write speeds (more on that later), all in a 4U chassis. Andy Colvin already has a really good post on the hardware components that are included in the Oracle Database Appliance (along with pictures of the bits and bobs inside the chassis).

I should point out that while I have heard people refer to Comet as a “Baby Exadata”, I really don’t view it that way. That’s because it DOES NOT have separate storage and compute tiers. So there is no Exadata Smart Scan / Offloading secret sauce here. It also does not provide the ability to utilize Exadata’s Hybrid Columnar Compression. On the other hand, like Exadata, it is a pre-configured and tested system that can be dropped in a data center and be ready for use almost immediately (it took us only a couple of hours to set it up and create a database). Pretty unbelievable really.

So much like my favorite Bill Clinton quote, whether ODA is a “Baby Exadata” or not really depends on your definition of the word “is”. It is a hardware platform that is built specifically to run Oracle databases, but it does not embed any of the unique Exadata software components. Nevertheless, it is an extremely capable platform that will appeal to a wide variety of companies running Oracle databases.

And best of all, the list price for this puppy is only $50K. I predict this thing is going to sell like hot cakes!

Oracle Database Appliance…

The Oracle Database Appliance has been released. It looks like a pretty neat bit of kit for the SMB market. It’s listed in a couple of locations, each page with links to different technical docs, so it’s worth looking at both:

Interesting points include:

  • It’s a 2 node 11.2.0.2 RAC on Oracle Linux 5.5 implementation.
  • Two 6-core Xeons per node.
  • 96G Memory per node.
  • 12TB shared disk, but triple-mirrored, so you have 4TB of usable storage.
  • Mirrored “Solid-state disks for redo logs to boost performance.” I can see that point generating some discussion. :)
  • No hardware upgrade/expansion options.
  • Pay-as-you-grow licensing available.
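The storage figure in the list is just the raw capacity divided by the number of mirror copies. A quick sanity check of the arithmetic (assuming a straight factor-of-3 reduction for triple mirroring, ignoring any filesystem or ASM overhead):

```python
# Raw-to-usable capacity on the ODA, per the spec list above.
raw_tb = 12        # 12TB of shared disk
mirror_copies = 3  # triple-mirrored: every block stored three times
usable_tb = raw_tb / mirror_copies
print(usable_tb)   # 4.0, matching the 4TB usable figure above
```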

For the full lowdown, check the technical docs under the top-level links.

If you like the one-vendor-supplies-all approach, this is kinda neat and a lot less complex than a full-blown Exadata system.

Cheers

Tim…