I’m looking forward to ODTUG KScope 11 in Long Beach next month! KScope 11 (formerly called “Kaleidoscope”) is the only Oracle conference devoted to developers. That’s right, it’s aimed squarely at you and me: the people who write the code that makes systems work. Yes, we get Applications “Functional” people and DBAs, but the focus is on getting the job done. KScope has whole tracks for ADF and Fusion Middleware, APEX, database development with SQL & PL/SQL, MySQL, and Hyperion/Essbase. KScope 11 also has sessions for .NET developers, Java developers, and people using LAMP tools; check it out at http://kscope11.com/. You might know that I’m on the ODTUG (Oracle Development Tools User Group) Board of Directors (http://www.odtug.com). You might not know that I’ve been attending ODTUG since it was the “CASE SIG” at IOUW many years ago. I started attending because the people who speak at the conference are the leading lights in the development community.
It’s been too long since I updated the blog. My wife and I are in the midst of a move from Denver, Colorado (where we’ve lived for 28 years) to Scottsdale, Arizona (we both grew up in the Phoenix area). Things have been hectic and writing blog entries has been low on the priority list. I’ll try to make up for it in the coming months. I’ve been learning lots about Oracle’s Edition Based Redefinition (EBR) and will be adding a white paper to the web site soon!
Here’s a little gem I hadn’t come across before (because I hadn’t read the upgrade manuals). Try running the following PL/SQL block in 9i, and then in 10g (or later):
declare
        v1 number(38);
begin
        v1 := 256*256*256*256;
        dbms_output.put_line(v1);
end;
/
In 9i the result is 4294967296; but for later versions the result is:
declare
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 4
It’s not a bug; it’s expected behaviour. The expression consists only of integers, so Oracle uses integer arithmetic, which limits the result to roughly 9 significant figures. If you want the block to work in newer versions of Oracle you have to add a decimal point to at least one of the operands, forcing Oracle to use NUMBER arithmetic, which takes you up to roughly 38 significant figures.
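To illustrate the fix described above, here’s a minimal sketch of the same block with the decimal point added (the only change from the failing version is the final operand):

```sql
declare
        v1 number(38);
begin
        -- 256.0 is a NUMBER literal, so the whole expression is
        -- evaluated with NUMBER arithmetic and no longer overflows
        v1 := 256*256*256*256.0;
        dbms_output.put_line(v1);   -- 4294967296
end;
/
```

The same result can be had by wrapping any one operand in a conversion to NUMBER; the decimal point is just the least intrusive edit.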
The BGOUG conference started today. It’s a pity the recent family stuff meant I couldn’t submit a paper for this event. I’m gonna miss all my Bulgarian friends. Hope it all goes well and see you soon.
PS. Sve: I would have loved to sit in the front row of your presentation to make you nervous. Have a good one. :)
May 13, 2011 In the last couple of days I have been busy with a couple of non-Oracle Database specific tasks, and I started wondering what types of wait events I might be posting during this time period if I had to report what I was doing. Waiting on the server to reboot for the 20th time [...]
Admittedly I haven’t checked for a little while, but an email from my co-author Steve Shaw prompted me to go to the Amazon website and look it up.
And yes, it’s reality! Our book is now finally available as a Kindle version, how great is that?!?
There isn’t really a lot more to say about this subject. I wonder how many techies are interested in the Kindle version after the PDF has been out for quite a while. If you read this and decide to get the Kindle version, could you please let me know how you liked it? Personally I think the book is well suited to the Amazon reader, as it’s mostly text, which suits the device well.
One thing I’ve been wishing was more prominent in OEM and other tools is a display of the fluctuation in I/O times for Oracle. I/O times can become long for reasons such as load or hardware issues, yet knowing that something is up with I/O latency is less than obvious in OEM. Luckily the data is easily queryable from the AWR repository. Here is a query that gets the hourly average I/O latency for all ‘User I/O’ and ‘System I/O’ events:
set pagesize 100
col event_name format a30
col avg_ms format 99999.99

select  btime,
        event_name,
        (time_ms_end-time_ms_beg)/nullif(count_end-count_beg,0) avg_ms,
        (count_end-count_beg) ct
from (
        select  e.event_name,
                to_char(s.BEGIN_INTERVAL_TIME,'DD-MON-YY HH24:MI') btime,
                total_waits count_end,
                time_waited_micro/1000 time_ms_end,
                Lag (e.time_waited_micro/1000) OVER( PARTITION BY e.event_name ORDER BY s.snap_id) time_ms_beg,
                Lag (e.total_waits) OVER( PARTITION BY e.event_name ORDER BY s.snap_id) count_beg
        from    DBA_HIST_SYSTEM_EVENT e,
                DBA_HIST_SNAPSHOT s
        where   s.snap_id = e.snap_id
        and     s.begin_interval_time > sysdate - 2
        and     e.wait_class in ( 'User I/O', 'System I/O')
        order by e.event_name, begin_interval_time
)
where (count_end-count_beg) > 0
order by event_name, btime
/
BTIME           EVENT_NAME                 AVG_MS       CT
--------------- ------------------------ -------- --------
18-APR-11 18:00 db file scattered read       8.24     4404
18-APR-11 19:00 db file scattered read       7.85     3256
18-APR-11 20:00 db file scattered read       8.22     3511
18-APR-11 21:00 db file scattered read       7.40     5025
18-APR-11 22:00 db file scattered read       6.01   290640
18-APR-11 23:00 db file scattered read       8.99     6775
19-APR-11 00:00 db file scattered read      22.08    30126
19-APR-11 01:00 db file scattered read       8.63   105273
19-APR-11 02:00 db file scattered read       5.62    24064
19-APR-11 03:00 db file scattered read       8.32     3368
19-APR-11 04:00 db file scattered read       9.51     3218
19-APR-11 05:00 db file scattered read       5.64    28113
19-APR-11 06:00 db file scattered read       7.75    11014
19-APR-11 07:00 db file scattered read       8.80    18040
19-APR-11 08:00 db file scattered read       8.00    11443
19-APR-11 09:00 db file scattered read       5.51    11543
19-APR-11 10:00 db file scattered read       5.48    19038
It would be nice to see these I/O times broken down into histograms; something to look into with the wait event histogram views.
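As a starting point for that follow-up, here’s a rough sketch against DBA_HIST_EVENT_HISTOGRAM (available in recent releases; v$event_histogram holds the live equivalent), which keeps per-snapshot wait counts bucketed by latency. Note that, as with the query above, the counts are cumulative since instance startup, so consecutive snapshots would need to be diffed for a true per-hour breakdown:

```sql
-- Sketch: latency distribution of 'db file scattered read' waits over the
-- last two days of AWR snapshots. WAIT_TIME_MILLI is the bucket's upper
-- bound in milliseconds; WAIT_COUNT is the cumulative count of waits.
select  to_char(s.begin_interval_time,'DD-MON-YY HH24:MI') btime,
        h.wait_time_milli,
        h.wait_count
from    DBA_HIST_EVENT_HISTOGRAM h,
        DBA_HIST_SNAPSHOT s
where   s.snap_id = h.snap_id
and     s.begin_interval_time > sysdate - 2
and     h.event_name = 'db file scattered read'
order by s.begin_interval_time, h.wait_time_milli
/
```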
I had this comment today related to RAC installation.
“thanks for the feedback, but for newbies this is where it gets confusing. No clear guidelines”
This post is not specifically about this comment, but it does bring up the issue I keep going back to again and again…
One of the things that annoys me about the Oracle marketing machine is that they still try to make out that all Oracle products are accessible to newbies. Oh really? Are you seriously telling me that Oracle RAC and Oracle Grid Control 11g are accessible to newbies?
I’ve been using Oracle products for about 17 years. I’ve been using Linux for about 13 years. I’ve been administering RAC for about 10 years. I don’t claim to be an international consultant to the stars, but I have a long history with this stuff. I’m not saying this to brag, just to put this into context. With all this experience I still don’t think this stuff is easy.
Check out the Oak Table Members list. Excluding myself, this is a “who’s who” of the people you would love to have on your site to show you how Oracle stuff really works. If you were part of the Oak Table mailing list you would see these people are still struggling with the idiosyncrasies of some of this Oracle stuff. There are lots of RAC-related issues under discussion all the time.
Knowing all this, do you really think you can roll up off the street and do a good job of installing and administering this stuff in a production environment? Do you think it is OK to be an SQL Server DBA on Windows today and start a job as an Oracle DBA on Linux tomorrow? I see this happening all the time because bosses don’t understand how complicated this technology can be. People do one Oracle installation on Windows and think the logical next step is RAC or Exadata.
I’m happy that Oracle have invested time and money in making Oracle *easier* to install and administer, but trying to tell people that it is easy is totally the wrong message. A week long course or a 2-Day DBA manual is not going to get someone up to speed.
For the next marketing slogan I suggest,
“Oracle. It’s f*ckin’ complicated, but it’s really cool!”
Rant over … until the next time…
One of the questions that pops up on the internet from time to time is the one about finding SQL that’s doing full tablescans.
Since the appearance of the dynamic performance view v$sql_plan in Oracle 9i this has been quite easy – provided you can check memory soon enough. A query like the following will give you the sql_id (hash_value, if you’re still on 9i) and child_number of any query that has a full tablescan (or index fast full scan) in its execution path. (Note – just because there is a full tablescan in the plan this doesn’t mean that it will definitely happen, and you don’t necessarily know how often it might happen each time the query executes: so finding the SQL isn’t the whole story.)
select  distinct sql_id /* hash_value */, child_number
from    v$sql_plan
where   (operation = 'TABLE ACCESS' and options = 'FULL')
or      (operation = 'INDEX'        and options = 'FAST FULL SCAN')
;
Once you have the sql_id (hash_value) and child_number you can always check v$sql (or v$sqlstats in 10g) for the SQL text and any other details you might want. If you’re on 10g, though, there’s an interesting variation on this theme. It’s easy (though resource-intensive on big busy systems) to use dbms_xplan to print the SQL text and execution plans for the suspect queries.
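For example, the follow-up lookup might look like this minimal sketch (the sql_id literal is a placeholder you’d substitute from the previous query’s output):

```sql
-- Sketch: fetch the text and execution count for one suspect statement.
-- Replace the sql_id literal with a value returned by the plan query;
-- on 9i you'd query v$sql by hash_value instead.
select  sql_id, executions, sql_text
from    v$sqlstats                 -- or v$sql if v$sqlstats isn't available
where   sql_id = 'abcd1234wxyz9'   -- hypothetical sql_id
;
```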
The function dbms_xplan.display_cursor() is a “pipelined function”, which means that it can behave like a table if you apply the table() operator to it; and table() operators can appear in lateral joins in Oracle. This means we can take the previous query, put it into an inline view, and (laterally) join it to dbms_xplan.display_cursor() as follows:
select  plan_table_output -- (the column of the pipelined function)
from    (
        select  distinct sql_id, child_number
        from    v$sql_plan
        where   (operation = 'TABLE ACCESS' and options = 'FULL')
        or      (operation = 'INDEX'        and options = 'FAST FULL SCAN')
        ) v,
        table(dbms_xplan.display_cursor(v.sql_id, v.child_number))
;
Notice that the table() operator has to appear after the view that the pipelined function is referencing, which allows us to pass columns from the view into the pipelined function. This query gives us the dbms_xplan.display_cursor() output – with all the variations that allows – for every query that’s still in memory that has done a tablescan or index fast full scan.
You may find this a useful thing to run occasionally. But do be cautious – it WILL hammer the library cache for some time, especially if you have a large shared pool and a busy system.
Went to see Priest last night. It was only on in 3D at the cinema I went to. As usual, the 3D is pretty crappy and doesn’t add to the film at all. I’ve seen films that were shot in 3D (like Avatar) and those that were made 3D in post production (like Clash of the Titans). They all look crap to me and paying £3 extra for the added “benefit” of flinching a couple of times when things are thrown at you is not my idea of value. Enough of my anti-3D ranting…
The film does a lot of things right, but doesn’t pack enough punch to make me think it is good. Things I like include:
I really hope there is a sequel. They only need a bit of fine tuning to turn it from meh… to awesome. Having said that, there were only 3 people in the cinema including me and two of them really should have used a hotel room, so the chances of a sequel seem pretty slim…