Here’s an oddity that I ran into a little while ago while trying to prepare a sample trace file showing a particular locking pattern. It was something I’d done before, but trace files can change with different versions of Oracle, so I decided to use a copy of 11gR2 that happened to be handy at the time to check whether anything had changed since the previous (11gR1) release. I never managed to finish the test; here are the steps I got through:
I don’t have much time for a thorough blog post, so I’ll just paste in some example output of my asqlmon.sql script, which uses the ASH sql_plan_line columns to show where within your execution plan the response time has been spent. Why not just use Oracle’s own SQL Monitoring reports? Well, SQL Monitoring is meant for “long-running” queries that are not executed very frequently; in other words, you can’t use SQL Monitoring for drilling down into your frequently executed OLTP-style SQL. I’m copying my recent post to the Oracle-L mailing list here too:
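A minimal sketch of the underlying idea (this is not the actual asqlmon.sql, and `&sql_id` is a placeholder you would substitute): group ASH samples by execution plan line to see where a statement’s response time accumulates:

```sql
-- Minimal sketch of the ASH-by-plan-line idea (not the real asqlmon.sql).
-- Each ASH sample represents roughly one second of DB time.
SELECT sql_plan_line_id,
       sql_plan_operation,
       sql_plan_options,
       COUNT(*) AS ash_samples
FROM   v$active_session_history
WHERE  sql_id = '&sql_id'              -- substitute your SQL_ID here
AND    sample_time > SYSDATE - 1/24    -- last hour of samples
GROUP  BY sql_plan_line_id, sql_plan_operation, sql_plan_options
ORDER  BY ash_samples DESC;
```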
The main performance impact of the old GATHER_PLAN_STATISTICS / statistics_level = ALL instrumentation came from the expensive timing system calls (gettimeofday()) used to capture the A-Time of each row source.
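For reference, the instrumentation in question is the standard pattern of enabling row-source statistics with the hint (or statistics_level = ALL) and reading them back with DBMS_XPLAN; shown here against DUAL purely as an illustration:

```sql
-- Enable row-source statistics for this one statement via the hint...
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM dual;

-- ...then read back estimated vs. actual rows and A-Time per plan line.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```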
Oracle Database implements a family of STDDEV functions for computing the standard deviation from the mean. If you think of the mean as beginning to paint a picture of the underlying data, then standard deviation is another brush-stroke toward a fuller picture that will help you draw meaning from the data you're studying.
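As a quick illustration (the emp table and sal column here are hypothetical names, used only for the example), the members of the family differ mainly in how they treat the single-row case and which divisor they use:

```sql
-- Hypothetical emp table / sal column, for illustration only.
SELECT AVG(sal)         AS mean_sal,
       STDDEV(sal)      AS stddev_sal,        -- returns 0 for a single row
       STDDEV_SAMP(sal) AS sample_stddev,     -- returns NULL for a single row
       STDDEV_POP(sal)  AS population_stddev  -- divides by n rather than n - 1
FROM   emp;
```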
At AMIS we spend a lot of effort on keeping our people up to date with the latest knowledge needed to help our customers in the best way we can. Traditionally we also try to share that knowledge with customers and others, via social media or at conferences, learning from others at the same time while abroad. It is …
Were you thinking, “I’ve got nothing better to do this weekend than to download the latest version of VirtualBox and update the guest additions on all my VMs”? Well, your luck is in!
We recently received our third Exadata machine at Enkitec’s exalab. We now have a V2, an X2 and an X3 there, in addition to an ODA, a Big Data Appliance (which comes with a beer-holder built in!) and an Exalytics box! So you understand why Karl Arao is so excited about it :-)
This occasion demands that we hack the hell out of all this kit soon! So, let’s have another (super-secret) hacking session!
This time, let’s see how the Exadata Smart Flash Cache works, for both reads and writes! Note that we won’t cover Smart Flash Logging in this session (otherwise we’d end up spending half a day on it :)
With some friends from the Netherlands and Estonia at the Hard Eight BBQ at 688 Freeport Parkway in Coppell, Texas: (+1) for the food and another (+1) for the Texas experience. What a fun way to decompress after a brain-stuffing week at Hotsos!
Spring is a very active conference season for me. I might be going to half a dozen conferences in 3-4 months. That’s a lot of travel, but I look forward to all of them. The conference I’m probably looking forward to the most this year is IOUG COLLABORATE. It might be because:
…and with a blog post title like that who would bother to read on? Only those who find modern platforms interesting…
This is just a short, technically-light blog post to point out an oddity I noticed the other day.
For all I know this information may well be known to everyone else in the world, but it made me scratch my head, so I’ll blog it. Maybe it will help some wayward googler someday.
AWR Reports – Sockets, Cores, CPUs
I’m blogging about the Sockets/Cores/CPUs reported at the top of an Oracle AWR report.
Consider the following from a server based on a Sandy Bridge Xeon (an E5-2680, to be exact).
Note: These are AWR reports so I obfuscated some of the data such as hostname and instance name.
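For what it’s worth, the figures in that header line come from the instance’s OS statistics, which you can also query directly:

```sql
-- The AWR header's CPUs / Cores / Sockets figures come from V$OSSTAT.
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS');
```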