Oakies Blog Aggregator

Oracle Database 12.1.0.2.0 – Turning OFF the In-Memory Database option

So how to turn the option off/disable it… As a privileged database user: just don’t set the INMEMORY_SIZE parameter to a non-zero value… (the default...
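A quick check from SQL*Plus (a minimal sketch; the defaults shown are those cited in the posts that follow):

-- Check the In-Memory parameters (defaults per the posts: ENABLE / 0)
show parameter inmemory_query
show parameter inmemory_size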
class="readmore">Read More

Impugn My Character Over Technical Points–But You Should Probably Be Correct When You Do So. Oracle 12c In-Memory Feature Snare? You Be The Judge ‘Cause Here’s The Proof.

Executive Summary

This blog post offers proof that you can trigger In-Memory Column Store feature usage with the default INMEMORY_* parameter settings. These parameters are documented as the approach to ensure In-Memory functionality is not used inadvertently–or at least they are documented as the “enabling” parameters.

During the development of this study, Oracle’s Product Manager in charge of the In-Memory feature has cited Bug #19308780 as it relates to my findings.

Index of Related Posts

This is part 4 in a series: Part I, Part II, Part III, Part IV.

Blog Updates

Please note, blog updates are listed at the end of the article.

What Really Matters?

This is a post about enabling versus using the Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature, which is a part of the separately licensed Database In-Memory Option of 12c. While reading this, please be mindful that in this situation all that really matters is which actions on your part affect the internal tables that track feature usage.

Make Software, Not Enemies–And Certainly Not War

There is a huge kerfuffle regarding the separately licensed In-Memory Column Store feature in Oracle Database 12c Release 12.1.0.2–specifically how the feature is enabled and what triggers  usage of the feature.

I pointed out a) the fact that the feature is enabled by default and b) the feature is easily accidentally used. I did that in Part I and Part II in my series on the matter. In Part III I shared how the issue has led to industry journalists quoting–and then removing–said quotes. I’ve endured an ungodly amount of shameful backlash, even from some friends on the Oaktable Network list, as they asserted I was making a mole hill out of something that was a total lark (that was a euphemistic way of saying they all but accused me of misleading my readers). I even had friends suggesting this is a friendship-ending issue. Emotion and high technology mix like oil and water.

About the only thing that hasn’t happened is for anyone to apologize for being totally wrong in their blind-faith rooted feelings about this issue. What did he say? Please read on.

From the start I pointed out that the INMEMORY_QUERY feature is enabled by default–and that it is conceivable that someone could use it accidentally. The backlash from that was along the lines of how many parameters and what user actions are needed for that to be a reality. Maria Colgan–who is Oracle’s PM for the In-Memory Column Store feature–tweeted that I’m confusing people when announcing her blog post on the fact that In-Memory Column Store usage is controlled not by INMEMORY_QUERY but by INMEMORY_SIZE. Allow me to add special emphasis to this point. In a blog post on oracle.com, Oracle’s PM for this Oracle database feature explicitly states that INMEMORY_SIZE must be changed from the default to use the feature.

If I were to show you everyone else was wrong and I was right, would you think less of me? Please, don’t let it make you feel less of them. We’re just people trying to wade through the confusion.

The Truth On The Matter

Here is the truth and I’ll prove it in a screen shot to follow:

  1. INMEMORY_QUERY is enabled by default. With it at that default you can trigger feature usage–full stop.
  2. INMEMORY_SIZE is zero by default. Remember, this is the supposedly uber-powerful setting that precludes usage of the feature–not, in fact, the top-level-sounding INMEMORY_QUERY parameter. As such, this should be the parameter that prevents you from paying for usage of the feature.

In the following screenshot I’ll show that INMEMORY_QUERY is at the default setting of ENABLE  and INMEMORY_SIZE is at the default setting of zero. I prove first there is no prior feature usage. I then issue a CREATE TABLE statement specifying INMEMORY.  Remember, the feature-blocking INMEMORY_SIZE parameter is zero.  If  “they” are right I shouldn’t be able to trigger In-Memory Column Store feature usage, right? Observe–or better yet, try this in your own lab:

[Screenshot: proof of In-Memory Column Store feature usage with the default parameter settings]
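Here is a minimal sketch of the test for your own lab (the table is hypothetical, and the tracking-view refresh call is the commonly cited Morgan’s Library technique rather than anything shown on screen):

-- Confirm the defaults
show parameter inmemory_query          -- ENABLE (default)
show parameter inmemory_size           -- 0 (default)

-- Confirm there is no prior feature usage
select name, detected_usages from dba_feature_usage_statistics where name like 'In-%';

-- Create a table specifying INMEMORY (hypothetical table)
create table t1 (c1 number) inmemory;

-- Refresh the tracking data (assumed internal call) and check again
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)
select name, detected_usages from dba_feature_usage_statistics where name like 'In-%';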

So ENABLED Means ENABLED? Really? Imagine That.

So I proved my point, which is that any instance with the default initialization parameters can trigger feature usage. I also proved that the words in the following three screenshots are factually incorrect:

[Screenshot: tweet calling me out]

Screenshot of blog post on Oracle.com:

[Screenshot: Oracle.com blog post assertion regarding INMEMORY_SIZE]

Screenshot of email to Oracle-L Email list:

 

[Screenshot: email to the Oracle-L list]

 

Summary

I didn’t want to make a mountain out of this mole hill. It’s just a bug. I don’t expect apologies. That would be too human–almost as human as being completely wrong while wrongly clinging to one’s wrongness because others are equally, well, wrong on the matter.

 

BLOG UPDATE 2014.07.31: Click here to view an article on The Register regarding Oracle Database In-Memory feature usage.

BLOG UPDATE 2014.07.30: Oracle’s Maria Colgan has a comment thread on her blog on the In-Memory Column Store feature. In the thread a reader reports precisely the same bug behavior you will see in my proof below. Maria’s comment is that feature usage is tracked in spite of the supposedly disabling parameter INMEMORY_SIZE being set to its default value. While this agrees with what I already knew about this feature, it is in my opinion not sufficient to speak of a bug of such consequence without citing the bug number. Furthermore, such a bug must be visible to users with support contracts. Click here for a screenshot of the Oracle blog. In case Oracle changes their mind on such an apparently sensitive topic, I uploaded the blog to the Wayback Machine here.

BLOG UPDATE 2014.07.29: Oracle’s Maria Colgan issued a tweet stating “What u found in you 3rd blog is a bug [...] Bug 19308780.”  Click here for a screenshot of the tweet. Also, click here for a Wayback Machine (web.archive.org) copy of the tweet.

 

 

Sundry References

 Print out of Maria’s post on Oracle.com and link to same: Getting started with Oracle Database In-Memory Part I

Franck Pachot 2014.07.23 findings reported here:  Tweet , screenshot of tweet.

 

 

 

Filed under: oracle

Oracle Database 12c In-Memory Feature. Enabled, Used or Confused? Don’t Be.

Enabled By Default. Not Usable By Default.

Series Links: Part I, Part II.

It was my intention to only write 2 installments on my short series about Oracle Database 12c In-Memory Column Store feature usage. My hopes were quickly dashed when the following developments occurred:

1. A quote from an Oracle spokesman cited on informationweek.com was pulled because (I assume) it corroborated my assertion that the feature is enabled by default. It is, indeed, enabled by default.

Citations: Tweet about the quote, link to the July 26, 2014 version of the Informationweek.com article showing the Oracle spokesman quote: Informationweek.com 26 July 2014.

The July 26, 2014 version of the Informationweek.com article quoted an Oracle spokesman as having said the following:

Yes, Oracle Database In-Memory is an option and it is on by default, as have been all new options since Oracle Database 11g

2. An email from an Oracle Product Manager appeared on the oracle-l email list and stated the following:

So, it is explicitly NOT TRUE that Database In-Memory is enabled by default – and it’s (IMHO) irresponsible (at best) to suggest otherwise

Citation: link to the oracle-l list server copy of the email, screenshot of the email.

 

Features or Options, Enabled or Used

I stated in Part I that I think the In-Memory Column Store feature is a part of a hugely-important release.  But, since the topic is so utterly confusing I have to make some points.

It turns out that the two Oracle statements I mention above cannot both be correct. Please allow me to explain. Yes, the Oracle spokesman spoke the truth originally to Informationweek.com as reported by Doug Henschen. The truth that was spoken is, yes indeed, the In-Memory Column Store feature/option is enabled by default. Now don’t be confused. There is a difference between enabled, usable, and in-use.

In Part II of the series I showed an example of the things that need to be done to put the feature into use–and remember, you’re not charged for it until it is used. I believe that post made it quite clear that there is a difference between enabled and in-use. What does the Oracle documentation say about In-Memory Column Store feature/option default settings? It says it is enabled by default. Full stop. Citation: Top-level initialization parameter enabled by default. I’ve put a screenshot of that documentation here for education’s sake:

[Screenshot: documentation showing INMEMORY_QUERY enabled by default]

 

This citation of the documentation means the Oracle spokesman was correct.  The feature is enabled by default.

The problem is with the mixing of the words enabled and “use” in the documentation.

Please consider the following screenshot of a session where the top-level INMEMORY_QUERY parameter is set to the default (ENABLE) and the INMEMORY_SIZE parameter has been set to grant some RAM to the In-Memory Column Store feature. In the screenshot you’ll see that I didn’t trigger usage of the feature just by enabling it. I did, however, show that you don’t have to “use” the feature to trigger “usage,” so please visit Part II on that matter.

[Screenshot: session with INMEMORY_QUERY at the default and INMEMORY_SIZE set; no feature usage triggered by enabling alone]
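Here is a minimal sketch of what the screenshot walks through (the 1G value is an illustrative assumption, not the value from the screenshot):

-- Grant some RAM to the column store, leaving INMEMORY_QUERY at its default
alter system set inmemory_size = 1G scope=spfile;
-- ...bounce the instance, then:
show parameter inmemory_query     -- ENABLE (default)
show parameter inmemory_size      -- 1G
-- Enabling alone records no usage
select name, detected_usages from dba_feature_usage_statistics where name like 'In-%';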

So here we sit with wars over words.

Oracle’s Maria Colgan just posted a helpful blog (or, practically, a documentation addendum) going over the initialization parameters needed to fully, really-truly enable the feature–or more correctly, how to go beyond enabled to usable. I’ve shown that Oracle’s spokesman was correct in stating the feature is enabled by default (INMEMORY_QUERY is enabled by default). Maria and others showed that you have to set 2 parameters to really, truly, gosh-darnit use the feature that is clearly ENABLE(d) by default. I showed you that enabling the feature doesn’t mean you use the feature (as per the DBA_FEATURE_USAGE_STATISTICS view). I also showed you in Part II how one can easily, accidentally use the feature. And using the feature is chargeable, and that’s why I assert INMEMORY_QUERY should ship with the default value of DISABLE. It is less confusing and it maps well to prior art such as the handling of Real Application Clusters.

Trying To Get To A Summary

So how does one summarize all of this? I see it as quite simple. Oracle could have shipped Oracle Database 12c 12.1.0.2 with the sole, top-level enabling parameter disabled (e.g., INMEMORY_QUERY=DISABLE). Doing so would be crystal clear because it nearly maps to a trite sentence–the feature is DISABLE(d). Instead we have other involved parameters that are not top level, adding to the confusion. And confusion indeed, since the Oracle documentation insinuates INMEMORY_SIZE is treated differently when Automatic Memory Management is in play:

[Screenshot: documentation on INMEMORY_SIZE under Automatic Memory Management]

Prior Art

And what is that prior art on the matter? Well, consider Oracle’s (presumably) most profitable separately-licensed feature of all time–Real Application Clusters. How does Oracle treat this desirable feature? It treats it with a crystal-clear top-level, nuance-free disabled state:

[Screenshot: CLUSTER_DATABASE default of FALSE]
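If you want to see that contrast for yourself, here is a quick sketch (the expected values, per this series, are FALSE, ENABLE and 0):

-- Compare the RAC switch with the In-Memory parameters
select name, value, isdefault
from   v$parameter
where  name in ('cluster_database', 'inmemory_query', 'inmemory_size');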

So, in summary, the In-Memory feature is not disabled by default. It happens to be that the capacity-sizing parameter INMEMORY_SIZE is set to zero so the feature is unusable. However, setting both INMEMORY_QUERY and INMEMORY_SIZE does not constitute usage of the feature.

Confused? I’m not.

 

Filed under: oracle

To subset or not to subset

There was a problem at a customer in application development: using full copies of production for developers and QA was causing excessive storage usage. They wanted to reduce costs, so they decided to use subsets of production for development and QA.
  • Data growing, storage costs too high, decided to roll out subsetting
  • App teams and IT Ops teams had to coordinate and manage the complexity of the  shift to subsets in dev/test
  • Scripts had to be written to extract the correct and coherent data, such as correct date ranges and respect referential integrity
  • It’s difficult to take 50% of the data and keep 100% of the skew; a naive subset gives you 50% of the data with only 50% of the skew
  • Scripts were constantly breaking as production data evolved requiring more work on the subsetting scripts
  • QA teams had to rewrite automated test scripts to run correctly on subsets
  • Time lost in ADLC, SDLC to enable subsets to work (converting CapEx into higher OpEx) put pressure on release schedules
  • Errors were caught late in UAT, performance, and integration testing, creating “integration or testing hell” at the end of development cycles
  • Major incidents occurring post deployment, forcing more detailed tracking of root cause analysis (RCA)
  • 20-40% of production bugs causing downtime were due to non-representative data sets and volumes.
Moral of the story: if you roll out subsetting, it’s worth holding the teams accountable and tracking the total cost and impact across teams and release cycles. What is the real cost impact of going to subsetting? How much extra time goes into building and maintaining the subsets and, more importantly, what is the cost impact of letting bugs slip into production because of the subsets?
A robust, efficient and cost-saving alternative is database virtualization. With database virtualization, database copies take up almost no space, can be made in minutes, and all the overhead and complexities listed above go away. In addition, database virtualization will reduce CapEx/OpEx in many other areas, such as:
  • Provisioning  operational reporting environments
  • Providing  controlled backup/restore  for DBAs
  • Full scale test environments.
And subsets do not provide the data control features that database virtualization provides to accelerate application projects (including analytics, MDM, ADLC/SDLC, etc.). Our customers repeatedly see 50% acceleration on project timelines and cost–savings which generally dwarf the CapEx/OpEx storage expense lines–due to the features we make available in our virtual environments:
  • Fast data refresh
  • Integration
  • Branching (split a copy of dev database off for use in QA in minutes)
  • Automated secure branches (masked data for dev)
  • Bookmarks for version control or compliance preservation
  • Share (pass errors + data environment from test to dev, if QA finds a bug, they can pass a copy of db back to dev for investigation)
  • Reset/rollback (recover to pre-test state or pre-error state)
  • Parallelize all steps: have multiple QA databases to run QA suites in parallel. Give all developers their own copy of the database so they can develop without impacting other developers.
[Image: virtual databases (VDBs)]

Remote Cloning of PDBs and Non-CDBs (12.1.0.2)

I mentioned in a previous post that I would be revisiting some of my existing multitenant articles to include some of the features introduced in the 12.1.0.2 patch. Here’s one of them.

Not only have Oracle fixed the bug in 12.1.0.1 that prevented the remote cloning, but they’ve also added the ability to clone directly from non-CDB style instances, giving you another option for migrating from non-CDBs to PDBs. Pretty darn cool if you ask me! :)
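For flavour, here is a minimal sketch of a remote clone (the names and the database link are hypothetical; the full article covers the prerequisites, such as having the source open read-only):

-- From the destination CDB, pointing a database link at the source
CREATE DATABASE LINK clone_link CONNECT TO remote_user IDENTIFIED BY remote_password USING 'source_db';

CREATE PLUGGABLE DATABASE pdb2 FROM pdb1@clone_link;

ALTER PLUGGABLE DATABASE pdb2 OPEN;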

Some more stuff will be amended over the coming days. If the changes result in some major rewrites I’ll probably blog about them. If not, I’ll just slip them into the articles and make a reference to the fact the specific feature was introduced in 12.1.0.2…

Cheers

Tim…


Remote Cloning of PDBs and Non-CDBs (12.1.0.2) was first posted on July 28, 2014 at 3:40 pm.

Analogy

So 12.1.0.2 is out with a number of interesting new features, of which the most noisily touted is the “in-memory columnar storage” feature. As ever the key to making best use of a feature is to have an intuitive grasp of what it gives you, and it’s often the case that a good analogy helps you reach that level of understanding; so here’s the first thought I had about the feature during one of the briefing days run by Maria Colgan.

“In-memory columnar storage gives you bitmap indexes on OLTP systems without the usual disastrous locking side effects.”

Obviously the analogy isn’t perfect … but I think it’s very close: for each column stored you use a compression technique to pack the values for a large number of rows into a very small space, and for each stored row you can derive the rowid by arithmetic. In highly concurrent OLTP systems there’s still room for some contention as the session journals are applied to the globally stored compressed columns (but then, private redo introduces some effects of that sort anyway); and the “indexes” have to be created dynamically as tables are initially accessed (but that’s a startup cost, its timing can be controlled, and it’s basically limited to a tablescan).

Whatever the technical hand-waving it introduces – thinking of the in-memory thing as enabling real-time bitmaps ought to help you think of ways to make good use of the feature.

 

 

Oracle Licensing : It’s Your Responsibility to Comply

This little moan was inspired by some posts by Kevin Closson.

Kevin is awesome (I love you Kevin!), but I don’t happen to agree with him in this specific case*. Having said that, I do understand where he is coming from.

Earlier in the year my current employer had an Oracle license audit, which I wrote about here. I must admit I was quite nervous going into it. Oracle licensing is a tricky business and Oracle tend to ship a lot of functionality out-of-the-box, so it is easy for people to use things “by accident”. Having said that, Oracle do provide tools (DBA_FEATURE_USAGE_STATISTICS) and documentation (Feature Availability by Edition) to help you keep track of things. You can also speak to Oracle License Management Services.

So now to the point of this moan and the title of this post. In my opinion, a DBA should have a basic awareness of licensing. I’m not talking about a profound knowledge of the licensing manual. I just mean a rough idea about database editions and options and how to find out what options you are using.
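As a rough illustration (a sketch, not an official audit query), this is the sort of check I mean against DBA_FEATURE_USAGE_STATISTICS:

-- Which features/options has this database ever recorded as used?
select name, version, detected_usages, currently_used, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;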

I think there are two distinct issues here:

  1. Some database options have an on/off switch and are turned off by default (like the In-Memory Column Store). If your DBA turns one of these on “by accident”, they should be fired. They are an idiot. Simple as that!
  2. Other database options are turned on by default, or even mandatory, but you are not breaking your licensing unless you use them. It’s very easy for a DBA or developer to copy something from a website or a MOS note and accidentally breach the license agreement. I have a lot of sympathy for people who fall into this trap. Anyone who was around in the early 10g days, when AWR, ADDM and ASH were first introduced, will remember the massive stink that caused. Click on the wrong page in EM or query the wrong DBA view and you’re in trouble. This was just plain wrong!

Anyway, enough of this moaning. I would be interested to know other people’s opinions though!

Cheers

Tim…

* So it turns out Kevin was correct in so far as the feature does get tracked as being used, but this is down to a bug. Presumably when that bug gets patched it will flush this usage and everything will be back to normal.


Oracle Licensing : It’s Your Responsibility to Comply was first posted on July 26, 2014 at 2:38 pm.

Beauty of command line - OEM 12c

Why should all software have a command line and an automation plugin? The answer is simple: if you have to repeat a number of operations for different targets, scripts can help you save your precious time.

I really enjoy the fact that Oracle added a command line to the Oracle Enterprise Manager line, and now you can script a lot of boring tasks, like adding a new administrator to the list of users who can access Named Credentials.

To add a new admin (przepiorom) it's enough to run the following script:
 

add_privs.sh przepiorom

This is a first draft of the script (no error handling, but it does its job):

#!/bin/bash

# New administrator name is passed as the first argument
NEW_ADMIN=$1

# Use the current PID to build a unique temp file name
TPID=$$

# Dump the list of named credential names (skipping the header line) to a temp file
emcli list_named_credentials | awk '{ print $1; }' | grep -v Credential > /tmp/priv_$TPID

# Grant FULL_CREDENTIAL on every named credential to the new administrator
while read LINE ; do
  echo $LINE
  emcli grant_privs -name="${NEW_ADMIN}" -privilege="FULL_CREDENTIAL;CRED_NAME=${LINE}:CRED_OWNER=sysman"
done < /tmp/priv_$TPID

rm /tmp/priv_$TPID



The next example is another script, which refreshes WebLogic domain components.
When a new version of an application is deployed, the previous one is still registered as a target and you will see it as down in your OEM.



There is a domain refresh command in the OEM menu, but if you have many systems, going through all of them is not what you want. Using the command line and a configuration file you can be done with one line.

emcli login -username=sysman -password=xxxxxxx -force
emcli refresh_wls -input_file=domain_refresh_file:/home/oracle/bin/domain_refresh_file.csv -debug


Content of domain_refresh_file.csv looks like this:

/xxx_soa_mot_domain_soa/soa,R
/xxx_soa_mot_domain_soa/soa,E


There is one line per target, split into two parts.

The first part of the line is a target name and domain name, e.g. /xxx_soa_mot_domain_soa/soa. The second part is the operation:
R - remove a target which doesn't exist in the domain anymore
E - enable a refresh of the domain (i.e. add monitoring targets)


regards,
Marcin

Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part II.

Preface

In this post you’ll see that I provide a scenario of accidental paid-feature “use.” The key elements of the scenario are: 1) I enabled the feature (by “accident”) but 2) I didn’t actually use the feature because I neither created nor altered any tables.

In Part I of this series I aimed to bring to people’s attention what I see as a significant variation from the norm when it comes to Oracle licensed-option usage triggers and how to prevent them from being triggered. Oracle Database Enterprise Edition supports several separately licensed options such as Real Application Clusters, Partitioning, and so on. A feature like Real Application Clusters is very expensive, but if “accidental usage” of this feature is a worry on an administrator’s mind there is a simple remedy: unlink it. If the bits aren’t in the executable you’re safe. Is that a convoluted procedure? No. An administrator simply executes make -f ins_rdbms.mk rac_off and then relinks the Oracle executable. Done.
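Here is a minimal sketch of that procedure (the usual make targets under $ORACLE_HOME/rdbms/lib; shut down any instance using the home first):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off     # take the RAC option out of the build
make -f ins_rdbms.mk ioracle     # relink the oracle executable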

What about other separately licensed options like Partitioning? As I learned from Paul Bullen, one can use the Oracle-supplied chopt command to remove any chance of using Partitioning if, in fact, one does not want to use Partitioning. I thought chopt might be the solution to the issue of possible, accidental usage of the In-Memory Column Store feature/option. However, I found that chopt, as of this point, does not offer the ability to neutralize the feature, as per the following screenshot.

[Screenshot: chopt options, with no entry for the In-Memory feature]
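For an option chopt does cover, the usage is as simple as this (a sketch, with Partitioning as the example; databases using the home should be shut down first):

chopt disable partitioning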

Trivial Pursuit of the Ignoramus or Mountainous Mole Hill?

There is yet no way I know of to prevent accidental use of the In-Memory Column Store feature/option. Am I just making a mountain out of a mole hill? I’ll let you be the judge. And if you side with folks that do feel this is a mountainous-mole hill you’d be in really good company.

Lest folks think that we Oaktable Network Members are a blind, mutual admiration society, allow me to share the rather sizzling feedback I got for raising awareness to this aspect of Oracle Database 12c:

[Screenshot: Oaktable Network email feedback calling the claim “BS”]

Geez!

No, I didn’t just want to dismiss this feedback. Instead I pushed the belt-sander off of my face and read the words a couple of times. The author of this email asserted I’m conveying misinformation (aka “BS”) and, to fortify that position, it was pointed out that one must:

  1. Set a database (instance initialization) parameter.
  2. Bounce the instance.
  3. Alter any object to use the feature. I’ll interpret that as a DDL action (e.g., ALTER TABLE, CREATE TABLE).

Even before I read this email I knew these assertions were false. We all make mistakes–this I know! I should point out that, unlike every release of Oracle from 5.1.17 to 11gR2, I was not invited to participate in the Beta for this feature. I think a lot of Oaktable Network members were in the program–perhaps even the author of the above email snippet–but I don’t know that for certain. Had I encountered this during a Beta test I would have raised it to the Beta manager as an issue and maybe, just maybe, the feature behavior might have changed before first customer ship. Why am I blabbering on about the Beta program? Because, evidently, even Oaktable Network members with pre-release experience with this feature do not know what I’m about to show in the remainder of this post.

What Is An Accident?

Better yet, what is an accident and how full of “BS” must one be to fall prey? Maybe the remainder of the post will answer that rhetorical question. Whether or not  it does, in fact, answer the question I’ll be done with this blog series and move on to the exciting work of performance characterization of this new, incredibly important feature.

Anatomy of a “Stupid Accident.”

Consider a scenario. Let’s say a DBA likes to use the CREATE DATABASE statement to create a database. Imagine that!  Let’s pretend for a moment that DBAs can be very busy and operate in chaotic conditions. In the fog of this chaos, a DBA could, conceivably, pull the wrong database instance initialization file (e.g., init.ora or SPFILE) and use it when creating a database. Let’s pretend for a moment I was that busy, overworked DBA and I’ll show you what happens in the following:

  1. I executed sqlplus from the bash command prompt.
  2. I directed sqlplus to execute a SQL script called cr_db.sql. Many will recognize this as the simple little create script I supply with SLOB.
  3. The cr_db.sql script uses a local initialization parameter file called create.ora
  4. sqlplus finished creating the database. NOTE: this procedure does not create even a single user table.
  5. After the database was created I connected to the instance and forced the feature usage tracking views to be updated (thanks to Morgan’s Library for that know-how as well…remember, I’m a database platform engineer not a DBA so I learn all the time in that space).
  6. I executed a SQL script to report feature usage of only those features that match a predicate such as 'In-%'.
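For readers who want to reproduce steps 5 and 6, here is a minimal sketch (the refresh call is the commonly cited Morgan’s Library technique–an assumption, since the post shows the scripts only as screenshots):

-- Step 5: force the feature usage tracking data to refresh (assumed internal call)
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

-- Step 6: report In-Memory related feature usage
select name, detected_usages, currently_used
from   dba_feature_usage_statistics
where  name like 'In-%';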

 

[Screenshot: feature usage reported immediately after database creation]

This screen shot shows that the list of three asserted must-happen steps (provided to me by a fellow Oaktable Network member) was not, in fact, the required recipe of doom. The output of the features.sql script proves that I didn’t need to create even a single user table to trigger the feature.

The following screen shot shows what the cr_db.sql script does:

[Screenshot: the cr_db.sql script]

The following screenshot shows the scripts I used to update the feature usage tracking views and to report against same:

[Screenshot: the scripts used to update the feature usage tracking views and report against them]

The “Solution” To The “Puzzle”

Stepping on a landmine doesn’t just happen. You have to sort of be on your feet and walking around for that to happen. In the same vein, triggering usage of the separately licensed Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature/option required me to be “on my feet and walking around” the landmine–as it were. Did I have to jump through hoops and be a raging, bumbling idiot to accidentally trigger usage of this feature? No. Or, indeed, did I issue a single CREATE TABLE or ALTER TABLE DDL statement? No. What was my transgression? I simply grabbed the wrong database initialization parameter file from my repository–in the age-old, I’m-only-human sort of way these things can happen.

To err to such a degree would certainly not be human, would it?

The following screenshot shows the parameter file I used to prove:

  1. You do not need to alter parameters and bounce an instance to trigger this feature usage in spite of BS-asserting feedback from experts.
  2. You don’t even have to create a single application table to trigger this feature usage.

[Screenshot: the create.ora parameter file used for the test]

Summary

This blog thread has made me feel a little like David Litchfield must surely have felt for challenging the Oracle9i-era claims of how Oracle Database was impenetrable by database security breaches. We all know how erroneous those claims were. Unbreakable, can’t break it, can’t break in?

Folks, I know we all have our different reasons to be fans of Oracle technology–and, indeed, I am a fan. However, I’m not convinced that unconditional love of a supposed omnipotent and omniscient god-like idol is all that healthy for the IT ecosystem. So, for that reason alone I have presented these findings. I hope it makes at least a couple of DBAs aware of how this licensed feature differs from other high-dollar features like Real Application Clusters in exactly what it takes to “use” the feature–and, moreover, how to prevent stepping on a landmine as it were.

 

…and now, I’m done with this series.

 

 

Filed under: oracle

Finding the blocking SQL in a lock wait

#555555;">One of my pet peeves on Oracle is the inability to find out what SQL took out a lock that another user is waiting. It’s easy to find the waiting user and their SQL with v$session by looking at v$session.event where the event is an “enqueue” (v8 and v9) or “enq: TX – row lock contention” and then looking up their SQL via the v$session.sql_hash_value which joins to v$sql.hash_value for the v$sql.sql_text.

#555555;">So far so good and easy.
#555555;">Second step of finding the blocker is really easy starting in 10g because Oracle has a new field v$session.blocking_session which can be joined back to v$session.sid to find information on that user.
#555555;">The rub is that there is no way to find the SQL text that the blocking session ran that took out the original blocking lock.
#555555;">For the 2 day course I teach on Active Session History (ASH) and Oracle wait events, I wanted to show students how to actually get the blocking SQL text if they really had to.
I went as far as looking at log miner to try and get the blocking SQL text and this works sometimes and sometimes it doesn’t. At that point I gave up, knowing the next step was dumping the redo logs which was more research than I felt like doing at the time.Luckily someone has picked up the torch – Doug Burns!
On the Oaktable email list I shared my research with Doug and Doug took it even farther and posted it on his blog:
#555555;">
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1477-Diagnosing-Locking-Problems-using-ASH-Part-1.html">Diagnosing Locking Problems using ASH – Part 1#555555;"> – try OEM 10g on Lock Problem
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1478-Diagnosing-Locking-Problems-using-ASH-Part-2.html">Diagnosing Locking Problems using ASH – Part 2#555555;"> – look at raw ASH data, missing blocker SQL
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1480-Diagnosing-Locking-Problems-using-ASH-Part-3.html">Diagnosing Locking Problems using ASH – Part 3#555555;"> – look at raw ASH data, find the blocker SQL
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1481-Diagnosing-Locking-Problems-using-ASH-Part-4.html">Diagnosing Locking Problems using ASH – Part 4#555555;"> – misleading data from ASH
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1484-Diagnosing-Locking-Problems-using-ASH-Part-5.html">Diagnosing Locking Problems using ASH – Part 5#555555;"> – logminer undependable
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1487-Diagnosing-Locking-Problems-using-ASH-Part-6.html">Diagnosing Locking Problems using ASH – Part 6#555555;"> – overview of all the parts (table of contents)
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1488-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-7.html">Diagnosing Locking Problems using ASH/LogMiner – Part 7#555555;"> – Redo log dump
  • #2970a6;" href="http://oracledoug.com/serendipity/index.php?/archives/1491-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-8.html">Diagnosing Locking Problems using ASH/LogMiner – Part 8#555555;"> – Redo log dump lacks “select for update”
  • #2970a6;" href="http://www.blogger.com/Diagnosing%20Locking%20Problems%20using%20ASH/LogMiner%20%E2%80%93%20Part%209">Diagnosing Locking Problems using ASH/LogMiner – Part 9#555555;"> – Redo log dump – search for traces of “select for update”
  • Final Post  (thanks to JL for pointing out I’d left this off)
#555555;">
#555555;">Long story short, the best way to try and see what changed (when there was a change and not a “select for update”) to cause the lock is to use flashback information. For example if or contention table was TEST_TAB and our field that we knew was modified “VAL1″ then we could try to find what it was changed from:

Session 1

update test_tab set val1='aa' where id=1;

Session 2

update test_tab set val1='aaa' where id=1;

Blocking info from ASH where wait is enq: TX – row lock contention

select
      distinct
      lpad(ltrim(to_char(p2,'XXXXXX')),6,'0')||'00'||
      ltrim(to_char(mod(p3,256),'XX'))||
      ltrim(to_char(trunc(p3/256),'XX'))||'0000'
      block_xid,
      to_char(p2,'XXXXXXXX') p2hex,
      to_char(p3,'XXXXXXXX') p3hex,
      trunc(p2/65536) usn,
      mod(p2,65536) slot,
      p3 sqn, xid wait_xid
from v$active_session_history
where event like 'enq: T%'
and sample_time > sysdate - &v_minutes/(60*24)
;

BLOCK_XID	      P2HEX     P3HEX	    USN         SLOT      SQN  WAIT_XID
----------------  --------- --------- ---------- ---------- ---------- ----------------
0A0001007264000       A0001      6472	      10          1      25714

Data from flashback, after session 1 commits (before the commit there is no data returned)

SELECT        VERSIONS_XID
       ,      VERSIONS_STARTTIME
       ,      VERSIONS_ENDTIME
       ,      VERSIONS_STARTSCN
       ,      VERSIONS_ENDSCN
       ,      VERSIONS_OPERATION
       ,      id
       ,      val1
       FROM   TEST_TAB 
              VERSIONS BETWEEN
              TIMESTAMP MINVALUE AND MAXVALUE
     where VERSIONS_XID=HEXTORAW('0A0001007264000')
      ORDER  BY  VERSIONS_STARTTIME
/
VERSIONS_XID	 VERSIONS_STARTTIME    ENDTIME STARTSCN  ENDSCN  V          VAL1
---------------- --------------------- ------- -------- -------  - --------------
0A00010072640000 15-OCT-13 06.46.30 PM         17042888	         U            aa


Now that’s not the blocking SQL but at least you can see what the value of the field was that the blocker changed it to, so you can guess to some degree what the actual SQL was. Not great, but better than nothing.