
Oakies Blog Aggregator

Video : XMLTABLE : Convert XML into Rows and Columns using SQL

Here’s an Easter present for everyone out there.
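The video itself isn’t embedded in this feed, but as a taste of the technique it covers, a query along these lines (the inline XML and column names here are illustrative, not taken from the video) turns an XML document into rows and columns:

SQL> select t.id, t.name
  2  from xmltable('/emps/emp'
  3         passing xmltype('<emps><emp><id>1</id><name>Scott</name></emp><emp><id>2</id><name>Kim</name></emp></emps>')
  4         columns id   number       path 'id',
  5                 name varchar2(10) path 'name') t;

        ID NAME
---------- ----------
         1 Scott
         2 Kim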

GNW01: In-Memory Processing for Databases

Hi, it took a bit longer than I had planned, but here’s the first Gluent New World webinar recording!

You can also subscribe to our new Vimeo channel here – I will announce the next event with another great speaker soon ;-)

A few comments:

  • Slides are here
  • I’ll figure out a good way to deal with offline follow-up Q&A later on, after we’ve done a few of these events

If you like this stuff, please share it too – let’s make this series totally awesome!

 

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

#Oracle University Expert Summit Berlin 2016

Join us in Berlin, 18th – 20th April. The event will take place at the Adlon Hotel with Jonathan Lewis, Pete Finnigan, Christian Antognini, Javier de la Torre Medina and myself as speakers. We have over 70 enrollments already, so take action to secure your seat in time!

Expert Summit Berlin 2016

My topic will be Rolling Upgrade from 11g to 12c, with a special focus on doing it with Transient Logical Standby. In a live demonstration I will start with a Primary and a Physical Standby Database both running 11.2.0.4 in Maximum Availability protection mode using the Data Guard Broker. This is likely one of the most common setups today. We will then see how to upgrade this configuration to 12.1.0.2 with minimum downtime.
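As a very rough sketch of the core moves in a transient logical standby rolling upgrade (the restore point name is mine, and this is nowhere near the full procedure, which is usually driven by Oracle’s physru script):

-- On the physical standby: take a guaranteed restore point, then
-- convert it into a (transient) logical standby:
SQL> create restore point pre_upgrade guarantee flashback database;
SQL> alter database recover to logical standby keep identity;

-- Upgrade the logical standby to 12.1.0.2, let SQL Apply catch up, and
-- switch over so the upgraded database becomes the primary. Then flash
-- the old primary back to the restore point, convert it to a physical
-- standby of the new primary, and redo apply brings it to 12c as well.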

When I did this seminar before, during another Expert Summit in Dubai, I was still using 11.2.0.1 as the initial release, then upgrading it to 11.2.0.3. It took some time and effort to update my demo environment and the lessons to cover more recent, and now more relevant, versions. Hope to see you there :-)

WebLogic Stuff

I’ve written a bunch of stuff about WebLogic over the last few years, but it’s kind-of scattered throughout the site. When I was gearing up for my session at the Middleware SIG recently I figured it was about time I collected it all together, so I added a new WebLogic page to the site. It’s on the menu and the quick links.

Much the same as the SQL and PL/SQL pages, it really is just a bunch of links to other stuff. I tend to put my articles in version-specific locations, because it suits me, but I realise it’s not ideal for everyone. I think this gives me the best of both worlds.

I’m always a little nervous about bringing attention to articles on certain subjects on my website. If you’ve seen me present, you’ll know I’m quick to point out I’m not “the WebLogic guy”. It’s something I’m involved in, but I would never dream of making out I’m the man. If WebLogic is your thing, there are better people to follow than me!

Having said that, the website has always been me writing about the stuff I’m doing, and that’s the way it’s going to stay. If it helps you, that’s great. If not, I’m sorry, but there are lots of other websites to read.

Are your SQL Plus scripts going to ‘ell ?

Excuse the hideous pun in the title.  All will become clear shortly.

Often we use SQL Plus to run scripts against our database on a schedule.  The “problem” is, they need to connect to that database.

Here’s a simple script I want to run:

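Something along these lines, say (the script body is just an illustrative stand-in):

-- myscript.sql
select sysdate from dual;
exit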

and here’s my very sophisticated :-) batch file (which could just as easily be a Unix shell script) to run it:

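For example (assuming a wallet-backed TNS alias called mydb, which is my invention):

rem run_script.bat
sqlplus /@mydb @myscript.sql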

In my case, I’m planning on using a wallet so I don’t reveal the password (see here for details).
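For reference, the wallet setup looks roughly like this (the paths and the mydb alias are mine):

mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential mydb scott

# sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = FILE)
                   (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE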

But what if my wallet is wrong or missing?  Or if I’m explicitly coding a username/password and either is incorrect?

Then this happens when I run my batch file:

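Roughly this (a reconstruction; with a broken wallet the ORA- error may differ from the ORA-01017 shown here). SQL Plus reports the error and then just re-prompts for credentials:

ERROR:
ORA-01017: invalid username/password; logon denied

Enter user-name: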

Yes my connection failed…but notice that the script did not complete. 

It…..just….sat….there.

That’s no good…from an observer’s perspective, it is still “running”.

So here’s a quick tip.  Whenever you launch SQL Plus from within a shell or script, always include the -L parameter (“minus elle”):

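For example (again a reconstruction rather than the original screenshot):

sqlplus -L /@mydb @myscript.sql

ERROR:
ORA-01017: invalid username/password; logon denied

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus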

If you can’t connect, SQL Plus will exit and at least you’ll get some feedback that all is not OK.

So use minus elle, to stop your scripts from going to hell :-)

Up to date text indexes

If you really need your text indexes to be “super dooper” up to date, that is, always in sync with your data, you can set the SYNC level to be ON COMMIT.  For example,


SQL> create table t1 (
  2    x int,
  3    y varchar2(1000) )
  4  ;

Table created.

SQL>
SQL> create index t1_iz on T1  ( y )
  2  indextype is ctxsys.context
  3  parameters ('sync (on commit)');

Index created.

SQL>
SQL> insert into t1 values ( 1, 'The quick brown fox jumped over the lazy dog');

1 row created.

SQL> insert into t1 values ( 2, 'Pack my box with five dozen liquor jugs');

1 row created.

SQL> insert into t1 values ( 3, 'Several fabulous dixieland jazz groups played with quick tempo');

1 row created.

SQL> insert into t1 values ( 4, 'Back in my quaint garden, jaunty zinnias vie with flaunting phlox');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from t1
  2  where contains(y,'quick') > 0 ;

         X
----------
Y
----------------------------------------------------------------------------------------------------
         1
The quick brown fox jumped over the lazy dog

         3
Several fabulous dixieland jazz groups played with quick tempo

But don’t forget: there is no such thing as a free lunch. As has always been the case, the more you can index “in batch”, the less fragmentation you get in your text index, and obviously, if you’re planning on doing a stack of activity at commit time, then your commit performance will be impacted. Let’s compare the inserts above, with and without a text index in place.


--
-- With index
--
SQL> insert into t1 values ( 1, 'The quick brown fox jumped over the lazy dog');

1 row created.

SQL> insert into t1 values ( 2, 'Pack my box with five dozen liquor jugs');

1 row created.

SQL> insert into t1 values ( 3, 'Several fabulous dixieland jazz groups played with quick tempo');

1 row created.

SQL> insert into t1 values ( 4, 'Back in my quaint garden, jaunty zinnias vie with flaunting phlox');

1 row created.

SQL> set timing on
SQL> commit;

Commit complete.

Elapsed: 00:00:00.02

-- 
-- Without index 
-- 
SQL> insert into t1 values ( 1, 'The quick brown fox jumped over the lazy dog');

1 row created.

SQL> insert into t1 values ( 2, 'Pack my box with five dozen liquor jugs');

1 row created.

SQL> insert into t1 values ( 3, 'Several fabulous dixieland jazz groups played with quick tempo');

1 row created.

SQL> insert into t1 values ( 4, 'Back in my quaint garden, jaunty zinnias vie with flaunting phlox');

1 row created.

SQL> set timing on
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00

Not too shabby…but noticeable.
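If commit-time overhead matters more than up-to-the-second freshness, the index can be synced in batch instead; a minimal sketch (index and table names from the example above, the hourly interval is my choice):

SQL> create index t1_iz on t1 ( y )
  2  indextype is ctxsys.context
  3  parameters ('sync (every "sysdate+1/24")');

SQL> -- or sync on demand, e.g. from a scheduler job
SQL> exec ctx_ddl.sync_index('T1_IZ')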

Interview with PeopleSoft Administrator Podcast

I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread across three episodes. There is lots of other good stuff on the website, and the other episodes are well worth listening to.

(25 March 2016) #21 - Temporary Tables


(8 April 2016) #23 - The Application Server


(15 April 2016) #24 - Application Server Tuning

You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

OTNYathra2016

Over the years, some of you have invited me to attend conferences in India and talk about Oracle RAC and performance. I have not had an opportunity to make it to conferences in India, until now :)

I am excited to announce that I will be participating in the OTN-sponsored Oracle ACE Director’s tour in India (April 23rd to May 2nd, 2016), presenting (with deep-dive demos) on RAC, performance, and in-memory. This is a golden opportunity for you to learn some of the internal stuff that I talk about in my class too.

Refer to http://otnyathra.com for further details.

I am also excited to be on the tour with Connor McDonald, Biju Thomas, Sai, and many others. Don’t forget to send an email to Sai – the conference rooms will fill up quickly.

Did I mention deep dive demos? :)

Update 1: I am updating the presentation slides and scripts. Do ask if I have missed any script that you are looking for :)

india_2016_files_v1

India bound!

As part of the activities organised by the Oracle Ace Program and the All India Oracle User Group, the OTN Yathra tour is coming up in a few weeks.

OTNYathra2016

I’ll be speaking on 12c developer features and advanced SQL techniques at the following locations:

  • 23-Apr-16 Saturday,  Chennai
  • 24-Apr-16 Sunday, Bangalore
  • 26-Apr-16 Tuesday, Hyderabad
  • 28-Apr-16 Thursday, Pune

If you’re in any of these places, come along for some great education and good fun. And don’t be shy – come up and say Hello.

Full details are here

(Special thanks to Sai for his organizational efforts in getting this all together).

AWR Warehouse Fails on Upload- No New Snapshots

This issue can be seen in either EM12c or EM13c AWR Warehouse environments.  It occurs when there is an outage on the AWR Warehouse and/or the source database that uploads to it.


The first indication of the problem is databases that appear not to have uploaded once the environments are back up and running.


The best way to watch an upload from beginning to end is to select the database you want to load manually. Click in the center of the row; if you click on the database name, you’ll be taken from the AWR Warehouse to the source database’s performance home page. Then click Actions, Upload Snapshots Now.

A job will be submitted, and you’ll be made aware of it by a notification at the top of the console.


Click on View Job Details and you’ll be taken to the job that runs all steps of the AWR Warehouse ETL:

  1. Inspect which snapshots are required by comparing the metadata table with the snapshots present in the source database.
  2. Perform a Data Pump export of those snapshots from the AWR schema and update the metadata tables.
  3. Perform an agent-to-agent push of the file from the source database server to the AWR Warehouse server.
  4. Run the Data Pump import of the data into the AWR Warehouse repository, partitioning by DBID, snapshot ID, or a combination of both.
  5. Update the support tables in the warehouse with status and success information.

Note the steps where metadata and success statuses are updated.  When we inspect the job we just ran, instead of success we see the following in the job logs:


We can clearly see that the extract (the ETL step on the source database that uses Data Pump to export the AWR data) has failed.

Scrolling down to the Output, we can see the detailed log and the error that was returned on this initial step:


ORA-20137: NO NEW SNAPSHOTS TO EXTRACT.

In step 1, the source database compared its snapshot information to the metadata table and found no new snapshots to extract.  The problem is that we know, from the AWR Warehouse side (seen in the alerts in section 3 of the console), that there are snapshots that haven’t been uploaded in a timely manner.

How to Troubleshoot

First, let’s verify what the AWR Warehouse believes is the latest snapshot that was loaded to the warehouse via the ETL:

Log into the AWR Warehouse via SQL*Plus or SQL Developer and run the following query against the CAW_DBID_MAPPING table, which resides in the DBSNMP schema:

SQL> select target_name, new_dbid from caw_dbid_mapping;

TARGET_NAME                      NEW_DBID
------------------------------ ----------
DNT.oracle.com                 3695123233
cawr                           1054384982
emrep                          4106115278

and what’s the max snapshot that I have for the database DNT, the one in question?

SQL> select max(dhs.snap_id) from dba_hist_snapshot dhs, caw_dbid_mapping cdm
2 where dhs.dbid=cdm.new_dbid
3 and cdm.target_name='DNT.oracle.com';
MAX(DHS.SNAP_ID)
----------------
501

The Source

These next steps require querying the source database, since we’ve already verified the latest snapshot in the AWR Warehouse, and the error occurred on the source environment at that step in the ETL process.

Log into the database using SQL*Plus or another query tool.

We will again need privileges to the DBSNMP schema and the DBA_HIST views.

SQL> select table_name from dba_tables
  2  where owner='DBSNMP' and table_name like 'CAW%';

TABLE_NAME
--------------------------------------------------------------------------------
CAW_EXTRACT_PROPERTIES
CAW_EXTRACT_METADATA

These are the two tables that hold information about the AWR Warehouse ETL process in the source database.

There are a number of ways we could inspect the extract data, but the first thing we’ll do is get the last load information from the metadata table, which will tell us the snapshot range covered by the most recent extract:

SQL> select begin_snap_id, end_snap_id, start_time, end_time, filename
  2  from caw_extract_metadata
  3  where extract_id=(select max(extract_id)
  4                    from caw_extract_metadata);

BEGIN_SNAP_ID: 502
END_SNAP_ID:   524
START_TIME:    23-MAR-16 10.43.14.024255 AM
END_TIME:      23-MAR-16 10.44.27.319536 AM
FILENAME:      1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

So we can see that, per the metadata table, the ETL BELIEVES it has already loaded snapshots 502-524.

We’ll now query the PROPERTIES table that tells us where our dump files are EXTRACTED TO:

SQL> select * from caw_extract_properties
  2  where property_name='dump_dir_1';

dump_dir_1
/u01/app/oracle/product/agent12c/agent_inst

$ ls /u01/app/oracle/product/agent12c/agent_inst/*.dmp
1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

So here is our problem.  We have a dump file that was created, but the agent-to-agent push and the load to the AWR Warehouse never happened.  Because the metadata table on the source was already updated with rows for these snapshots, the job now believes there is nothing new to extract and fails.

Steps to Correct

  1. Clean up the dump file from the datapump directory
  2. Update the METADATA table
  3. Rerun the job
cd /u01/app/oracle/product/agent12c/agent_inst
rm 1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp

Note: You can also choose to rename the file’s extension if you wish to retain it until you are comfortable that everything is loading successfully, but be aware of space constraints in your $AGENT_HOME directory; I’ve seen issues caused by them.
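For example (filename from above; the .keep extension is my choice):

cd /u01/app/oracle/product/agent12c/agent_inst
mv 1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp \
   1_2EB95980AB33561DE053057AA8C04903_3695123233_502_524.dmp.keep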

Log into the database and remove the latest row update in the metadata table:

select extract_id from caw_extract_metadata
where begin_snap_id=502 and end_snap_id=524;

101

delete from caw_extract_metadata where extract_id=101;

1 row deleted.

commit;

Log into your AWR Warehouse dashboard and run the manual Upload Snapshots Now for the database again.
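Once that job completes, the same query we ran earlier against the warehouse should confirm the upload (assuming the load succeeded, the max snapshot should now be past 501):

SQL> select max(dhs.snap_id)
  2  from dba_hist_snapshot dhs, caw_dbid_mapping cdm
  3  where dhs.dbid=cdm.new_dbid
  4  and cdm.target_name='DNT.oracle.com';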




Copyright © DBA Kevlar [AWR Warehouse Fails on Upload - No New Snapshots], All Rights Reserved. 2016.