Oakies Blog Aggregator

PL/SQL context switch

Whenever you use PL/SQL in SQL statements, the Oracle engine needs to switch from doing SQL to doing PL/SQL, and switch back after it is done. Generally, this is called a “context switch”. This is an example of that:

-- A function that uses PL/SQL 
create or replace function add_one( value number ) return number is
        l_value number(10):= value;
begin
        return l_value+1;
end;
/
-- A SQL statement that uses the PL/SQL function
select sum(add_one(id)) from t2;

Of course the functionality of the function is superfluous: it can easily be done in ‘pure’ SQL with ‘select sum(id+1) from t2’. But that is not the point.
Also, I wrapped the call in a sum() function, to prevent output to the screen for every row.

The first thing to check is whether there is a difference in performance between executing with sum(id+1) and sum(add_one(id)). If there isn’t, we can stop here :-)

TS@frits > set timing on
TS@frits > select sum(id+1) from t2;

 SUM(ID+1)
----------
5.0000E+11

Elapsed: 00:00:00.19
TS@frits > select sum(add_one(id)) from t2;

SUM(ADD_ONE(ID))
----------------
      5.0000E+11

Elapsed: 00:00:01.13

Both statements execute a full table scan, and I carefully checked that the IO times were alike.
But it looks like there is a whopping difference between including PL/SQL and not: 113/19*100=595%, or put differently: almost six times slower.

How does this work? Let’s have a look using stapflames. The idea behind flame graphs in general is to understand in which (C code) functions (user mode and kernel mode) the time is spent. Because the full backtrace is shown (all the called functions on top of each other), it gives insight into how a program is working.

The first thing we need to establish is what PL/SQL looks like from the perspective of C functions. For that reason, I created a bogus PL/SQL program to profile:

declare
t number:=0;
begin
while t < 1000000 loop
t:=t+1;
end loop;
end;
/

Yes, that is right: the only thing this anonymous PL/SQL block does is declare a number variable named ‘t’, and then loop, adding one to the variable until it reaches 1000000. Again, what this program does is not interesting nor functional; the only thing it needs to do is run, so when we profile the program we are sure it is doing PL/SQL.

I ran this anonymous PL/SQL block using my stapflame utility to generate a flamegraph, and this is what it looks like:
The flamegraph covers all time spent on CPU by the Oracle database process.

First of all, one important property of flamegraphs is shown: the horizontal order does not represent execution order. If you look at the kpoal8 function, you see there are two different paths taken from this function: opiexe (oracle program interface execute) and opiosq0 (oracle program interface prepare to parse). Of course the PL/SQL block is first parsed and then executed, so the actual order is different from what is shown.

What is also very visible is that almost all execution time is spent in a function ‘pfrrun’, which seems to be the main function executing PL/SQL. On top of that we can see some functions which roughly resemble the functionality used in the PL/SQL block: pfrinstr_ADDN (addition, t:=t+1) and pfrinstr_RELBRNCH (the loop). This also gives a fair indication that PL/SQL is interpreted rather than compiled. Anyway, what is important is that from looking at the little test above, we can distinguish running PL/SQL from SQL by the pfrrun function.
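As an aside, whether a stored PL/SQL unit is interpreted or natively compiled is recorded in the data dictionary. A minimal sketch to check this for the add_one function created earlier (DBA_PLSQL_OBJECT_SETTINGS and its PLSQL_CODE_TYPE column are standard; the owner TS simply matches the schema used in this post):

-- shows INTERPRETED or NATIVE for the stored function
select name, type, plsql_code_type
from   dba_plsql_object_settings
where  owner = 'TS'
and    name  = 'ADD_ONE';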

Now let’s look at a flamegraph of running a PL/SQL function in the SQL statement:
The flamegraph shows all the time spent on CPU executing the statement ‘select sum(add_one(id)) from t2’. There is a lot to see!

When we look on top of the function kpoal8, we see the function opifch2. This means the vast majority of the time is spent in the fetch phase of the SQL statement. On top of the opifch2 function we see two functions which start with qer; ‘qer’ probably means Query Execute Rowsource. ‘qertbFetch’ is the fetch procedure for table scans. Then we see the kdsttgr function (kernel data scan table get row), and then the ultra fast table scan function (kdstf; kernel data scan table full) followed by a lot of zeros and/or ones and ending with ‘km’ or ‘kmP’. There are a lot of kdstf functions in the Oracle executable; I assume the zeros and ones after ‘kdstf’ are a bitmap of potentially needed functions during the full scan, so the full table scan function can omit a lot of choices at runtime, leading to better CPU efficiency. See an article by Tanel on what this means.

In the full table scan function, there are two main functions which consume time: kdst_fetch and qesaFastAggNonDistSS. ‘kdst_fetch’ and the functions beneath it are related to doing IO for reading the data from the data file. ‘qesaFastAggNonDistSS’ is the main function for actually processing the data. The function qesaFastAggNonDistSS has two main functions consuming its time: evaopn2 and a function visible as ‘sl..’, which is actually a function called slnxsum, in other words, the sum() function. The function evaopn2 is a function to evaluate operands. This evaluation is the path towards executing the PL/SQL function.

Actually, when carefully looking at the evaopn2 function, we see the slnxsum function, and ‘evapls’, which is the function to switch to PL/SQL. The two main functions in ‘evapls’ are kgmexec and opiomc. Again the displayed order is not the execution order: what happens here is that first a cursor must be mapped for executing PL/SQL (opiomc function), after which it can be executed (kgmexec function).

In order to understand how much time the “switch” to PL/SQL takes, we can take the total time the query engine spends processing everything PL/SQL related, which is the total time taken by the ‘evapls’ function, and measure the time actually spent running PL/SQL, which is the time taken by the ‘pfrrun’ function. This can be accomplished with a simple systemtap script:
(please mind you need a recent systemtap version, preferably obtained from https://sourceware.org/git/systemtap.git, and kernel version 3.10, to be able to use return probes)

global evapls_time, pfrrun_time, evapls_tot=0, pfrrun_tot=0

probe begin {
	printf("Begin.\n")
}
probe process("/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle").function("evapls") {
	if ( pid() == target() )
		evapls_time=local_clock_us()
}
probe process("/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle").function("evapls").return {
	if ( pid() == target() )
		evapls_tot+=local_clock_us()-evapls_time
}
probe process("/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle").function("pfrrun") {
	if ( pid() == target() )
		pfrrun_time=local_clock_us()
}
probe process("/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle").function("pfrrun").return {
	if ( pid() == target() )
		pfrrun_tot+=local_clock_us()-pfrrun_time
}

probe end {
	printf("\nevapls time: %12d\npfrrun time: %12d\n", evapls_tot, pfrrun_tot)
}

This is what it looks like on my machine:

# stap -x 29680 plsql.stap
Begin.
^C
evapls time:      3203637
pfrrun time:       951830

So the overhead, or context switching time (which must be Oracle server code executing between the evapls function, where it determines it needs to execute PL/SQL, and the pfrrun function, which is the PL/SQL interpreter), is on my machine:
(1-951830/3203637)*100=70%
Wow!

One way of reducing this overhead is using subquery factoring, also known as the ‘with clause’. To use the function that way, this is how the SQL should be written:

with
function add_one( value number ) return number is
	l_value number(10):= value;
begin
	return l_value+1;
end;
select sum(add_one(id)) from t2;
/

Let’s have a look at the flamegraph of this construction:
It becomes apparent that with subquery factoring, there are far fewer functions between the evapls and pfrrun functions.
Normal PLSQL: kgmexec, kkxmpexe, kkxdexe, peidxexe, peidxr_run, plsql_run
Subquery factoring: kkxmss_speedy_stub, peidxrex
Also note that there is no code path for mapping a cursor.

Let’s have a look at the timing:

# stap -x 29680 plsql.stap
Begin.
^C
evapls time:      2192685
pfrrun time:       880230

The time spent in PL/SQL, measured as the total time spent in the evapls function, is reduced by 32% ((1-2192685/3203637)*100).
However, if you calculate the overhead, it is still pretty significant: (1-880230/2192685)*100=60%
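Another option aimed at the same overhead, which I have not measured here, is the 12c PRAGMA UDF, which tells the compiler the function will mainly be called from SQL. A minimal sketch, reusing the function from the start of this post:

-- untested sketch: recompile the function with PRAGMA UDF so it is
-- optimised for being invoked from SQL
create or replace function add_one( value number ) return number is
        pragma udf;
        l_value number(10):= value;
begin
        return l_value+1;
end;
/
select sum(add_one(id)) from t2;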

Conclusion
The simplest conclusion I can draw is: do not use PL/SQL if you can solve it in SQL, like in the example above. This does not mean you should never use PL/SQL; on the contrary: in a lot of cases processing should be done where the data is, and sometimes you need a procedural language for that.

I made a lot of assumptions in this little investigation. The function naming (the translation from the Oracle C function names to the functionality they are supposed to deliver) is speculation.

The context switch between SQL mode and PL/SQL mode looks like it is technically setting up the execution environment for PL/SQL. This indeed takes time, and the pure PL/SQL execution time when repeatedly executing PL/SQL is very low in my case. Please mind actual times will differ on different systems. However, the main conclusion is that using PL/SQL in SQL execution probably is not the most performant thing to do, even when using subquery factoring.

Tagged: oracle, performance, plsql, profiling, systemtap

[Cloud | On-Premise | Outsourcing | In-Sourcing] and why you will fail!

I was reading this article about the UK government in-sourcing all the work they previously outsourced.

This could be a story about any one of a number of failed outsourcing or cloud migration projects I’ve read about over the years. They all follow the same pattern.

  • The company is having an internal problem, that they don’t know how to solve. It could be related to costs, productivity, a paradigm shift in business practices or just an existing internal project that is failing.
  • They decide to launch down a path of outsourcing or cloud migration with unrealistic expectations of what they can achieve and no real idea about what benefits they will get, other than what Gartner told them.
  • When it doesn’t go to plan, they blame the outsourcing company, the cloud provider, the business analysts, Gartner, terrorists etc. Notably, the only thing that doesn’t get linked to the failure is themselves.

You might have heard this saying,

“You can’t outsource a problem!”

Just hoping to push your problems on to someone else is a guaranteed fail. If you can’t clearly articulate what you want and understand the consequences of your choices, how will you ever get a result you are happy with?

Over the years we’ve seen a number of high profile consultancies get kicked off government projects. The replacement consultancy comes in, hires all the same staff that failed last time, then continues on the failure train. I’m not going to mention names, but if you have paid any attention to UK government IT projects over the last decade you will know who and what I mean.

Every time you hear someone complaining about failing projects or problems with a specific model (cloud, on-premise, outsourcing, in-sourcing), it’s worth taking a step back and asking yourself where the problem really is. It’s much easier to blame other people than admit you’re part of the problem! These sayings spring to mind.

“Garbage in, garbage out!”

“A bad workman blames his tools!”

Cheers

Tim…

PS. I’ve never done anything wrong. It’s the rest of the world that is to blame… :)

Update: I wasn’t suggesting this is only an issue in public sector projects. It just so happens this rant was sparked by a story about public sector stuff. :)



Friday Philosophy – Database Dinosaurs

I’m guessing many of you reading this are over 40. I know some of you are actually beyond the half century and a couple of you are….older! If you are younger than 40, just print this out and put it in an envelope marked “read a decade later than {current date}”. It will have become relevant for you by then…

Beware the network admin – creative commons, Elvinds

So wind back your memories to those glorious days in your first role working with IT. For most of us it was half our lives back or more, when we were in our early 20’s or even in our teens. One of you was 18, I know, and I knew one guy who started as a salaried, paid programmer at 16. Do you remember those old guys (and occasional gals) you met back then? Often with beards, an odd sense of “style” and a constant grumbling murmur that, if you listened closely, was a constant diatribe about the youngsters “not getting it” and this UNIX thing not being a “proper OS” {fill in whatever was appropriate for the upstart OS back when back where for you}.

Don’t annoy the DBA

You are now that person. I know, you don’t feel like it – you can still do all this technology stuff, you program better now than ever, you know how to get the job done and you have kept up with the tech as it moves forward. And you sure as hell do not look as weird as those oldsters did! Well I have bad news. You do look as weird as those old guys/gals to any youth about {and is that not a good thing, as most of them look a right state} and you have probably not kept quite so up with the tech as you think. You have to keep partly up-to-date as the versions of Oracle or whatever roll on, else the career becomes tricky. But as I’ve realised these last few weeks, you probably use old coding techniques and ways of doing things. This is maybe not a bad thing in your day-to-day job as these older ways *work* and doing it that way is quicker for you than spending time checking up on the latest “time saving” shortcuts in the code you write. I’ve had that brought home to me recently as I’m working in PL/SQL at the moment and I am using some code I initially wrote back in the last century {I love saying that} as the basis of an example. It works just fine but I decided I should re-work it to remove now-redundant constructs and use features that are current. It is taking me a lot of time, a lot more than I expected, and if I was writing something to Just-Do-The-Job with slightly rusty ways, I’d have it done by now. That is what I mean about it not being such a bad thing to use what you know. So long as you eventually move forward!

Of course it does not help that you work on a legacy system, namely Oracle. I am not the first to say this by a long, long shot; Mogens Norgaard started saying this back in 2004 (I can’t find the source articles/documents yet, just references to them) and he was right even then. If you think back to those more mature work colleagues when we started, they were experts in legacy software, OS’s and hardware that did in fact die off. VMS went, OS/2 died, Ingres, Informix, Sybase and DB2 are gone or niche. And don’t even mention the various network architectures that we had then and are no more. Their tech had often not been around as long as Oracle has now. And I know of places that have refreshed their whole application implementation 3 or 4 times – and have done so with each one based on a later version of Oracle (I do not mean a migration, I mean a re-build).

Or the Sys Admin

The difference is, Oracle has had a very, very long shelf life. It has continued to improve, become more capable, and the Oracle sales & marketing engines, though at times the bane of technologists’ lives (like making companies think RAC will solve all their problems when in fact it solves specific problems at other costs), have done a fantastic job for the company. Oracle is still one of the top skills to have and is at the moment claiming to be the fastest growing database. I’m not sure how they justify the claim, it’s a sales thing and I’ve ignored that sort of thing for years, but it cannot be denied that there is a lot of Oracle tech still about.

So, all you Oracle technologists, you are IT Dinosaurs working on legacy systems.

But you know what? Dinosaurs ruled the earth for a very, very, very long time. 185 million years or so during the Mesozoic period. And they only died out 65 million years ago, so they ruled for three times as long as they have been “retired”. We IT Dinosaurs could well be around for a good while yet.

We had better be, as there is another difference between when we started and now. Back then, we youth were like the small mammals scurrying in numbers around the dinosaurs(*). Now we are the dinosaurs, and there do not seem to be that many youth scurrying about. Now that I DO worry about.

(*) the whole big-dinos/small-scurrying-mammals thing is a bit of a myth/misperception, but this is not a lesson on histozoology…

Video: Oracle Linux Virtual Machine (VM) on Amazon Web Services (AWS)

Continuing the cloud theme, here is a quick run through of the process of creating an Oracle Linux virtual machine on Amazon Web Services (AWS).

A few months ago I wrote an article about installing an Oracle database on AWS.

I updated the images in that article last night to bring them in line with this video.

The cameo today is by Joel Pérez, who was a bit of a perfectionist when recording “.com”. I’ve included about half of his out-takes at the end of the video. Don’t ever hire him for a film or you will run over budget! :)

Cheers

Tim…



Literals and VARCHAR2

We all know that two VARCHAR2 values that differ only in trailing spaces will be treated by Oracle as not EQUAL:


SQL> drop table T purge;

Table dropped.

SQL> create table T ( x varchar2(10), y varchar2(10));

Table created.

SQL> insert into T values ('abc','abc    ');

1 row created.

SQL> select * from T where x = y;

no rows selected

SQL> select * from T where y = 'abc';

no rows selected

But interestingly enough, if there are no references to tables and/or columns, that is, you are comparing plain literals, VARCHAR2 is not the datatype that is in play. For example:


SQL> select * from dual where 'abc' = 'abc  ';

D
-
X

SQL> set serverout on
SQL> exec begin if 'abc' = 'abc  ' then dbms_output.put_line('YES'); end if; end;
YES

PL/SQL procedure successfully completed.



You can see this with the DUMP function: type 1 is VARCHAR2, and type 96 is CHAR. Because both literals are CHAR, blank-padded comparison semantics apply, which is why the trailing spaces did not matter above.


SQL> select dump(x) from t;

DUMP(X)
-------------------------------
Typ=1 Len=3: 97,98,99


SQL> select dump('abc') from dual;
 
DUMP('ABC')
----------------------
Typ=96 Len=3: 97,98,99
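As a rough check of that rule (a sketch, not run as part of this post): if either side of a comparison is VARCHAR2, nonpadded comparison semantics apply, so forcing one literal to VARCHAR2 with CAST should make the trailing spaces matter again:

-- with one operand cast to VARCHAR2, the trailing blanks count,
-- so this should return no rows
select * from dual where cast('abc' as varchar2(10)) = 'abc  ';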

An old story from the past

My own personal hell story was back in the late 1990’s, when moving a database from one server to another.  We were recycling some of the hardware (disks and memory), so it was an unload-to-tape, reload-from-tape job. The servers were in Port Hedland (a rat-infested, dusty, 110-degrees-plus, 98% humidity hell hole… Hi to anyone in Port Hedland … ). I was not physically present – don’t you love it when IT companies think “remote login” is always the best way…

Anyway, because of the dust issues, I unloaded three copies of the database to three tapes, and then did a verification read of all of them. An on-site non-techie took care of unloading/reloading tapes for me.  He chucks the disk drives and memory from the old server plus the three tapes into a truck and drives 20 km across town (to where the new server is). Then I spend an hour on the phone trying to explain blind to him how to plug all this stuff in to the new server.

He turns it on… smoke starts billowing out of the box. Emergency shutdown (ie, rip out the power cord). He’s plugged some of the memory in wrong, so it’s munched one of the boards. So (over the phone again) we talk through removing all the memory from just that board but leaving the other memory in. Finally, the machine does in fact boot nicely. I log in as root, mount the new disks and start restoring from tape.

30 mins in… the first tape dies with a read error. About half the database restored. “No problem” I think, because we expected that due to the environment … and we load the second tape.

Another 30 mins, and the second tape dies with a read error. Everything restored except the SYSTEM tablespace. “Luckily we’ve got that third tape…” I think.

Third tape goes in… ‘ufsrestore’ command just hangs… Phone rings. On-site techie says “I can see tape spewing out of the Exabyte drive….”

So all tapes used, and no SYSTEM tablespace… As they say in the classics,

“That’s when you discover that adrenalin is brown and sticky”.

I’m thinking about career moves etc, when the phone rings again. It’s my partner.

“How’s thing going at work?” she asks.

“Don’t ask!…” I say and give her the run-down on what’s happened…“Anyway, how’s things at home?”

There’s a long pause… followed by her bursting into tears….

“My pet budgerigar got out of the cage and has flown away….(sob) (sob)… he’s in the tree in the back yard…(sob) (sob) can you please come home and try catch him…”

Not really what I needed at that point in time…

So I tell the techie I’m off to get a coffee… I jump in the car and scoot home to console the partner…“Where is the little bird?”

“That tree there…” says my partner as she points to our 100 year old Citradora, around 150ft high! With the binoculars I can just see the little yellow budgie about 149.5ft up…

partner- “Can you climb up and get him?”

me (mentally) – “Are you out of your frickin mind ?!?!?!”
me (verbally) – “Of course dear, fetch me the ladder…”

So I start climbing this stupid tree, knowing full well that I’ll never get this stupid bird, and that even if I did, I would wring its stupid little neck….30 mins later, I’m teetering at about 80ft and the branches won’t support my weight anymore. So I climb back down, covered in sweat and scratches, but the effort seems to have appeased my partner…

So I spend another 10 mins crapping on about how “it’s better now that the little bird is free” etc etc, knowing full well the neighbour’s cat will have him for dinner within a matter of hours :)

I scoot back to work… it’s amazing how teetering on the brink at 80ft gives you some clarity. I have a sneaking suspicion that the SYSTEM tablespace was on one of the disks that was NOT recycled from the old server. I get the techie to drive back to the old server and collect all the remaining disks.

Another hour to get them all plugged in correctly (remember the good old days of SCSI terminators….) and voila! I find the SYSTEM tablespace on the old disks.

So at the end of all this drama, I naturally send my status email to management:

“Server moved, no errors encountered” :)

Session level parallelism

If you’ve got a stack of large I/O operations to perform, you might want to take advantage of all of the cores on your server. However, if the scripts are already in place, you might not want to edit them individually to carefully add parallelism. A little-known feature is that you can force parallelism at the session level. For example, we can control DDL operations as shown below:


SQL> select * from V$PX_SESSION;

no rows selected

SQL> ALTER SESSION FORCE PARALLEL DDL PARALLEL 5;

Session altered.

SQL> create table T as
  2  select d.*
  3  from dba_Objects d, ( select 1 from dual connect by level <= 20 )
  4  /

Table created.

[in another session, whilst the above is running]
SQL> select * from V$PX_SESSION;

SADDR                   SID    SERIAL#      QCSID  QCSERIAL#  QCINST_ID SERVER_GROUP SERVER_SET    SERVER#     DEGREE REQ_DEGREE     CON_ID
---------------- ---------- ---------- ---------- ---------- ---------- ------------ ---------- ---------- ---------- ---------- ----------
000007FFBD785488        252      27078        474      63158          1            1          1       1     5          5          0
000007FFBD9631D8        481       7594        474      63158          1            1          1       2     5          5          0
000007FFBDB38BE8        714      34791        474      63158          1            1          1       3     5          5          0
000007FFBD5A5668         24      50589        474      63158          1            1          1       4     5          5          0
000007FFBD787558        251      46246        474      63158          1            1          1       5     5          5          0
000007FFBD971788        474      63158        474                                                                                 0

6 rows selected.


and similarly when it comes to indexing the table



SQL> create index IX on T ( object_id, owner );

Index created.

[in another session, whilst the above is running]
SQL> select * from V$PX_SESSION;

SADDR                   SID    SERIAL#      QCSID  QCSERIAL#  QCINST_ID SERVER_GROUP SERVER_SET    SERVER#     DEGREE REQ_DEGREE     CON_ID
---------------- ---------- ---------- ---------- ---------- ---------- ------------ ---------- ---------- ---------- ---------- ----------
000007FFBD785488        252      62878        474      12776          1            1          1       1     5          5          0
000007FFBD9631D8        481      38744        474      12776          1            1          1       2     5          5          0
000007FFBDB38BE8        714      30697        474      12776          1            1          1       3     5          5          0
000007FFBD5A5668         24      22049        474      12776          1            1          1       4     5          5          0
000007FFBD787558        251      57640        474      12776          1            1          1       5     5          5          0
000007FFBD967378        479      34266        474      12776          1            1          2       1     5          5          0
000007FFBDB3CD88        712      42043        474      12776          1            1          2       2     5          5          0
000007FFBD5B9E88         14      32512        474      12776          1            1          2       3     5          5          0
000007FFBD776ED8        259      42473        474      12776          1            1          2       4     5          5          0
000007FFBD95F038        483      37664        474      12776          1            1          2       5     5          5          0
000007FFBD971788        474      12776        474                                                                                 0

11 rows selected.
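
The same session-level switch exists for queries and DML, and it can be turned off again once the heavy lifting is done. A sketch of the related commands (standard ALTER SESSION syntax; the degree of 5 simply mirrors the example above):

-- force parallelism for queries and DML in this session
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 5;
ALTER SESSION FORCE PARALLEL DML PARALLEL 5;

-- revert to the default behaviour, where parallelism is only used
-- when the object or a hint asks for it
ALTER SESSION ENABLE PARALLEL DDL;
ALTER SESSION ENABLE PARALLEL QUERY;
ALTER SESSION ENABLE PARALLEL DML;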



Determining What Management Packs Are Needed

In Enterprise Manager 12c, there was a little-known piece of functionality where you could check which management packs you needed to be licensed for to use a particular page in Enterprise Manager. I referred to it many times at conferences I was presenting at, and almost always people did not realize the functionality existed. Let’s see how this works.

To start with, I’m going to go to the Performance Home page for a particular database (accessed via the Performance menu from the Database home page). Note I’m not particularly interested in what it shows me, I’m just using this as an example of a page that requires the licensing of a management pack. Once I’m on that page, I can follow the path Setup -> Management Packs -> Packs for this Page:

[Screenshot mp_1: http://petewhodidnottweet.com/wp-content/uploads/2016/01/mp_1.jpg]

Once I click on Packs for this Page, I see the following:

[Screenshot mp_2: http://petewhodidnottweet.com/wp-content/uploads/2016/01/mp_2.jpg]

There are a couple of other useful options off the Management Packs page as well. Firstly, the “Licensing Information” menu item shows you a complete list of the management packs, including their abbreviations and a description. Note that it’s the Licensing Guide that is actually the source of truth for licensing information, but information listed on this page should match the licensing doc, or it’s a bug. :) The abbreviations are actually useful for one of the other menu items – “Enable Annotations”. This is actually a toggle – you click it once to enable annotations and again to disable annotations (just be aware the menu wording doesn’t change between toggles). The other menu item on the Management Packs menu is “Management Pack Access”, where you can see what targets you have that are licensable, and you can turn on and off management packs for particular targets, as you can see here:

[Screenshot mp_3: http://petewhodidnottweet.com/wp-content/uploads/2016/01/mp_3.jpg]

If you swap to Pack Based Batch Update on this page, you can enable/disable packs across all licensable targets or for particular target types, such as databases, hosts and so on. In the example below, I can disable the Data Masking and Subsetting Pack across all licensable targets, just by selecting the pack name and clicking Move (as I’ve already done in the screenshot) and then clicking Apply:

[Screenshot mp_4: http://petewhodidnottweet.com/wp-content/uploads/2016/01/mp_4.jpg]

Let’s go back to the “Enable Annotations” menu item. What happens when that’s selected? Well, basically it’s another way of allowing you to see which features in EM require a management pack. After you’ve enabled this, you can navigate to other menu items and see the management pack required. For example, after enabling annotations, navigate to Setup > Notifications. It will display a list of management pack abbreviations, and at least one of those packs is required to access that page:

[Screenshot mp_5: http://petewhodidnottweet.com/wp-content/uploads/2016/01/mp_5.jpg]

As you can see, the packs are shown using their abbreviations, which is why you might want to go to the “Licensing Information” menu item, at least until you become familiar with the abbreviations. You can also see that some items have a lot of packs that might be needed. In the example above, notifications fall under a whole bunch of packs. That doesn’t mean you need ALL of those packs for notifications; you need the one that is relevant to the target type you’re setting up notifications for. You can also see (since the menu items can become a bit messy!) why you would toggle “Enable Annotations” on and off. The “Enable Annotations” menu item is available in later versions of EM12c and, of course, EM13c as well.

So have a look at this Management Packs menu. It’s packed with lots of useful information for you!

The post Determining What Management Packs Are Needed appeared first on PeteWhoDidNotTweet.com.

Problematic SQL? PL/SQL is your friend.

So then… I’ve written a crappy piece of SQL. It wouldn’t be the first time I’ve done it… and it probably won’t be the last time I do it :)

But at least I’ve done one thing right…I’ve encapsulated the SQL inside a PL/SQL procedure.



SQL> create or replace procedure P is
  2    x int;
  3  begin
  4  --
  5  -- this is my proc, and it has one very poor SQL
  6  --
  7    select count(*)
  8    into   x
  9    from   dba_views;
 10
 11    select count(*)
 12    into   x
 13    from   dba_tables;
 14
 15    select count(*)
 16    into   x
 17    from   dba_objects;
 18
 19    select count(*)
 20    into   x
 21    from   dba_objects, dba_objects;
 22
 23  end;
 24  /

Procedure created.

Let’s give that procedure a run …



SQL> exec P
[waiting]


We’re going to be waiting a while for that one to finish… a long while :-)

As a performance tuner, you might want to see what’s been running for a long time on your system. And that’s easy with a query against V$SESSION:


SQL> select username, sql_id
  2  from   v$session
  3  where  status = 'ACTIVE'
  4  and    last_call_et > 10
  5  and    username is not null;

USERNAME                                      SQL_ID
--------------------------------------------- -------------
MY_USER                                       ff35fbgz27513


And since I’ve got the SQL_ID, it’s just as easy to look up the SQL text in V$SQL:


SQL> select sql_text
  2  from   v$sql
  3  where  sql_id = 'ff35fbgz27513';

SQL_TEXT
----------------------------------------------------------------
SELECT COUNT(*) FROM DBA_OBJECTS, DBA_OBJECTS

But now what? Somewhere in my application code is a SQL statement that starts with “SELECT COUNT(*)” and it’s running badly. How do I find it? (This is the polite way of saying “How do I locate the desk of the person that wrote it?” :) )

Luckily for me, it is coming from PL/SQL, because then finding the source of the statement is trivial. V$SQL also has two columns of interest:


SQL> select PROGRAM_ID, PROGRAM_LINE#
  2  from   v$sql
  3  where  sql_id = 'ff35fbgz27513';

PROGRAM_ID PROGRAM_LINE#
---------- -------------
    102001            19

And once I’ve got those, I have a direct link back to the originating code for that problematic SQL, including the line number where it’s run.


SQL> select owner, object_name
  2  from   dba_objects
  3  where  object_id = 102001;

OWNER                          OBJECT_NAME
------------------------------ -----------------
MY_USER                        P

Easy peasy
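
As a convenience, the whole lookup can also be rolled into a single query, joining V$SESSION to V$SQL and then to DBA_OBJECTS. A sketch only, using the same filters as above (the column list is just illustrative):

select s.username,
       o.owner,
       o.object_name,
       q.program_line#,
       q.sql_text
from   v$session   s,
       v$sql       q,
       dba_objects o
where  s.status       = 'ACTIVE'
and    s.last_call_et > 10
and    s.username is not null
and    q.sql_id       = s.sql_id
and    q.child_number = s.sql_child_number
and    o.object_id    = q.program_id;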

Not using binds .. banging an obsolete drum ?

We’ve all seen the mantra – “you should be using binds”.  And this is a demo that’s been rolled out for decades now, showing the performance overhead of not using them:



SQL> drop table T purge;

Table dropped.

SQL> create table T (x primary key) as
  2  select rownum x from dual
  3  connect by level <= 100000;

Table created.

SQL> set timing on
SQL> declare
  2    res int;
  3  begin
  4    for i in 1 .. 100000 loop
  5      select x into res from t where x = i;
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.61

SQL> set timing on
SQL> declare
  2    res int;
  3  begin
  4    for i in 1 .. 100000 loop
  5      execute immediate 'select x from t where x = '||i into res ;
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:58.72


And the conclusion is pretty straightforward: the parsing costs added nearly a 40-fold overhead to the execution time.
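
For completeness, the dynamic version does not have to give up binds. Here is a sketch of the same loop using a bind variable (not timed here, but it should bring the elapsed time back close to the static version):

declare
  res int;
begin
  for i in 1 .. 100000 loop
    -- bind the loop variable instead of concatenating it into the SQL text,
    -- so only one shared cursor is created and reused
    execute immediate 'select x from t where x = :b' into res using i;
  end loop;
end;
/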

But in the middle of a discussion like this recently, someone responded to me: “Yeah, but I’m still getting 1700 executions per second.  That’s plenty fast enough for me”

And this is true…even with the overhead of parsing, we can still crank out a pretty good execution rate. 

Which leads me to think that the demo, which we’ve used for so long, is actually flawed. Because in reality, we don’t run queries that are “select from one_table”. The moment you have any sort of real application code in play, there are joins, there are views, there are security predicates… things are a lot more complicated.

So what happens to our parsing costs when we look at something closer to real application code? Here’s a demo joining two of the Oracle dictionary views, both of which have complex definitions. Let me stress two things in this case:

  • we are not even running the SQL like the demo above.  All we are doing is parsing the query
  • we are only doing 10,000 calls, not 100,000.  This is only 10% of the size of the initial demo.  Why?  Well… see for yourself below


SQL> set timing on
SQL> declare
  2    c number := dbms_sql.open_cursor;
  3  begin
  4    for i in 1 .. 10000 loop
  5      dbms_sql.parse(c,'select * from dba_objects o, dba_segments s
  6                        where object_id = '||i||'
  7                        and o.owner = s.owner
  8                        and o.object_name = s.segment_name' ,
  9                            dbms_sql.native);
 10    end loop;
 11    dbms_sql.close_cursor(c);
 12  end;
 13  /

PL/SQL procedure successfully completed.

Elapsed: 00:39:33.15

That’s nearly 40 minutes of my server being eaten alive by parsing costs.  Not using binds when you think about real application code is probably going to kill your app.

But to be fair – in a future post, I’ll talk about not using binds :)