Oakies Blog Aggregator

Index Usage – 3

In my last note on index usage I introduced the idea of looking at v$segstat (or v$segment_statistics) and comparing the “logical reads” statistic with the “db block changes” statistic as an indicator of whether or not the index was used in execution plans. This week I’ll explain the idea and show you some results – with a little commentary – from a production system that was reported on the OTN database forum.

The idea is fairly simple (and simplistic). If you update a typical index you will traverse three blocks (root, branch, leaf) to find the index entry that has to be updated, so if the only reason you use an index is to find out which index entry has to be updated then the number of “db block changes” for that index will be (we hope) roughly one-third of the number of “session logical I/Os” on the index.

We can do some testing of this hypothesis with some simple SQL:


create table t1 nologging as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                  id,
        trunc(dbms_random.value(0,333333))      n1,
        rpad('x',100)                           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(id) using index nologging;
create index t1_i1 on t1(n1) nologging;

So I’ve got a table with a million rows, a primary key, and an index on a column of randomly generated data. Now all I need to do is run the following little script a few thousand times and check the segment stats – I’ve avoided using a pl/sql script because of all the special buffer-handling optimisations that could appear if I did:


exec :b1 := trunc(dbms_random.value(1,1000001))

update t1
        set n1 = trunc(dbms_random.value(0,333333))
        where   id = :b1;

commit;

There are various ways of checking the segment stats. You could simply take an AWR snapshot (or statspack snapshot at level 7) before and after the test – the results from the “Segments by …” sections of the report should tell you all you need to know; or you could run a simple piece of SQL like the following before and after the test and then do some arithmetic:

select
        object_name, statistic_name, value 
from
       v$segment_statistics
where
       owner = {your user name here}
and    object_name in ('T1','T1_PK','T1_I1')
and    statistic_name in (
              'db block changes',
              'logical reads'
)
and     value != 0
order by
        object_name,
        statistic_name
;
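
If you don’t have any snapshot code to hand, here’s a minimal sketch of the “before and after” arithmetic – it assumes you’re happy to create a small scratch table (the name segstat_snap is hypothetical):

create table segstat_snap as
select  object_name, statistic_name, value
from    v$segment_statistics
where   owner = user
and     object_name in ('T1','T1_PK','T1_I1')
and     statistic_name in ('db block changes','logical reads')
;

-- run the 10,000 updates, then report the deltas

select
        v.object_name,
        v.statistic_name,
        v.value - s.value       delta
from
        v$segment_statistics    v,
        segstat_snap            s
where
        v.owner          = user
and     v.object_name    = s.object_name
and     v.statistic_name = s.statistic_name
order by
        v.object_name,
        v.statistic_name
;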

I happen to have some snapshot code in a little procedure that does the job I need, so my testbed code looks like this:

execute snap_my_stats.start_snap
execute snap_segstat.start_snap

set termout off
set serveroutput off

variable b1 number

@start_10000    -- invoke my script 10,000 times

spool test

set serveroutput on
set termout on

execute snap_segstat.end_snap
execute snap_my_stats.end_snap

spool off

The question is: what do we expect the results to look like, and what do they actually look like? Given that we have 10,000 updates going on we might expect something like the following:

  • T1_PK – index access by primary key, 10,000 * 3 logical I/Os
  • T1 – 10,000 logical I/Os as we find the rows then 10,000 db block changes
  • T1_I1 – index access to find entry to be deleted (10,000 * 3 logical I/Os), repeated to find leaf block for insertion of new entry (10,000 * 3 logical I/Os), with 10,000 * 2 db block changes for the delete/insert actions.

Here are a few results from 12.1.0.2 – if I don’t include a commit in the update script:


12.1.0.2 with no commit
Segment stats
=======================
T1
logical reads                               20,016
db block changes                            19,952

T1_PK
logical reads                               30,016
physical reads                                  19
physical read requests                          19

T1_I1
logical reads                               60,000
db block changes                            21,616

Session Stats
=============
Name                                         Value
----                                         -----
session logical reads                      110,919
consistent gets                             30,051
consistent gets examination                 30,037
db block gets                               80,868
db block changes                            81,989

Some of the figures match the predictions very nicely – in particular the logical reads and db block changes on the T1_I1 index are amazing (so good I feel I have to promise that I didn’t fake them, or wait until after the test to make my prediction ;) ).

There are, however, some anomalies: why have I got 20,000 logical reads and db block changes on the table when I did only 10,000 updates? I was surprised by this, but it is something I’ve seen before: Oracle was locking each row before updating it, so generating two changes and two redo entries (Op Codes 11.4 and 11.5). In the past I’d noticed this as a side effect of setting the audit_trail to DB, but it was happening here with audit_trail = none. (Something to add to my “todo” list – why is this happening, and when did it first appear?)

You’ll also notice that the session-level figure for logical reads nearly matches the total for the table and indexes (20K + 30K + 60K ≈ 110K) while the db block changes stats are out by a factor of 2. Don’t forget that for each change to a table or index we make a change to an undo block describing how to reverse that change, so the 40,000 data changes are matched by a further 40,000 undo block changes; and on top of this, every time we get the next undo block we change our transaction table entry in the undo segment header we’re using, and that accounts for most of the rest. The discrepancy in the number of logical reads is small because, while we keep getting and releasing the table and index blocks, we pin the undo block from the moment we acquire it to the moment it’s full, so we don’t record extra logical reads each time we modify it.

Big observation

Based on the figures above we could probably say that, for an index with a blevel = 2 (height = 3), if the number of db block changes recorded is close to one-third of the logical reads recorded then that index is a good candidate for review: it may be an index that is never used to access data, an index that does nothing except use up resources keeping itself up to date.
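
As a starting point for that review, here’s a hedged sketch of a query that picks out indexes whose “db block changes” are at least one-third of their “logical reads” (adjust the owner filter to suit, and bear in mind the caveat that follows):

select
        object_name,
        max(decode(statistic_name, 'logical reads',    value))  logical_reads,
        max(decode(statistic_name, 'db block changes', value))  block_changes
from
        v$segment_statistics
where
        owner       = user
and     object_type = 'INDEX'
and     statistic_name in ('logical reads', 'db block changes')
group by
        object_name
having
        max(decode(statistic_name, 'db block changes', value)) >=
        max(decode(statistic_name, 'logical reads',    value)) / 3
order by
        object_name
;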

Big problem

Take a look at the statistics when I included the commit in my test case:

12.1.0.2 with commit
Segment Stats
====================
T1
logical reads                               20,000

T1_PK
logical reads                               30,000

T1_I1
logical reads                                  512
db block changes                               160

Session Stats
=============
Name                                         Value
----                                         -----
session logical reads                       80,625
consistent gets                             30,106
consistent gets examination                 30,039
db block gets                               50,519
db block changes                            60,489

Apparently my session has made 60,000 changes – but none of them applied to the table or index! In fact I haven’t even accessed the T1_I1 index! The segment statistics have to be wrong. Moreover, if I commit every update I ought to change an undo segment header block at the start and end of every update, which means I should see at least 20,000 more db block changes in the session (not 20,000 less); and since I’m not pinning undo blocks for long transactions I should see about 10,000 extra logical reads as I acquire 10,000 undo blocks at the start of each short transaction. The session statistics have to be wrong as well!

A quick check on the redo stream shows exactly the change vectors I expect to see for these transactions:

  • 11.4 – lock row piece (table)
  • 5.2 – start transaction (update undo segment header)
  • 11.5 – update row piece (table)
  • 10.4 – delete leaf row (index)
  • 10.2 – insert leaf row (index)
  • 5.4 – commit (update undo segment header)
  • 5.1 – update undo block (op 11.1 – undo table row operation)
  • 5.1 – update undo block (op 11.1 – undo table row operation)
  • 5.1 – update undo block (op 10.22 – undo leaf operation)
  • 5.1 – update undo block (op 10.22 – undo leaf operation)

That’s a total of 10 changes per transaction – which means 100,000 db block changes  in total, not 60,000.

This anomaly is so large that it HAS to make my suggested use of the segment stats suspect.  Fortunately, though, the error is in a direction that, while sapping our confidence, doesn’t make checking the numbers a completely pointless exercise.  If the error is such that we lose sight of the work done in modifying the index then the figures remaining are such that they increase our perception of the index as one that is being used for queries as well – in other words the error doesn’t make an index that’s used for queries look like an index that’s only used for self-maintenance.

Case Study

The following figures were the results from the OTN database forum posting that prompted me to write this note and the previous one:

[Image: OTN – the poster’s report of indexes with column definitions and segment statistics]

The poster has some code which gives a report of the indexes on a table (all 26 of them in this case) with their column definitions and segment statistics. What (tentative) clues do we get about these indexes as far as this article is concerned?

Conveniently the code arranges the indexes in order of “change percentage”, and we can see very easily that the first nine indexes in the list show “db block changes” greater than one-third of “logical reads” – the cut-off point for this article – so it’s worth taking a quick look at those indexes to see if they are suitable candidates for dropping. Inevitably, the moment you start looking closely there are a number of observations to add to this starting point.

  1. Look at the number of changes in the first 12 indexes and notice how frequently numbers around 300,000 appear – perhaps that’s indicative of about 300,000 inserts taking place in the interval, in which case the first and 14th indexes (on (zcid) and (ps_spdh) respectively) must be on columns which are very frequently null and are therefore much smaller than the rest of the indexes. Even though the index on (zcid) is reported at 39%, perhaps this is an index with a blevel of 1 (height = 2), in which case its cut-off point would be 50% rather than 33% – which means it could well be used for a lot of queries.
  2. The tenth index, on (dp_datetime), reports a “change percentage” of 26%, which is below the cut-off, but it’s worth noting that there are three other indexes (12, 13 and 21) on that table that start with a column called dp_datetime_date. Is dp_datetime_date the truncated value of dp_datetime, and is it a real column or a virtual column? Given my comments about the optimizer’s clever trick with indexes on trunc(date_column) in the second post in this series, perhaps there’s scope here for getting rid of the dp_datetime index even though the simple numbers suggest that it probably is used for some queries.
  3. Of the three indexes starting with dp_datetime_date, one consists of just that single column – so perhaps (as suggested in the first post in this series) we could simply drop that too. Then, when we look at the other two (indexes 12 and 13), we note that index 13 is subject to five times as much change as index 12 (is that one insert plus two updates, given that an update means two changes?), but fifteen times as much logical I/O. The extra LIO may be because the index is larger (so many more columns), or it may be because the index is used very inefficiently – either way, we might look very carefully at the column ordering to see if index 13 could be rearranged to start the same way as index 12, so that we could then drop index 12. On top of everything else we might also want to check whether we have the right level of compression on the index – if it’s not very effective until we’ve selected on many columns then it must be subject to a lot of repetition in the first few columns.
  4. I gave a few examples in part one of reasons for dropping indexes based on similarity of columns used – the examples came from this output so I won’t repeat them, but if you refer back to them you will note that the desirability of some of the suggestions in the earlier article is reinforced by the workload statistics. For example, the similarity of indexes 24 and 24, with an exact ordered match on the first 4 columns, suggests that we consider combining the two indexes into a single index; the fact that both indexes were subject to 2.7 million changes makes this look like a highly desirable target.

Summary

There are a lot of indexes on this table but it looks as if we might be able to drop nearly half of them, although we will have to be very careful before we do so and will probably want to make a couple at a time invisible (and we can make the change “online” in 12c) for a while before dropping them.

Remember, though, that everything I’ve said in this note is guesswork based on a few simple numbers, and I want to emphasise an important point – this note wasn’t trying to tell you how to decide if an index could be dropped, it was pointing out that there’s a simple way to focus your attention on a few places where you’re most likely to find some indexes that are worth dropping.  Run a report like this against the five biggest tables or the five busiest tables or the five tables with the most indexes and you’ll probably find a few easy wins as far as redundant indexes are concerned.

Footnote

While writing up my comments about the optimizer’s tricks with columns like dp_datetime and a virtual dp_datetime_date I had a sudden sneaky thought about how we could play games with the optimizer if both columns were real columns that were kept in synch with each other. If it works out I’ll write it up in a further blog.

Raspberry Pi Controlled Airplane Launcher

Everything I do with the Raspberry Pi is about finding ways to incorporate STEAM (Science, Technology, Engineering, Art and Math) into my projects.  This is no easy feat, but I have a lot of fun while I try.  With this latest project, we’re still working on ways to “enhance” what we build and to understand how code can be reused across multiple projects to satisfy multiple requirements.

Science Kit Projects, to the Nth Level

Our first project used DC motors to make an art project, or spinner, turn at the required rotation.  I was then able to repurpose the same code to bypass a battery-operated Insectoid and control it with the Raspberry Pi, and now I’m going to use it to bypass the original battery-operated project to control TWO motors, running in opposite directions, to launch paper airplanes.

Now, what’s nice about these projects is that 4M kits come in all designs, but the ones I like are the ones that have battery packs.  If they have battery packs, they’re trying to power something, and that something can then be redirected to the Raspberry Pi and controlled through code.

This means you’ll focus on the product area categorized as “Mechanic Fun”.   What’s great about these kits is that your kid gets to build something that has a set of instructions, has the same gears, motors and other pieces that they would use in their own projects as their skills develop and, more than anything else, often has a “base” or set of “frames” that are professionally designed and offer a huge success rate for the project.  Once the project is built, the original goal “swerves” to rewiring it to the Raspberry Pi, and the “hacking” begins by first wiring it to the GPIO or Pibrella board.  After you’ve successfully rewired it, you then need to update your code to match the requirements of the new project and control the motor or the gears properly.

Building out the Airplane Launcher

The kit came with a good number of parts.  The hardest part was getting the motors mounted into the frame, and it took a bit of dish soap around the openings to get them to fit into the rubber molds (it was a tight fit and you had to be careful not to bend the shaft on the stepper motor, which is made of a soft, conductive metal).

One of the benefits of the 4M products is that they eliminate any soldering.  The wires either attach to the motor, with plastic covers securing the connections, or – in the case of the wires from the battery compartment and, subsequently, where the Raspberry Pi comes in, the jumper wires that will attach to the GPIO out – there is simply a set of screws that are tightened to attach them to the rest of the unit.

[Image: 20150831_105029]

 

On the right side of the picture, you can see where the upper two red and the lower two black wires attach.  One red and one black attach over to the battery pack.  As we don’t want the Raspberry Pi 5V, 2.5A power supply having to power the project along with the Raspberry Pi, keyboard, mouse, wifi and any other peripherals attached that don’t have their own power supply, we’re going to keep the battery pack wired in, batteries included, but with the power switch set to OFF.  When we execute our code, the batteries will supplement the power and keep the project from overwhelming our RPi power supply.  (How would you know if you need a battery pack on a project?  Your Raspberry Pi will reboot immediately upon initiating power to the new add-on.  Trust me, you don’t want to do this often, so please, if a battery pack comes with a project, wire it in with the jumpers to the Raspberry Pi.  You’ll thank me later! :))

[Image: 20150831_105104]

Now, the second set of wires you see up there on the right-hand side, with the black jumpers, are the jumper wires used to attach the project to the Raspberry Pi.  The first set, attached to the project and then going out, are male/male jumpers.  From there, we use male/female jumper wires like the following:

[Image: gpio_wires]

 

I buy them in various lengths, colors and ends for my projects from MicroCenter.  You can then piggyback one male/female jumper onto another until you have the preferred length, and then plug the appropriate positive (+) and negative (-) into your GPIO board or, in my case, my Pibrella board as follows.

[Image: 20150831_105041]

Coding in Python

My code will be turning on the “E” connection.  (You can see the letters on the left, A–D; on the right side you can barely see G–H next to the red button, which is covering E and F, E being at the top.  There is a small positive and negative symbol at the bottom to tell you which is which for the In or Out.)  I’ve plugged my wires into the “OUT” side at E, since I’m using the following command to turn on the output:

pibrella.output.e.on()  # use the Pibrella board, Out, E, and turn it on…

Then I’m ready to update my code to do what I want with this new project.

import pibrella
import time

while True:
    if pibrella.button.read(): #use button to initiate code...
        pibrella.light.green.on() #turn on green light
        time.sleep(1)
        pibrella.light.green.off()
        pibrella.output.e.on() #Initiate E
        time.sleep(7)
        pibrella.output.e.off()
        pibrella.light.red.on() #Turn on Red light, to say done
        time.sleep(1)
        pibrella.light.red.off()
        pibrella.buzzer.success() #Issue Success sound to complete
        time.sleep(2)
        pibrella.buzzer.off()
exit()

This is now going to be our code to do what we want with this project and, yes, it does a lot more than a simple ON/OFF.  Save this code as fly-plane.py and you’re ready to create a plane to fly with your airplane launcher.

Now you can let your assistant (in my case, my 15-yr-old son, Josh) build you a raging paper airplane.  You can make this as simple or as complicated as you want.  The one tip I’ll offer is to keep the plane clean and narrow at the base, as it needs to run cleanly between the rubber spinners.  Josh made me a few killer planes, and this is a sample run of one of our launches.  It’s executed from a terminal screen by typing in the following:

sudo python3 fly-plane.py

The first thing you’ll notice, if you tried the airplane launcher with just the battery pack vs. with the Raspberry Pi and the battery pack, is that it is super-charged in the Raspberry Pi configuration.  We could control the speed of the rotation with GPIO code and such, but it’s just too much fun.  (I also hear Tim the Toolman Taylor in the background making caveman ape-like sounds… :))

So next time you’re looking for new projects, but coming up with nothing, take a look in the science and robotics kit aisle of your local toy store.  You might find something fun to incorporate with your Raspberry Pi!

 



Copyright © DBA Kevlar [Raspberry Pi Controlled Airplane Launcher], All Right Reserved. 2015.

Index Usage – 2

I’ve been a little slow in the follow-up to my previous posting on possibly redundant indexes. Before going into the slightly more complex stuff, there’s another peripheral point (but a very important one) that’s worth raising about how clever the optimizer can be. Here’s some code for 11.2.0.4 to demonstrate the point:

create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		level <= 1e4
)
select
	rownum					id,
	trunc(sysdate,'MM') + (rownum-1)/1440	date_time,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5
;


alter table t1 
add (
        date_only
	generated always as (trunc(date_time)) virtual 
)
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create index t1_i1 on t1(date_only) nologging;

So, in a two-step process, I’ve got an indexed virtual column that holds the value of the date_time column truncated to just the date. Would you expect the optimizer to use the index to execute the following query efficiently?


select
        max(id)
from
        t1
where
        date_time between sysdate-1 and sysdate
;

Note that the query references the real date_time column not the virtual column date_only, and it’s not using the expression that defines the index – yet the plan reads as follows:


-----------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |     1 |    21 |    86   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                       |       |     1 |    21 |            |          |
|*  2 |   FILTER                              |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1442 | 30282 |    86   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access("T1"."DATE_ONLY">=TRUNC(SYSDATE@!-1) AND
              "T1"."DATE_ONLY"<=TRUNC(SYSDATE@!))

It’s a little odd that even though the optimizer in the newer versions of Oracle treats many simple expressions on sysdate as constants it still checks (operation 2) that “sysdate >= sysdate – 1” but perhaps that’s just a case of a piece of generic code that isn’t worth the risk or effort of changing.

The key point, of course, is that Oracle has managed to generate some extra predicates that allow it to use the “wrong” index to get a first approximation of the result set fairly efficiently, and then used the original predicate to reduce the approximation down to the correct result set.

If you want a quick sanity check on the access predicates used for operation 4:

  • If date_time >= sysdate-1, then trunc(date_time) >= trunc(sysdate-1)
  • If date_time <= sysdate, then trunc(date_time) <= trunc(sysdate)

This style of predicate manipulation also works for numeric data types, but I think its greatest benefit (or convenience) is likely to come from date data types where the data has been created with a time component but there are frequent “date-only” queries. The days of creating two indexes as a workaround for handling generated code that wants to deal with both date_time and trunc(date_time) predicates should be numbered.
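
For what it’s worth, here’s a minimal (untested) sketch of the numeric equivalent – the table t2 and column amt are hypothetical, and the comment shows the access predicates I would expect the optimizer to generate if the transformation fires in the same way it does for dates:

alter table t2
add (
        amt_trunc
        generated always as (trunc(amt)) virtual
)
;

create index t2_i1 on t2(amt_trunc) nologging;

select
        count(*)
from
        t2
where
        amt between 100.5 and 200.5
;

-- expected generated access predicates, by analogy with the date example:
-- "T2"."AMT_TRUNC" >= TRUNC(100.5) and "T2"."AMT_TRUNC" <= TRUNC(200.5)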

Footnote:

This enhancement probably appeared in 11.2.0.2, and I first saw it described in October 2013 in this blog note by Mohamed Houri; but 12c offers a delightful little enhancement – here’s what my table looks like in the 12c version of the code:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                                     NUMBER
 DATE_TIME                              DATE
 PADDING                                VARCHAR2(100)

SQL> 

Where’s the virtual column? The 12c version of my code had a slightly different definition for it:


alter table t1  
add (
        date_only
        invisible
        generated always as (trunc(date_time)) virtual
)
;

The transformation still works even when the virtual column is invisible. So (subject to searching for anomalies, boundary conditions and bugs) it looks as if you can change the table definition, and get the benefits of two indexes for the price of one without the application realising that anything has changed.
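
If you want to reassure yourself that the invisible column really is there, describe won’t show it but the data dictionary will – a quick sketch:

select
        column_name, hidden_column, virtual_column
from
        user_tab_cols
where
        table_name = 'T1'
;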

Creating a Trace File from EM12

Creating a trace file from EM12c is quite easy and doesn’t require a DBA to offer up the world to allow a developer or support person to perform this action.

Enterprise Manager also removes the need for a developer or app support person who is less specialized in creating trace files to know all the options and the syntax, which keeps them from getting into trouble and gives everyone a chance to be successful.

The proposed solution that I’ll post here is one where the EM12c user will be able to create trace files, but the DBA will control the OS-level files (delivering the file to the user after the trace file is created). The user will log into the database from Cloud Control with a defined database account and will have no target access granted previously.

Creating the Database User for Tracing

Connect to your PDB and create the user that you will be using for tracing:

create user dev_trace identified by <password>;
grant connect, select_catalog_role, create session, alter session to dev_trace;

Limit the size of trace files for the database to ensure that no trace file can be forgotten and grow unconstrained:

connect system/<password>
alter system set max_dump_file_size = 40960;

Once this is complete, you can create the database target credentials for your developer role so that it now uses this new login, limiting the user’s rights in the target as they log in.

Grant the following privileges at the “Target Type Privilege” for the new user:

  • Monitor Enterprise Manager – Monitor Enterprise Manager performance
  • Add any Target – Add any target in Enterprise Manager
  • View any Target – Ability to view all managed targets in Enterprise Manager

Yes, you do have the right to switch from “View any Target” to a list of specified targets.  I just figured I’d lessen the steps here, so do whatever fulfills your IT security model.

Add the new role that you’ve created for your tracing user and then complete the wizard creation for a new user in Cloud Control.

Creating a Trace File

Once the user logs in (and updates their password… :)) they should be able to log into a target and view performance data.  For our example, we’ll use the following database scenario, where the developer or app support user is concerned about “Other” in Top Activity.  They’ve been given a high-level overview of Top Activity wait events and know that green (CPU) is expected, dark blue (IO) often occurs, and that colors of red (application or concurrency), brown (configuration or network), orange (commit) and pink (other) should be monitored more closely.

[Image: tracing1]

 

Now, tracing is a session-level process that creates a trace file in an OS (Operating System) directory.  For Oracle 11g and higher, this creates the file in the Automatic Diagnostic Repository (ADR) home.  The base location can easily be derived from the DIAGNOSTIC_DEST parameter: <diagnostic_dest>/diag/rdbms/<db_name>/<instance_name>/trace.

show parameter DIAGNOSTIC_DEST;

In our Top Activity example above, we can see the sessions on the right-hand side and a couple that are of concern.  Since we’ve decided that we need to inspect the “Other” waits more, we will look in the Session ID column and double-click on the 1107 Session ID.

This brings us to the details page for the 1107 session:

[Image: tracing2]

 

We have a number of tabs in the Details page to inspect, but let’s say we want to create a trace file to dig deeper into the ENQUEUE waits instead.  To the right, in both the upper and lower sections, you can see the button “Enable SQL Trace”.  As the user has the privilege to “Alter Session”, they can click this and it will step through the creation of a trace file.

[Image: tracing3]

 

This is like performing a 10046 trace.  As you can see, it’s very straightforward and simple vs. executing the SQL*Plus commands.  With both options set to “No”, the equivalent call would be the following:

execute dbms_support.start_trace_in_session(1107, 39645, waits=>false, binds=>false);

To create the option with the wait information set to “Yes” we’d update our request to the following:

execute dbms_support.start_trace_in_session (1107,39645,waits=>true);

And for both to be set to “Yes”, we’d execute this command that has both:

execute dbms_support.start_trace_in_session (1107,39645,waits=>true, binds=>true);
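
(If you’re working from SQL*Plus rather than Cloud Control you’ll also need the serial# that goes with the SID – a quick sketch, using the SID from our example:)

select
        sid, serial#, username, status
from
        v$session
where
        sid = 1107
;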

Back to our Cloud Control option (the easier option): once we decide to gather the wait and bind information, we can click “OK” and the trace will subsequently begin.

[Image: tracing4]

Once we’ve finished tracing, we should remember to turn the trace back off (which is the reason for the DBA setting the max dump file size – a safety net for those that forget!).

The option will then show right in the session, even if you leave and come back to the session in Cloud Control.  You can click on it and disable the trace:

[Image: tracing5]

 

If you were to forget, the DBA could easily locate the session being traced and issue the following statement from the command line to disable it:

execute dbms_support.stop_trace_in_session(1107, 39645);

Getting a READABLE Trace File Report

Now this is where your DBA comes in and you need to make friends with them.  Most DBAs are not going to be comfortable granting access to OS level files and I don’t blame them.  It’s best if you don’t have to worry about this access and better if they just retrieve the files for you.  Before they do, you want them to convert the trace data into a readable report.  This is done via the TKPROF utility.

The trace file should be easy to locate on the server in the ADR directory for trace files (the directory is called “trace”, duh… :)).  Two files are created as part of your trace: the .trm trace mapping file, which you won’t need, and the .trc trace file, which is the one required to create the report.
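
If the DBA wants to pin down the exact file for a given session rather than hunting through the trace directory, here’s a minimal sketch for 11g and later (again using the SID from our example):

select
        p.tracefile
from
        v$session s,
        v$process p
where
        p.addr = s.paddr
and     s.sid  = 1107
;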

The DBA will run the following to create the report:

tkprof <trace_file>.trc <output_file>.txt
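
For example (the file names here are hypothetical), sorting the statements by elapsed execute/fetch time and excluding recursive SYS SQL:

tkprof orcl_ora_12345.trc session_1107_trace.txt sort=exeela,fchela sys=no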

They can SCP or FTP the output file to their workstation and this is the one that you’ll want them to send to you.

And there you have it – creating a secure user that can create trace files, and how to create the trace files once you have the access.  Now, you DBAs and developers, make friends, bring each other donuts and maybe buy each other a drink… :)




Copyright © DBA Kevlar [Creating a Trace File from EM12], All Right Reserved. 2015.

Oracle Cloud : First Impressions

Followers of the blog will know I’ve been waiting to get access to the Oracle Cloud for a while. Well, I’ve finally got access to a bit of it. Specifically, the “Oracle Database Cloud Service” (DBaaS) part. :)

The Schema Service has been around for a few years and I had already tried that out, but IMHO it’s not really part of Oracle Cloud proper*, so I was reserving my judgement until I got the real stuff. :)

I’ve written a couple of articles already. Just basic stuff to document setting stuff up and connecting etc.

So here are some first impressions…

Oracle Cloud : Look and Feel

Overall the cloud offering looks clean and modern. Tastes vary of course, but I like the look of it.

The navigation is a bit inconsistent between the different cloud services. It feels like the console for each section (Compute, Java, DBaaS etc.) has been written by a different team, each doing what they think works, rather than working to a single design standard. Here’s a couple of examples:

  • In the “Oracle Database Cloud Service” section there is a “Consoles” button on the top-right of the screen that triggers a popup menu allowing you to switch to the Dashboard, Java Cloud and Compute Cloud console. In the “Oracle Compute Cloud” section, the “Consoles” button is not present. Instead there is a hamburger on the top-left of the screen that causes a navigation panel to slide out on the left of the screen, pushing the rest of the page contents to the right. On the top-level services page, the same hamburger produces a popup menu, kind-of like the “Consoles” button, but with the colouring of the navigation panel. I don’t find any method better or worse than the others. It would just be nice if they picked one and stuck with it, otherwise you are looking round the screen trying to decide how to make your next move. :)
  • Some consoles use tabs. Some use navigation tiles. Some use both.

Don’t get me wrong, it’s not hard to navigate. It’s just inconsistent, which kind-of ruins the overall effect. If they can bring it all into line I think it will be really cool.

I think Oracle Cloud looks neater than Amazon Web Services, but the navigation is not as consistent as AWS or Azure. Having used AWS, Azure and Oracle Cloud, I feel Azure has the neatest and most consistent interface. Like I said before, tastes vary. :)

Probably my biggest issue with the Oracle Cloud interface is the speed, or lack of. It’s really slow and unresponsive at times. On a few occasions I thought it had died, then after about 30 seconds the screen just popped back into life. Some of the actions give no feedback until they are complete, so you don’t know if you’ve pressed the button or not.

Oracle Cloud : Ease of Use

I found DBaaS pretty simple to use. I’ve already spent some time using AWS and Azure, so there is probably some carry-over there. I pretty much completed my first pass through creation, connections and patching before I even considered looking for documentation. :)

The documentation is OK, but contains very few screen shots, which leads me to believe the look and feel is all in a state of flux.

I think the general Oracle Compute Cloud Service network/firewall setup is really quite clear, but you can’t edit existing rules. Once a rule is created you can only enable, disable or delete it. I found myself having to delete and create rules a number of times when it felt more obvious to let me edit an existing rule. I’ll mention a DBaaS issue related to this later.

DBaaS Specifically

Just some general observations about the DBaaS offering.

  • The “Oracle Database Cloud Service” DBaaS offering looks OK, but I noticed they don’t have multiplexed redo logs. I never run without multiplexed redo logs, regardless of the redundancy on the storage layer. Even if they were all shoved in the same directory, it would still be better than running without multiplexed files. This is a bit of mandatory configuration the user is left to do after the fact.
  • The DBaaS virtual machine has Glassfish and ORDS installed on it, which is necessary because of the way they have organised the administration of the service, but it’s not something I would normally recommend. Databases and App Servers never go on the same box. Like I said, I understand why, but I don’t like it.
  • The management of the DBaaS offering feels fragmented. For some administration tasks you use the main cloud interface. For others you jump across to the DBaaS Monitor, which has a completely different look and feel. For others you jump across to [DBConsole – 11g | DB Express -12c]. For a DBaaS offering, I think this is a mistake. It should all be incorporated into the central console and feel seamless. I understand that may be a pain and repetition of existing functionality, but it feels wrong without it.
  • I found the network/firewall setup done by the DBaaS service to be quite irritating. It creates a bunch of rules for each DBaaS service, which are all disabled by default (a good thing), but all the rules are “public”, which you would be pretty crazy to enable. Because you can’t edit them, they end up being pretty much useless. It really is one of those, “Do it properly or don’t bother!”, issues to me. If the DBaaS setup screens asked you to define a Security IP List, or pick an existing one, and decide which services you wanted to make available, it could build all these predefined rules properly in the first place. Alternatively, provide a DBaaS network setup wizard or just don’t bother. It feels so half-baked. :(
  • Dealing with the last two points collectively, the fragmentation of the management interface means some of the management functionality (DBaaS Monitor and [DBConsole – 11g | DB Express -12c]) is not available until you open the firewall for it. This kind-of highlights my point about the fragmentation. I’m logged into the DBaaS console where I can create and delete the whole service, but I can’t use some of the management features. It just feels wrong to me. It is totally down to the implementation choices. I would not have chosen this path.
  • Unlike the AWS RDS for Oracle, you get complete access to the OS and database. You even get sudo access to run root commands. At first I thought this was going to be a good thing and a nice differentiator compared to RDS, but having used the service I’m starting to think it is a bad move. The whole point of a DBaaS offering is it hides some of the nuts and bolts from you. I should not be worrying about the OS. I should not be worrying about the basic Oracle setup. Giving this level of access raises more questions/problems than it solves. I feel I should either do everything myself, or pick a DBaaS offering, accept the restrictions of it, and have it all done for me. The current offering feels like it has not decided what it wants to be yet.
  • When I patched the database through the service admin console it worked fine, but it took a “really” long time! I waited quite a while, went out to the gym and it was still going when I came back. Eventually I started an SSH session to try and find out what was happening. It turns out it took over 2 hours to “download” the PSU to the VM. Once the download was complete, the application of the patch was done quickly. Interesting.
  • The “Oracle Database Cloud Service – Virtual Image” option seems pretty pointless to me. On the website and console it says there is a software installation present, but this is not the case. Instead, there is a tarball containing the software (/scratch/db12102_bits.tar.gz). It also doesn’t come with the storage to do the actual installation on, or to hold the datafiles. To do the installation, you would need to “Up Scale” the service to add the storage, then do the installation manually. This process is actually more complicated than provisioning a compute node and doing everything yourself. I think Oracle need to ditch this option and just stick with DBaaS or Compute, like Amazon have done (RDS or EC2).

Conclusion

I like the Oracle Cloud more than I thought I would. I think it looks quite nice and if someone told me I had to use it as a general Infrastructure as a Service (IaaS) portal I would be fine with that.

I like the DBaaS offering less than I hoped I would. I feel quite bad about saying it, but it feels like a work in progress and not something I would want to use at this point. If it were my decision, I would be pushing the DBaaS offering more in the direction of AWS RDS for Oracle. As I said before, the current DBaaS offering feels like it has not decided what it wants to be yet. It needs to be much more hands-off, with a more consistent, centralized interface.

I don’t have full access to the straight Compute Cloud yet, so I can’t try provisioning a VM and doing everything myself. If I get access I will try it, but I would expect it to be the same as what I’ve done for EC2 and Azure. A VM is a VM… :)

When I read this back it sounds kind-of negative, but I think all the things I’ve mentioned could be “fixed” relatively quickly. Also, this is only one person’s opinion on one specific service. The haters need to try this for themselves before they hate. :)

Cheers

Tim…

* Just to clarify, I am not saying the Schema Service isn’t “Cloud” and I’m not saying it doesn’t work. I’m just saying I don’t see this as part of Oracle’s grand cloud database vision. It always seemed like a cynical push to market to allow them to say, “we have a cloud DB”. If it had been branded “APEX Service” I might have a different opinion. It is after all a paid for version of apex.oracle.com. This is a very different proposition to promoting it as a “Cloud Database”.


Oracle Cloud : First Impressions was first posted on August 28, 2015 at 2:55 pm.

Presenting in Perth on September 9th, 2015

I’ll be presenting at a “Lets Talk Oracle” event in Perth, with fellow Ex-Oracle ACE Directors Richard Foote and Chris Muir. Full agenda as follows:



8:30-9:00   Registration and coffee
9:00-10:30  Richard Part I – Database 12c New Features for DBAs (and Developers)
10:30-11:00 Break
11:00-12:30 Richard Part II – Database 12c New Features for DBAs (and Developers)
12:30-1:30  Lunch
1:30-2:30   Chris – Creating RESTful APIs with Oracle Data Services (for Developers and DBAs)
2:30-2:45   Break
2:45-4:15   Connor – Database 12c New Features for Developers (and DBAs)
4:15-5:00   Q&A with the ACES !
5:00        Wrap up

And yes… of course, the event is FREE!

It will be at the Oracle Offices in Perth so places are strictly limited. If you want to attend, send an email to dennis.ward@oracle.com

See you there !!

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

I’ve said a number of times, the process of writing articles is part of an ongoing learning experience for me. A few days ago my personal tech editor (Jonathan Lewis) asked about a statement I made in the SQL Plan Directive article. On further investigation it turned out the sentence was a complete work of fiction on my part, based on my misunderstanding of something I read in the manual, as well as the assumption that everything that happens must be as a result of a new feature. :)

Anyway, the offending statement has been altered, but the conversation this generated resulted in a new article about Automatic Column Group Detection.

The process also highlighted how difficult it is, at least for me, to know what is going on in the optimizer now. It wasn’t always straightforward before, but now, with the assorted new optimizations, some beating others to the punch, it is even more difficult. There are a number of timing issues involved as well. If a statement runs twice in quick succession, you might get a different sequence of events compared to having a longer gap between the first and second run of the statement. It’s maddening at times. I’m hoping Jonathan will put pen to paper about this, because I think he will do a better job than I can of explaining the issues around the inter-dependencies.

Anyway, I will be doing another pass through this stuff over the coming days/weeks/months/years to make sure it is consistent with “my current understanding”. :)

Fun, fun, fun…

Cheers

Tim…


Adaptive Query Optimization in Oracle 12c : Ongoing Updates was first posted on August 27, 2015 at 7:09 pm.

Real Application Testing report On Premise vs. Oracle Public Cloud

Just a quick post today as I wanted to highlight a video that has been made available by one of my colleagues, Valentin Tabacaru. Valentin is a sales consultant based out of one of our Spanish offices. The YouTube video shows a SQL Performance Analyzer execution and report generation using real-world workload testing between an On Premise Oracle 11gR2 Database and an Oracle Public Cloud 12c Pluggable Database Service. Both environments are managed using EM12c, providing a single pane of glass to both monitor and manage the environments. The video lasts a shade under 24 minutes, so go and grab yourself a coffee, sit back and enjoy!

The post Real Application Testing report On Premise vs. Oracle Public Cloud appeared first on PeteWhoDidNotTweet.com.

Truncate – 2

Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum: “Why isn’t a commit required for DDL?”

Sometimes the answer to “Why” is simply “that’s just the way it is” – and that’s what it is in this case, I think.  There may have been some historic reason why Oracle Corp. implemented DDL the way they did (commit any existing transaction the session is running, then auto-commit when complete), but once the code has been around for a few years – and accumulated lots of variations – it can be very difficult to change a historic decision, no matter how silly it may now seem.

This posting isn’t about answering the question “why”, though; it’s about a little script I wrote in 2003 in response to a complaint from someone who wanted to truncate a table in the middle of a transaction without committing the transaction. Don’t ask why – you really shouldn’t be executing DDL as part of a transactional process (though a task like dropping and recreating indexes as part of a batch process is a reasonable strategy).

So if DDL always commits the current transaction, how do you truncate a table without committing? Easy – use an autonomous transaction. First a couple of tables with a little data, then a little procedure to do my truncate:


create table t1 (n1 number);
insert into t1 values(1);

create table t2 (n1 number);
insert into t2 values(1);

create or replace procedure truncate_t1
as
        pragma autonomous_transaction;
begin
        execute immediate 'truncate table t1';
end;
/

Then the code to demonstrate the effect:


prompt  ======================================
prompt  In this example we end up with no rows
prompt  in t1 and only the original row in t2,
prompt  the truncate didn't commit the insert.
prompt  ======================================

insert into t2 values(2);

execute truncate_t1;
rollback;

select * from t1;
select * from t2;


According to my notes, the last time I ran this code was on 9.2.0.3 but I’ve just tested it on 12.1.0.2 and it behaves in exactly the same way.

I’ve only tested the approach with “truncate” and “create table”, apparently, and I haven’t made any attempt to see if it’s possible to cause major disruption with cunningly timed concurrent activity; but if you want to experiment you have a mechanism which Oracle could have used to avoid committing the current transaction – and you may be able to find out why it doesn’t, and why DDL is best “auto-committed”.

Fedora 22/23 and Oracle 11gR2/12cR1

As always, installations of Oracle server products on Fedora are not a great idea, as explained here.

I was reading some stuff about the Fedora 23 Alpha and realised Fedora 22 had passed me by. Not sure how I missed that. :)

Anyway, I did a run through of the usual play stuff.

While I was at it, I thought I would get the heads-up on Fedora 23 Alpha.

The F23 stuff will have to be revised once the final version is out, but I’m less likely to forget now. :)

I guess the only change in F22 upward that really affects me is the deprecation of YUM in F22 in favour of the DNF fork. For the most part, you just switch the command.

#This:
yum install my-package -y
yum groupinstall my-package-group -y
yum update -y

#Becomes:
dnf install my-package -y
dnf groupinstall my-package-group -y
dnf group install my-package-group -y
dnf update -y

This did cause one really annoying problem in F23 though. The “MATE Desktop” had a single documentation package that was causing a problem. Usually I would use the following.

yum groupinstall "MATE Desktop" -y --skip-broken

Unfortunately, DNF doesn’t support “--skip-broken”, so I was left to either manually install the pieces or give up. I chose the latter and used LXDE instead. :) F23 is an Alpha, so you expect issues, but DNF has been in since F22 and there is still no “--skip-broken”, which I find myself using a lot. Pity.

Cheers

Tim…


Fedora 22/23 and Oracle 11gR2/12cR1 was first posted on August 25, 2015 at 3:53 pm.