
Oakies Blog Aggregator

Index Usage – 2

I’ve been a little slow in the follow-up to my previous posting on possibly redundant indexes. Before going into the slightly more complex stuff, there’s another peripheral point (but a very important one) that’s worth raising about how clever the optimizer can be. Here’s some code for 11.2.0.4 to demonstrate the point:

create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		level <= 1e4
)
select
	rownum					id,
	trunc(sysdate,'MM') + (rownum-1)/1440	date_time,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5
;


alter table t1 
add (
        date_only
	generated always as (trunc(date_time)) virtual 
)
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create index t1_i1 on t1(date_only) nologging;

So, in a two-step process, I’ve got an indexed virtual column that holds the value of the date_time column truncated to just the date. Would you expect the optimizer to use the index to execute the following query efficiently?


select
        max(id)
from
        t1
where
        date_time between sysdate-1 and sysdate
;

Note that the query references the real date_time column, not the virtual column date_only, and it’s not using the expression that defines the index – yet the plan reads as follows:


-----------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |     1 |    21 |    86   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                       |       |     1 |    21 |            |          |
|*  2 |   FILTER                              |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1442 | 30282 |    86   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access("T1"."DATE_ONLY">=TRUNC(SYSDATE@!-1) AND
              "T1"."DATE_ONLY"<=TRUNC(SYSDATE@!))

It’s a little odd that even though the optimizer in the newer versions of Oracle treats many simple expressions on sysdate as constants it still checks (operation 2) that “sysdate >= sysdate – 1”, but perhaps that’s just a case of a piece of generic code that isn’t worth the risk or effort of changing.

The key point, of course, is that Oracle has managed to generate some extra predicates that allow it to use the “wrong” index to get a first approximation of the result set fairly efficiently, and then used the original predicate to reduce the approximation down to the correct result set.

If you want a quick sanity check on the access predicates used for operation 4:

  • If date_time >= sysdate-1, then trunc(date_time) >= trunc(sysdate-1)
  • If date_time <= sysdate, then trunc(date_time) <= trunc(sysdate)
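If you want to see this in the data rather than on paper, here’s a quick sketch of a check against the t1 table created above. Both counts should match: the derived trunc() predicates can never reject a row that the original predicate accepts.

```sql
-- Sanity check: adding the derived predicates on the virtual column
-- must not change the result of the original query.
select count(*) from t1
where  date_time between sysdate-1 and sysdate;

select count(*) from t1
where  date_time between sysdate-1 and sysdate
and    date_only between trunc(sysdate-1) and trunc(sysdate);
```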

This style of predicate manipulation also works for numeric data types, but I think its greatest benefit (or convenience) is likely to come from date data types where the data has been created with a time component but there are frequent “date-only” queries. The days of creating two indexes as a workaround for handling generated code that wants to deal with both date_time and trunc(date_time) predicates should be numbered.

Footnote:

This enhancement probably appeared in 11.2.0.2, and I first saw it described in October 2013 in this blog note by Mohamed Houri; but 12c offers a delightful little enhancement – here’s what my table looks like in the 12c version of the code:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                                     NUMBER
 DATE_TIME                              DATE
 PADDING                                VARCHAR2(100)

SQL> 

Where’s the virtual column ? The 12c version of my code had a slightly different definition for it:


alter table t1  
add (
        date_only
        invisible
        generated always as (trunc(date_time)) virtual
)
;

The transformation still works even when the virtual column is invisible. So (subject to searching for anomalies, boundary conditions and bugs) it looks as if you can change the table definition, and get the benefits of two indexes for the price of one without the application realising that anything has changed.
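Here’s a quick sketch of how the invisible column behaves (assuming the 12c table above): it doesn’t show up in a describe or a select *, but it can still be referenced, and indexed, by name.

```sql
-- The invisible column is excluded from "select *" ...
select * from t1 where rownum = 1;             -- ID, DATE_TIME, PADDING only

-- ... but is still there if you name it explicitly:
select date_only from t1 where rownum = 1;

-- Inserts without an explicit column list are similarly unaffected.
```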

Creating a Trace File from EM12

Creating a trace file from EM12c is quite easy and doesn’t require a DBA offering up the world to allow a developer or support person to perform this action.

Enterprise Manager also hides many of the syntax options, so a developer or app support person who is less specialized in creating trace files doesn’t need to know all the options and syntax, and is far less likely to get into trouble. It creates an option for everyone to be successful.

The solution I’ll post here lets the EM12c user create trace files while the DBA keeps control of the OS level files (delivering the file to the user after the trace file is created). The user will log into the database from Cloud Control with a defined account, with no target access granted previously.

Creating the Database User for Tracing

Connect to your PDB and create the user that you will be using for tracing:

create user dev_trace identified by <password>;
grant connect, select_catalog_role, create session, alter session to dev_trace;

Limit the size of trace files for the database to ensure that no trace file can be forgotten and grow unconstrained:

connect system/<password>
alter system set max_dump_file_size =40960;

Once this is complete, you can create your database target credentials for your developer role, which will now use this new login, limiting their rights in the target as they log in.

Grant the following privileges at the “Target Type Privilege” for the new user:

  • Monitor Enterprise Manager: Monitor Enterprise Manager performance
  • Add any Target: Add any target in Enterprise Manager
  • View any Target: Ability to view all managed targets in Enterprise Manager

Yes, you do have the right to switch from “View any Target” to a list of specified targets.  I just figured I’d lessen the steps here, so do what fulfills your IT security model.

Add the new role that you’ve created for your Tracing User and then complete the wizard creation for a new user in Cloud Control.

Creating a Trace File

Once the user logs in (and updates their password… :)) they should be able to log into a target and view performance data.  For our example, we’ll use the following database scenario, where the developer or app support user is concerned about “Other” in Top Activity.  They’ve been given a high level overview of Top Activity wait events and know that green (CPU) is expected, dark blue (IO) often occurs, and that colors of red (application or concurrency), brown (configuration or network), orange (commit) and pink (other) should be monitored more closely.

[Screenshot: tracing1 – Top Activity]

Now tracing is a session level process that creates a trace file in an OS (Operating System) directory.  For Oracle 11g and higher, this creates the file in the Automatic Diagnostic Repository (ADR) home.  The base location can be easily derived from the parameter DIAGNOSTIC_DEST: <diagnostic_dest>/diag/rdbms/<db_name>/<instance_name>/trace.

show parameter DIAGNOSTIC_DEST;
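If you’d rather not assemble the path by hand, v$diag_info (available from 11g onward) reports the locations directly, including the trace file for the current session:

```sql
select name, value
from   v$diag_info
where  name in ('Diag Trace', 'Default Trace File');
```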

In our above Top Activity example, we can see the sessions on the right hand side, and a couple of them are of concern.  Since we’ve decided that we need to inspect the “Other” waits more, we will look in the Session ID column and double click on the 1107 Session ID.

This brings us to the details page for the 1107 session:

[Screenshot: tracing2 – Session Details]

We have a number of tabs in the Details page to inspect, but let’s say we want to create a trace file to dig deeper into the ENQUEUE waits instead.  To the right of both the upper and lower sections, you can see the “Enable SQL Trace” button.  As the user has the “Alter Session” privilege, they can click this and it will step into the creation steps for a trace file.

[Screenshot: tracing3 – Enable SQL Trace]

This is like performing a 10046 trace.  As you can see, it’s very straightforward and simple compared to executing the SQL*Plus commands.  With both options set to “No”, the equivalent would be:

execute dbms_support.start_trace_in_session (1107, 39645, waits=>false, binds=>false);

To create the option with the wait information set to “Yes” we’d update our request to the following:

execute dbms_support.start_trace_in_session (1107,39645,waits=>true);

And for both to be set to “Yes”, we’d execute this command that has both:

execute dbms_support.start_trace_in_session (1107,39645,waits=>true, binds=>true);
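As an aside, dbms_support isn’t installed by default (it’s created by dbmssupp.sql); on recent versions the documented dbms_monitor package achieves the same thing, and a sketch of the equivalent calls would be:

```sql
-- Equivalent of the dbms_support call above, using the documented package:
execute dbms_monitor.session_trace_enable(session_id => 1107, serial_num => 39645, waits => true, binds => true);

-- ... and to switch it off again:
execute dbms_monitor.session_trace_disable(session_id => 1107, serial_num => 39645);
```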

Back to our Cloud Control option (the easier option): once we decide to gather the wait and bind information, we can click “OK” and the trace will subsequently begin.

[Screenshot: tracing4 – SQL Trace options]

Once we’ve finished tracing, we should remember to turn the trace back off (which is the reason the DBA set the max dump file size: a safety net for those who forget!)

The option will then show right in the session, even if you leave and come back to the session in Cloud Control.  You can click on it and disable the trace:

[Screenshot: tracing5 – Disable SQL Trace]

If you were to forget, the DBA can easily locate the session being traced and issue the following statement from the command line to disable it:

execute dbms_support.stop_trace_in_session (1107, 39645);

Getting a READABLE Trace File Report

Now this is where your DBA comes in and you need to make friends with them.  Most DBAs are not going to be comfortable granting access to OS level files and I don’t blame them.  It’s best if you don’t have to worry about this access and better if they just retrieve the files for you.  Before they do, you want them to convert the trace data into a readable report.  This is done via the TKPROF utility.

The trace file should be easy to locate on the server in the ADR directory for trace files (the directory is called “trace”, duh… :)).  Two files are created as part of your trace: the .trm trace mapping file, which you won’t need, and the .trc trace file, which is the one required to create the report.

The DBA will run the following to create the report:

tkprof <trace_file>.trc <report_file>.txt

They can SCP or FTP the output file to their workstation, and this is the file that you’ll want them to send to you.

And there you have it: creating a secure user that can create trace files, and then creating the trace files once you have the access.  Now, DBAs and developers, make friends, bring each other donuts and maybe buy each other a drink… :)




Copyright © DBA Kevlar [Creating a Trace File from EM12], All Right Reserved. 2015.

Oracle Cloud : First Impressions

Followers of the blog will know I’ve been waiting to get access to the Oracle Cloud for a while. Well, I’ve finally got access to a bit of it. Specifically, the “Oracle Database Cloud Service” (DBaaS) part. :)

The Schema Service has been around for a few years and I had already tried that out, but IMHO it’s not really part of Oracle Cloud proper*, so I was reserving my judgement until I got the real stuff. :)

I’ve written a couple of articles already. Just basic stuff to document setting stuff up and connecting etc.

So here are some first impressions…

Oracle Cloud : Look and Feel

Overall the cloud offering looks clean and modern. Tastes vary of course, but I like the look of it.

The navigation is a bit inconsistent between the different cloud services. It feels like the console for each section (Compute, Java, DBaaS etc.) has been written by a different team, each doing what they think works, rather than working to a single design standard. Here’s a couple of examples:

  • In the “Oracle Database Cloud Service” section there is a “Consoles” button on the top-right of the screen that triggers a popup menu allowing you to switch to the Dashboard, Java Cloud and Compute Cloud console. In the “Oracle Compute Cloud” section, the “Consoles” button is not present. Instead there is a hamburger on the top-left of the screen that causes a navigation panel to slide out on the left of the screen, pushing the rest of the page contents to the right. On the top-level services page, the same hamburger produces a popup menu, kind-of like the “Consoles” button, but with the colouring of the navigation panel. I don’t find any method better or worse than the others. It would just be nice if they picked one and stuck with it, otherwise you are looking round the screen trying to decide how to make your next move. :)
  • Some consoles use tabs. Some use navigation tiles. Some use both.

Don’t get me wrong, it’s not hard to navigate. It’s just inconsistent, which kind-of ruins the overall effect. If they can bring it all into line I think it will be really cool.

I think Oracle Cloud looks neater than Amazon Web Services, but the navigation is not as consistent as AWS or Azure. Having used AWS, Azure and Oracle Cloud, I feel Azure has the neatest and most consistent interface. Like I said before, tastes vary. :)

Probably my biggest issue with the Oracle Cloud interface is the speed, or lack of it. It’s really slow and unresponsive at times. On a few occasions I thought it had died, then after about 30 seconds the screen just popped back into life. Some of the actions give no feedback until they are complete, so you don’t know if you’ve pressed the button or not.

Oracle Cloud : Ease of Use

I found DBaaS pretty simple to use. I’ve already spent some time using AWS and Azure, so there is probably some carry-over there. I pretty much completed my first pass through creation, connections and patching before I even considered looking for documentation. :)

The documentation is OK, but contains very few screen shots, which leads me to believe the look and feel is all in a state of flux.

I think the general Oracle Compute Cloud Service network/firewall setup is really quite clear, but you can’t edit existing rules. Once a rule is created you can only enable, disable or delete it. I found myself having to delete and create rules a number of times when it felt more obvious to let me edit an existing rule. I’ll mention a DBaaS issue related to this later.

DBaaS Specifically

Just some general observations about the DBaaS offering.

  • The “Oracle Database Cloud Service” DBaaS offering looks OK, but I noticed they don’t have multiplexed redo logs. I never run without multiplexed redo logs, regardless of the redundancy on the storage layer. Even if they were all shoved in the same directory, it would still be better than running without multiplexed files. This is a bit of mandatory configuration the user is left to do after the fact.
  • The DBaaS virtual machine has Glassfish and ORDS installed on it, which is necessary because of the way they have organised the administration of the service, but it’s not something I would normally recommend. Databases and App Servers never go on the same box. Like I said, I understand why, but I don’t like it.
  • The management of the DBaaS offering feels fragmented. For some administration tasks you use the main cloud interface. For others you jump across to the DBaaS Monitor, which has a completely different look and feel. For others you jump across to [DBConsole – 11g | DB Express – 12c]. For a DBaaS offering, I think this is a mistake. It should all be incorporated into the central console and feel seamless. I understand that may be a pain and repetition of existing functionality, but it feels wrong without it.
  • I found the network/firewall setup done by the DBaaS service to be quite irritating. It creates a bunch of rules for each DBaaS service, which are all disabled by default (a good thing), but all the rules are “public”, which you would be pretty crazy to enable. Because you can’t edit them, they end up being pretty much useless. It really is one of those, “Do it properly or don’t bother!”, issues to me. If the DBaaS setup screens asked you to define a Security IP List, or pick an existing one, and decide which services you wanted to make available, it could build all these predefined rules properly in the first place. Alternatively, provide a DBaaS network setup wizard or just don’t bother. It feels so half-baked. :(
  • Dealing with the last two points collectively, the fragmentation of the management interface means some of the management functionality (DBaaS Monitor and [DBConsole – 11g | DB Express -12c]) is not available until you open the firewall for it. This kind-of highlights my point about the fragmentation. I’m logged into the DBaaS console where I can create and delete the whole service, but I can’t use some of the management features. It just feels wrong to me. It is totally down to the implementation choices. I would not have chosen this path.
  • Unlike the AWS RDS for Oracle, you get complete access to the OS and database. You even get sudo access to run root commands. At first I thought this was going to be a good thing and a nice differentiator compared to RDS, but having used the service I’m starting to think it is a bad move. The whole point of a DBaaS offering is it hides some of the nuts and bolts from you. I should not be worrying about the OS. I should not be worrying about the basic Oracle setup. Giving this level of access raises more questions/problems than it solves. I feel I should either do everything myself, or pick a DBaaS offering, accept the restrictions of it, and have it all done for me. The current offering feels like it has not decided what it wants to be yet.
  • When I patched the database through the service admin console it worked fine, but it took a “really” long time! I waited quite a while, went out to the gym and it was still going when I came back. Eventually I started an SSH session to try and find out what was happening. It turns out it took over 2 hours to “download” the PSU to the VM. Once the download was complete, the application of the patch was done quickly. Interesting.
  • The “Oracle Database Cloud Service – Virtual Image” option seems pretty pointless to me. On the website and console it says there is a software installation present, but this is not the case. Instead, there is a tarball containing the software (/scratch/db12102_bits.tar.gz). It also doesn’t come with the storage to do the actual installation on, or to hold the datafiles. To do the installation, you would need to “Up Scale” the service to add the storage, then do the installation manually. This process is actually more complicated than provisioning a compute node and doing everything yourself. I think Oracle need to ditch this option and just stick with DBaaS or Compute, like Amazon have done (RDS or EC2).

Conclusion

I like the Oracle Cloud more than I thought I would. I think it looks quite nice and if someone told me I had to use it as a general Infrastructure as a Service (IaaS) portal I would be fine with that.

I like the DBaaS offering less than I hoped I would. I feel quite bad about saying it, but it feels like a work in progress and not something I would want use at this point. If it were my decision, I would be pushing the DBaaS offering more in the direction of AWS RDS for Oracle. As I said before, the current DBaaS offering feels like it has not decided what it wants to be yet. It needs to be much more hands-off, with a more consistent, centralized interface.

I don’t have full access to the straight Compute Cloud yet, so I can’t try provisioning a VM and doing everything myself. If I get access I will try it, but I would expect it to be the same as what I’ve done for EC2 and Azure. A VM is a VM… :)

When I read this back it sounds kind-of negative, but I think all the things I’ve mentioned could be “fixed” relatively quickly. Also, this is only one person’s opinion on one specific service. The haters need to try this for themselves before they hate. :)

Cheers

Tim…

* Just to clarify, I am not saying the Schema Service isn’t “Cloud” and I’m not saying it doesn’t work. I’m just saying I don’t see this as part of Oracle’s grand cloud database vision. It always seemed like a cynical push to market to allow them to say, “we have a cloud DB”. If it had been branded “APEX Service” I might have a different opinion. It is after all a paid for version of apex.oracle.com. This is a very different proposition to promoting it as a “Cloud Database”.


Oracle Cloud : First Impressions was first posted on August 28, 2015 at 2:55 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Presenting in Perth on September 9th, 2015

I’ll be presenting at a “Lets Talk Oracle” event in Perth, with fellow Ex-Oracle ACE Directors Richard Foote and Chris Muir. Full agenda as follows:



8:30-9:00   Registration and coffee
9:00-10:30  Richard Part I – Database 12c New Features for DBAs (and Developers)
10:30-11:00 Break
11:00-12:30 Richard Part II – Database 12c New Features for DBAs (and Developers)
12:30-1:30  Lunch
1:30-2:30   Chris – Creating RESTful APIs with Oracle Data Services (for Developers and DBAs)
2:30-2:45   Break
2:45-4:15   Connor – Database 12c New Features for Developers (and DBAs)
4:15-5:00   Q&A with the ACES !
5:00        Wrap up

And yes… of course, the event is FREE!

It will be at the Oracle Offices in Perth so places are strictly limited. If you want to attend, send an email to dennis.ward@oracle.com

See you there !!

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

I’ve said a number of times, the process of writing articles is part of an ongoing learning experience for me. A few days ago my personal tech editor (Jonathan Lewis) asked about a statement I made in the SQL Plan Directive article. On further investigation it turned out the sentence was a complete work of fiction on my part, based on my misunderstanding of something I read in the manual, as well as the assumption that everything that happens must be as a result of a new feature. :)

Anyway, the offending statement has been altered, but the conversation this generated resulted in new article about Automatic Column Group Detection.

The process also highlighted how difficult it is, at least for me, to know what is going on in the optimizer now. It wasn’t always straightforward before, but now, with the assorted new optimizations, some beating others to the punch, it is even more difficult. There are a number of timing issues involved too. If a statement runs twice in quick succession, you might get a different sequence of events compared to having a longer gap between the first and second run of the statement. It’s maddening at times. I’m hoping Jonathan will put pen to paper about this, because I think he will explain the issues around the inter-dependencies better than I can.

Anyway, I will be doing another pass through this stuff over the coming days/weeks/months/years to make sure it is consistent with “my current understanding”. :)

Fun, fun, fun…

Cheers

Tim…


Adaptive Query Optimization in Oracle 12c : Ongoing Updates was first posted on August 27, 2015 at 7:09 pm.

Real Application Testing report On Premise vs. Oracle Public Cloud

Just a quick post today as I wanted to highlight a video that has been made available by one of my colleagues, Valentin Tabacaru. Valentin is a sales consultant based out of one of our Spanish offices. The YouTube video shows a SQL Performance Analyzer execution and report generation using real world workload testing between an On Premise Oracle 11gR2 Database and the Oracle Public Cloud 12c Pluggable Database Service. Both environments are managed using EM12c, providing a single pane of glass to both monitor and manage them. The video lasts a shade under 24 minutes, so go and grab yourself a coffee, sit back and enjoy!

The post Real Application Testing report On Premise vs. Oracle Public Cloud appeared first on PeteWhoDidNotTweet.com.

Truncate – 2

Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum: “Why isn’t a commit required for DDL?”

Sometimes the answer to “Why” is simply “that’s just the way it is” – and that’s what it is in this case, I think.  There may have been some historic reason why Oracle Corp. implemented DDL the way they did (commit any existing transaction the session is running, then auto-commit when complete), but once the code has been around for a few years – and accumulated lots of variations – it can be very difficult to change a historic decision, no matter how silly it may now seem.

This posting isn’t about answering the question “why”, though; it’s about a little script I wrote in 2003 in response to a complaint from someone who wanted to truncate a table in the middle of a transaction without committing the transaction. Don’t ask why – you really shouldn’t be executing DDL as part of a transactional process (though tasks like dropping and recreating indexes as part of a batch process is a reasonable strategy).

So if DDL always commits the current transaction how do you truncate a table without committing ? Easy – use an autonomous transaction. First a couple of tables with a little data, then a little procedure to do my truncate:


create table t1 (n1 number);
insert into t1 values(1);

create table t2 (n1 number);
insert into t2 values(1);

create or replace procedure truncate_t1
as
        pragma autonomous_transaction;
begin
        execute immediate 'truncate table t1';
end;
/

Then the code to demonstrate the effect:


prompt  ======================================
prompt  In this example we end up with no rows
prompt  in t1 and only the original row in t2,
prompt  the truncate didn't commit the insert.
prompt  ======================================

insert into t2 values(2);

execute truncate_t1;
rollback;

select * from t1;
select * from t2;


According to my notes, the last time I ran this code was on 9.2.0.3 but I’ve just tested it on 12.1.0.2 and it behaves in exactly the same way.

I’ve only tested the approach with “truncate” and “create table” apparently, and I haven’t made any attempt to see if it’s possible to cause major disruption with cunningly timed concurrent activity; but if you want to experiment you have a mechanism which Oracle could have used to avoid committing the current transaction – and you may be able to find out why it doesn’t, and why DDL is best “auto-committed”.

Fedora 22/23 and Oracle 11gR2/12cR1

As always, installations of Oracle server products on Fedora are not a great idea, as explained here.

I was reading some stuff about the Fedora 23 Alpha and realised Fedora 22 had passed me by. Not sure how I missed that. :)

Anyway, I did a run through of the usual play stuff.

While I was at it, I thought I would get the heads-up on Fedora 23 Alpha.

The F23 stuff will have to be revised once the final version is out, but I’m less likely to forget now. :)

I guess the only change in F22 upward that really affects me is the deprecation of YUM in F22 in favour of the DNF fork. For the most part, you just switch the command.

#This:
yum install my-package -y
yum groupinstall my-package-group -y
yum update -y

#Becomes:
dnf install my-package -y
dnf groupinstall my-package-group -y    # or equivalently:
dnf group install my-package-group -y
dnf update -y

This did cause one really annoying problem in F23 though. The “MATE Desktop” had a single documentation package that was causing a problem. Usually I would use the following.

yum groupinstall "MATE Desktop" -y --skip-broken

Unfortunately, DNF doesn’t support “--skip-broken”, so I was left to either manually install the pieces, or give up. I chose the latter and used LXDE instead. :) F23 is an Alpha, so you expect issues, but DNF has been in since F22 and there’s still no “--skip-broken”, which I find myself using a lot. Pity.

Cheers

Tim…


Fedora 22/23 and Oracle 11gR2/12cR1 was first posted on August 25, 2015 at 3:53 pm.

Oracle Midlands : Event #11

Just a quick note to say Oracle Midlands Event #11 is nearly here.

[Image: Oracle Midlands Event #11 flyer]

Cheers

Tim…


Oracle Midlands : Event #11 was first posted on August 25, 2015 at 2:36 pm.

Truncate

The old question about truncate and undo (“does a truncate generate undo or not”) appeared on the OTN database forum over the week-end, devolved into “what really happens on a truncate”, and then carried on.

The quick answer to the traditional question is essentially this: the actual truncate activity typically generates very little undo (and redo) compared to a full delete of all the data because all it does is tidy up any space management blocks and update the data dictionary; the undo and redo generated is only about the metadata, not about the data itself.
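Another way to see the difference, without dumping log files, is to watch your own session statistics. Here’s a sketch (assuming select access to v$mystat and v$statname): run it before and after a full delete, repeat for a truncate, and compare the deltas.

```sql
-- Undo and redo generated by the current session so far;
-- sample before and after a delete vs. a truncate and compare.
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('undo change vector size', 'redo size');
```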

Of course, a reasonable response to the quick answer is: “how do you prove that?” – so I suggested that all you had to do was “switch logfile, truncate a table, dump logfile”. Unfortunately I realised that I had never bothered to do this myself and, despite having far more useful things to do, I couldn’t resist wasting some of my evening doing it. Here’s the little script I wrote to help:


create table t2 (v1 varchar2(32));
insert into t2 values (rpad('A',32));
commit;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id, 
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create index t1_i1 on t1(id);
alter system flush buffer_cache;
execute dbms_lock.sleep(3)

alter system switch logfile;

insert into t2 values(rpad('X',32));

truncate table t1;

insert into t2 values(rpad('Y',32));
commit;

execute dump_log

Procedure dump_log simply dumps the current log file. The call to switch logfile keeps the dumped log file as small as possible; and I’ve flushed the buffer cache with a three second sleep to minimise the number of misleading “Block Written Record” entries that might otherwise appear in the log file after the truncate. There were all sorts of interesting little details in the resulting activity when I tested this on 12.1.0.2 – here’s one that’s easy to spot before you even look at the trace file:
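The dump_log procedure itself isn’t listed in the post; a minimal sketch of what it might look like (my assumption, requiring select privileges on v$log and v$logfile plus the alter system privilege) would be:

```sql
create or replace procedure dump_log
as
        v_file  v$logfile.member%type;
begin
        -- find one member of the current online redo log group
        select  lf.member
        into    v_file
        from    v$log l, v$logfile lf
        where   lf.group# = l.group#
        and     l.status  = 'CURRENT'
        and     rownum    = 1;

        execute immediate 'alter system dump logfile ''' || v_file || '''';
end;
/
```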


SQL> select object_id, data_object_id, object_name from user_objects where object_name like 'T1%';

 OBJECT_ID DATA_OBJECT_ID OBJECT_NAME
---------- -------------- --------------------
    108705         108706 T1_I1
    108704         108707 T1

Notice how the data_object_id of the index is smaller than that of the table after the truncate ? Oracle truncates (and renumbers) the index before truncating the table.
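If you want to watch the renumbering happen, run the same query immediately before and after a truncate – for a freshly created object the data_object_id will (typically) start out equal to the object_id, and each truncate bumps the data_object_id while leaving the object_id unchanged:

```sql
select  object_id, data_object_id, object_name
from    user_objects
where   object_name like 'T1%'
order by
        object_name;

truncate table t1;

select  object_id, data_object_id, object_name
from    user_objects
where   object_name like 'T1%'
order by
        object_name;
```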

The truncate activity was pretty much as I had assumed it would be – with one significant variation. The total number of change vectors reported was 272, in 183 redo records (your numbers may vary slightly if you try to reproduce the example), and here’s a summary of the redo OP codes that showed up in those change vectors, in order of frequency:


Change operations
=================
  1 OP:10.25    Format root block
  1 OP:11.11    Insert multiple rows (table)
  1 OP:24.1     DDL
  1 OP:4.1      Block cleanout record
  2 OP:10.4     Delete leaf row
  2 OP:13.28    HWM on segment header block
  3 OP:10.2     Insert leaf row
  3 OP:17.28    standby metadata cache invalidation
  4 OP:11.19    Array update (index)
  4 OP:11.5     Update row (index)
 10 OP:13.24    Bitmap Block state change (Level 2)
 11 OP:23.1     Block written record
 12 OP:14.1     redo: clear extent control lock
 12 OP:22.5     File BitMap Block Redo
 14 OP:14.2     redo - lock extent (map)
 14 OP:14.4     redo - redo operation on extent map
 14 OP:5.4      Commit / Rollback
 15 OP:18.3     Reuse record (object or range)
 15 OP:22.16    File Property Map Block (FPM)
 22 OP:13.22    State on Level 1 bitmap block
 24 OP:22.2     File Space Header Redo
 29 OP:5.2      Get undo header
 58 OP:5.1      Update undo block

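The summary above is just a count of the OP:n.m tokens that appear in the change vector headers, so you don’t need anything fancier than grep to reproduce it (the trace file name below is obviously just a placeholder for your own):

```shell
# Count and sort the redo OP codes in a logfile dump.
# /tmp/redo_dump.trc is a placeholder - substitute your own trace file.
grep -o 'OP:[0-9]*\.[0-9]*' /tmp/redo_dump.trc |
        sort | uniq -c | sort -n
```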
The line that surprised me was the one showing 14 commit/rollback codes – a single truncate appears to have operated as 14 separate (recursive) transactions. I did start to walk through the trace file to work out the exact order of operations, but it’s a messy, tedious task, so I just did a quick scan to get the picture. I may have made a couple of mistakes in the following, but I think the steps were:

  • Start transaction
  • Lock the extent map for the index — no undo needed
  • Lock each bitmap (space management) block  — no undo needed
  • Reset each bitmap block — undo needed to preserve space management information
  • Reset highwater marks where relevant on bitmap and segment header block — undo needed
  • Clear segment header block — undo needed
  • Write all the updated space management blocks to disc (local write waits)
    • Log file records “Block Written Record”.
  • For each space management block in turn
    • Update space management blocks with new data object_id — undo needed
    • Write the updated block to disc (local write wait)
    • Log file records one “Block Written Record” for each block
  • Repeat all the above for the TABLE segment.
  • Start a recursive transaction
    • Insert a row into mon_mod$ — undo needed
    • recursive commit
  • Set DDL marker in redo log (possibly holding the text of the DDL statement, but it’s not visible in the dump)
  • Set object reuse markers in the redo log
  • update tab$  — needs undo, it’s just DML
  • update ind$ — needs undo, it’s just DML
  • update seg$  — needs undo, it’s just DML (twice – once for table once for index)
  • update obj$ — needs undo, it’s just DML (twice – ditto)
  • COMMIT — at last, with a change vector for a “Standby metadata cache invalidation” marker

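As a cross-check on the mon_mod$ step: that recursive insert is the mechanism that feeds the (documented) *_tab_modifications views, so you can see its effect without dumping anything – flush the in-memory monitoring data, then query the view:

```sql
-- write the pending monitoring data down to mon_mods$ ...
execute dbms_stats.flush_database_monitoring_info

-- ... then the truncate shows up through the documented view
select  table_name, inserts, updates, deletes, truncated
from    user_tab_modifications
where   table_name = 'T1';
```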
The remaining 12 transactions look like things that could be delayed to tidy up things like space management blocks for the files and tablespaces and releasing “block locks”.

This first, long transaction is the thing that has to happen as an atomic event to truncate the table – and if the database crashed (or you killed the session) in the middle of a very slow truncate there seems to be enough information recorded in the undo to allow the database to roll the incomplete truncate forward and then roll it back to the point before the truncate started.

It would be possible to test whether or not this would actually work – but I wouldn’t want to do it on a database that anyone else was using.