I’ve been a little slow in the follow-up to my previous posting on possibly redundant indexes. Before going into the slightly more complex stuff, there’s another peripheral point (but a very important one) that’s worth raising about how clever the optimizer can be. Here’s some code for 188.8.131.52 to demonstrate the point:
create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                  id,
        trunc(sysdate,'MM') + (rownum-1)/1440   date_time,
        rpad('x',100)                           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

alter table t1
add (
        date_only
        generated always as (trunc(date_time))
        virtual
)
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(date_only) nologging;
So, in a two-step process, I’ve got an indexed virtual column that holds the value of the date_time column truncated to just the date. Would you expect the optimizer to use the index to execute the following query efficiently:
select max(id) from t1 where date_time between sysdate-1 and sysdate ;
Note that the query references the real date_time column not the virtual column date_only, and it’s not using the expression that defines the index – yet the plan reads as follows:
------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |     1 |    21 |    86   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                       |       |     1 |    21 |            |          |
|*  2 |   FILTER                              |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1442 | 30282 |    86   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access("T1"."DATE_ONLY">=TRUNC(SYSDATE@!-1) AND
              "T1"."DATE_ONLY"<=TRUNC(SYSDATE@!))
It’s a little odd that even though the optimizer in the newer versions of Oracle treats many simple expressions on sysdate as constants it still checks (operation 2) that “sysdate >= sysdate – 1” but perhaps that’s just a case of a piece of generic code that isn’t worth the risk or effort of changing.
The key point, of course, is that Oracle has managed to generate some extra predicates that allow it to use the “wrong” index to get a first approximation of the result set fairly efficiently, and has then used the original predicate to reduce the approximation down to the correct result set.
If you want a quick sanity check on the access predicates used for operation 4:
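A query along these lines (my own reconstruction rather than the original post’s) illustrates why the generated access predicates are safe: since trunc() is monotonic, any date_time between sysdate-1 and sysdate must have a truncated value between trunc(sysdate-1) and trunc(sysdate):

```sql
-- If sysdate-1 <= date_time <= sysdate then, because trunc() is
-- monotonic, trunc(sysdate-1) <= trunc(date_time) <= trunc(sysdate).
select
        trunc(sysdate-1)        lowest_possible_date_only,
        trunc(sysdate)          highest_possible_date_only
from
        dual;
```

The index range scan on date_only can therefore never miss a row that the original date_time predicate would have found; it can only return a few extra rows that the filter at operation 3 then discards.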
This style of predicate manipulation also works with numeric data types, but I think its greatest benefit (or convenience) is likely to come from date data types where the data has been created with a time component but there are frequent “date-only” queries. The days of creating two indexes as a workaround for handling generated code that wants to deal with both date_time and trunc(date_time) predicates should be numbered.
This enhancement probably appeared in 184.108.40.206, and I first saw it described in October 2013 in this blog note by Mohamed Houri; but 12c offers a delightful little enhancement – here’s what my table looks like in the 12c version of the code:
SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                                     NUMBER
 DATE_TIME                              DATE
 PADDING                                VARCHAR2(100)

SQL>
Where’s the virtual column ? The 12c version of my code had a slightly different definition for it:
alter table t1
add (
        date_only
        invisible
        generated always as (trunc(date_time))
        virtual
)
;
The transformation still works even when the virtual column is invisible. So (subject to searching for anomalies, boundary conditions and bugs) it looks as if you can change the table definition, and get the benefits of two indexes for the price of one without the application realising that anything has changed.
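If you want to confirm the invisible column is still there, a dictionary query along these lines (my sketch, not from the original post) will show it; user_tab_cols reports the hidden and virtual columns that desc omits:

```sql
select  column_name, hidden_column, virtual_column
from    user_tab_cols
where   table_name = 'T1'
order by
        column_id;
```

The invisible virtual column should appear with HIDDEN_COLUMN = ‘YES’ and VIRTUAL_COLUMN = ‘YES’, while a plain desc or a query against user_tab_columns leaves it out.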
Creating a trace file from EM12c is quite easy and doesn’t require a DBA offering up the world to allow a developer or support person to perform this action.
Enterprise Manager also removes the need to know all the syntax options, which keeps a developer or app support person who is less specialized in creating trace files from getting into trouble, and gives everyone a chance to be successful.
The proposed solution that I’ll post here is one where the EM12c user will be able to create trace files, but the DBA will control the OS level files (delivering the file to the user after the trace file is created). The user will log into the database from Cloud Control with a defined account that has no target access granted previously.
Connect to your PDB and create your user that you will be using for tracing….
create user dev_trace identified by <password>;

grant connect, select_catalog_role, create session, alter session to dev_trace;
Limit the size of trace files for the database to ensure that no trace file can be forgotten and grow unconstrained:
alter system set max_dump_file_size =40960;
Once this is complete, you can create your database target credentials for your developer role; they will now use this new login, which limits their rights in the target as they log in.
Grant the following privileges at the “Target Type Privilege” for the new user:
Monitor Enterprise Manager: Monitor Enterprise Manager performance
Add any Target: Add any target in Enterprise Manager
View any Target: Ability to view all managed targets in Enterprise Manager
Yes, you do have the right to switch from “View any Target” to a list of specified targets. I just figured I’d lessen the steps here, so do what fulfills your IT security model.
Add the new role that you’ve created for your Tracing User and then complete the wizard creation for a new user in Cloud Control.
Creating a Trace File
Once the user logs in (and updates their password… :)) they should be able to log into a target and view performance data. For our example, we’ll use the following database scenario, where the developer or app support user is concerned about “Other” in Top Activity. They’ve been given a high level overview of Top Activity wait events and know that green (CPU) is expected, dark blue (IO) often occurs, and that red (application or concurrency), brown (configuration or network), orange (commit) and pink (other) should be monitored more closely.
Now tracing is a session level process that creates a trace file in an OS (Operating System) directory. For Oracle 11g and higher, this file is created in the Automatic Diagnostic Repository (ADR) home. The base location is easy to find: take the value of the DIAGNOSTIC_DEST parameter and append /diag/rdbms/<db_name>/<instance_name>/trace.

show parameter diagnostic_dest;
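Alternatively (a sketch of my own, not from the original post), you can ask the database directly where the current session’s trace file will be written:

```sql
-- v$diag_info exposes the ADR locations from 11g onwards.
select  value
from    v$diag_info
where   name = 'Default Trace File';
```

This avoids having to assemble the path by hand from DIAGNOSTIC_DEST, the database name and the instance name.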
In our above Top Activity example, we can see the sessions on the right hand side and see a couple that are of concern. Since we’ve decided that we need to inspect the “Other” waits more, we will look in the Session ID column and double click on the 1107 Session ID.
This brings us to the details page for the 1107 session:
We have a number of tabs in the Details page to inspect, but let’s say we want to create a trace file to dig deeper into the ENQUEUE waits instead. To the right, at the upper and lower section, you can see the button “Enable SQL Trace”. As the user has the privileges to “Alter Session”, they can click this and it will step into the creation steps for a trace file.
This is like performing a 10046 trace. As you can see, it’s very straightforward and simple vs. executing the SQL*Plus commands. With both options set to “No”, the equivalent call would be:

execute dbms_support.start_trace_in_session(1107, 39645, waits=>false, binds=>false);
To create the option with the wait information set to “Yes” we’d update our request to the following:

execute dbms_support.start_trace_in_session(1107, 39645, waits=>true);
And for both to be set to “Yes”, we’d execute this command that has both:

execute dbms_support.start_trace_in_session(1107, 39645, waits=>true, binds=>true);
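As an aside (my addition, not from the original post): DBMS_SUPPORT is an undocumented package, and the supported way to achieve the same thing from 10g onwards is DBMS_MONITOR:

```sql
-- Enable a 10046-style trace for SID 1107, serial# 39645,
-- including wait events and bind values:
execute dbms_monitor.session_trace_enable( -
        session_id => 1107, serial_num => 39645, -
        waits => true, binds => true)

-- ...and switch it off again when done:
execute dbms_monitor.session_trace_disable( -
        session_id => 1107, serial_num => 39645)
```

Either package produces the same style of trace file; DBMS_MONITOR simply has the advantage of being documented and installed by default.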
Back to our Cloud Control option, (the easier option) once we decide to gather the wait and bind information, we can click “OK” and the trace will subsequently begin.
Once we’ve finished tracing, we should remember to turn the trace back off, (which is the reason for the DBA setting the max dump file size to have a safety net for those that forget!)
The option will then show right in the session, even if you leave and come back to the session in Cloud Control. You can click on it and disable the trace:
If you were to forget, the DBA could easily identify the session being traced and issue the following statement from the command line to disable it:

execute dbms_support.stop_trace_in_session(1107, 39645);
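To find sessions that still have tracing switched on, a query like this may help (my own sketch; the sql_trace columns exist in v$session, though they only reflect traces set through certain interfaces):

```sql
select  sid, serial#, username, sql_trace, sql_trace_waits, sql_trace_binds
from    v$session
where   sql_trace = 'ENABLED';
```

Armed with the SID and serial#, the DBA can then disable the forgotten trace before the max_dump_file_size safety net is needed.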
Getting a READABLE Trace File Report
Now this is where your DBA comes in and you need to make friends with them. Most DBAs are not going to be comfortable granting access to OS level files and I don’t blame them. It’s best if you don’t have to worry about this access and better if they just retrieve the files for you. Before they do, you want them to convert the trace data into a readable report. This is done via the TKPROF utility.
The trace file should be easy to locate from the server in the ADR directory for trace files (the directory is called “trace”, duh… :)) and there are two files created as part of your trace: the .trm trace mapping file, which you won’t need, and the .trc trace file, which is the one required to create the report.
The DBA will run tkprof to create the report:
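The command was truncated in the original post; a typical tkprof invocation (the file names here are hypothetical) looks like this:

```
tkprof orcl_ora_12345.trc trace_report.txt sys=no sort=exeela,fchela
```

Here sys=no suppresses the recursive SQL run as SYS, and the sort options push the statements with the highest elapsed execute and fetch times to the top of the report.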
They can SCP or FTP the output file to their workstation and this is the one that you’ll want them to send to you.
And there you have it- creating a secure user that can create trace files and then how to create the trace files once you have the access. Now you DBAs and developers make friends and bring each other donuts and maybe buy each other a drink…
Followers of the blog will know I’ve been waiting to get access to the Oracle Cloud for a while. Well, I’ve finally got access to a bit of it. Specifically, the “Oracle Database Cloud Service” (DBaaS) part.
The Schema Service has been around for a few years and I had already tried that out, but IMHO it’s not really part of Oracle Cloud proper*, so I was reserving my judgement until I got the real stuff.
I’ve written a couple of articles already. Just basic stuff to document setting stuff up and connecting etc.
So here are some first impressions…
Overall the cloud offering looks clean and modern. Tastes vary of course, but I like the look of it.
The navigation is a bit inconsistent between the different cloud services. It feels like the console for each section (Compute, Java, DBaaS etc.) has been written by a different team, each doing what they think works, rather than working to a single design standard. Here’s a couple of examples:
Don’t get me wrong, it’s not hard to navigate. It’s just inconsistent, which kind-of ruins the overall effect. If they can bring it all into line I think it will be really cool.
I think Oracle Cloud looks neater than Amazon Web Services, but the navigation is not as consistent as AWS or Azure. Having used AWS, Azure and Oracle Cloud, I feel Azure has the neatest and most consistent interface. Like I said before, tastes vary.
Probably my biggest issue with the Oracle Cloud interface is the speed, or lack of it. It’s really slow and unresponsive at times. On a few occasions I thought it had died, then after about 30 seconds the screen just popped back into life. Some of the actions give no feedback until they are complete, so you don’t know if you’ve pressed the button or not.
I found DBaaS pretty simple to use. I’ve already spent some time using AWS and Azure, so there is probably some carry-over there. I pretty much completed my first pass through creation, connections and patching before I even considered looking for documentation.
The documentation is OK, but contains very few screen shots, which leads me to believe the look and feel is all in a state of flux.
I think the general Oracle Compute Cloud Service network/firewall setup is really quite clear, but you can’t edit existing rules. Once a rule is created you can only enable, disable or delete it. I found myself having to delete and create rules a number of times when it felt more obvious to let me edit an existing rule. I’ll mention a DBaaS issue related to this later.
Just some general observations about the DBaaS offering.
I like the Oracle Cloud more than I thought I would. I think it looks quite nice and if someone told me I had to use it as a general Infrastructure as a Service (IaaS) portal I would be fine with that.
I like the DBaaS offering less than I hoped I would. I feel quite bad about saying it, but it feels like a work in progress and not something I would want to use at this point. If it were my decision, I would be pushing the DBaaS offering more in the direction of AWS RDS for Oracle. As I said before, the current DBaaS offering feels like it has not decided what it wants to be yet. It needs to be much more hands-off, with a more consistent, centralized interface.
I don’t have full access to the straight Compute Cloud yet, so I can’t try provisioning a VM and doing everything myself. If I get access I will try it, but I would expect it to be the same as what I’ve done for EC2 and Azure. A VM is a VM…
When I read this back it sounds kind-of negative, but I think all the things I’ve mentioned could be “fixed” relatively quickly. Also, this is only one person’s opinion on one specific service. The haters need to try this for themselves before they hate.
* Just to clarify, I am not saying the Schema Service isn’t “Cloud” and I’m not saying it doesn’t work. I’m just saying I don’t see this as part of Oracle’s grand cloud database vision. It always seemed like a cynical push to market to allow them to say, “we have a cloud DB”. If it had been branded “APEX Service” I might have a different opinion. It is after all a paid for version of apex.oracle.com. This is a very different proposition to promoting it as a “Cloud Database”.
I’ll be presenting at a “Lets Talk Oracle” event in Perth, with fellow Ex-Oracle ACE Directors Richard Foote and Chris Muir. Full agenda as follows:
8:30-9:00    Registration and coffee
9:00-10:30   Richard Part I – Database 12c New Features for DBAs (and Developers)
10:30-11:00  Break
11:00-12:30  Richard Part II – Database 12c New Features for DBAs (and Developers)
12:30-1:30   Lunch
1:30-2:30    Chris – Creating RESTful APIs with Oracle Data Services (for Developers and DBAs)
2:30-2:45    Break
2:45-4:15    Connor – Database 12c New Features for Developers (and DBAs)
4:15-5:00    Q&A with the ACES !
5:00         Wrap up
And yes… of course, the event is FREE!
It will be at the Oracle Offices in Perth so places are strictly limited. If you want to attend, send an email to email@example.com
See you there !!
I’ve said a number of times, the process of writing articles is part of an ongoing learning experience for me. A few days ago my personal tech editor (Jonathan Lewis) asked about a statement I made in the SQL Plan Directive article. On further investigation it turned out the sentence was a complete work of fiction on my part, based on my misunderstanding of something I read in the manual, as well as the assumption that everything that happens must be as a result of a new feature.
Anyway, the offending statement has been altered, but the conversation this generated resulted in new article about Automatic Column Group Detection.
The process also highlighted how difficult it is, at least for me, to know what is going on in the optimizer now. It wasn’t always straightforward before, but now, with the assorted new optimizations, some beating others to the punch, it is even more difficult. There are a number of timing issues involved too. If a statement runs twice in quick succession, you might get a different sequence of events compared to having a longer gap between the first and second run of the statement. It’s maddening at times. I’m hoping Jonathan will put pen to paper about this, because I think he will do a better job than I can of explaining the issues around the inter-dependencies.
Anyway, I will be doing another pass through this stuff over the coming days/weeks/months/years to make sure it is consistent with “my current understanding”.
Fun, fun, fun…
Just a quick post today as I wanted to highlight a video that has been made available by one of my colleagues, Valentin Tabacaru. Valentin is a sales consultant based out of one of our Spanish offices. The
YouTube video shows a SQL Performance Analyzer execution and report generation using real world workload testing between On Premise Oracle 11gR2 Database and Oracle Public Cloud 12c Pluggable Database Service. Both environments are managed using EM12c providing a single pane of glass to both monitor and manage the environments. The video lasts a shade under 24 minutes, so go and grab yourself a coffee, sit back and enjoy!
The post Real Application Testing report On Premise vs. Oracle Public Cloud appeared first on PeteWhoDidNotTweet.com.
Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum – “Why isn’t a commit required for DDL?”
Sometimes the answer to “Why” is simply “that’s just the way it is” – and that’s what it is in this case, I think. There may have been some historic reason why Oracle Corp. implemented DDL the way they did (commit any existing transaction the session is running, then auto-commit when complete), but once the code has been around for a few years – and accumulated lots of variations – it can be very difficult to change a historic decision, no matter how silly it may now seem.
This posting isn’t about answering the question “why”, though; it’s about a little script I wrote in 2003 in response to a complaint from someone who wanted to truncate a table in the middle of a transaction without committing the transaction. Don’t ask why – you really shouldn’t be executing DDL as part of a transactional process (though tasks like dropping and recreating indexes as part of a batch process is a reasonable strategy).
So if DDL always commits the current transaction how do you truncate a table without committing ? Easy – use an autonomous transaction. First a couple of tables with a little data, then a little procedure to do my truncate:
create table t1 (n1 number);
insert into t1 values(1);

create table t2 (n1 number);
insert into t2 values(1);

create or replace procedure truncate_t1
as
        pragma autonomous_transaction;
begin
        execute immediate 'truncate table t1';
end;
/
Then the code to demonstrate the effect:
prompt ======================================
prompt In this example we end up with no rows
prompt in t1 and only the original row in t2,
prompt the truncate didn't commit the insert.
prompt ======================================

insert into t2 values(2);
execute truncate_t1;
rollback;

select * from t1;
select * from t2;
According to my notes, the last time I ran this code was on 220.127.116.11 but I’ve just tested it on 18.104.22.168 and it behaves in exactly the same way.
I’ve only tested the approach with “truncate” and “create table”, and I haven’t made any attempt to see if it’s possible to cause major disruption with cunningly timed concurrent activity; but if you want to experiment you have a mechanism which Oracle could have used to avoid committing the current transaction – and you may be able to find out why it doesn’t, and why DDL is best “auto-committed”.
As always, installations of Oracle server products on Fedora are not a great idea, as explained here.
I was reading some stuff about the Fedora 23 Alpha and realised Fedora 22 had passed me by. Not sure how I missed that.
Anyway, I did a run through of the usual play stuff.
While I was at it, I thought I would get the heads-up on Fedora 23 Alpha.
The F23 stuff will have to be revised once the final version is out, but I’m less likely to forget now.
I guess the only change in F22 upward that really affects me is the deprecation of YUM in F22 in favour of the DNF fork. For the most part, you just switch the command.
# This:
yum install my-package -y
yum groupinstall my-package-group -y
yum update -y

# Becomes:
dnf install my-package -y
dnf groupinstall my-package-group -y
dnf group install my-package-group -y
dnf update -y
This did cause one really annoying problem in F23 though. The “MATE Desktop” had a single documentation package that was causing a problem. Usually I would use the following.
yum groupinstall "MATE Desktop" -y --skip-broken
Unfortunately, DNF doesn’t support “--skip-broken”, so I was left to either manually install the pieces, or give up. I chose the latter and use LXDE instead. F23 is an Alpha, so you expect issues, but DNF has been in since F22 and still no “--skip-broken”, which I find myself using a lot. Pity.
The old question about truncate and undo (“does a truncate generate undo or not”) appeared on the OTN database forum over the week-end, and then devolved into “what really happens on a truncate”, and then carried on.
The quick answer to the traditional question is essentially this: the actual truncate activity typically generates very little undo (and redo) compared to a full delete of all the data because all it does is tidy up any space management blocks and update the data dictionary; the undo and redo generated is only about the metadata, not about the data itself.
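A sketch of how you might measure this for yourself (my own example, not from the post): compare the session statistics “undo change vector size” and “redo size” around a truncate and around a full delete of a suitably large table, and look at the deltas:

```sql
-- Snapshot the session statistics, do the operation, snapshot again.
-- The deltas for the truncate should be tiny compared with the delete.
select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   ms.statistic# = sn.statistic#
and     sn.name in ('undo change vector size', 'redo size');

truncate table t1;

select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   ms.statistic# = sn.statistic#
and     sn.name in ('undo change vector size', 'redo size');
```

Repeating the exercise with “delete from t1;” in place of the truncate (against a freshly populated copy of the table) makes the difference in undo and redo volume obvious.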
Of course, a reasonable response to the quick answer is: “how do you prove that?” – so I suggested that all you had to do was “switch logfile, truncate a table, dump logfile”. Unfortunately I realised that I had never bothered to do this myself and, despite having far more useful things to do, I couldn’t resist wasting some of my evening doing it. Here’s the little script I wrote to help:
create table t2 (v1 varchar2(32));
insert into t2 values (rpad('A',32));
commit;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum          id,
        rpad('x',100)   padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create index t1_i1 on t1(id);

alter system flush buffer_cache;
execute dbms_lock.sleep(3)

alter system switch logfile;

insert into t2 values(rpad('X',32));
truncate table t1;
insert into t2 values(rpad('Y',32));
commit;

execute dump_log
Procedure dump_log simply dumps the current log file. The call to switch logfile keeps the dumped log file as small as possible; and I’ve flushed the buffer cache with a three second sleep to minimise the number of misleading “Block Written Record” entries that might otherwise appear in the log file after the truncate. There were all sorts of interesting little details in the resulting activity when I tested this on 22.214.171.124 – here’s one that’s easy to spot before you even look at the trace file:
SQL> select object_id, data_object_id, object_name from user_objects where object_name like 'T1%';

 OBJECT_ID DATA_OBJECT_ID OBJECT_NAME
---------- -------------- --------------------
    108705         108706 T1_I1
    108704         108707 T1
Notice how the data_object_id of the index is smaller than that of the table after the truncate ? Oracle truncates (and renumbers) the index before truncating the table.
The truncate activity was pretty much as I had assumed it would be – with one significant variation. The total reported was 272 change vectors in 183 redo records (your numbers may vary slightly if you try to reproduce the example), and here’s a summary of the redo OP codes that showed up in those change vectors, in order of frequency:
Change operations
=================
  1 OP:10.25    Format root block
  1 OP:11.11    Insert multiple rows (table)
  1 OP:24.1     DDL
  1 OP:4.1      Block cleanout record
  2 OP:10.4     Delete leaf row
  2 OP:13.28    HWM on segment header block
  3 OP:10.2     Insert leaf row
  3 OP:17.28    standby metadata cache invalidation
  4 OP:11.19    Array update (index)
  4 OP:11.5     Update row (index)
 10 OP:13.24    Bitmap Block state change (Level 2)
 11 OP:23.1     Block written record
 12 OP:14.1     redo: clear extent control lock
 12 OP:22.5     File BitMap Block Redo
 14 OP:14.2     redo - lock extent (map)
 14 OP:14.4     redo - redo operation on extent map
 14 OP:5.4      Commit / Rollback
 15 OP:18.3     Reuse record (object or range)
 15 OP:22.16    File Property Map Block (FPM)
 22 OP:13.22    State on Level 1 bitmap block
 24 OP:22.2     File Space Header Redo
 29 OP:5.2      Get undo header
 58 OP:5.1      Update undo block
The line that surprised me was the 14 commit/rollback codes – a single truncate appears to have operated as 14 separate (recursive) transactions. I did start to walk through the trace file to work out the exact order of operation, but it’s really messy, and a tedious task, so I just did a quick scan to get the picture. I may have made a couple of mistakes in the following, but I think the steps were:
The remaining 12 transactions look like things that could be delayed to tidy up things like space management blocks for the files and tablespaces and releasing “block locks”.
This first, long, transaction, is the thing that has to happen as an atomic event to truncate the table – and you can imagine that if the database crashed (or you crashed the session) in the middle of a very slow truncate then there seems to be enough information being recorded in the undo to allow the database to roll forward an incomplete truncate, and then roll back to before the truncate.
It would be possible to test whether or not this would actually work – but I wouldn’t want to do it on a database that anyone else was using.