So the main piece of functionality I need from the semver package I am writing is "satisfies": the function that determines whether a given version string falls within a set of version ranges.
For example, I want to know if version 1.2.* satisfies any of the following ranges: <=1.2.3, >1.2.0, or between 1.4.1 and 1.4.6. But before I get there I need some more supporting functions. In my last entry I had implemented some of the basics, and now I have completed a few more.
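The package itself is PL/SQL, but to pin down the semantics I'm after, here is a rough Python sketch of "satisfies". The function names and the (operator, bound) range shape are my own illustration, not the package's actual API, and wildcard patterns are left out for brevity:

```python
def parse(v):
    # '1.2.3' -> (1, 2, 3); wildcard patterns ignored in this sketch
    return tuple(int(p) for p in v.split('.'))

def satisfies(version, ranges):
    # ranges is a list of (operator, bound) pairs; the version satisfies
    # the set if at least one range matches (OR semantics).
    v = parse(version)
    ops = {'<=': lambda a, b: a <= b, '<': lambda a, b: a < b,
           '>=': lambda a, b: a >= b, '>': lambda a, b: a > b,
           '=':  lambda a, b: a == b}
    return any(ops[op](v, parse(bound)) for op, bound in ranges)

print(satisfies('1.2.1', [('<=', '1.2.3'), ('>', '1.4.6')]))  # True
print(satisfies('1.3.0', [('<=', '1.2.3'), ('>', '1.4.6')]))  # False
```

Tuple comparison does the heavy lifting here: Python compares (major, minor, patch) element by element, which matches semver precedence for plain versions.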
Another topic of conversation that came out of Bryn’s session at Oracle Midlands related to PL/SQL code formatting. I’m not sure I agree 100% with his opinion, but it’s certainly making me think about my current stance.
When I said I was going to blog about a couple of the points he raised, Bryn emailed me some stuff, which I’m going to use here. So this is Bryn’s content, with some of my opinions thrown in. I will no doubt get told off for not doing the subject justice, like I did with the last post.
Bryn: Look at page 1 (below). This makes PL/SQL look so old-fashioned. I’m sure that this is part of what makes it unappealing to the modern youth. How easy do you find it to read its real meaning?
Me: With a few minor formatting changes, this is pretty much what I do. As a result, it looks really normal to me and I like it, but if I try to put myself into the head-space of a young person, it does look overly formal, and as Bryn said in his talk, we are essentially shouting all the keywords and using naming conventions that detract from the true meaning we are trying to convey. I don’t like to admit it to myself, but I kind-of see his point. It does look a little Pipe and Slippers.
Bryn: Now look at Page 4 (below). Identifiers are like proper nouns in English prose. They’re capitalized to let them stand out. The capitals in Object_ID (or DBMS_Output) are an aid to pronunciation.
Me: This is a slightly unfair comparison, since there is no syntax highlighting, which is on by default in all PL/SQL IDEs, but it does look less “old”. I understand Bryn’s point about the capitals potentially aiding pronunciation, but if I’m going to stop “shouting” the keywords, I actually prefer it all in lower case myself. I agree that identifiers still need “_” between words. Although camelCase is used by many languages, it doesn’t work so well in Oracle. It looks fine in your scripts, but when you are looking at dictionary views, much of the information is presented in uppercase, so “uniquenessConstraintMissing” becomes “UNIQUENESSCONSTRAINTMISSING”, which sucks.
Bryn: As promised, here’s my marked-up copy of the Gettysburg Address. It’s considered to be an exemplar of English prose, so having it mangled into an incomprehensible mess by the robotic application of “code formatting best practice” makes my point very vividly. … Feel free to use it. But do make sure that you credit the author, President Abraham Lincoln, and me for having had the idea to use it to make fun of the anally retentive code formatters who jointly conspire to make PL/SQL look so dated.
Me: Yeah. Point made!
I’ve tried to alter my code formatting in the past, which invariably meant I had to go back and reformat it a couple of days later when I couldn’t stand knowing it wasn’t “how I usually do things”. Having said that, if you read the last blog post, you would see I tried to stop shouting. It’s not what Bryn likes, but I think it’s a step in the right direction… Maybe…
Am I going to redo all my articles, blog posts and videos? No. Am I going to try and change? At the moment I’m thinking yes, but I reserve the right to change my mind.
Don’t forget to check out these whitepapers from Bryn.
Monday 7th Dec: 11:20 – 12:05
I’ve arranged a panel session on the Cost Based Optimizer for UKOUG Tech 2015, with Maria Colgan, Nigel Bayliss, and Chris Antognini joining me to answer (and maybe argue about) questions from the audience.
To keep things moving along, we aim to have a few questions available before the day and collect a few questions at the door as well as accepting questions from the floor. Martin Widlake and Neil Chandler will be acting as MCs, wielding the microphones, and collecting questions at the door.
So far we have a handful of questions – if you have a question you’d like to raise, add it to the comments below. Even if you can’t attend we may get to your question, and I may be able to write up a summary of questions and answers after the event.
Questions so far:
You tend to name PL/SQL parameters and variables in such a way as to prevent name clashes with table columns. Let’s cause an obvious name clash…
The table DUAL has a column called DUMMY with a single row with the value ‘X’.
SQL> select * from dual;

D
-
X

1 row selected.

SQL>
Let’s create a procedure with a parameter called “dummy”, to cause a name clash. I want to use the parameter in the WHERE clause. I hope you can see where this is going.
create or replace procedure my_proc (dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   amount
  from   dual
  where  dummy = dummy;

  dbms_output.put_line('amount=' || amount);
end my_proc;
/
Now if I run it passing a parameter value of ‘Y’ I get an “incorrect result”.
set serveroutput on
exec my_proc('Y');

amount=1

PL/SQL procedure successfully completed.

SQL>
Actually I’m getting a correct result, because the table column is being matched against itself and the parameter is not being used. To solve this, we might rename the parameter to “p_dummy”.
create or replace procedure my_proc (p_dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   amount
  from   dual
  where  dummy = p_dummy;

  dbms_output.put_line('amount=' || amount);
end my_proc;
/
This gives us the expected result.
set serveroutput on
exec my_proc('Y');

amount=0

PL/SQL procedure successfully completed.

SQL>
Nothing new so far. Now comes the bit I didn’t know.
Alternatively, I could have prefixed the references to the parameter with the procedure name.
create or replace procedure my_proc (dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   my_proc.amount
  from   dual a
  where  a.dummy = my_proc.dummy;

  dbms_output.put_line('amount=' || my_proc.amount);
end my_proc;
/

set serveroutput on
exec my_proc('Y');

amount=0

PL/SQL procedure successfully completed.

SQL>
This was just one of the points Bryn raised in Doing SQL from PL/SQL: Best and Worst Practices, which I’m sure I read, but obviously not well enough.
Why does this matter? Using a prefix like “p_” is an artificial way to avoid the situation. If someone were to add a column to the table called “p_dummy”, my code would be broken again. I’m relying on them knowing my naming standard and avoiding it when defining table columns.
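A quick analogy in Python (nothing Oracle-specific, just to make the scoping point concrete): when both sides of the predicate resolve to the same thing, the filter is a no-op and the parameter never takes part.

```python
# One row standing in for the DUAL table: column "dummy" holds 'X'.
rows = [{'dummy': 'X'}]

def count_matches_buggy(dummy):
    # Analogy for "where dummy = dummy": both sides resolve to the row's
    # own column, so the predicate is always true and the parameter is ignored.
    return sum(1 for r in rows if r['dummy'] == r['dummy'])

def count_matches_fixed(dummy):
    # Analogy for "where a.dummy = my_proc.dummy": column versus parameter.
    return sum(1 for r in rows if r['dummy'] == dummy)

print(count_matches_buggy('Y'))  # 1 -- the 'Y' never takes part in the filter
print(count_matches_fixed('Y'))  # 0
```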
By the way, this works for parameterised cursors and named anonymous blocks too.
set serveroutput on

begin
  <<my_block>>
  declare
    dummy  varchar2(1) := 'Y';
    amount pls_integer;
  begin
    select count(*)
    into   my_block.amount
    from   dual a
    where  a.dummy = my_block.dummy;

    dbms_output.put_line('amount=' || my_block.amount);
  end my_block;
end;
/

amount=0

PL/SQL procedure successfully completed.

SQL>
You learn something new every day!
Analytic functions still bamboozle many SQL practitioners out there. So I’m building a suite of videos to walk people through some of the common questions we get asked that we need to solve with SQL, and look at how we can often solve them most simply using Analytic functions.
It will all be based on the KISS principle – Keep It Simply SQL, as it pertains to understanding the Analytic syntax.
You can find my introductory video here, with more to come as we solve problems simply with (analytic) SQL. There is also an “Analytics” playlist on my channel, which will grow each time I add a new video.
I hope you enjoy it – and feel free to comment or offer feedback.
So anyone doing software releases knows the pain of versioning the different releases correctly. One solution to that is SemVer (http://semver.org/). Go to the website and read up on when you change the different parts of the version (x.y.z = major.minor.patch), and what the rules are for comparing different versions.
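The comparison rules are worth internalising: precedence goes major, then minor, then patch, compared as numbers. A tiny Python illustration of why the numeric part matters:

```python
# SemVer precedence compares major, then minor, then patch numerically,
# which is why '1.2.10' is newer than '1.2.9' even though a plain string
# comparison says otherwise.
def parse(v):
    return tuple(int(p) for p in v.split('.'))  # '1.2.10' -> (1, 2, 10)

print(parse('1.2.10') > parse('1.2.9'))  # True  -- numeric, per SemVer
print('1.2.10' > '1.2.9')                # False -- the lexicographic trap
```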
As a DBA, it’s awesome when you have the Partitioning option at your disposal. So many cool things suddenly become either possible or easier. For example, all of a sudden you can separate the data of different time ranges into different partitions. But hand in hand with that came the fear that one day we’d get nasty stares from either managers or customers, when the following happened:
SQL> create table T2
  2  ( x date,
  3    y int )
  4  partition by range ( x )
  5  (
  6    partition p1 values less than ( date '2015-06-01' ),
  7    partition p2 values less than ( date '2015-07-01' ),
  8    partition p3 values less than ( date '2015-08-01' ),
  9    partition p4 values less than ( date '2015-09-01' )
 10  )
 11  /

Table created.

SQL> insert into T2 values ( sysdate,1);
insert into T2 values ( sysdate,1)
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
It is the terrible moment when you realise the job you created to ensure you had sufficient partitions either was not run, or had stopped working and you had missed it. Luckily, a lot of this was solved when INTERVAL partitioning was introduced in Oracle Database 11g. All we needed then was a starting point, an interval (size), and at least one partition:
SQL> create table T1
  2  ( x date,
  3    y int )
  4  partition by range ( x )
  5  interval ( numtoyminterval (1,'MONTH') )
  6  (
  7    partition p1 values less than ( date '2015-06-01' )
  8  )
  9  /

Table created.

SQL> insert into T1
  2  select add_months( date '2015-01-01', rownum ), rownum
  3  from   dual
  4  connect by level <= 10;

10 rows created.
SQL> insert into T1 values ( date '2016-01-01',99);
1 row created.
In the example above, I’ve inserted a few rows for 2015 and one row for 2016, and partitions were automatically created for me, as you can see by looking at the dictionary:
SQL> select partition_name, high_value
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       HIGH_VALUE
-------------------- -----------------------------------------------------------------------------------
P1                   TO_DATE(' 2015-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P517             TO_DATE(' 2015-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P518             TO_DATE(' 2015-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P519             TO_DATE(' 2015-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P520             TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P521             TO_DATE(' 2015-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P522             TO_DATE(' 2015-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P523             TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
That’s very cool, and a lot of people now use the interval strategy as their default partitioning approach. With interval partitioning, Oracle supports the concept of ‘holes’: not all partitions have to be created consecutively. In our example we did not insert any values for December 2015, so no partition was created for it, yet the partition for January 2016 exists. If we ever insert data for December 2015, the missing partition will be created and the hole goes away; if we don’t, the hole stays forever, which doesn’t matter either.
But eventually, people started to want to archive off their old data, either by switching partitions out, or dropping them. Now we are seeing the following “What is error ORA-14758?” come up on our question lists from time to time:
SQL> alter table T1 drop partition P1;
alter table T1 drop partition P1
*
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped
To explain what is happening, we need to look at the data dictionary for the type of each partition:
SQL> select partition_name, interval
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       INTERVAL
-------------------- --------
P1                   NO
SYS_P517             YES
SYS_P518             YES
SYS_P519             YES
SYS_P520             YES
SYS_P521             YES
SYS_P522             YES
SYS_P523             YES
When we defined our table, we nominated a starting point and then a “size” (interval) for each partition; that’s why we required at least one partition at creation time. The error is telling us that we are trying to drop the starting point, and doing so would leave us without a fixed point to base our intervals on. Notice in the list of partitions above that the first partition is not an interval partition, and it’s the only one of that type, so we cannot drop it. While this is a nuisance that is addressed in a future release, you can easily work around it today: we simply let the database know that the existing partitions can be fixed in place and need no longer be interval based. You basically evolve the interval partitions, whose upper and lower boundaries are derived mathematically from the starting point, into range partitions with fixed upper bound values.
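To see what “derived mathematically from the starting point” means, here is a small Python sketch (my own illustration, not Oracle internals) of how a one-month interval anchored at the example’s 2015-06-01 boundary determines the bounds of the partition a row maps to:

```python
from datetime import date

ANCHOR = date(2015, 6, 1)  # upper bound of the range partition P1

def month_index(d):
    # Number of whole months between the anchor and the partition key.
    return (d.year - ANCHOR.year) * 12 + (d.month - ANCHOR.month)

def add_months(base, m):
    y, mo = divmod(base.year * 12 + base.month - 1 + m, 12)
    return date(y, mo + 1, 1)

def partition_bounds(d):
    # (lower, upper) boundaries of the interval partition a row with
    # partition key d would map to; valid for dates on/after the anchor.
    n = month_index(d)
    return add_months(ANCHOR, n), add_months(ANCHOR, n + 1)

# A January 2016 row maps to the [2016-01-01, 2016-02-01) partition --
# matching SYS_P523's high value in the dictionary output above.
print(partition_bounds(date(2016, 1, 15)))
```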
SQL> alter table T1 set interval ( numtoyminterval (1,'MONTH') );

Table altered.

SQL> select partition_name, interval
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       INT
-------------------- ---
P1                   NO
SYS_P517             NO
SYS_P518             NO
SYS_P519             NO
SYS_P520             NO
SYS_P521             NO
SYS_P522             NO
Simply by restating the interval, we have now marked all of the existing partitions as standard range partitions. So I can drop all (but one) of them and still have a starting point for my subsequent interval partitions.
SQL> alter table T1 drop partition P1;

Table altered.
There is one little subtlety to this method, though: unlike Interval partitions – which have an upper and a lower bound – range partitions only have an upper bound defined. The lower bound of a range partition is defined by the upper bound of the preceding partition. So by making an interval partition a range partition we are in effect removing the lower bound. In our example we had a non-existing interval partition for December 2015, which eventually would have been created if/when data for it was inserted.
But by marking all existing partitions as range partitions, the former interval partition for January 2016 becomes a range partition covering the range for both December 2015 and January 2016: it covers two months. This is intentional; the alternative design would have been to instantiate all missing partitions when marking interval partitions as range partitions, which could have led to the creation of potentially tens of thousands of unwanted partitions.
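To see how the December hole gets absorbed, here is a small Python sketch of range-partition resolution: a row maps to the first partition whose upper bound exceeds its key, since range partitions carry upper bounds only. The bounds and names mirror the example’s dictionary output.

```python
import bisect
from datetime import date

# Upper bounds and names mirroring the dictionary output above.
upper_bounds = [date(2015, 6, 1), date(2015, 7, 1), date(2015, 8, 1),
                date(2015, 9, 1), date(2015, 10, 1), date(2015, 11, 1),
                date(2015, 12, 1), date(2016, 2, 1)]
names = ['P1', 'SYS_P517', 'SYS_P518', 'SYS_P519', 'SYS_P520',
         'SYS_P521', 'SYS_P522', 'SYS_P523']

def partition_for(d):
    # First partition whose upper bound is strictly greater than the key;
    # None means a new interval partition would have to be created.
    i = bisect.bisect_right(upper_bounds, d)
    return names[i] if i < len(names) else None

# With no December partition, a December row lands in the partition whose
# upper bound is 2016-02-01 -- the two-month SYS_P523.
print(partition_for(date(2015, 12, 15)))
```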
Bryn very kindly stepped in for Tom Freyer who unfortunately couldn’t make it. Bryn’s presentation was called Why use PL/SQL?, which was based on his whitepaper of the same name. I’ve written about a number of the topics he raised, including a piece on My Utopian Development Environment, but his talk and his whitepaper put forward a much more consistent and compelling argument. Definitely worth checking out if you are a PL/SQL expert or newbie!
Along the way, he used a little piece of syntax I have never noticed before, which has been available for the whole 20+ years I’ve been using PL/SQL. I’ll write about that in a post tomorrow if I get chance.
By the way, Bryn loves being called Brian, and he loves it when you say S.Q.L. and PL/S.Q.L. rather than Sequel and PL/Sequel.
Next up was Neil with a session called “Why has my plan changed?”. I’ve written about many of the issues discussed in this session, but Neil crystallised a whole array of points really well, and used live demos which I always like to see. Regardless of how well you think you know the subject, I would be surprised if you don’t walk away with a few new things, which you can later pretend you always knew.
Both guys will be presenting these sessions at UKOUG Tech15 if you want to catch up with them!
Thanks to the guys for coming to speak to us. Thanks to Bryn’s mom, for living in the UK, giving him an excuse to come to UKOUG Tech15 early and therefore be available to step in at short notice. Thanks as always to Red Stack Tech for sponsoring the event, allowing it to remain free, and to Mike for organising it!
After my AWR Warehouse session at DOAG in Nuremberg, Germany last week, an attendee asked me if he really needed to use trace files anymore, given the value provided by AWR (Automatic Workload Repository) and ASH (Active Session History). I responded that trace files are incredibly valuable and should never be discounted.
To use an analogy: when I was 22 years old and entered a doctor’s office missing a small percentage of my visual field after an apparent migraine, the doctors used a number of different “tools” at their disposal in an attempt to diagnose what had happened. They first started with an ophthalmoscope to determine the health of my retinas. They assumed I was simply experiencing visual disturbances due to a migraine, and after verifying that my eyes were healthy, sent me home with prescription-strength Tylenol.
After another two days passed with no change, the doctors proceeded with a Computed Tomography scan, aka CAT/CT scan. This is a standard first-line inspection of the brain and, again, it gave no information as to what was causing the visual disturbance. If it hadn’t been for a neurologist who was on duty at the hospital, I may very well have been sent home again. He asked me about the severe migraine and the actual specifics of the experience, and suddenly realized he was hearing someone describe the symptoms of a stroke. No one had thought to ask what had actually happened, and since I was in my early twenties, no one had considered a stroke. At this point, the doctor asked for me to have an MRI (Magnetic Resonance Imaging) with contrast. An injection of gadolinium contrast made certain tissues and abnormalities more clearly visible, and for me it showed that I had experienced a break in the blood vessels in the back right of my brain, producing a small aneurysm, but with only a little blood loss. The main damage was to the brain tissue at the back of my brain which “translates” the information sent from my optic nerve, and this was the cause of my left-side visual field loss. Thanks to this new information, he was able to start the research that in the coming years saved my life.
This doctor, like a database administrator or developer, knew to use the right tool for the job once he knew what he was up against, and that is how we succeed where others fail in the technical industry. To be told that you no longer need a tool such as tracing because valuable data is provided by AWR or ASH limits your ability to see the real culprits that those secondary tools weren’t designed to capture.
To know when to use one tool over the other is often the biggest challenge. A number of years back, a customer was experiencing terrible, overall performance issues in an ETL process that no one had been able to pinpoint. I always love a good challenge and began to look at it via AWR reports. I noticed that there were waits that were unaccounted for in the SQL processing.
--------------------------------------------------------------------------------------------------------
| Id | Operation                        | Name                  | Rows | Bytes | Cost | Pstart | Pstop |
--------------------------------------------------------------------------------------------------------
|  0 | DELETE STATEMENT                 |                       |      |       |  688 |        |       |
|  1 |  DELETE                          | _ITEM_MONTH_ROLLUP    |      |       |      |        |       |
|  2 |   NESTED LOOPS                   |                       | 1490 | 86420 |  688 |        |       | <-- This one doesn't add up to time/value consumed.
|  3 |    VIEW                          | VW_NSO_1              |    1 |    26 |   20 |        |       |
|  4 |     SORT UNIQUE                  |                       |    1 |    40 |   20 |        |       |
|  5 |      TABLE ACCESS BY INDEX ROWID | PROCESS_LOG_MASTER    |    1 |    40 |    4 |        |       |
|  6 |       INDEX RANGE SCAN           | PROCESS_LOG_MASTER_I7 |  132 |       |    1 |        |       |
|  7 |    PARTITION RANGE ITERATOR      |                       | 1490 | 47680 |  386 |  KEY   |  KEY  |
|  8 |     INDEX RANGE SCAN             | ITEM_MONTH_ROLLUP_I2  | 1490 | 47680 |  386 |  KEY   |  KEY  |
--------------------------------------------------------------------------------------------------------
– dynamic sampling used for this statement (level=8)
– SQL profile “SYS_SQLPROF_013dad8465770005” used for this statement
CPU costing is off and there is a profile on this statement. Stats are up to date, so why am I seeing dynamic sampling at level 8?
I proceeded to capture a trace and that’s when I was able to demonstrate that due to auto-DOP in 11.2, a choice was made to perform dynamic sampling that was adding over 80% to the execution time, impacting performance significantly.
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring blah, blah, blah, blah... :)
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1     28.60      48.52     322895      43776     489123        189
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total        2     28.60      48.52     322895      43776     489123        189
Limiting your tools, whether you restrict yourself to just tracing or to AWR/ASH, SLOB, Snapper or any other tool, is a poor choice. Use the right tool for the situation: the one that provides answers to the questions the business relies on you for. You wouldn’t trust a doctor who used a CAT scan to check a broken bone instead of an X-ray, so why would you trust an optimization specialist who limits themselves to one set of tools?
Recently a customer asked if it was possible to promote unmanaged targets automatically without any interaction. They were already using auto-discovery.
The answer is yes, and of course as soon as you see words like “without any interaction” you know there’s going to be scripting involved. In the Enterprise Manager world, scripting is handled by the EM Command Line Interface, better known as EMCLI. I posted some example scripts using EMCLI a while back (see this post), but I thought it was worthwhile to show you how this particular request is handled as well. If you have the latest bundle patch on either of the recent 12c releases, you can also use additional parameters to the get_targets verb to list discovered targets (-unmanaged). This includes listing the associated instance targets if you want to promote a RAC database (-associations).
Now on to the script. This one is a python script which promotes some or all single instance databases, just as an example. And of course, all the usual caveats apply – use at your own risk, Oracle does not support this script, yaddah yaddah yaddah. The script comes courtesy of my boss, Dave, so if there’s something wrong you can blame him!
Since WordPress does some funky stuff with lesser than and greater than signs, it’s easiest if I post this as a file, so here’s the link to the file on DropBox. If DropBox is blocked by your work, just shoot me an email and I’ll get a copy of it to you that way.
Hope that example is of use to you!