Top 60 Oracle Blogs

Recent comments

Oakies Blog Aggregator

Library Cache Stats

In response to a comment that one of my notes references a call to a package “snap_libcache”, I’ve posted this version of SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn’t report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem Script:  snap_11_libcache.sql
rem Author:  Jonathan Lewis
rem Dated:   March 2001 (updated for 11g)
rem Purpose: Package to get snapshot start and delta of library cache stats
rem Notes
rem     Lots of changes needed by 11.2.x.x where x$kglst holds
rem     two types - TYPE (107) and NAMESPACE (84) - but no
rem     longer needs a complex decode.
rem     Has to be run by SYS to create the package
rem Usage:
rem     set serveroutput on size 1000000 format wrapped
rem     set linesize 144
rem     set trimspool on
rem     execute snap_libcache.start_snap
rem     -- do something
rem     execute snap_libcache.end_snap

create or replace package snap_libcache as
    procedure start_snap;
    procedure end_snap;
end snap_libcache;
/

create or replace package body snap_libcache as

cursor c1 is
    select
        indx,
        kglsttyp    lib_type,
        kglstdsc    name,
        kglstget    gets,
        kglstght    get_hits,
        kglstpin    pins,
        kglstpht    pin_hits,
        kglstrld    reloads,
        kglstinv    invalidations,
        kglstlrq    dlm_lock_requests,
        kglstprq    dlm_pin_requests,
--      kglstprl    dlm_pin_releases,
--      kglstirq    dlm_invalidation_requests,
        kglstmiv    dlm_invalidations
    from
        x$kglst
;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1 w_type1;
w_empty_list w_type1;

m_start_time date;
m_start_flag char(1);
m_end_time date;

procedure start_snap is
begin

    m_start_time := sysdate;
    m_start_flag := 'U';
    w_list1 := w_empty_list;

for r in c1 loop
w_list1(r.indx).gets := r.gets;
w_list1(r.indx).get_hits := r.get_hits;
w_list1(r.indx).pins := r.pins;
w_list1(r.indx).pin_hits := r.pin_hits;
w_list1(r.indx).reloads := r.reloads;
w_list1(r.indx).invalidations := r.invalidations;
end loop;

end start_snap;

procedure end_snap is
begin

    m_end_time := sysdate;

    dbms_output.put_line('Library Cache - ' ||
        to_char(m_end_time,'dd-Mon hh24:mi:ss')
    );

    if m_start_flag = 'U' then
        dbms_output.put_line('Interval:-  ' ||
            trunc(86400 * (m_end_time - m_start_time)) ||
            ' seconds'
        );
    else
        dbms_output.put_line('Since Startup:- ' ||
            to_char(m_start_time,'dd-Mon hh24:mi:ss')
        );
    end if;


    dbms_output.put_line(
        rpad('Type',10) ||
        rpad('Description',41) ||
        lpad('Gets',12) ||
        lpad('Hits',12) ||
        lpad('Ratio',6) ||
        lpad('Pins',12) ||
        lpad('Hits',12) ||
        lpad('Ratio',6) ||
        lpad('Invalidations',14) ||
        lpad('Reloads',12)
    );

    dbms_output.put_line(
        rpad('-----',10) ||
        rpad('-----',41) ||
        lpad('----',12) ||
        lpad('----',12) ||
        lpad('-----',6) ||
        lpad('----',12) ||
        lpad('----',12) ||
        lpad('-----',6) ||
        lpad('-------------',14) ||
        lpad('-------',12)
    );

for r in c1 loop
if (not w_list1.exists(r.indx)) then
w_list1(r.indx).gets := 0;
w_list1(r.indx).get_hits := 0;
w_list1(r.indx).pins := 0;
w_list1(r.indx).pin_hits := 0;
w_list1(r.indx).invalidations := 0;
w_list1(r.indx).reloads := 0;
end if;

        if (
               (w_list1(r.indx).gets != r.gets)
            or (w_list1(r.indx).get_hits != r.get_hits)
            or (w_list1(r.indx).pins != r.pins)
            or (w_list1(r.indx).pin_hits != r.pin_hits)
            or (w_list1(r.indx).invalidations != r.invalidations)
            or (w_list1(r.indx).reloads != r.reloads)
        ) then

            dbms_output.put_line(
                rpad(substr(r.lib_type,1,10),10) ||
                rpad(substr(r.name,1,41),41) ||
                to_char(r.gets - w_list1(r.indx).gets,'999,999,990') ||
                to_char(r.get_hits - w_list1(r.indx).get_hits,'999,999,990') ||
                to_char(
                    (r.get_hits - w_list1(r.indx).get_hits) /
                    greatest(r.gets - w_list1(r.indx).gets, 1),
                    '90.00'
                ) ||
                to_char(r.pins - w_list1(r.indx).pins,'999,999,990') ||
                to_char(r.pin_hits - w_list1(r.indx).pin_hits,'999,999,990') ||
                to_char(
                    (r.pin_hits - w_list1(r.indx).pin_hits) /
                    greatest(r.pins - w_list1(r.indx).pins, 1),
                    '90.00'
                ) ||
                to_char(r.invalidations - w_list1(r.indx).invalidations,'9,999,999,990') ||
                to_char(r.reloads - w_list1(r.indx).reloads,'999,999,990')
            );

        end if;

end loop;

end end_snap;

begin
    select
        startup_time, 'S'
    into
        m_start_time, m_start_flag
    from
        v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.
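If you want a quick sanity check of those two classes, a simple aggregate over x$kglst shows them directly. This is just a sketch (it has to be run by SYS, since the x$ structures are not exposed to ordinary users); per the notes in the script header, 11.2 should show two groups, TYPE and NAMESPACE:

```sql
-- Run as SYS: list the two classes of rows in x$kglst and their counts
select
        kglsttyp, count(*)
from
        x$kglst
group by
        kglsttyp
order by
        kglsttyp
;
```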

Friday Philosophy – Jerks, Plain & Simple

A couple of weeks ago I saw a post on social media by someone who had just encountered a jerk. You know, someone who is an arrogant, bullying, self-important cockwomble (*).

The offended party had tweeted a link to the thread where the abuse had happened and I went and took a look. It’s not really that important in what manner the jerk was being a jerk, though in this case they were asking for free help that the original poster was giving, and when they did not get as much help as they felt entitled to, they became abusive to the person helping. Sadly, nothing new there.

But what did make me pause was that the original tweet that linked to this tale of the toss-pot ended with the line “Welcome to my world as a woman in I.T.” And I paused – but not for the obvious reason.

I went back and checked the thread to make sure I was right, that I had not missed anything. I hadn’t, not that I could see on the thread anyway.

At no point in this woman’s dealings with this jerk had anything been said about sex, gender, male, female, anything. This person asking for help was undoubtedly not a nice person, the speed with which they swapped from “please help me” through “You have to do more for me” and then on to a tirade of abuse was rapid. And it was nasty abuse – but none of it was sexist.

The woman had made a point that they received this sort of treatment because of their gender – but there was no sign of gender being part of this at all.  And that bothered me. It bothered me for three reasons.

  • I keep coming across people who immediately assume that if someone treats them badly, is offensive, does not like them, or is in any way a jerk towards them, it is because of whatever minority group they are part of. Sometimes sadly that is the case. However, in others it is not – but the offended person has stopped being able to discern that difference. At times I’ve suffered from bullying and being ignored in my own little way but I realized that I was being over sensitive and sometimes I was being ignored just because the person ignoring me ignored pretty much everyone, or was derogatory to everyone. It was nothing to do with my height.
  • Maybe because of that first point, where any issue is perceived as being due to an “..ism”, some people have developed an attitude that all people not in their minority group are against them. For example, I regularly come across the attitude of “all men are….”. I’ve been told to my face that all men think that women are not as good at programming as men. Well, you’re wrong. Not all men think like that. I told the person in question I did not hold that opinion and the reply was something like “well you’re about the only exception then!”. They were determined to hold a point of view in which it was not that there are still some sexist men about – but that all men were sexist, and rabidly so. That’s pretty annoying and it risks making people not want to help fight your corner.
  • I’ve had people say to me “I can’t say anything about X doing this wrong as I’ll only get accused of …ism” – and it’s occasionally been a worry for me. This can lead to a backlash where people get away with poor work or bad behaviour, as no one wants to be labelled with an “…ism”.

What worries me about this “cry wolf” scenario and the attitudes of “they are all going to be out to get you” is that it actually perpetuates the very thing we need to stand against. When I come across someone who is constantly, always citing all their interpersonal problems as being due to the minority they associate themselves with, I confess to thinking to myself “well, perhaps I’ll be a little wary of you, you seem to have issues here”. It’s like a chap I met who was adamant that every boss he had ever had was “a moron”. He’d had a lot of bosses and he could not accept that maybe, just maybe, at times perhaps the boss was not the problem.

Don’t get me wrong, we absolutely should challenge behaviour that treats a group of people as inferior, just for being part of that group. We should not condone any “..ism”. We should all ask ourselves if we are being unconsciously prejudiced and, if so, work to stop that. But we should be wary of citing bigotry as a knee-jerk reaction or falling into the error of challenging sexism, racism etc with exactly the same attitude but just from the other side of the fence.

And I think part of this is recognising that sometimes, often, jerks are just jerks. There is no …ism. Let’s just call them out for being jerks. And if there is an …ism, we call them out for being absolute, utter jerks.


(*) cockwomble is a term that seems to be becoming more universally recognised. It just means someone who is a bit of an idiot, someone you don’t want to spend any more time with than you need. A Womble is a creature from UK kids’ TV in the 70’s and 80’s. They are made-up animals that wander around Wimbledon Common (a large park in London), tidying it up and making things out of the rubbish they find. Sue made this cockwomble out of a beany-bag womble and a beany-bag cockerel.

Delphix XPP explained

This article was originally posted on the Delphix Support blog on 15-Nov 2015, but with the deprecation of the XPP feature with the new 6.0 release and higher, it was decided best to remove this article.

So, I have saved it and posted it here instead, almost 4 years to the day after it was originally posted…

The topic of converting Oracle databases from one of the proprietary
UNIX platforms (i.e. Solaris, AIX, or HP-UX) to Linux seems at first pretty
esoteric and far-fetched.

Meh. Right?

Plus, a lot of folks who have used Solaris, AIX, and HP-UX over the years can
argue that those operating systems have far more capabilities and
technical advantages than Linux, and they may be absolutely correct.  Who wants to get caught up in a religious debate?

Meh again.  Right?

But interestingly enough, the big issue here is not the relative technical merit.

The Cloud is about the commoditization of servers and services.  Unless you are using the hosting services or Cloud services offered by the operating system vendor themselves (i.e. Oracle for Solaris, IBM for AIX, or HP for HP-UX), then the only operating systems being offered by hosting or Cloud vendors are either Linux or Windows on Intel x86 or x86_64.

So, converting an Oracle database from UNIX to Linux is a prerequisite to moving it to the Cloud.  Betcha didn’t think of it that way, eh?

This is why Delphix uses the term modernization to describe the capability of cross-platform provisioning (i.e. XPP), also known as UNIX-to-Linux (i.e. U2L).  It is not because Linux on x86/x86_64 platforms is any more modern than Solaris on SPARC, AIX on Power, or HP-UX on PA-RISC/Itanium, but because modernization involves moving the data center to hosting or the Cloud.

I’m not willing to engage in an argument about which platform has more technical merit than one another.  However, I will assert that if you’re not running Oracle on Linux, then you are not positioned for the future, just based on the economics.  If you really think technical merit beats economics, then I have a nice VAX running VMS for you.

So, Delphix XPP a.k.a. U2L is the on-ramp to the Cloud.  I prefer the term “XPP” rather than “U2L”, so I’m going to use that going forward…

Now let’s be clear on a few things.  Only Oracle can write the software to migrate an existing Oracle database from one operating system to another, from one “endian” chipset to another.  So Delphix has not, and could not, write the basic processes to perform conversion, because Oracle datafile formats are themselves proprietary.  But what Delphix has done is take the processes for conversion created by Oracle and actually made them feasible.

Here’s how…

The capability of cross-platform migration was introduced to Oracle RMAN in Oracle10g database release 1 (i.e. 10.1).  This was the CONVERT DATABASE functionality, and it was only capable of converting same-endian chipsets.

The terms “big endian” and “little endian” were invented in Jonathan Swift’s satirical novel “Gulliver’s Travels”, first published in 1726.

The term refers to a civil war in the novel between the people of Lilliput, some of whom ate their soft-boiled eggs by cracking the blunt or “big” end of an egg, others who ate their soft-boiled eggs by cracking the pointy or “little” end of an egg.  This wickedly sharp satire accurately portrays controversy and conflicts over the choices of ordering of bytes within words by processor manufacturers in the 1960s and 1970s, which have subsequently been propagated forward to this day.

-Jonathan Swift “Gulliver’s Travels”, 1726

All three of the proprietary UNIX variants (i.e. Solaris on SPARC, AIX on Power, and HP-UX on both PA-RISC and Itanium) are big-endian.  All x86 and x86_64 chips are little-endian, so XPP involves converting numerics from big-endian to little-endian.

In Oracle10g release 2 (i.e. 10.2), Oracle added the CONVERT TABLESPACE and CONVERT DATAFILE functionality, which permits conversion between endian formats (i.e. big to little or little to big).
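For reference, the RMAN commands look roughly like this. It is only a sketch: the tablespace names, paths, and platform strings below are hypothetical (valid platform names come from v$transportable_platform), and the source database (or its copy) must be open read-only:

```
# On the big-endian source (a read-only copy), converting for a Linux target:
RMAN> CONVERT TABLESPACE users
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/stage/users_%U.dbf';

# Alternatively, the conversion can run on the little-endian destination:
RMAN> CONVERT DATAFILE '/stage/users01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        DB_FILE_NAME_CONVERT '/stage','/u01/oradata/LNX1';
```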

So, at this point in time, XPP is functionality that has existed in Oracle for about 10 years now, or as long as Oracle10gR2 has been available.

But XPP hasn’t been a frequently-used feature over that time, as you are no doubt aware.  I know lots of people have known about it, but very few have actually done it.  And even fewer have done it outside of playgrounds, out in real-life databases.

There are two reasons for this: 1) the trend toward commoditization of x86/x86_64 is only now accelerating with the maturation of the Cloud and 2) there are substantial obstacles.

The most substantial obstacle is the fact that the functionality involving endian conversion (i.e. CONVERT DATAFILE/TABLESPACE) also includes Oracle’s transportable tablespace (a.k.a. TTS) functionality.  TTS requires that the source database be in read-only state during any tablespace transport operations.

Now, if you’re trying to convert a Linux copy of a UNIX production database, you don’t want to have to interrupt service in the production database by making it read-only.  In the IT biz, we call that down time and it is bad.  Instead, what you’re going to have to do is create a full copy of the UNIX production database, and then make that copy read-only.

This sounds innocuous enough, but let’s think that through.  If our production database on UNIX is several terabytes or larger, and we wish to simply test the conversion to Linux using CONVERT DATAFILE / TABLESPACE, then we need enough space for two additional copies of the production database:  one for the source UNIX copy (to be set to read-only) and one for the target converted Linux copy.  To paraphrase an old saying, “a couple of terabytes here and a couple of terabytes there, and pretty soon you’re talking real storage”.  And of course, it is not just about allocating twice the volume of the production storage for this test; there is also the time involved in copying two complete sets of the production database.  Needless to say, it is not easy to enter into this exercise lightly.  Nor repeatedly.  And so, this 10-year-old functionality remains generally a big mystery.

Enter Delphix data virtualization.

Delphix can create a virtual database (VDB) for the read-only UNIX copy of the production database, requiring only a few minutes to completely provision regardless of the size of the database.  Of course, practically no additional storage is required for this virtual database which is set to read-only almost immediately and stays read-only throughout its brief lifetime.

Even better, Delphix can also create a VDB for the converted Linux copy of the database, and because the Delphix file-system is able to transparently deduplicate (or mooch) blocks that contain the same contents but in different endian format, even the converted Linux copy of the database consumes very little storage as a result of the conversion.

To use numbers from actual experience, converting a 5 TB production UNIX database without Delphix requires the allocation and copying of 10 TB of additional storage.  Converting the same 5 TB production UNIX database using Delphix requires less than 12 GB of additional storage.

Please note the change from “TB” to “GB” in that last sentence.  That is nearly three orders of magnitude of difference.

Admittedly, while the storage savings are consistent, XPP still involves some potentially long-running operations.  There are three in all, but with Delphix only the last two apply, which is one fewer long-running operation than when not using Delphix.  Those long-running operations are…

  1. Copying the production UNIX database to create the non-production UNIX database copy
  2. Copying database metadata for TTS operations from the non-production UNIX database copy to the converted Linux database
  3. Using the RMAN CONVERT DATAFILE command to convert all datafile blocks from UNIX to Linux

Item #1 is a long-running and expensive step only without Delphix XPP; with XPP, it is fast and inexpensive.

Item #2 is only an issue for certain databases supporting enormous applications like PeopleSoft, which contain millions of objects such as tables and indexes.  For most applications, there are only several thousand objects, so the single-threaded export/import of database metadata is not a concern.

Item #3 might be an issue if the database is very large, because the RMAN CONVERT DATAFILE operation is converting every single block in the database.

Both items #2 and #3 could be issues regardless of whether Delphix XPP is used or not.

So please understand from the beginning that XPP might be a long-running process because of the one or two long-running steps.  At least, with Delphix XPP there are only the one or two potentially long-running steps, because without Delphix XPP there is always one more long-running step.

Also, please understand that without the orchestration provided by Delphix, there are about a dozen non-trivial smaller steps to be performed manually in addition to the 2-3 long-running steps mentioned above.  For example, after the RMAN CONVERT DATAFILE processes are complete, there are several small post-processing tasks, such as recompiling all PL/SQL objects in the database, and so on.  Doing these steps manually as documented by Oracle is prone to human error, so if you are not using Delphix, investing the additional time and effort to automate these steps is strongly recommended.

From a cookbook standpoint, there are two major phases of a Delphix XPP operation:  1) validation and 2) conversion, as documented online here.

The validation step, or validating an Oracle dSource for XPP, is documented online here.  It automates the creation of a UNIX database copy to be set to read-only so that validation procedures like the CHECK_DB and CHECK_EXTERNAL procedures in the DBMS_TDB package and the TRANSPORT_SET_CHECK procedure in the DBMS_TTS package can be executed automatically.  These procedures will fail if the database is not in read-only mode, and will verify whether the database contains any of the many restrictions listed in Oracle documentation for conversions online here and for transporting tablespaces online here.  Validation with Delphix XPP is fast and easy, so it can be performed over and over, to ensure that any restrictions encountered have been addressed and eliminated, one way or the other.  Delphix XPP also allows the upload of a SQL script to fix issues identified during validation, if it is not feasible to fix the problem in the source production database itself.  The creation and upload of scripts to fix identified XPP issues is documented online here.
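Under the covers, those validation procedures can also be run by hand against the read-only copy. The following is a rough sketch only: the platform string and tablespace list are hypothetical, and the DBMS_TDB checks fail unless the database is open read-only:

```sql
set serveroutput on
declare
        db_ready boolean;
begin
        -- check the whole database for conversion restrictions
        db_ready := dbms_tdb.check_db('Linux x86 64-bit', dbms_tdb.skip_none);
        -- list directories, external tables and BFILEs that will not transport
        db_ready := dbms_tdb.check_external;
end;
/

-- TTS self-containment check; violations appear in a view afterwards
execute dbms_tts.transport_set_check('USERS,DATA', incl_constraints => true)

select * from transport_set_violations;
```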

The conversion step, or provisioning a Linux VDB from a UNIX dSource, is fully automated and is documented online here.  Conversion is possible only after validation is successfully completed.  If a SQL script was uploaded to address restrictions encountered during validation, then that script should continue to be used during conversion.  During conversion, Delphix automates the many small and large steps, including the provisioning of a read-only UNIX database copy, the export/import of database metadata, and the RMAN CONVERT DATAFILE processes.

So you can see that a lot of blood, sweat, and tears have been put into this feature.

XPP is another example of the enabling power of data virtualization.  Without it, Oracle cross-platform migration is a complicated, expensive, and slow process with many manual steps.  With Delphix data virtualization, XPP is simple and inexpensive.  It is still not a fast process due to the long-running steps described earlier, but even so it may be fast or easy enough to become feasible as your organization moves toward migration into the Cloud.

Do you work in I.T?

If you do, then you know that only 20% of your I.T work is spent at the office. And no, I’m not talking about telecommuting or working remotely here. I’m talking about the 80% of our time we spend handling all those I.T calls from friends and family.

So I was motivated to put together this little ode – dedicated to all of us that have to go the extra mile to keep our loved ones’ I.T devices working as they should.

Enjoy and share with all those I.T people in your life.

Getting started with #Exasol on #AWS

It’s amazingly easy to run an Exasol Cluster on Amazon Web Services (AWS).

Subscribe Exasol in AWS marketplace

After having registered and having logged in to your AWS account, go to the AWS marketplace and search for Exasol:

Click on the Exasol Single Node and Cluster BYOL link and then on Continue to Subscribe:

After having reviewed the T&C, click on Accept Terms. It shows this message afterwards:

Create Key Pair

Now login to the AWS Management Console, select a region close to your location and open the EC2 Dashboard. Click on Key Pairs:

Click on Create Key Pair now and enter a name for the new Key Pair, then click on Create:

Now you are ready to use the Exasol Cloud Deployment Wizard. Stay logged in with AWS Management Console as you will be routed back there by the Deployment Wizard soon.

Using the Cloud Deployment Wizard

Put this URL into your browser: and click on AWS then:

Select a region close to your location and click on Continue:

Click on Advanced Configuration and specify:

License Model Bring-your-own-license, System Type Enterprise Cluster, Instance Family Memory Optimized, Instance Type r5, Instance Model r5 large, Number of DB Nodes 1, then click Continue.

BYOL works without a license file, with a limit of 20 GB of memory for the database. That means no costs are charged by Exasol for this environment (Amazon still charges for the infrastructure).

Select create new VPC and click on Launch Stack on this page now:

Using CloudFormation

This takes you to the Quick create stack page of CloudFormation in AWS Management Console:

Enter these details on the page:

Stack name

Key Pair (select the key pair created previously)

SYS User Password

ADMIN User Password

Public IPs (true)

Tick the acknowledge box and click on Create stack

Now go to the EC2 details page and copy the Public IP of the management node:

Put that with an https:// prefix into a browser and click on Advanced:

Then you should see a progress bar like this:

That screen changes after about 30 minutes to the EXAoperation login screen.

Login as user admin with the password you specified previously on the CloudFormation Quick create stack page. There should be a database running:

As you can see now, you have a database, a remote archive volume using an Amazon S3 bucket ready for backup & restore and a log service to monitor your system.

This database is limited to 20 GB memory only unless a license file is uploaded to the license server aka management node. For educational purposes, I don’t need more.

Use Elastic IPs

The public IPs of your data nodes will change upon every restart, which is probably not convenient.
Therefore, click on Elastic IPs in the EC2 dashboard, then click on Allocate new address:

Select Amazon pool then click on Allocate:

Click on the IP on the following screen:

Select the action Associate address on the next screen:

Select the data node from the Select instance pull down menu and click on Associate:

Close the next screen and go to the EC2 instance page. You should see the elastic IP assigned to the data node there:

Connect with a SQL Client to your Exasol database on AWS

This is how that looks with DbVisualizer:
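If your SQL client asks for a JDBC URL rather than a guided connection dialog, the shape of the connection details is roughly the following. The IP address is a hypothetical placeholder for the elastic IP assigned above; 8563 is Exasol’s default database port:

```
# Exasol JDBC connection sketch (values are placeholders)
url:      jdbc:exa:203.0.113.10:8563
user:     sys
password: <the SYS User Password set on the Quick create stack page>
```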

And that’s it: Now you have an Exasol 1+0 cluster running on AWS. That’s not the same as a single node system, because this 1+0 cluster can be enlarged with more data nodes. I will show how to do that in future posts.

A word about costs: Instead of using our corporate AWS account, I registered myself to see how much it would cost. It was less than 80 Euro for a 2+1 cluster environment I used for about one month, shutting down the EC2 instances whenever I didn’t need them for testing and for creating courseware. It should be well below 10 Euro per day with the very moderate resource consumption configured for the environment covered by my postings.

Stay tuned for some more to come about Exasol on AWS.

WordPress 5.3 “Kirk”

WordPress 5.3 “Kirk” has been released.

I guess many of the people out there are running on hosted solutions, so your upgrade will happen when your hosting company decides to apply it. For those that self-host, you’re just a button press away.

This morning I’ve updated 5 separate blogs, all hosted independently, and they all went through fine. At the moment the upgrades have to be manually initiated, but I’m guessing in a few days they’ll just run through automatically.

Since it was introduced, the Site Health feature (Tools > Site Health) has been improved with each release. It’s worth running this to check if there are any recommended security or performance settings. The new release picked up some things the previous release didn’t. My website was bounced a couple of times while I made changes to sort them.

If I’m honest, I can barely notice a difference. I’m not a “power user” of WordPress. I just write blog posts with it. There are allegedly improvements to the block editing, but I really can’t tell. A couple of the popup controls look a little different, but from the way I use it, that’s about all. I’m sure people with different usage patterns will have a different reaction to this release.

I was interested to see what the Twenty Twenty theme would look like. Not my cup of tea!

UKOUG TechFest19 Survival Guide

Brighton, December 1st-4th 2019, Grand Hotel, Techfest2019. This is the big Technology event for the UKOUG this year, one of the largest Oracle Tech events in Europe.

The UKOUG conference is traditionally the one to finish the European year of conferencing and it is always incredibly popular with both delegates and presenters. There are two things that are not traditional about this year’s UKOUG December conference:

  • It is Technology Focused. We asked our members when they wanted their annual conference and there was a strong split between Business Applications and Technology members, with many Business Apps members finding December a bad time to be out of the office and most of them preferring May/June, so we split the conference and the big Business Apps conference will be in June 2020. However, our Tech members wanted to stick to December.
  • The conference is in the South of England. Birmingham was our conference home for many years and we have been to Manchester & Liverpool, so time to try the South.

I’m really pleased we are in Brighton as it is a lively, fascinating place. Also, being that little bit further south, it might be less cold. Slightly!

Why Come?

Because there will be fantastic talks, round tables, Q&A sessions, experts to quiz, people with the same technical challenges as you, Partners able to offer services and, last but not least, it will be fun!

Technical Content

The UKOUG conferences are very popular with presenters. On average we get 5 submissions per presenting slot, more for some streams. We could fill the conference with talks from Oracle ACEs, Oracle Certified Masters, and the best Oracle Corp can offer. What we actually do is have stream-specific teams that select not just known speakers but also topics we know are hot and new presenters, and avoid repeating content. It’s damned hard work but we aim to give you:

  • Independent experts who will tell you exactly how it is, like Richard Foote on indexes (all the way from Aus, so a rare chance to see him), Franck Pachot from CERN, security guru Pete Finnigan, Abigail Giles-Haigh, Craig Shallahamer, Jonathan Lewis, Zahid Anwar, Lonneke Dikmans…
  • Oracle giving you the latest information “from the horse’s mouth” and, just as important, the chance to meet product managers and other experts. People like Maria Colgan, Mike Dietrich, Jeff Smith, Nigel Bayliss, Susan Duncan
  • 9 or more concurrent streams across Development, Analytics & Data Science, Database, Systems & Infrastructure, and APEX. No matter what your interest in the Oracle Tech world, we hope your problem will not be “is there a session of interest” but “which session of interest do I go to now?”
  • Roundtable discussions, panels, keynotes, presentations – and the chance to meet the experts around the conference and at the socials


Learning stuff at conference is the name of the game, but so is having some fun. The more enjoyable the conference and the social times after are, the more you will get out of the content. I know from personal experience that if a conference is just information and being serious, after a few hours my brain shuts off.

Also, it’s when you are more relaxed that the magic thing about attending an event in person happens – you meet people and get to know them better. This opens doors to industry experts, you find people dealing with the same PIA technical issues as you, you exchange war stories. You make friends. I get just as much (if not more) from the people I meet at conference than the official presentations.

Monday evening there will be networking drinks, Tuesday will be the big party (and I’ve been promised No Loud Music!!!). If you are a UKOUG volunteer or speaker, there is a drinks reception Sunday night. (I know of a couple of other events being put on by other companies too, such as Rittman Mead).

We will be having the retro games consoles scattered around the venue again.

And, we are in Brighton! Of course as the UKOUG President I would never encourage you to leave the conference hotel… But as a human being I would say go and look around Brighton, have a bit of fun! You might want to do what I am doing and be in Brighton a day or two before the event (or after) and really enjoy what the town has to offer.  Mrs Widlake is coming with me on Saturday so we can have a mini break.

One other fun thing – Mark Rittman is organising a gentle cycle ride Sunday morning. Details can be found {here}; it will be a couple of hours via a cafe, prior to Super Sunday starting. I plan to take part.

Now, the practical stuff:

Getting There


Basically, if you can get to London OK, you can get to Brighton just fine. Trains go from Victoria in under an hour, from St Pancras (very convenient if you come to London on Eurostar), London Bridge (both about 90 mins) and, if you live near Cambridge, you can get a direct train through London to Brighton. There is a direct service from Gatwick Airport taking about half an hour.

I’d strongly advise booking *now*. If you come down on Saturday or Sunday, it can cost as little as £15-20 from London, £40 from Birmingham, Bristol or Leeds.

If you don’t often travel by train just be aware that “open” tickets and booking only a few days ahead can be eye-wateringly expensive. Plan ahead, decide when you are travelling, and book ASAP.


The best international airport to fly to for Brighton is Gatwick, as there is a fast (1/2 hour) train service direct to Brighton for as little as £10. A taxi will take 40-50 minutes and cost that many pounds.

Heathrow is also sort-of on the same side of London as Brighton but you will either have to go into London to Victoria by the slow Tube line and then out on the normal train services to Brighton, or take the Heathrow Express (15 mins, about £15 each way) to London Paddington and take the tube Circle Line around to Victoria.

If you come in to Stansted, basically get into London (Stansted Express) and work it out from there!

For Luton (and Stansted, sort of) Niall Litchfield says

If you are flying into Luton, don’t go into London and change. Take the shuttle bus to Luton Airport Parkway station (10 minutes) and take the direct train to Brighton. If you are going to Stansted then you should consider your life choices!



UPDATE – see comments by Niall Litchfield (again, a helpful chap), a local who says not to drive into Brighton as parking is so bad. He is 20 mins away and will take the local train. If you must drive, your best bet is Park and Ride.

It’s relatively simple to drive to Brighton. You go around the M25 to the M23 and down that, and keep going when it turns into the A23. I’m not so sure about coming along the more coastal road (A27) – I have bad memories of it taking ages to get anywhere.

But parking can be expensive. If you are not being provided parking by a hotel you are using or you plan to come in and go home each day then you might like to look at or similar. I’m no expert on parking in Brighton (I last did it 30 years ago) but I’ll ask someone local and update this accordingly. My one hint would be avoid NCP car parks – they are usually very expensive and, as a company, they are terrible. Ask anyone who commutes by train into London or any other major city and they probably hate NCP with a passion.


Don’t be daft, unless you are local, in which case you know more than I do!


Where to Stay

I’m afraid you missed the special deal to stay at the Grand (the location of the conference) but you might still be able to book there. However, at the time of writing (see image), there are many, many hotels available around Brighton and you might want to look at Airbnb for something cheaper.

I personally use Trivago to find accommodation but other websites are available. They should all allow you to do what I do, which is choose the lowest “comfort” level you want and the price range. I then use the map view as it makes things a lot easier than a list of hotels with no idea where they actually are!

I’m actually staying at the conference venue – as President I have a lot of duties so it makes sense for me to be on-site. I also know that a lot of presenters etc are staying at the hotel so it should add to the vibe, but sometimes I specifically choose to stay a 5-10 minute walk from a conference, so I can get away from it all if I should wish. I find a 10-minute stroll before a conference wakes me up, and doing so after gives my brain a chance to turn off a little.

Coffee, Refreshments etc.

It’s been a problem for years at UKOUG conferences. Getting coffee (or tea or whatever) has been a real challenge as the venues always wanted a fortune to provide catering all day. Catering! Just hot drinks and maybe some biscuits! This year, tea & coffee will be available throughout the conference! I’m not guaranteeing it will be good tea and coffee, I’m not daft, but Brighton has a big coffee culture so I have hopes.

Water should always be available.

If you are a coffee snob (looking at one person in particular here) then, look, we are IN BRIGHTON! Go out of the hotel and walk 2 minutes; you will soon find a hipster cafe and can get your double espresso skinny latte with raw cane sugar there. And in fact, yeah, do it! Pop out of the venue for 10 mins and go to a local cafe. Or get an ice cream. Or, if you are so inclined, a glass of wine and a cake. Cafe culture is all around you.

If you don’t like the provided coffee at the conference, don’t tell me. Tell me about other things that are right or wrong but, honestly, the quality of the coffee is not something I want to hear anything more about. This is the UK and it is an I.T. conference, the coffee is supposed to be bad!

You will have been asked when you registered for the event if you have dietary requirements and these should be catered for. Vegetarian options should be provided at all meals as a matter of course. Any issues, ask the UKOUG staff and they will sort it out for you.

At the social events there will be soft drinks as well as alcoholic ones. Some people like alcohol, some do not, it really is not that important if you drink or not. BUT if you find there are no soft options then let the UKOUG staff know immediately – we had a problem one year where the caterers only provided beer & wine and no one mentioned it for ages. They just got angry and slagged us off after the event.

There will be no secret whisky tasting this year. There never has been. It’s just a rumour. If whisky is not your thing then feel free to not bring a different thing to share at the non-existing tasting.

Chocolate. I’ve also not heard rumours about a chocolate tasting happening…

Other Hints

Go to at least one talk you know nothing about, one that is not your core work area. You will probably learn something unexpectedly useful! You might even get a peek at a shift in your career.

Speak to the famous people. They are human, they are *just like you* (only, of course, much much smarter…). Honestly, just say “hi” or “isn’t it a shame about the Rugby world cup final” or “what bread do you like to bake?” (this is surprisingly likely to get an interested response from a growing number of speakers). Have a little chat. But also, please do not stalk. If you find yourself hanging about after a session to chat to the same person you have chatted to three times already, you have become a scary stalker and need to stop.

If you don’t know many people at the conference, go to a panel session or a round table. If you can build up the courage, when you see a circle of half a dozen people chatting and you recognise some of them as “in your area”, go and join in. (And, if you are one of those people in a circle of mates, chatting, keep an eye out for people hanging about nearby looking nervous. I wish we did not stand in these circles, backs to each other, but I can’t think of a good way to break the circle.)

Take breaks. If you do 7 sessions in succession I am willing to bet nothing is going into the brain anymore. If you happen to find yourself talking with people just before a session starts and you are enjoying the conversation, maybe keep it going and have a coffee/water. I really do believe that the contacts you make and develop at conferences, and the ad-hoc things you learn, are just as valuable as listening to Connor McDonald bang on about SQL in his boring monotone again. He does rubbish slides.



nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.

Problem nVision SQL

This is an example of the problematic SQL generated by nVision.  The problem is that all of the SQL looks like this: there is never any group by clause, nor any grouping columns in the select clause in front of the SUM().

SELECT SUM(A.POSTED_BASE_AMT)

Each query returns only a single row that populates only a single cell in the report, so a different SQL statement is generated and executed for every cell in the report.  Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table are performed.  This consumes more CPU and more logical I/O.

Normal nVision SQL

This is how I would expect normal nVision SQL to look.  This example, although obfuscated, came from a real customer system.  Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query now populates a block of cells.

SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)

The Bug

This Oracle note details an nVision bug:

"UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
Issue was introduced on build 8.55.19.
Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
Issue was introduced on builds 8.55.22 & 8.56.07.
Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."

In summary

  • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
  • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.
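The affected ranges above can be expressed as a simple version check. This is purely my own illustrative sketch (the function name and version-string handling are mine, not part of the Oracle note):

```python
# Illustrative sketch: is a PeopleTools patch level within the ranges affected
# by this nVision bug (8.55.19 up to but not including 8.55.25, and
# 8.56.07 up to but not including 8.56.10)?
def nvision_bug_affected(version: str) -> bool:
    v = tuple(int(part) for part in version.split("."))
    return (8, 55, 19) <= v < (8, 55, 25) or (8, 56, 7) <= v < (8, 56, 10)
```

Comparing numeric tuples rather than strings avoids the classic trap of "8.55.9" sorting after "8.55.19" lexically.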

Consumer Group Mapping Rules Use Pattern Matching from 12.1

I recently noticed a small, but I think significant, change in the way consumer group mapping rules behave from Oracle 11.2.0.4.  Session attributes can be matched to resource groups using LIKE expressions and simple regular expressions specified in the matching rules, though only for certain attributes.
(Updated 12.11.2019) I am grateful to Mikhail Velikikh for his comment.  It depends on which version of Oracle's documentation for 11.2 you read.  Pattern matching does work for the attributes listed in the 12.1 documentation. My testing indicates that pattern matching does not happen in 11.2.
You cannot pattern match the SERVICE_NAME in 11.2.  The attribute value is validated against the list of valid services.

exec DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'WIBBLE', 'MYGROUP1');
ORA-00042: Unknown Service name WIBBLE

However, you can pattern match SERVICE_NAME in 12.1, although SERVICE_NAME is not in the list of attributes for which the documentation says pattern matching is available.  This may be a documentation bug (see Oracle support note 1992704.1).

DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
attribute IN VARCHAR2,
value IN VARCHAR2,
consumer_group IN VARCHAR2 DEFAULT NULL);

11.2 Documentation (c)2010

You use the SET_CONSUMER_GROUP_MAPPING procedure to map a session attribute/value pair to a consumer group. The parameters for this procedure are the following:

  • attribute – The session attribute type, specified as a package constant
  • value – The value of the attribute
  • consumer_group – The consumer group to map to for this attribute/value pair

11.2 Documentation (c)2015

This procedure adds, deletes, or modifies entries that map sessions to consumer groups, based on the session's login and runtime attributes.

  • attribute – Mapping attribute to add or modify. It can be one of the Constants listed.
  • value – Attribute value to match. This includes both absolute mapping and regular expressions.
  • consumer_group – Name of the mapped consumer group, or NULL to delete a mapping

Usage Notes 
If no mapping exists for the given attribute and value, a mapping to the given consumer group will be created. If a mapping already exists for the given attribute and value, the mapped consumer group will be updated to the one given. If the consumer_group argument is NULL, then any mapping from the given attribute and value will be deleted.
The subprogram supports simple regex expressions for the value parameter. It implements the same semantics as the SQL 'LIKE' operator. Specifically, it uses '%' as a multi-character wildcard and '_' as a single character wildcard. The '\' character can be used to escape the wildcards. Note that wildcards can only be used if the attribute is one of the following:


*Consumer group mapping comparisons for DBMS_RESOURCE_MANAGER.CLIENT_PROGRAM are performed by stripping the @ sign and following characters from V$SESSION.PROGRAM before comparing it to the CLIENT_PROGRAM value supplied.
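To make those matching semantics concrete, here is a small Python sketch of my own (not Oracle code) that emulates the LIKE-style wildcards described in the usage notes ('%' for any run of characters, '_' for a single character, '\' as escape), plus the @-stripping applied to CLIENT_PROGRAM:

```python
import re

def like_match(pattern: str, value: str) -> bool:
    """Emulate SQL LIKE semantics: '%' = any run, '_' = one char, '\\' escapes."""
    regex = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):
            regex.append(re.escape(pattern[i + 1]))  # escaped wildcard is literal
            i += 2
            continue
        if c == "%":
            regex.append(".*")
        elif c == "_":
            regex.append(".")
        else:
            regex.append(re.escape(c))
        i += 1
    return re.fullmatch("".join(regex), value) is not None

def client_program_value(v_session_program: str) -> str:
    """CLIENT_PROGRAM comparisons strip '@' and everything after it first."""
    return v_session_program.split("@", 1)[0]
```

For example, a mapping value of '%\_HIGH' (with the underscore escaped) would match any service name ending in '_HIGH'.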

Autonomous Database Example

The mappings are reported in the view DBA_RSRC_GROUP_MAPPINGS. An example of using a LIKE expression in a mapping can be found in the standard configuration of autonomous cloud databases. There are 5 standard mappings for 5 services to 5 consumer groups. The automatically created service names are prefixed with the database name, then HIGH, LOW, MEDIUM, TP or TPURGENT, and are suffixed

SELECT * FROM v$services ORDER BY 1;


However, the mappings are defined using pattern matching rather than referencing the explicit name of the service, thus exactly the same mappings can be defined on every autonomous database.

select * from DBA_RSRC_GROUP_MAPPINGS order by 1,2;



  • By default, the ORACLE_USER attribute's priority 6 takes precedence over SERVICE_NAME whose priority is 7. Therefore, GGADMIN, the GoldenGate user, always uses the LOW consumer group irrespective of the service that it connects to. The default priorities can be changed.
  • Wildcards cannot be used for the ORACLE_USER attribute.
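The precedence behaviour in the first bullet can be illustrated with a short Python sketch of my own. The resolver and rule set below are hypothetical, loosely modelled on the autonomous database defaults described above, not Oracle's actual implementation:

```python
import fnmatch

def resolve_group(session: dict, rules: list) -> str:
    """Pick the consumer group from the highest-priority matching rule.

    rules: list of (priority, attribute, pattern, group); lower priority
    number wins. Only the '%' wildcard is emulated here, via fnmatch.
    """
    for priority, attribute, pattern, group in sorted(rules):
        value = session.get(attribute, "")
        if fnmatch.fnmatchcase(value, pattern.replace("%", "*")):
            return group
    return "OTHER_GROUPS"  # fallback when nothing matches

# Example rules modelled on the defaults: ORACLE_USER (priority 6) beats
# SERVICE_NAME (priority 7), so GGADMIN always lands in LOW.
rules = [
    (6, "ORACLE_USER", "GGADMIN", "LOW"),
    (7, "SERVICE_NAME", "%_HIGH", "HIGH"),
]
```

With these rules, a GGADMIN session connecting via a `*_HIGH` service still resolves to LOW, which is exactly the precedence effect the bullet describes.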

Video : Oracle REST Data Services (ORDS) : Database Authentication

Today’s video is a run through the Database Authentication functionality in Oracle REST Data Services (ORDS).

As always, this is based on an article on the same subject.

There are better methods of authentication and authorization available from ORDS, which you can read about here.

The star of today’s video is Stew Ashton, who is barely audible over the noise of the bar.