In the previous post, I pontificated about triggers that "lock you in" to having them fire, which can create dramas when it comes to doing data patching.
Maybe you can design your application around this, but if you can't, the last thing you want is to have to take an outage so that you can disable the trigger whilst you do your data maintenance. Ideally you want the trigger to fire as per normal, except in your session.
And that’s actually pretty easy to do. A simple example is below:
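Something along these lines does the job (the package, trigger and table names here are just illustrative). A package holds a session-level flag, and the trigger checks it before doing its work. Package state is private to each session, so only the maintenance session skips the trigger logic; everyone else sees the trigger fire as normal.

CREATE OR REPLACE PACKAGE trg_control AS
  skip_trigger BOOLEAN := FALSE;
END;
/

CREATE OR REPLACE TRIGGER my_maint_trg
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
  -- only do the work when this session has not asked to bypass it
  IF NOT trg_control.skip_trigger THEN
    -- normal trigger processing goes here
    NULL;
  END IF;
END;
/

Then, in the data maintenance session only:

SQL> exec trg_control.skip_trigger := TRUE;
... run the patching DML ...
SQL> exec trg_control.skip_trigger := FALSE;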
As promised, here is the first in a series of posts dealing with Oracle 12c new features.
Creating new table rows often requires assigning a key value. In the past, it has been common to use an Oracle SEQUENCE to generate key values via an insert trigger. Oracle 12c provides two new options: IDENTITY columns and SEQUENCEs used as column default values.
IDENTITY columns are new to Oracle, but not new to the database world. IDENTITY columns use an Oracle SEQUENCE “under the covers”, and their creation is automatic rather than manual. Should a table be dropped and recreated, the IDENTITY value starts over again.
Oracle has had SEQUENCES for years; the IDENTITY column allows use of a SEQUENCE as part of a column definition (much like some competitor databases):
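A quick sketch of both options (the table, column and sequence names here are made up):

CREATE TABLE emp_demo
( emp_id    NUMBER GENERATED ALWAYS AS IDENTITY,
  emp_name  VARCHAR2(30)
);

-- the key is supplied automatically on insert
INSERT INTO emp_demo (emp_name) VALUES ('Scott');

And the second option, a SEQUENCE as the column default:

CREATE SEQUENCE emp_seq;

CREATE TABLE emp_demo2
( emp_id    NUMBER DEFAULT emp_seq.NEXTVAL,
  emp_name  VARCHAR2(30)
);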
Some people hate triggers, some people love triggers…
I am not that opinionated on them in either direction, but one thing I do hate, whether it be a trigger or not, is dumb code. And today’s post just happens to be about dumb code in a trigger.
Consider this simple trigger (you see these everywhere pre 12c):
CREATE OR REPLACE TRIGGER MY_TRG
BEFORE INSERT ON MY_TABLE
FOR EACH ROW
BEGIN
  SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/
Seems harmless enough…and I’m not talking about whether we should be using ":new.col := seq.nextval", because under the covers it will do a select-from-dual anyway.
The reason the code is really dumb…is you are totally screwed if you need to do some data maintenance…
Consider the following scenario:
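Roughly, the kind of data patch that bites you is one where you need to put a row back with its original key value (the table, trigger and PK column names are from the example above; the second column and the values are made up):

INSERT INTO my_table (my_pk_col, some_col)
VALUES (123, 'row being restored');

-- The trigger fires regardless, so the 123 you supplied is silently replaced
-- with MY_SEQ.NEXTVAL and the row comes back with a different key to the one
-- you intended. Unless you can disable the trigger (an outage) or recode it,
-- the data patch cannot preserve the original key values.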
I was helping a customer debug some external table load problems. They are developing some code to do massive inserts via external tables. As the code was being tested, we saw a fair number of tests that were doing simple queries of an external table. I expected to see “external table read” wait events, but was surprised when we saw more “external table write” wait events than reads.
I thought this was due to writes to the “log” file and possible “bad” file, but I had to be sure. I searched the docs but could not find a reference to this wait event. Specifically, I was seeing the following:
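As a rough sketch of how to see these waits for yourself (not the output from the case above), the session wait statistics show both the read and write events:

SELECT event, total_waits
FROM   v$session_event
WHERE  sid = sys_context('USERENV', 'SID')
AND    event LIKE 'external table%'
ORDER  BY event;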
I found this hilarious
SQL> startup
ORACLE instance started.

Total System Global Area 1469792256 bytes
Fixed Size                  2402776 bytes
Variable Size             536872488 bytes
Database Buffers          922746880 bytes
Redo Buffers                7770112 bytes
Database mounted.
ORA-19821: an intentionally corrupt log file was found
Really ? I intentionally corrupted my log file ? I don't think so !
Another big public username and password leak…
Some good reading on how it was done, and thus ensuring your code isn’t prone to SQL injection here:
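As a rough illustration of what injection-prone code looks like versus the bind-variable equivalent (the table, column and variable names here are purely hypothetical):

DECLARE
  p_username  VARCHAR2(100) := 'SCOTT';   -- imagine this arrives from user input
  l_count     NUMBER;
BEGIN
  -- Vulnerable: the input is concatenated straight into the statement, so a
  -- value like  x' OR '1'='1  changes the meaning of the query entirely
  EXECUTE IMMEDIATE
    'select count(*) from app_users where username = ''' || p_username || ''''
    INTO l_count;

  -- Safer: a bind variable means the input is only ever treated as data
  EXECUTE IMMEDIATE
    'select count(*) from app_users where username = :b1'
    INTO l_count
    USING p_username;
END;
/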
Once you get into pluggable database territory, you might need to check your usage of "alter system enable restricted session", because unless you’ve patched, there’s a little bug which lets you enable restricted session, but won't let you get out of it ! :-)
SQL> alter session set container = pdb12;

Session altered.

SQL> alter system enable restricted session;

System altered.

SQL> select logins from v$instance;

LOGINS
----------
RESTRICTED

SQL> alter system disable restricted session;
alter system disable restricted session
*
ERROR at line 1:
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
You can get out of the predicament by force opening the pluggable database as shown below, but it's probably best to look at the latest 12c PSU, which contains a fix (unverified).
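Something like this does the trick (a sketch, using the PDB name from the example above); OPEN FORCE closes and re-opens the pluggable database, clearing the restricted state:

-- from the root container (CDB$ROOT)
alter pluggable database pdb12 open force;

-- back in the PDB, logins should now report ALLOWED again
alter session set container = pdb12;
select logins from v$instance;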
I had the pleasure of being a guest of the Norwegian User Group for the second year in a row on their famous cruise conference at the start of April. For a relatively ‘small’ conference (around 300-400 people), the array of quality speakers the group manage to get is always astounding. Martin Nash, Cary Millsap, Tim Hall, Bryn Llewellyn, Martin Bach, Jeff Smith, Doug Burns to name just a few. I gave a few talks which (seemed :-)) to be well received by the audience.
The conference runs like clockwork, and Oslo is a beautiful city to spend a day wandering around sampling the cuisine. The only real challenge is the 45min timeslot for papers, whereas most places I’ve spoken at allow 50-60 mins, so you’re presented with the tough choice of cramming your existing content into 45 mins, or deciding what must be pared out.
Here’s a pet peeve of mine: Customers who don’t read the error messages. The usual symptom is a belief that there is just one error: “Doesn’t work”, and that all forms of “doesn’t work” are the same. So if you tried something, got an error, changed something and are still getting an error, nothing changed.
I hope everyone who reads this blog understands why this behavior makes any troubleshooting nearly impossible. So I won’t bother to explain why I find this so annoying and so self-defeating. Instead, I’ll explain what we, as developers, can do to improve the situation a bit. (OMG, did I just refer to myself as a developer? I do write code that is then used by customers, so I may as well take responsibility for it.)
Here’s what I see as the main reasons people don’t read error messages: