Which piece of code will be faster (clue – the table in question has no indexes):
Option 1 – pure SQL
update join1 set data = data||'#'
where  key_1=500 and key_2=23851 and key_3=57012
and    key_4=521 and key_6=1 and key_7=23352;
Option 2 – a silly PL/SQL row by row approach:
declare
    type rowid_type is table of urowid index by binary_integer;
    tab_rowid        rowid_type;
    lv_rows_updated  number := 0;

    cursor my_cursor is
        select rowid rid
        from   join1
        where  key_1=500 and key_2=23851 and key_3=57012
        and    key_4=521 and key_6=1 and key_7=23352;
begin
    open my_cursor;
    -- We know that the number of rows to be updated is very small
    fetch my_cursor bulk collect into tab_rowid limit 10000;

    forall lv_row in tab_rowid.first .. tab_rowid.last
        update join1 set data = data||'#'
        where rowid = tab_rowid(lv_row);

    lv_rows_updated := sql%rowcount;
    close my_cursor;
end;
/
It’s a trick question, of course, and although the automatic response from any DBA-type is likely to be “the SQL”, the correct answer is (as so often) “it depends”.
This question appeared as a problem on the OTN database forum a few days ago. In its original form it asked why a select statement should be much faster than a select for update or an update – even though the volume identified and updated was very small (just one row in 100M). The note then went on to show that using PL/SQL to select the rowids of the target rows and then doing the bulk update by rowid was faster than the basic SQL update. The answer didn’t spring to mind immediately; but fortunately someone asked for some run-time statistics (v$sesstat) and the supplied statistics told me what was going on.
Conveniently the OP gave us the code to recreate the test case – all 100M rows of it; I cut this back to 16M rows (ca. 1.5GB of disc space), and then ran the tests with my db_cache_size set to 256MB (another clue). I got similar results to the OP – not so dramatic, but the PL/SQL ran faster than the SQL and the difference was due to an obvious change in the CPU usage.
If you haven’t guessed from the clue in the 256MB db_cache_size (which means the table is more than five times the size of the cache), the answer is “serial direct path reads”. For a sufficiently large table (and that’s not easy to define – start here and follow a few links) it’s fairly common knowledge that from 11g a tablescan can use a serial direct path read, and that’s what the PL/SQL was doing to select the required rowids. However, here’s a detail that’s not often mentioned: an update has to take place in public where everyone can see it, so when Oracle executed the simple SQL update or select for update statement it had to scan the table through the buffer cache. Pulling all those blocks into the buffer cache, grabbing latches to link them to the right cache buffers chains, pinning them, then unpinning them uses a lot of CPU – which isn’t needed for the direct path read. The PL/SQL with its pure select used far less CPU than the basic SQL with its update/select for update, and because the OP had a very high-powered machine with plenty of CPU and loads of (disc-)caching effects all over the place the difference in CPU time was extremely visible as a fraction of the total DB time.
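If you want to see which path a test of your own is taking, the session statistics that gave the game away can be queried directly – a sketch (run it before and after the test statement and diff the values; statistic names can vary slightly between versions):

```sql
select sn.name, st.value
from   v$statname sn
join   v$mystat   st on st.statistic# = sn.statistic#
where  sn.name in (
         'physical reads direct',      -- climbs for a direct path scan
         'consistent gets',            -- climbs for a buffered scan
         'CPU used by this session'
       );
```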
This was, inevitably, a very special case where a little detail became a significant fraction of the workload. The OP scanned 100M rows to update 1 row (in 7 – 13 seconds!). This doesn’t sound like a strategy you would normally want to adopt for frequent use; and for occasional use we might be happy to use the slower (13 second) approach to avoid the coding requirement of the fast (7 second) solution.
It’s worth pointing out that the PL/SQL strategy is not safe. In the few seconds between the select statement starting and the row being identified and updated by rowid it’s possible that another session could have updated (or deleted) the row. In the former case the update statement is now updating a row which doesn’t match the specification; in the latter case the code will raise an exception.
We can make the PL/SQL safer by including the original predicates in the update statement – but that still leaves the question of what the code should do if the select statement finds a row and the update fails to update it. Should it, perhaps, assume that there is still a row in the table that needs an update and re-run (using up all the time you saved by adopting a PL/SQL solution)?
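For reference, the safer form simply repeats the predicates alongside the rowid – a sketch based on the OP’s FORALL (the re-run decision is left open):

```sql
forall lv_row in tab_rowid.first .. tab_rowid.last
    update join1 set data = data||'#'
    where  rowid = tab_rowid(lv_row)
    and    key_1=500 and key_2=23851 and key_3=57012
    and    key_4=521 and key_6=1 and key_7=23352;
-- sql%rowcount may now be smaller than the number of rowids fetched;
-- each "missing" row was changed or deleted by another session in the gap.
```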
A recent question on the OTN Forums Reg: Index – Gathering Statistics vs. Rebuild got me thinking on a scenario not unlike the one raised in the question where a newly populated index might immediately benefit from a coalesce. I’ve previously discussed some of the pertinent concepts such as how index rebuilds can make indexes bigger, not smaller […]
In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]
Posted by Pete On 06/07/15 At 09:40 AM
Using backupsets for Transportable Tablespaces reduces the volume of data you need to ship to the destination database. See how that works:
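In outline, the source-side commands are (the same commands appear in the practical example below):

```sql
RMAN> alter tablespace tbs1 read only;

RMAN> backup for transport
        format '/tmp/stage/tbs1.bkset'
        datapump format '/tmp/stage/tbs1.dmp'
        tablespace tbs1;
```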
The tablespace is made READ ONLY before the new BACKUP FOR TRANSPORT command is done. At this point, you can also convert the platform and the endian format if required. Then on the destination site:
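In outline (again, exactly as used in the example below):

```sql
RMAN> restore foreign tablespace tbs1
        format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
        from backupset '/tmp/stage/tbs1.bkset'
        dump file from backupset '/tmp/stage/tbs1.dmp';
```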
The FOREIGN keyword indicates that this doesn’t use a backup taken at the destination. Practical example:
[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 6 08:36:30 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC     CON_ID
---------- ------------------------------ --- --- --- --- ----------
         0 SYSTEM                         YES NO  YES              0
         1 SYSAUX                         YES NO  YES              0
         2 UNDOTBS1                       YES NO  YES              0
         3 TEMP                           NO  NO  YES              0
         4 USERS                          YES NO  YES              0
         5 TBS1                           YES NO  YES              0

6 rows selected.

SQL> select table_name,owner from dba_tables where tablespace_name='TBS1';

TABLE_NAME
--------------------
OWNER
--------------------
T
ADAM

SQL> alter tablespace tbs1 read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:37:28 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: PRIMA (DBID=2113606181)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    244      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/prima/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> host 'mkdir /tmp/stage';

host command complete

RMAN> configure device type disk backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
      datapump format '/tmp/stage/tbs1.dmp'
      tablespace tbs1;

Starting backup at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully
Performing export of metadata for specified tablespaces...
EXPDP> Starting "SYS"."TRANSPORT_EXP_PRIMA_yvym":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TRANSPORT_EXP_PRIMA_yvym is:
EXPDP>   /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TBS1:
EXPDP>   /u01/app/oracle/oradata/prima/tbs1.dbf
EXPDP> Job "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully completed at Mon Jul 6 08:39:50 2015 elapsed 0 00:00:26
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/prima/tbs1.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.bkset tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.dmp tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 06-JUL-15

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> exit

Recovery Manager complete.
[oracle@uhesse ~]$ ls -rtl /tmp/stage
total 5608
-rw-r-----. 1 oracle oinstall 5578752 Jul  6 08:39 tbs1.bkset
-rw-r-----. 1 oracle oinstall  163840 Jul  6 08:39 tbs1.dmp
[oracle@uhesse ~]$ rman target sys/oracle@sekunda

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:40:49 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SEKUNDA (DBID=3356258651)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> restore foreign tablespace tbs1
      format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
      from backupset '/tmp/stage/tbs1.bkset'
      dump file from backupset '/tmp/stage/tbs1.dmp';

Starting restore at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TBS1
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.bkset
channel ORA_DISK_1: restoring foreign file 5 to /u01/app/oracle/oradata/sekunda/tbs1.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.bkset
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_SEKUNDA_85631.dmp
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.dmp
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.dmp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...

IMPDP> Master table "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_SEKUNDA_ppol":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully completed at Mon Jul 6 08:42:51 2015 elapsed 0 00:00:20
Import completed

Finished restore at 06-JUL-15

RMAN> report schema;

Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/sekunda/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> alter tablespace tbs1 read write;
Statement processed

RMAN> select count(*) from adam.t;

  COUNT(*)
----------
   1000000
Hope you find it useful :-)
A couple of weeks ago I wrote a post about doing a Cloud Control 12cR5 installation and said I would be testing the upgrade from 12cR4. I’ve now done that.
The upgrade documentation is quite extensive and the prerequisites are going to be different depending on the database and cloud control versions you are starting with, so this is in no way a “recommended” way to do the upgrade. Each one will need to be approached on a case-by-case basis. It’s just meant to give a flavour of what you have to do.
Suffice to say, it worked fine for me.
We’ve all heard it before. Computers are totally logical; they do exactly what they are told. After all, Central Processing Units (CPUs) are built out of fundamental units called Logic Gates. With the possible exception of when a stray cosmic ray gets lucky, the circuits in a computer chip and memory act in a totally logical and predictable manner.
And of course, anything built on top of computers will be utterly logical as well. All those robots that companies are designing & building to clean our houses, do our manual labour and fight our wars are going to be logical, follow the rules given and be sensible.
But they are not, because software is not logical. Often, it is infuriatingly illogical and confusing.
This XKCD cartoon very much matches some recent experiences I have had with consumer software:
I’d say that, unless an algorithm is about as simple as a Centigrade-to-Fahrenheit conversion program, it will have a bug or will mess up with out-of-range values. Just think back to when you wrote your Centigrade-to-Fahrenheit program (we all have, haven’t we?) back at school or on your home computer or in your first week on the college course. What happened if you input a temperature of -1000C, an impossible temperature? I bet it either fell over or gave a just-as-impossible Fahrenheit value. Logical but stupid.
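For what it’s worth, the check most of our school programs missed is a one-liner – a sketch in PL/SQL (the names are mine, not from any real system):

```sql
create or replace function c_to_f(p_celsius in number) return number is
  c_absolute_zero constant number := -273.15;
begin
  -- reject physically impossible input instead of converting it
  if p_celsius < c_absolute_zero then
    raise_application_error(-20001,
      'Temperature ' || p_celsius || 'C is below absolute zero');
  end if;
  return p_celsius * 9 / 5 + 32;
end;
/
-- c_to_f(100) returns 212; c_to_f(-1000) raises ORA-20001 instead of
-- returning a just-as-impossible Fahrenheit value.
```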
I worked on a financial system a few years back that, as one very small but significant part of what it did, showed you your average spend on things over 3 years. It took several weeks to explain to the program manager and his minions that their averaging code was wrong. Utterly, hopelessly and tragically wrong. First, it calculated and displayed the value to several decimal places – to thousandths of a penny. Secondly, it did not take into account the actual period over which you had spent your money. If you had opened your account 1 year ago, it still calculated the average over 3 years. As for taking into account months, weeks and days of the year, don’t make me laugh. You might be able to forgive this except that the same team had also written the code to archive off data once it was over 3 years old – in whole years. So there would only be 3 years’ data for 1 day.
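The averaging the team should have done is only a few lines: divide by the period the account actually existed, capped at the reporting window, and round to the penny. A hypothetical sketch (all names invented):

```sql
create or replace function avg_annual_spend(
  p_total_spend in number,
  p_opened      in date,
  p_today       in date
) return number is
  l_days_open   number := greatest(p_today - p_opened, 1);  -- at least one day
  l_period_days number := least(l_days_open, 3 * 365);      -- cap at the window
begin
  return round(p_total_spend * 365 / l_period_days, 2);     -- pence, not thousandths
end;
/
```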
We have all experienced endless issues with computers or peripherals that will work one day, not work properly the next and then go back to working. Firmware and Operating Systems are just software really, with the same flaws as the stuff we write and fix in our working lives day after day. There will be a logical reason buried deep somewhere why the printer will not work on Thursdays, but it won’t be a sensible reason.
All the software out there is more or less illogical and broken. The less broken gets used and we learn its idiocies. The worst gets canned or labelled “Windows 8” and forced on us.
I know some people worry about the inexorable rise of the machines, Terminator Style maybe, or perhaps benign but a lot smarter than us (as they are logical and compute really, really fast) and we become their pets. But I am not concerned. The idiot humans who write the software will mess it up massively. Oh, some of these things will do terrible harm but they will not take over – they will run out of bullets or power or stop working on Thursday. Not until we can build the first computer that is smart enough to write sensible software itself and immediately replaces itself with something that CAN write a Centigrade-to-Fahrenheit conversion program that does not mess up. It will then start coding like a human developer with 1 night to get the system live, a stack of angry managers and an endless supply of Jack Daniels & coffee – only with no errors. With luck it will very soon write the perfect computer game and distract itself long enough for us to turn the damned thing off.
Make a Sad Face..:-( I seem to have missed my blog’s tenth birthday, which happened on the 20th September 2014. My last post last year and until very recently was on July 23rd 2014; so actually it’s been a big gap....[Read More]
Posted by Pete On 03/07/15 At 11:28 AM
You’ve been google-ing and you’ve seen articles (for example) like http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html and you’re pretty pumped about using SECUREFILE features. You’ve got lots of existing CLOB data and moving to SECUREFILE is going to make your life much easier. You’re also excited about the fact that none of your code has to change – you just change the existing CLOB columns to be stored as SECUREFILE and you’ll have set yourself up for all sorts of feature goodness !
But how do we do it in a continuous delivery (CD) model? Because moving CLOBs sounds like downtime, doesn’t it?
And by default, that’s exactly what it will be. Let’s explore that with a faux application that uses CLOBs.
We’ll create an “application” which consists of a table, a sequence, a trigger* to populate the key from the sequence, and a procedure which is our application interface.
(*- yes, in 12c we can skip the trigger and just use the new DEFAULT facilities available, but I’m keeping the trigger because that’s a very common occurrence in the industry as people move to 12c)
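For completeness, the trigger-free 12c version might look like this (a sketch; an identity column would do equally well):

```sql
create table T1 (
  x int default t1_seq.nextval,   -- 12c: a sequence can be a column default
  y clob,
  t timestamp default systimestamp,
  constraint T1_PK primary key (x)
);
```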
SQL> create table T1 (
       x int,
       y clob,
       t timestamp default systimestamp,
       constraint T1_PK primary key (x)
     );

Table created.

SQL> create sequence T1_SEQ;

Sequence created.

SQL> create or replace trigger T1_TRG
  2  before insert on T1
  3  for each row
  4  begin
  5      :new.x := t1_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> create or replace
  2  procedure T1_LOGGER is
  3    l_long varchar2(32767) := rpad('x',32000,'x');
  4  begin
  5    for i in 1 .. 1000 loop
  6      insert into t1 (y ) values (l_long);
  7      commit;
  8      dbms_lock.sleep(0.1);
  9    end loop;
 10  end;
 11  /

Procedure created.
And there we have our application. The T1_LOGGER procedure will insert approximately 10 rows per second, each one containing a 32-kilobyte CLOB. We’ll start our application running:
SQL> --
SQL> -- This represents your app, busily inserting clobs
SQL> --
SQL> exec t1_logger;
And now here is where our continuous delivery comes in. I want to change to using SECUREFILE (and backdate the existing CLOBs) without interrupting user services. So I log in to a second session and issue:
SQL> alter table T1 move lob ( y) store as securefile;

Table altered.
Well that all looks fine, until I return to the session where my application is running:
SQL> exec t1_logger;
BEGIN t1_logger; END;

*
ERROR at line 1:
ORA-01502: index 'SCOTT.T1_PK' or partition of such index is in unusable state
ORA-06512: at "SCOTT.T1_LOGGER", line 5
ORA-06512: at line 1
Ker-splat! I’ve broken my app, and it’s dead until I rebuild that index, which could be hours. Time to update my CV :-) And what’s worse is, even if our application had not had a unique index, we would still have created all sorts of problems. Notice that our “alter table move” took 46 seconds. If we look at the insertion timestamps for our lobs, we see:
SQL> select max(t) from t1;

MAX(T)
---------------------------------------------------------------------------
03-JUL-15 10.26.04.644000 AM

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
03-JUL-15 10.26.51.748000 AM +08:00
We ended up with a “freeze” of 46 seconds in our application whilst the table was locked as it was moved. If this had been (say) a service-based interface, there’s a very good chance that our services would have reported timeouts and all sorts of other errors back to the calling environment.
So we take a squizz around the Oracle documentation and find this:
The “ONLINE” option looks promising. Let’s give that a whirl…
SQL> alter table T1 move lob ( y) store as securefile online;
alter table T1 move lob ( y) store as securefile online
*
ERROR at line 1:
ORA-00906: missing left parenthesis

SQL> alter table T1 move online lob ( y) store as securefile;
alter table T1 move online lob ( y) store as securefile
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option
We don’t get any joy there, since a closer look at the documentation tells us that the ONLINE option is for index-organised tables, so it doesn’t apply here.
So it seems like we’re stuck… Enter DBMS_REDEFINITION. This is a very cool utility for redefining tables without making them inaccessible to calling applications. Let’s go through the same process, this time using DBMS_REDEFINITION. We will build and run our application from scratch:
SQL> create sequence T1_SEQ;

Sequence created.

SQL> create or replace trigger T1_TRG
  2  before insert on T1
  3  for each row
  4  begin
  5      :new.x := t1_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> create or replace
  2  procedure T1_logger is
  3    l_long varchar2(32767) := rpad('x',32000,'x');
  4  begin
  5    for i in 1 .. 1000 loop
  6      insert into t1 (y ) values (l_long);
  7      commit;
  8      dbms_lock.sleep(0.1);
  9    end loop;
 10  end;
 11  /

Procedure created.

SQL> --
SQL> -- This represents your app, busily inserting clobs
SQL> --
SQL> exec t1_logger;
Now we create a second session, whilst our application is running, and redefine our table online, using a template table to inform the database what structure we want.
SQL> create table T2 (
       x int,
       y clob,
       t timestamp default systimestamp
     )
     lob ( y) store as securefile;

Table created.

SQL> EXEC DBMS_REDEFINITION.can_redef_table(user, 'T1');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.start_redef_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    l_num_errors PLS_INTEGER;
  3  BEGIN
  4    DBMS_REDEFINITION.copy_table_dependents(
  5      uname            => user,
  6      orig_table       => 'T1',
  7      int_table        => 'T2',
  8      copy_indexes     => DBMS_REDEFINITION.cons_orig_params,
  9      copy_triggers    => TRUE,  -- Default
 10      copy_constraints => TRUE,  -- Default
 11      copy_privileges  => TRUE,  -- Default
 12      ignore_errors    => FALSE, -- Default
 13      num_errors       => l_num_errors);
 14    DBMS_OUTPUT.put_line('l_num_errors=' || l_num_errors);
 15  END;
 16  /
l_num_errors=0

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.sync_interim_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.finish_redef_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.
And we’re done. We flick back to our original application session and check for errors:
SQL> exec t1_logger;

PL/SQL procedure successfully completed.
It ran to completion without any problems. We have moved to SECUREFILE without any disruption to service. Very cool!
But although nothing crashed, maybe there was a blockage of service to the database? We can check that by looking at the maximum time between insertions in our application.
SQL> select max(delta)
  2  from
  3  ( select t - lag(t) over ( order by t) as delta from t1 );

MAX(DELTA)
---------------------------------------------------------------------------
+000000000 00:00:00.250000
We were inserting rows every 0.10 seconds, so 0.25 seconds represents a tiny overhead that is unlikely to be noticed.
So when you need to make small structural changes to your database tables, have a think about DBMS_REDEFINITION. In a future blog post, I’ll talk about why you might want to redefine your tables even when you are not changing their structure at all !
Today is another anniversary, but this time it’s the website, which is 15 years old.
OK. This is a bit of a cheat because:
Anyway, July 3rd is from now on the official birthday of the website. Makes it easy to remember, because it’s the day after my birthday.
PS. For those that are interested, the blog was 10 years old last month. I do know the exact date for that because the posts are dated and you can read the first post.
“Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time”
Perhaps a simpler definition is “CD is currently the cool thing to do”.
Sarcasm aside, there’s a lot of common sense in being able to rapidly push out software changes in a safe manner.
Many years ago, I was a developer at a company that was drowning in bureaucracy, and I was tasked with trying to solve a suite of performance problems with part of the core business application. The first thing I did (and I’d recommend this to anyone trying to assist in solving performance problems) was to visit the end-users who actually use the software. (It’s too easy to jump in and start tracing SQL statements etc…but the pain points you are trying to solve are the customer’s pain points, not the servers)
She sat down and ran me through the litany of performance problems she was having. I tried to set some realistic expectations for her about when we could solve them, but I also asked:
“If there is one thing that is absolutely top of the list, what would it be, and I’ll focus on that”
Interestingly, when phrased that way, she pondered for a moment and told me it was not a performance issue. Although the performance was terrible, she (embarrassingly for our IT dept) had re-structured her daily activities to accommodate the slow parts of the system. (“I run the daily report whilst I’m at morning tea, and its usually done by time I get back”). No, she had a much simpler request:
“We have smaller screens in the field office, so you have to scroll the screen every time to get to the ‘Submit’ button. Can you move it to the top of screen?”
“Leave it with me!” I exclaimed. “This is simple, and we’ll get it to you asap”
So I was feeling pretty good about myself – I’d identified the important performance issues, bought myself some time to work on them, and had a simple fix to appease the customer in the interim. I got back to the office, checked out the source code, move the button a few inches higher and voila! I’m done.
….Or so I thought.
I won’t share the gory details, but it took over six months to get that change through all of the processes, environments, approvals, release cycles, etc. and finally deliver it into Production. I was so ashamed that I’d let this customer down so badly. And it strips away at your job satisfaction as a developer – nothing makes you feel more foolish than sitting in front of a “Change Approval Committee” meeting, justifying the business benefit of a change you coded 6 months ago, where a button was moved. A total waste of everyone’s time. But … after all that red tape, it had finally gone in.
My manager called me into the office after deployment:
“Well, your change is in! You should phone the customer, tell her we’ve done her changes, and make sure she’s happy with it”
Can you imagine it? “Hi, remember me? It’s been six months… that button is 2 inches higher. Are you impressed?”
Anyway… enough back story. This sets the scene for my next blog post: a simple example of CD in the database.