Oakies Blog Aggregator

Oracle Terminated Connection Timeout

I have recently come across situations on two different PeopleSoft sites where ad-hoc queries continue to run on the Oracle database server long after the Application Server process, which is the Oracle session client, has terminated. Often, these queries perform poorly because they are poorly coded, but that is another story. To help guard against this situation, Oracle has a mechanism called Terminated Connection Timeout (also known as Dead Connection Detection (DCD) when it was introduced in Net8).

Oracle Support Note 615782.1 explains the mechanism. "DCD is initiated on the server when a connection is established. At this time SQL*Net reads the SQL*Net parameter files and sets a timer to generate an alarm. The timer interval is set by providing a non-zero value in minutes for the SQLNET.EXPIRE_TIME parameter in the sqlnet.ora file on the database server side. When the timer expires, SQL*Net on the server sends a 'probe' packet, essentially an empty SQL*Net packet, to the client. If the client end of the connection is still active, the probe is discarded, and the timer mechanism is reset. If the client has terminated abnormally, the server will receive an error from the send call issued for the probe, and SQL*Net on the server will signal the operating system to release the connection's resources."

Thus, if a PeopleSoft operator initiates an ad-hoc query that runs on the PSQRYSRV server for longer than the ICQuery service time-out (default 1200 seconds), then Tuxedo will terminate and restart the busy server process. However, the query will continue to run on the database server until the current fetch operation returns. In the case of a query performing a large sort or hash operation, it might be a long time before the first row is returned. All the while, the query continues to consume resources on the database.
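As a minimal illustration (this query is mine, not from the original post), something along these lines against V$SESSION can show database sessions opened by PSQRYSRV processes that are still busy well beyond the ICQuery time-out, whether or not the client process is still alive. The program-name pattern and the 1200-second threshold are assumptions that would need adjusting for a particular site:

    -- Sessions registered by PSQRYSRV that have been active for longer than
    -- the default ICQuery service time-out of 1200 seconds.
    -- The LIKE pattern is an assumption; check how PSQRYSRV appears in
    -- V$SESSION.PROGRAM/MODULE on your own system.
    SELECT sid, serial#, status, program, module, last_call_et
    FROM   v$session
    WHERE  program LIKE '%PSQRYSRV%'
    AND    status = 'ACTIVE'
    AND    last_call_et > 1200
    ORDER  BY last_call_et DESC;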

PeopleTools 8.44 also introduced the ability to kill, via the Query Monitor, a query that has reached a maximum run time. This is one of the functions of the PSMONITORSRV server process (see Oracle Support Note 624339.1). The maximum run time is specified in a permission list (see Security Administration PeopleBook, Permission List Query Profile), and the ability to kill queries that have timed out can then be enabled or disabled system-wide (see PeopleSoft Query PeopleBook, Query Administration). It will kill the application server process that submitted the query, but, for the same reasons, the query may continue to run on the database.

Thus, setting Terminated Connection Timeout is not merely a good idea for a PeopleSoft system running on Oracle; it is effectively mandatory. Otherwise, some PeopleSoft functionality simply won't work as intended.

What is an appropriate value for SQLNET.EXPIRE_TIME?

The value of this parameter is the time between successive SQL*Net probes sent by the Oracle shadow server process to the client. Setting it is a balance between the maximum time that a query can be left to consume resources after a client process terminates and the additional overhead of every server process sending a probe to its client every few minutes.

The SQL*Net documents often talk about the additional network traffic generated by DCD. This was a consideration in the past for client-server applications that ran across a wide area network. However, it is rarely a consideration in relatively modern systems such as PeopleTools 8, where the database connections are made by the Application Server and Process Scheduler, which are usually physically close to the database server.

The time-out can be set independently of any of the other time-outs for the Application Server and Web Server. Documents on Metalink often suggest 5 or 10 minutes, and I don't think that is unreasonable.
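For example, a ten-minute probe interval would be set as follows in the sqlnet.ora on the database server (the value is just the kind of figure discussed above, not an Oracle recommendation):

    # $ORACLE_HOME/network/admin/sqlnet.ora on the database server
    # Send a DCD probe to each client every 10 minutes
    SQLNET.EXPIRE_TIME = 10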

My thanks to Colin Kilpatrick who prompted me to look at this again.

Merge counter

A package to capture and report separate INSERT and UPDATE rowcounts from a MERGE statement. February 2004

my week in venn diagram form ...

(from thisisindexed.com) My take on this one has nothing to do with Bernie and everything to do with recent events on the big project. If I could add a small red 'x' in the middle of 'Doom' with an arrow and a 'You are Here' message, it would be perfect ...

SIOUG 2009 conference

I'll be presenting at the SIOUG 2009 conference that will take place in Portoroz, from 20th to 23rd September 2009.

The title of the presentation will be "CBO fundamentals: Understanding the different modes of System Statistics" and will be based on the series already available here on my blog.

So if you're interested in the details of the cost calculations performed by the recent releases of the Cost Based Optimizer and would like to hear about this in a different style, this presentation might be something for you.

If you happen to be at the conference I'm looking forward to meeting you there.

Thanks to OakTable fellow Joze Senegacnik for inviting me to the conference.

Fundamentals of Software Performance Quick Reference Card

I just posted "Fundamentals of Software Performance Quick Reference Card" at the Method R company website:

This two-page quick reference card written by Cary Millsap sums up computer software performance the Method R way. The first page lists definitions of the terms you need to know: efficiency, knee, load, response time, and so on. The second page lists ten principles that are vital to your ability to think clearly about software performance. This document contains meaningful insight in a format that's compact enough to hang on your wall.

It's free, and there's no sign-up required. I hope you will enjoy it.

Oracle 10g Statistics History Retention in PeopleSoft

I have been working on a system where many Application Engine programs are running throughout the day, and are frequently collecting Optimizer statistics with the %UpdateStats macro on many working storage tables. Concurrent calls to dbms_stats are typical.

There are two new behaviours in the Oracle 10g RDBMS that, in extreme cases and in combination with a system that calls dbms_stats very frequently, can create a significant performance overhead.

From Oracle 10g, histograms may, by default, be collected automatically. That means that rows are concurrently deleted from and inserted into histgrm$ and hist_head$, leading to contention and consistent read overhead.

  • Also from Oracle 10g, every time you collect statistics on a table, the old statistics are retained in the SYS.WRI$_OPTSTAT%HISTORY tables. If histograms have previously been collected, these are also copied. DBMS_STATS therefore has the additional overhead of writing this history. I found in excess of 10,000 versions of previous statistics for some tables, because the batch processes had updated statistics on those working storage tables that many times (see the query after this list for one way to check).
  • dbms_stats also appears to be responsible for purging history older than the retention limit. The default retention period is 31 days. I have seen concurrent calls to dbms_stats blocked on row-level locks on the statistics history tables. For me, this occurred 31 days after the system went live with a significantly increased volume.
  • The current history retention period can be checked as follows:

    SELECT dbms_stats.get_stats_history_retention FROM dual;

    GET_STATS_HISTORY_RETENTION
    ---------------------------
                             31
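One way to see how much history is actually being retained (this query is my illustration, not part of the original note, and the SYSADM owner is an assumption based on the usual PeopleSoft schema name) is to count the rows in DBA_TAB_STATS_HISTORY:

    -- Count the retained statistics versions per table for the PeopleSoft schema.
    -- 'SYSADM' is assumed; substitute the schema owner used on your system.
    SELECT table_name, COUNT(*) versions
    FROM   dba_tab_stats_history
    WHERE  owner = 'SYSADM'
    GROUP  BY table_name
    ORDER  BY COUNT(*) DESC;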

Statistics history was designed to work in conjunction with schema-wide statistics jobs that only refresh stale statistics. There is an option on gather_schema_stats to collect statistics only on tables where the current statistics are stale. However, there is no such option on gather_table_stats: if you have decided to call this procedure for a particular table, then it is assumed you know you need to refresh the statistics. By calling dbms_stats from a batch program, you can therefore end up calling it much more frequently than is really necessary.
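For comparison, a stale-only schema-wide collection looks something like the following sketch (the SYSADM schema name is again an assumption):

    -- Refresh statistics only on tables whose statistics Oracle considers stale.
    -- There is no equivalent 'GATHER STALE' option on gather_table_stats.
    EXECUTE dbms_stats.gather_schema_stats(ownname => 'SYSADM', options => 'GATHER STALE');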

Recommendations

  • Disable statistics history by using dbms_stats.alter_stats_history_retention to set the retention period to zero. Unfortunately this can only be set at database level. The statistics history is there in case you want to revert to a previous version of the statistics should a new set of statistics produce a problem, but it is only used rarely, and I think this is a necessary sacrifice.
    EXECUTE dbms_stats.alter_stats_history_retention(retention=>0);
  • Use Oracle Optimizer Dynamic Sampling. However, I suggest increasing the level from the default of 2 to 4 to increase the situations in which it is used (see the example after this list).
  • Introduce the new version of the PL/SQL wrapper package for dbms_stats so that you can specify the records for which statistics will be explicitly collected, and whether histograms are to be collected. Thus you can reduce the number of calls to dbms_stats.
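To illustrate the dynamic sampling suggestion above, the level can be raised for the whole instance or for a single session; this is just a sketch of the syntax, not a blanket recommendation:

    -- Raise the dynamic sampling level from the default of 2 to 4 to widen
    -- the situations in which it is used (assumes an spfile for SCOPE=BOTH).
    ALTER SYSTEM SET optimizer_dynamic_sampling = 4 SCOPE = BOTH;
    -- ...or for the current session only:
    ALTER SESSION SET optimizer_dynamic_sampling = 4;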

If you have allowed the statistics history to grow before you disable history retention, then you might like to read John Hallas' posting on Purging statistics from the SYSAUX tablespace.

Dynamic sampling and set current_schema anomaly

Sometimes when I'm asked to check a particular SQL statement in a production-like environment, at first I only get rather limited access, e.g. a read-only user that has the required privileges to run the SQL statement but doesn't own the objects.

If I'm now supposed to analyse a rather complex SQL statement with dozens of table and/or view references, I like to use the ALTER SESSION SET CURRENT_SCHEMA command to switch any unqualified object references to the schema set via CURRENT_SCHEMA, rather than editing the SQL to qualify all the object references.

This basically works just fine, but I recently have come across an interesting anomaly in 10.2.0.4: If the optimizer uses dynamic sampling as part of the parsing then it doesn't do so anymore if the CURRENT_SCHEMA has been altered to a non-default schema.

The following simple test case demonstrates the issue in 10.2.0.4 Win32:

SQL>
SQL> drop user dynsamp_test cascade;
drop user dynsamp_test cascade
*
ERROR at line 1:
ORA-01918: user 'DYNSAMP_TEST' does not exist

SQL>
SQL> create user dynsamp_test identified by dynsamp_test;

User created.

SQL>
SQL> grant create session, alter session to dynsamp_test;

Grant succeeded.

SQL>
SQL> drop table dynsamp_test_table purge;
drop table dynsamp_test_table purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL>
SQL> create table dynsamp_test_table
2 as
3 select
4 *
5 from
6 all_objects
7 where
8 rownum <= 1000;

Table created.

SQL>
SQL> grant select on dynsamp_test_table to dynsamp_test;

Grant succeeded.

SQL>
SQL> -- this is the plan we get as schema owner
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- It obviously uses dynamic sampling as instructed
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

Note
-----
- dynamic sampling used for this statement

17 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- We get the same from different schema with qualified reference (or valid synonym)
SQL> connect dynsamp_test/dynsamp_test@orcl
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 cbo_test.dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- Still uses dynamic sampling
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

Note
-----
- dynamic sampling used for this statement

17 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- Now switch the current_schema
SQL> alter session set current_schema = cbo_test;

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- No more dynamic sampling!
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 1280 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 10 | 1280 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

13 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- Check the 10053 trace file
SQL> -- It will show that the dynamic sampling query fails
SQL> -- and therefore no dynamic sampling will be performed
SQL> -- presumably because the current_schema is not obeyed
SQL> -- by the recursively executed query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10053';

Session altered.

SQL>
SQL> alter session set events '10053 trace name context forever, level 1';

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> alter session set events '10053 trace name context off';

Session altered.

SQL> -- Check the 10046 trace file
SQL> -- which confirms an ORA-00942 while parsing the dynamic sampling query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10046';

Session altered.

SQL>
SQL> alter session set sql_trace = true;

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> alter session set sql_trace = false;

Session altered.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> spool off

So, as you can see, the simple query uses dynamic sampling as instructed (and in 10.2 would use dynamic sampling by default anyway, since the table has no statistics), but when using the CURRENT_SCHEMA trick and then an unqualified reference, dynamic sampling is not performed and the estimate is based on hardcoded defaults.

Checking the 10053 optimizer trace file confirms that the dynamic sampling is attempted, but the recursive query fails, and the 10046 SQL trace shows that an "ORA-00942 table or view does not exist" error is encountered while parsing the recursive dynamic sampling query.

So be careful when using this (rather unusual, I have to admit) setup: the execution plans might be different if you're using a non-default CURRENT_SCHEMA and dynamic sampling is involved.

The issue seems to be fixed in 11.1.0.7 (dynamic sampling is performed as expected when using a non-default CURRENT_SCHEMA); however, I couldn't identify a corresponding bug on Metalink. If anyone comes across a bug number or document ID, that would be helpful.

An Essay on Science

Richard Feynman defined science as "the belief in the ignorance of experts." Science begins by questioning established ideas. ...Even those ideas promoted by so-called experts.

The value of science that's obvious to everybody is the chance you might discover some valuable truth that nobody else has discovered before. That's the glamorous idea that might motivate you to begin the hard work that science sometimes requires. Science is also valuable to you when you learn that an established idea, no matter how much you may not like it, really is true after all. That second value of science is not as glamorous, but it's just as important. My little prayer with respect to that possibility is, "If an idea I believe is wrong, please let me find out before anybody else does."

Everyone can do science. Not just "scientists"; all of us. But you need to do science "right," or it's not science. Do it right, and you accumulate a little bit of truth. Do it wrong, and you've wasted your time, or worse, you've doomed yourself to waste more of your time in the future, too.

The difference between "right" and "wrong" in science is not some snooty, bureaucratic concept. You don't need a license or a blessing to do science right. You just need to ensure that the cause-effect relationships you choose to believe are actually correct. One of the rules for doing science right is that you measure instead of just asserting your opinion.

Different people have different thresholds of skepticism. Some people believe new ideas, whether they're true or false, with very little persuasion. The people who are persuaded easily to believe false things cannot contribute much useful new knowledge to their communities (irrespective of how much they might publish).

People at the opposite end of the spectrum have very strict standards for what they accept as truth. They're careful about what rows they insert and commit into their minds. There aren't as many of those people, but they're more interesting, with respect to science, because they're the people who can contribute new knowledge to the community.

A lot of what we do in the Oracle world comes down to a person demonstrating, say, in sqlplus, that some certain cause produces some certain effect in some certain version of the database on some certain operating system, and so on. Then the next step is when you have to look at that result and decide for yourself (or perhaps with someone's help), how relevant that result is for you.

Innumerable Oracle debates funnel into the argument that, "Yes, I can see plainly that this situation can happen, but that situation will probably never happen for me." This is what happens, for example, when the doubter believes the prover is using an example that is too contrived to be realistic, or when the doubter's context is different from the prover's. ...Like when the doubter is running a data warehouse, but the prover talks only about transaction processing.

That's where another level of value begins, and it's another place where science can help. It's where the issue becomes proving how relevant a given proposal really is for a given circumstance. One of the nice things about software (and Oracle Database software in particular) is that it's usually easy to write code that will tell you how often something happens, or how long it takes when it does. With software, I don't have to guess. I can measure and know. Software is unusual in the world in that it can be used to measure itself.

At this week's ODTUG conference in Monterey, California, a debate has formed that I'm interested in following. The established idea of the experts in this debate is a passionate belief that you should declare, enable, and index foreign key (FK) relationships. The counter-argument is that you should not.

I've heard probably most of the arguments on the pro side of the debate. (a) If you don't declare/enable/index your FKs, then you have to ensure the correct relationships in your application code, which is error-prone in both obvious and highly subtle ways. (b) Not implementing full FK integrity in the database is developmentally inefficient, because it violates the important principle that you should never duplicate code in an application. (c) Furthermore, the absence of declared/enabled/indexed FK integrity is operationally inefficient at run time, because it blocks the Oracle query optimizer from using code paths that can be hundreds of times faster and more scalable than when it can't rely upon database-enforced FK integrity.
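To make the terminology concrete, declaring, enabling and indexing a foreign key might look like this minimal sketch (the table and column names are invented for illustration):

    -- Declare and enable the foreign key relationship in the database...
    ALTER TABLE child_table
      ADD CONSTRAINT child_parent_fk
      FOREIGN KEY (parent_id)
      REFERENCES parent_table (parent_id)
      ENABLE VALIDATE;

    -- ...and index the foreign key column so that the optimizer and the
    -- locking mechanism can take advantage of it.
    CREATE INDEX child_parent_fk_ix ON child_table (parent_id);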

I haven't heard a single compelling argument for the other side of the debate. But, you see, there could be a compelling argument that I haven't heard.

This is what makes this new debate interesting for me. One side or the other is on the brink of learning something important, if the debate is conducted properly.

The first thing the two sides will need to do is agree on whether both sides really are in disagreement and, if they are, what exactly the disagreement is. In lots of debates I've seen, we've figured out after defining the terms and deciding the context of the argument (for example, what kind of application we're talking about), that there's really no debate of principle at all. When that happens, it's debate closed. Each side agrees with the other, and maybe the two sides learn a little more about life on the other side of the fence.

If after the suitable definition-of-terms process, the two sides really do still have a debate left, then there will be some kind of attestation of facts as both sides see them. One side, for example, will show sqlplus session output to demonstrate the subtle and not-so-subtle ways in which not doing the declare/enable/index thing causes corrupted data and horrific performance penalties. The other side of the argument will then show some contrary evidence that counts in favor of not doing the declare/enable/index thing.

If the sides can't agree on the truth of the "facts" presented, then the debate will collapse, and at least one side (possibly both) will have learned nothing. If each side succeeds in impressing the other side that the "facts" thus presented are actually true, then the debate will move to the discussions of the relevance of the facts just demonstrated. This is where one guy might say something like, "I know that queries with FKs in the database are faster, but I can't afford the performance penalty at data load time." To which the other guy might say, "Yes, loads are a little slower with referential integrity checking, but that's time spent executing necessary code path to ensure that only correct data can get into your database. And besides, it's not a good trade-off to endure a thousand slow queries a day so that one load can go faster." Rebuttal, counter-rebuttal, and so on. You get the picture.

I am biased in my estimation of how this will turn out. But I sincerely respect when someone thoughtfully and sincerely challenges an important idea in my professional domain, no matter how well entrenched that idea may be. In fact, the more entrenched, the better. The debate will remain interesting as long as the counter-argument is thoughtful and sincere. As soon as the evidence for the challenging new idea reveals itself to be nonsense, or if the counterargument context seems irrelevant to me (and the people I'm trying to represent), then I'll lose interest.

The best debate ends in a handshake (real or virtual) in which the people representing both sides learn something they didn't know before, one side perhaps more than the other. The best debaters value learning more than being right. The best debaters respect each other more after the debate because each has helped the other (and the community around them) to advance.

The worst debaters confuse the principles of factual correctness and personal correctness. When the debate shifts from factual to personal, it may become interesting to some people because of the enhanced drama, but the actual usefulness of the debate evaporates. ([Grin] I guess if the debate were solid to begin with, then the usefulness actually sublimates.)

No matter which idea wins this FK debate (right, I said idea, not people), I expect to be happy for the debate to have occurred. That's because I expect the debate to end with a resolution, and either I'm going to learn something completely new, or I'm going to fortify an existing belief. Something new is obviously exciting, and fortification will make me a more effective teacher of my existing belief. Either result benefits me and, I believe, the community.

With science, you get suspense, drama, plot twists, surprises, fortifications, .... Science is fun. I wish more people knew.