Oakies Blog Aggregator

Multiple invisible indexes on the same column in #Oracle 12c

Invisible indexes were introduced in 11g and have now been enhanced in 12c: you can have multiple indexes on the same set of columns with that feature. Why would you want to use that? Actually, this is always the first question I ask when I see a new feature – sometimes it’s really hard to answer :-)

Here, a plausible use case could be that you expect a new index on the same column to be an improvement over the existing one, but you are not 100% sure. So instead of just dropping the old index, you make it invisible first to see the outcome:

 

[oracle@uhesse ~]$ sqlplus adam/adam

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 28 08:11:16 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jul 28 2015 08:00:34 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> col index_name for a10
SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BSTAR	   NORMAL		       VISIBLE

SQL> col segment_name for a10
SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.18
SQL> set timing off
SQL> @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 2525234362

---------------------------------------------------------------------------
| Id  | Operation	  | Name  | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	  |	  |	  |  3872 (100)|	  |
|   1 |  SORT AGGREGATE   |	  |	1 |	3 |	       |	  |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN| BSTAR |  2000K|  5859K|  3872   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CHANNEL_ID"=3)


19 rows selected.
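
A note on the lastplan script: it isn't shown in the post. A minimal sketch of what it might contain, assuming it simply prints the plan of the statement last executed in this session via DBMS_XPLAN:

-- lastplan.sql: hypothetical reconstruction, not the author's actual script;
-- displays the execution plan of the most recently executed statement
select * from table(dbms_xplan.display_cursor);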

So I have an ordinary B*-tree index here that supports my query, but I suspect that the query would work better with a bitmap index. In older versions, this is the error you would get if you tried to create it while the old index still existed:

SQL> create bitmap index bmap on sales(channel_id) nologging;
create bitmap index bmap on sales(channel_id) nologging
                                  *
ERROR at line 1:
ORA-01408: such column list already indexed

Enter the 12c New Feature:

SQL> alter index bstar invisible;

Index altered.

SQL> create bitmap index bmap on sales(channel_id) nologging;

Index created.

Now I can check whether the new index is really an improvement while the old index remains in place and is still being maintained by the system. So if the new index turns out to be a bad idea – no problem falling back on the old one!

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       VISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BMAP			 9
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.01
SQL> @lastplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

Looks like everything is better with the new index, right? Let’s see what the optimizer thinks about it:

SQL> alter index bmap invisible;

Index altered.

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       INVISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> alter session set optimizer_use_invisible_indexes=true;

Session altered.

SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

SQL> @lastplan

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

The optimizer agrees that the new index is better. I could keep both indexes here in place, but remember that the old index still consumes space and requires internal maintenance. Therefore, I decide to drop the old index:

SQL> drop index bstar;

Index dropped.

SQL> alter index bmap visible;

Index altered.
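
By the way (my addition, not from the original post): instead of setting optimizer_use_invisible_indexes at session level, the test could also have been scoped to a single statement with the USE_INVISIBLE_INDEXES hint:

-- statement-level alternative to the session parameter
SQL> select /*+ use_invisible_indexes */ count(*) from sales where channel_id=3;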

Hope that helped to answer the question of why you would want to use this 12c New Feature. As always: Don’t believe it, test it! :-)

Tagged: 12c New Features, Performance Tuning

Little things worth knowing: Data Guard Broker Setup changes in 12c

One of the problems I have seen when deploying Data Guard for systems such as RAC One Node and policy-managed databases was the static listener configuration you needed in 11.2. This has changed for the better with 12c if you are using Grid Infrastructure.

http://docs.oracle.com/database/121/DGBKR/install.htm

In the section about static listener registration a little addendum can be found (thanks to Patrick Hurley/@phurley for pointing this out to me!):

“A static service needs to be defined and registered only if Oracle Clusterware or Oracle Restart is not being used.”

This is good news; let’s put it to the test. I’m a great fan of Oracle Restart. If I ever find the time I’d like to repeat this test with clustered Grid Infrastructure. I think the quote mentioned earlier still holds true, but I would like to see it with my own eyes.

The Setup

To start with I used a dbca-created database named “NCDB” on my server named “server1”. It uses the DATA and RECO disk groups and is a non-CDB (although I’m quite sure that doesn’t matter). The patch level is current at the time of writing; I’m on 12.1.0.2.4 (which is the OJVM and PSU bundle patch for July 2015):

[oracle@server1 ~]$  /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch lspatches
21068507;Database PSU 12.1.0.2.4, Oracle JavaVM Component (Jul2015)
20831113;OCW Patch Set Update : 12.1.0.2.4 (20831113)
20831110;Database Patch Set Update : 12.1.0.2.4 (20831110)

OPatch succeeded.

[oracle@server1 ~]$  /u01/app/oracle/product/12.1.0.2/grid/OPatch/opatch lspatches
20831113;OCW Patch Set Update : 12.1.0.2.4 (20831113)
20831110;Database Patch Set Update : 12.1.0.2.4 (20831110)
20299018;ACFS Patch Set Update : 12.1.0.2.3 (20299018)
19872484;WLM Patch Set Update: 12.1.0.2.2 (19872484)

OPatch succeeded.

The future standby database will be called STDBY. This is for demonstration purposes only; I’m not a fan of using a db_unique_name that implies a role. Imagine you switch over to STDBY and run it as the primary database for an extended period of time: that would be potentially confusing.

With that in mind, I created/updated a common tnsnames.ora on server1 and server2:

[oracle@server1 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: 
# /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

NCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = NCDB)
    )
  )

STDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server2.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STDBY)
    )
  )

The duplicate command was a simple “duplicate target database for standby”, run after a simple pfile had been created and the auxiliary structures (audit_dump_dest, large pages etc.) were in place. For this to work you need a copy of the controlfile for the standby in place, as well as a backup of the primary database.
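
The backup itself isn't shown in the post; it might have been taken on the primary with commands along these lines (a sketch only – the disk format string is an assumption based on the backup piece names visible in the output below):

RMAN> backup format '/u01/oraback/NCDB/%U' database;
RMAN> backup format '/u01/oraback/NCDB/%U' current controlfile for standby;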

RMAN> duplicate target database for standby;

Starting Duplicate Db at 27-JUL-15
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''+DATA/STDBY/CONTROLFILE/current.258.886176617'', 
  ''+RECO/STDBY/CONTROLFILE/current.257.886176617'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone standby controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+DATA/STDBY/CONTROLFILE/current.258.886176617'', 
 ''+RECO/STDBY/CONTROLFILE/current.257.886176617'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 27-JUL-15
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece +RECO/NCDB/AUTOBACKUP/2015_07_27/s_886176583.273.886176583
channel ORA_AUX_DISK_1: ORA-19870: error while restoring backup piece +RECO/NCDB/AUTOBACKUP/2015_07_27/s_886176583.273.886176583
ORA-19505: failed to identify file "+RECO/NCDB/AUTOBACKUP/2015_07_27/s_886176583.273.886176583"
ORA-17503: ksfdopn:2 Failed to open file +RECO/NCDB/AUTOBACKUP/2015_07_27/s_886176583.273.886176583
ORA-15012: ASM file '+RECO/NCDB/AUTOBACKUP/2015_07_27/s_886176583.273.886176583' does not exist

failover to previous backup

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/oraback/NCDB/09qd3uq4_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/oraback/NCDB/09qd3uq4_1_1 tag=TAG20150727T160940
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=+DATA/STDBY/CONTROLFILE/current.259.886176619
output file name=+RECO/STDBY/CONTROLFILE/current.258.886176619
Finished restore at 27-JUL-15
contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 27-JUL-15
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /u01/oraback/NCDB/05qd3uas_1_1
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00001 to +DATA
channel ORA_AUX_DISK_2: restoring datafile 00003 to +DATA
channel ORA_AUX_DISK_2: reading from backup piece /u01/oraback/NCDB/06qd3uat_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/oraback/NCDB/05qd3uas_1_1 tag=TAG20150727T160132
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_2: piece handle=/u01/oraback/NCDB/06qd3uat_1_1 tag=TAG20150727T160132
channel ORA_AUX_DISK_2: restored backup piece 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:15
Finished restore at 27-JUL-15

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=886176640 file name=+DATA/STDBY/DATAFILE/system.261.886176625
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=886176641 file name=+DATA/STDBY/DATAFILE/sysaux.263.886176625
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=886176641 file name=+DATA/STDBY/DATAFILE/undotbs1.262.886176625
datafile 5 switched to datafile copy
input datafile copy RECID=9 STAMP=886176641 file name=+DATA/STDBY/DATAFILE/example.260.886176625
datafile 6 switched to datafile copy
input datafile copy RECID=10 STAMP=886176641 file name=+DATA/STDBY/DATAFILE/users.264.886176625
Finished Duplicate Db at 27-JUL-15

RMAN>

That’s a working standby database. I will have to register it with Grid Infrastructure next.

[oracle@server2 ~]$ srvctl add database -db STDBY -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
> -role physical_standby -startoption MOUNT -policy automatic -diskgroup data,reco

Now all I need to do is set standby_file_management to auto, add standby redo logs to both databases and enable the broker. I’ll not show these steps in full here, but a rough sketch follows.
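
A minimal sketch of those steps (my reconstruction, not the author's actual commands; the log group numbers and the 50 MB size are assumptions – standby redo logs should match the online redo log size):

SQL> alter system set standby_file_management=auto scope=both;
SQL> alter database add standby logfile thread 1 group 4 size 52428800;
SQL> alter database add standby logfile thread 1 group 5 size 52428800;
SQL> alter system set dg_broker_start=true scope=both;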

Broker Configuration

The next step in the deployment of my standby database is the creation of a Data Guard Broker configuration. I always create it using dgmgrl.

[oracle@server1 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@ncdb
Password:
Connected as SYSDBA.

DGMGRL> CREATE CONFIGURATION brokertest AS
>  PRIMARY DATABASE IS 'NCDB'
>  CONNECT IDENTIFIER IS 'NCDB';
Configuration "brokertest" created with primary database "NCDB"

DGMGRL>  add database 'STDBY' as connect identifier is 'STDBY';
Database "STDBY" added

DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration;

Configuration - brokertest

  Protection Mode: MaxPerformance
  Members:
  NCDB  - Primary database
    STDBY - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 5 seconds ago)

That’s looking good so far! I can see archived redo logs being exchanged, and I can see MRP0 applying data in real time on the standby.
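
A quick way to double-check this on the standby (my own verification query, not part of the original post):

-- on the standby: MRP0 should report APPLYING_LOG when real-time apply is active
SQL> select process, status, sequence# from v$managed_standby;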

Recap

This is a reference for the settings in the Data Guard Broker. I have not modified any listener.ora file on any host; all I did was ensure that the tnsnames.ora file has identical contents on both sides. I have listed the database configuration for later reference here:

DGMGRL> show database verbose 'NCDB';

Database - NCDB

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    NCDB

  Properties:
    DGConnectIdentifier             = 'NCDB'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server1.example.com)
           (PORT=1521))(CONNECT_DATA=(SERVICE_NAME=NCDB_DGMGRL)
           (INSTANCE_NAME=NCDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> show database verbose 'STDBY';

Database - STDBY

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 5.00 KByte/s
  Active Apply Rate:  0 Byte/s
  Maximum Apply Rate: 0 Byte/s
  Real Time Query:    OFF
  Instance(s):
    STDBY

  Properties:
    DGConnectIdentifier             = 'STDBY'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.52)
          (PORT=1521))(CONNECT_DATA=(SERVICE_NAME=STDBY_DGMGRL)(INSTANCE_NAME=STDBY)
          (SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> show configuration verbose;

Configuration - brokertest

  Protection Mode: MaxPerformance
  Members:
  NCDB  - Primary database
    STDBY - Physical standby database 

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now the big question is: will this allow me to switch over? For the record I am still using my SSH connection (in screen!) on server1, but am now connected to the standby database in dgmgrl.

DGMGRL> validate database 'STDBY'

  Database Role:     Physical standby database
  Primary Database:  NCDB

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    NCDB:   Off
    STDBY:  Off

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (NCDB)                  (STDBY)
    1         3                       2                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (STDBY)                 (NCDB)
    1         3                       0                       Insufficient SRLs
    Warning: standby redo logs not configured for thread 1 on NCDB

DGMGRL> switchover to 'STDBY';
Performing switchover NOW, please wait...
New primary database "STDBY" is opening...
Oracle Clusterware is restarting database "NCDB" ...
Switchover succeeded, new primary is "STDBY"
DGMGRL> show configuration

Configuration - brokertest

  Protection Mode: MaxPerformance
  Members:
  STDBY - Primary database
    NCDB  - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 13 seconds ago)

DGMGRL>

Well that seems to have worked!

It seems straightforward though: the SSH connection was made to the future standby host, which could have helped with Oracle Restart restarting the database. What if I try to switch back, but this time connect to server1 (currently in the standby role) and issue the switchover command? Here is the output:

[oracle@server1 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@ncdb
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - brokertest

  Protection Mode: MaxPerformance
  Members:
  STDBY - Primary database
    NCDB  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 2 seconds ago)

DGMGRL> validate database 'NCDB';

  Database Role:     Physical standby database
  Primary Database:  STDBY

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    STDBY:  Off
    NCDB:   Off

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status       
              (STDBY)                 (NCDB)                               
    1         3                       2                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status       
              (NCDB)                  (STDBY)                              
    1         3                       2                       Insufficient SRLs

Ready to switch over:

DGMGRL> switchover to 'NCDB'
Performing switchover NOW, please wait...
New primary database "NCDB" is opening...
Oracle Clusterware is restarting database "STDBY" ...
Switchover succeeded, new primary is "NCDB"
DGMGRL> show configuration

Configuration - brokertest

  Protection Mode: MaxPerformance
  Members:
  NCDB  - Primary database
    STDBY - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> 

OK, that worked too; very nice. It’s safe to say that under my lab conditions the restarting of databases works pretty well.

Subquery Factoring (10)

What prompted me to write my previous note about subquery factoring was an upgrade to 12c, and a check that a few critical queries would not do something nasty on the upgrade. As ever, it’s interesting how many little oddities you can discover while looking closely at some little detail of how the optimizer works. Here’s an oddity that came up in the course of my playing around in 12.1.0.2 – first some sample data:


create table t1
nologging
as
select * from all_objects;

create index t1_i1 on t1(owner) compress nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns owner size 254'
        );
end;
/

The all_objects view is convenient as a tool for modelling what I wanted to do since it has a column with a small number of distinct values and an extreme skew across those values. Here’s a slightly weird query that shows an odd costing effect:


with v1 as (
        select /*+ inline */ owner from t1 where owner > 'A'
)
select count(*) from v1 where owner = 'SYS'
union all
select count(*) from v1 where owner = 'SYSTEM'
;

Since the query uses the factored subquery twice and there’s a predicate on the subquery definition, I expect to see materialization – and that’s what happens (even though I’ve engineered the query so that materialization is more expensive than executing inline). Here are the two plans from 12.1.0.2 (the same pattern appears in 11.2.0.4, though the costs are a little less across the board):


=======================
Unhinted (materializes)
=======================

---------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            |     2 |   132 |    25  (20)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D661B_876C2CB |       |       |            |          |
|*  3 |    INDEX FAST FULL SCAN    | T1_I1                      | 85084 |   498K|    21  (15)| 00:00:01 |
|   4 |   UNION-ALL                |                            |       |       |            |          |
|   5 |    SORT AGGREGATE          |                            |     1 |    66 |            |          |
|*  6 |     VIEW                   |                            | 85084 |  5483K|    13  (24)| 00:00:01 |
|   7 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D661B_876C2CB | 85084 |   498K|    13  (24)| 00:00:01 |
|   8 |    SORT AGGREGATE          |                            |     1 |    66 |            |          |
|*  9 |     VIEW                   |                            | 85084 |  5483K|    13  (24)| 00:00:01 |
|  10 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D661B_876C2CB | 85084 |   498K|    13  (24)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

=============
Forced inline
=============

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     2 |    12 |    22  (14)| 00:00:01 |
|   1 |  UNION-ALL             |       |       |       |            |          |
|   2 |   SORT AGGREGATE       |       |     1 |     6 |            |          |
|*  3 |    INDEX FAST FULL SCAN| T1_I1 | 38784 |   227K|    21  (15)| 00:00:01 |
|   4 |   SORT AGGREGATE       |       |     1 |     6 |            |          |
|*  5 |    INDEX RANGE SCAN    | T1_I1 |   551 |  3306 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------

I’m not surprised that the optimizer materialized the subquery – as I pointed out in my previous article, the choice seems to be rule-based (heuristic) rather than cost-based. What surprises me is that the cost for the default plan is not self-consistent – the optimizer seems to have lost the cost of generating the temporary table. The cost of the materialized query plan looks as if it ought to be 21 + 13 + 13 = 47. Even if the optimizer were smart enough to assume that the temporary table would be in the cache for the second scan (and therefore virtually free to access) we ought to see a cost of 21 + 13 = 34. As it is we have a cost of 25, which is 13 + 13 (or, if you check the 10053 trace file, 12.65 + 12.65, rounded).

Since the choice to materialize doesn’t seem to be cost-based (at present) this doesn’t really matter – but it’s always nice to see, and be able to understand, self-consistent figures in an execution plan.
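
For completeness (my addition, not part of the original note): the opposite effect can be forced with the (undocumented) materialize hint, which requests the temp-table transformation even where the optimizer would otherwise run the subquery inline:

with v1 as (
        select /*+ materialize */ owner from t1 where owner > 'A'
)
select count(*) from v1 where owner = 'SYS'
;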

Footnote

It is worth pointing out as a side note that materialization can actually be more expensive than running inline, even for very simple examples. Subquery factoring seems to have become more robust over recent releases in terms of the consistency of execution plans when the subqueries are put back inline, but you still need to think a little before rewriting a query for cosmetic (i.e. totally valid “readability”) reasons, just to check whether the resulting query is going to produce an unexpected, and unexpectedly expensive, materialization.

Password Manager Woes

I read a post this morning and it hit a raw nerve or two.

As followers of the blog will know, I use KeePass for all my work and personal passwords. I’ve come across a number of sites that prevent pasting passwords for “security reasons” and it drives me nuts. Fortunately, most of them can’t prevent the auto-type feature, so at least that’s something…

This attitude goes beyond websites though. The policy at my current employer is that all passwords should be strong and unique, but you are not allowed to use a password manager. Why? Because if someone installs a key-logger on your PC and gets the credentials for the password manager, they will have access to all your passwords. WTF? I think this attitude is moronic. I am not capable of remembering hundreds of unique, strong passwords. Using patterns is predictable, so that is also a fail.

I have seen the way some of my colleagues (past and present) deal with passwords and it is farcical.

  • One password to rule them all.
  • Kept in a text/word document on the desktop.
  • Kept in a text/word document on a network drive.
  • Kept on a piece of paper in their desk drawer, which is never locked.
  • Freely shared amongst colleagues, so they can “test something using my account”.

For someone to step in and say we can’t use a tool that generates random, strong, completely unpredictable passwords and stores them in an encrypted format makes my blood boil.

Flippin’ morons!

Cheers

Tim…


Password Manager Woes was first posted on July 27, 2015 at 12:57 pm.

Learn it or don’t. The choice is yours.

Technology is scary for a lot of people, but the biggest problem I see out there is denial (It’s not just a river in Africa! :) ).

Newbies

For people who are new to technology, the biggest problem I see is they refuse to actually read what is on the screen. I’m not talking about those stupid End User License Agreement (EULA) screens that nobody reads. I’m talking about basic instructions. If a screen says,

“Enter your username and password, then click the Login button.”

I don’t think that should be a taxing problem for anyone, but for the less computer literate, if something doesn’t go *exactly* as they expect, they go into total meltdown. People just have to take a deep breath and read what is in front of them.

Techies

The situation is not always much different for many techies when they are faced with learning new skills. All those lessons you learned in your core skill-set seem to go out of the window. Things like:

  • Read the manuals.
  • Check the log files.
  • Check the vendor support website.
  • Google it.
  • Raise a support call.

Instead, people throw their toys out of the pram and decide the product/feature is rubbish and give up.

This is exactly what happened to me when I started playing with the Multitenant option. I was in total denial for ages. When I finally made the decision to sit down and figure it out it wasn’t so bad. It was just different to what I was used to.

Learning is not a spectator sport!

(Shameless use of the title of Connor McDonald’s blog, which is in itself credited to D. Blocher.)

Learning stuff is all about time. The optimizer fairy didn’t visit Jonathan Lewis one day and tell him “the secret”. If you don’t spend the time, or you give up at the first hurdle, you are never going to get anywhere. You will probably start to make excuses. I’m too old. It’s too complicated. I’ve always been rubbish at learning new stuff. I don’t have time. My company doesn’t support me. We won’t use it for another 3 years, so I’ll leave it until later. The list is endless.

Next time you are sitting in front of the TV watching some trash, ask yourself what those “smart kids” are doing at the moment?

I don’t care what you do with your life. Your choices are no more or less valid than mine. Just don’t fool yourself. Be honest. If you wanted to learn it you would. The fact you haven’t means you really can’t be bothered. :)

Cheers

Tim…


Learn it or don’t. The choice is yours. was first posted on July 27, 2015 at 10:03 am.

Things that are there but you cannot use

I did a "desc" command on the STANDARD package today, the package that helps define PL/SQL, and saw the XOR function!

SQL> declare
  2    x boolean;
  3  begin
  4    x := XOR(true,true);
  5    dbms_output.put_line(case when x then 'TRUE' else 'FALSE' end);
  6
  7    x := XOR(true,false);
  8    dbms_output.put_line(case when x then 'TRUE' else 'FALSE' end);
  9
 10    x := XOR(false,false);
 11    dbms_output.put_line(case when x then 'TRUE' else 'FALSE' end);
 12
 13  end;
 14  /
FALSE
TRUE
FALSE

PL/SQL procedure successfully completed.

I can’t find any reference to it anywhere in the Oracle PL/SQL documentation, so it’s definitely not supported; using it is probably off limits until you see it officially in the documentation.
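
If you want XOR semantics using only supported constructs, a simple equivalent for two non-null booleans (my own sketch, not from the post) is:

declare
  a boolean := true;
  b boolean := false;
begin
  -- exclusive-or: exactly one of the two operands is true
  dbms_output.put_line(case when (a and not b) or (b and not a) then 'TRUE' else 'FALSE' end);
end;
/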

SQL Text / View Expansion

One of the common issues faced when debugging or investigating SQL performance is that tools often do not show the underlying objects at play when a query involves views.

Oracle 12c now provides a capability to display the underlying objects; DBMS_UTILITY.EXPAND_SQL_TEXT will provide the entire syntax of the “exploded” query, as shown below:

Example EXPAND_SQL_TEXT – View Definition:

create view dept_emps
 as
 select dname,loc,d.deptno,
     empno,ename,job,hiredate,mgr,sal,comm
   from emp e join dept d
     on e.deptno = d.deptno
   order by dname,ename;

Example EXPAND_SQL_TEXT Use

Here’s a query using the view; unless you know the schema, you might not realize multiple database objects are in use.

select * from dept_emps;

Here’s the PL/SQL you might use to “expand” the SQL text to show the objects really involved:

DECLARE
   myclob CLOB;
BEGIN
   DBMS_UTILITY.expand_sql_text (
         input_sql_text  => 'SELECT * FROM dept_emps',
         output_sql_text => myclob
   );
   -- requires SET SERVEROUTPUT ON in SQL*Plus to see the result
   DBMS_OUTPUT.put_line(myclob);
END;
/

Here’s the result of the PL/SQL (above) to expand the statement:

SELECT "A1"."DNAME" "DNAME","A1"."LOC" "LOC",
    "A1"."DEPTNO" "DEPTNO","A1"."EMPNO" "EMPNO",
    "A1"."ENAME" "ENAME","A1"."JOB" "JOB",
    "A1"."HIREDATE" "HIREDATE","A1"."MGR" "MGR",
    "A1"."SAL" "SAL","A1"."COMM" "COMM"
  FROM (SELECT "A2"."DNAME_9" "DNAME",
          "A2"."LOC_10" "LOC", 
          "A2"."QCSJ_C000000000400001_8""DEPTNO",
          "A2"."EMPNO_0" "EMPNO","A2"."ENAME_1" "ENAME",
          "A2"."JOB_2" "JOB","A2"."HIREDATE_4" "HIREDATE",
          "A2"."MGR_3" "MGR","A2"."SAL_5" "SAL",
          "A2"."COMM_6" "COMM"
          FROM (SELECT "A4"."EMPNO" "EMPNO_0",
                 "A4"."ENAME" "ENAME_1","A4"."JOB" "JOB_2",
                 "A4"."MGR" "MGR_3",
                 "A4"."HIREDATE" "HIREDATE_4",
                 "A4"."SAL" "SAL_5","A4"."COMM" "COMM_6",
                 "A4"."DEPTNO" "QCSJ_C000000000400000",
                 "A3"."DEPTNO" "QCSJ_C000000000400001_8",
                 "A3"."DNAME" "DNAME_9",
                 "A3"."LOC" "LOC_10"
                 FROM JOHN."EMP" "A4",JOHN."DEPT" "A3"
                 WHERE "A4"."DEPTNO"="A3"."DEPTNO") "A2"
           ORDER BY "A2"."DNAME_9","A2"."ENAME_1") "A1"

Note how the “SELECT * FROM DEPT_EMPS” query is expanded by DBMS_UTILITY; now you know what the real SQL is!

12c Parallel Execution New Features: Parallel FILTER Subquery Evaluation - Part 1: Introduction

12c introduces another interesting new Parallel Execution feature - the parallel evaluation of FILTER subqueries. In pre-12c, FILTER subqueries always had to be evaluated in the Query Coordinator. This had several consequences; in particular, the data driving the FILTER subquery always had to flow through the Query Coordinator, and hence represented a forced serial execution part of a parallel execution plan. This limitation also meant that, depending on the overall plan shape, the parallel plan was possibly decomposed into multiple DFO trees, leading to other side effects I've outlined in some of my other publications already.

In 12c the FILTER subquery can now be evaluated in the Parallel Slaves, and the driving data no longer needs to be processed in the Query Coordinator. However, the resulting plan shape can be a little bit confusing. Let's have a look at a simple example:


create table t_1
compress
as
select /*+ use_nl(a b) */
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual
         connect by level <= 1e5) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel 4;

create index t_1_idx on t_1 (id) invisible;

explain plan for
select /*+
--optimizer_features_enable('11.2.0.4')
*/ count(*) from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id);

-- 11.2.0.4 plan shape with index invisible
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 440M (2)| 04:47:04 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
|* 2 | FILTER | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM)| :TQ20000 | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | PCWC | |
| 6 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | PCWP | |
| 7 | PX COORDINATOR | | | | | | | | |
| 8 | PX SEND QC (RANDOM)| :TQ10000 | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 9 | PX BLOCK ITERATOR | | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | PCWC | |
|* 10 | TABLE ACCESS FULL| T_1 | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
10 - filter("T_1"."ID"=:B1)

-- 12.1.0.2 plan shape with index invisible
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 1588M (2)| 17:14:09 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 6 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 6 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T_1 | 1 | 6 | 798 (2)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
8 - filter("T_1"."ID"=:B1)

-- 11.2.0.4 plan shape with index visible
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 5973K (1)| 00:03:54 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
|* 2 | FILTER | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM)| :TQ10000 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 7 | INDEX RANGE SCAN | T_1_IDX | 1 | 6 | 3 (0)| 00:00:01 | | | |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
7 - access("T_1"."ID"=:B1)

-- 12.1.0.2 plan shape with index visible
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 5973K (1)| 00:03:54 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 6 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 6 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | INDEX RANGE SCAN | T_1_IDX | 1 | 6 | 3 (0)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
8 - access("T_1"."ID"=:B1)

I've included two variations of the setup, one without an available index for evaluating the FILTER subquery and one with an index.

The pre-12c plan shape without the index makes the former limitation particularly obvious: the FILTER operator is above the PX COORDINATOR and marked serial, and the table scan in the FILTER subquery gets parallelized as a separate DFO tree (indicated, among other things, by the two PX COORDINATOR operators). This means that each time this separate DFO tree starts, a separate set of Parallel Slaves will be allocated/deallocated, possibly adding a lot of overhead to an execution plan that is probably already inefficient anyway - assuming the FILTER subquery needs to be evaluated many times.

In 12c the FILTER operator is marked parallel and the need for a separate DFO tree is gone. What might be confusing about this plan shape is that the operations of the FILTER subquery are not marked parallel. In my opinion this is misleading; they should actually be marked parallel, because at runtime the operations will be performed by the Parallel Slaves, and in the case of a Full Table Scan each slave will run the entire full table scan (so there is no PX BLOCK ITERATOR dividing the scan into chunks / granules). This is comparable to what happens when a parallel Nested Loop join runs or the new PQ_REPLICATE feature gets used - and in those cases the operations are marked parallel:


-- 11.2.0.4 / 12.1.0.2 plan shape with index invisible
-- and subquery unnested using NL SEMI join
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 442M (2)| 04:48:03 | | | |
| 1 | SORT AGGREGATE | | 1 | 12 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 12 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 12 | | | Q1,00 | PCWP | |
| 5 | NESTED LOOPS SEMI | | 2000K| 22M| 442M (2)| 04:48:03 | Q1,00 | PCWP | |
| 6 | PX BLOCK ITERATOR | | | | | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T_1 | 2000K| 11M| 796 (2)| 00:00:01 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

8 - filter("T"."ID"="T_1"."ID")

Summary

So the operators of the FILTER subquery can now be run in the slaves, which is the main point of this feature, even though this is represented in a confusing way in the execution plan. Avoiding the potential decomposition into multiple DFO trees is another possible benefit. Decreased query duration should be possible if the evaluation of the FILTER subquery requires significant time and can now run in the Parallel Slaves instead of being executed serially through the Query Coordinator.

Note that, depending on the plan shape and the SQL features used, it's still possible that 12c reverts to the old serial FILTER subquery evaluation plan shape, so the new feature doesn't always get used.

There is more to say about this feature. In the next part of this instalment I'll focus on the different distribution methods possible with the new parallel FILTER operator - there is a new PQ_FILTER hint that allows controlling the distribution, but there are also some interesting points to make about how the optimizer seems to choose the distribution method automatically. In the examples shown here there is no separate distribution for the FILTER, by the way, but this can look different, as I'll show in the next part.
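
As a teaser for part 2 (a sketch of my own, not from this post): the PQ_FILTER hint takes the distribution method as its argument - SERIAL, NONE, HASH or RANDOM - for example:

select /*+ pq_filter(random) */ count(*) from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id);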

OTN Tour of Latin America 2015 (Southern Leg)

I put out a brief video a few days ago (re-uploaded today to fix typos) about my participation in the OTN Tour of Latin America (2015). I’ll be on the southern leg this year. Sorry to those countries who make up the northern leg. I will be back soon I hope.

Anyway, the southern leg of the tour shapes up like this.

  • 3/4 August Uruguay UYOUG
  • 5/6 August Argentina AROUG
  • 8 August Brazil GUOB
  • 10 August Chile CLOUG
  • 12 August Peru PEOUG

I’m looking forward to seeing everyone. See you soon!

After the Peru leg, the wife and I will be going off to see Machu Picchu.

Cheers

Tim…


OTN Tour of Latin America 2015 (Southern Leg) was first posted on July 26, 2015 at 1:41 pm.

The Queen of Social Media Mean

Unlike other forms of communication, social media can be a real gray area for rules on what constitutes freedom of speech and how much you have to endure from those who may not be pleasant to interact with. I’m often told what etiquette some feel is necessary, and that I should just disengage from some people online.

The truth is, not everyone gets social media, and many just don’t understand boundaries or how to communicate productively online. You, as a member of the online community, have the right to choose who you communicate with and when it’s time to NOT communicate with another person online. Although I often keep this type of post to LinkedIn, I decided to discuss it here, as I think it’s a question many ask themselves when they run into social media disagreements:

When do I draw the line, and how do I justify that enough is enough?

Luckily, most people are so busy with their day that they just don’t care what other people are posting. I think the most disconcerting are those who think so deeply about what others have posted within a 140 character limit that it completely consumes their day, and yes, they are out there.

You MUST Respond to Me

I’ve blogged about data I’ve gathered demonstrating that the more interaction and attention you attain on social media, the more often negative reactions (abusive incidents) will occur from certain personality types. I also noticed that there were those who weren’t abusive or destructive, but were negative in their interaction. Others on social media will recognize this type: you see their handle come up on your feed and think, “Sigh… what now…” You are hesitant to look at their response to something you’ve posted, as you know when you do, you rarely feel good afterwards. Their comments almost consistently have a “but” in them. “I liked your post, BUT…” “The topic is good, but…” If you react, they think you’re sensitive to criticism, when the truth is their goal is to dismiss the post you’ve taken the time to share. These types commonly show up in your social media feed at the same time as abusive types when you analyze social media data. Many online categorizations would class these posts, along with the abusive types, as trolls, but I’ve chosen to give them the title of “pokers”.


Why do I call them this? Remember when you were a little kid and you had that one friend, or more likely a sister or brother, who sat next to you in the back seat of the car and would poke you in the arm for the whole ride? I haven’t met a parent yet who hasn’t gone through the “He’s touching me!!” fight with their kids.

Enough Already!

The goal of the poker is the same as that of the sibling in the back seat of the car: distraction from something that’s bothering them. Most often, they’re satiating their insecurities or frustrations by distracting you and/or your followers with responses to your posts. Now, we all remember what happened to that annoying friend or sibling after they’d poked you one too many times, right?


Once you’ve hit your breaking point and react, the response from the poker is one of amazement and often offense:


How could you insult them and think that they would act in such a way? How could you think they had anything but the most honest and best intentions?  They really don’t understand why you’ve suddenly reacted so negatively after they’ve responded this way on so many of your other posts over a period of time.  They weren’t hurting you, they just made a comment!

 

Don’t Get Mean, Get Logical

The fact is, this type of “troll” can be damaging to you and your personal brand. This type of distraction can impact your productivity, as well as any social media work you are performing for your brand or the company you represent. If you’ve engaged them and asked them to cease and they don’t stop, then there is only one course of action:

Block.

Disconnect.

Unfriend.

Now, many of my connections and friends have been surprised when, after I’ve blocked someone we’re both connected to, they send me a private message and ask, “Did you really just block XXX? He says you blocked him!” Of those I’ve blocked (no matter whether the block is publicly known or kept private), the percentage where someone else subsequently had an issue with the same “poker” is currently 100%. Great percentage, folks, congratulations!

Here’s the secondary reason for blocking, disconnecting and/or unfriending. My feed is obviously causing the poker to be agitated. The poker reacts when I’m recognized for speaking or contributions, or when I experience a high amount of interaction on social media. This level of social media activity brings out their insecurities (just as it would with an online bully, but the poker isn’t reacting to the degree a bully would, which is why I do offer one request for them to stop and choose a different path of interaction with me). An added benefit of disconnecting and unfriending is that it removes your social media contributions from their feed, providing an “out of sight, out of mind” effect. By removing as much interaction with this person as possible, you’re doing both parties a favor.


Nothing to See Here

Once you remove yourself from this person’s view, don’t be surprised if they end up choosing a new target to “poke”.  It’s almost unheard of that this type won’t require some new person to use as an outlet to “poke” at.  By not engaging outside of a single warning to cease and desist, you’ll save yourself from continuing to be a target and can focus on your goals.

Finally, for those who’ve been blocked, unfriended or disconnected from my social media profiles: I really don’t have any personal feelings one way or the other towards you. Let’s be honest: my husband, my children and my job mean a lot to me, but you don’t. My social media work is part of my professional life and this is a professional decision. If you are derailing me from my professional goals, or if I decide I’m causing you agitation and you react negatively in response, then yes, I’m going to disengage from you. It’s the logical thing to do, and I wish more people would take that step when this type of behavior happens online.


Don’t email me to ask why I blocked you, or act offended. I don’t respond to my emotions; I respond to data, and I recognize that by accepting or engaging with your behavior, your problem will then become mine. Also, don’t ask my connections why you’ve been blocked. They rarely, if ever, know why I’ve blocked you, but they are not surprised that you’ve alienated someone. Accept that it’s for both our benefits and move on.

 


