Oakies Blog Aggregator

Oracle Midlands : Event #5

Just a quick reminder that Oracle Midlands Event #5 is just around the corner (Tuesday 16 September).

  • Boost Performance by Clustering Data - Martin Widlake
  • Data Virtualisation and Instant Cloning - Ron Ekins (Delphix)

Martin is always good value, so I’m looking forward to that session. I’ve seen a couple of talks on the Delphix stuff and it is seriously cool! I think this session will open a few people’s eyes… :)

Big thanks to the Red Gate Software folks for sponsoring the event, allowing it to remain free.

You can get more details here. I’ve already registered. See you there!

Cheers

Tim…

 



MERGE and IOT’s ….. unhappy bedfellows

Anyone who has used Oracle for a while will be familiar with the Parent/Child locking "issue" when it comes to tables and indexes on foreign keys.  For many years you’d hear people crying "bug" etc but thankfully most now know the reason, and accept it as sensible behaviour.

But let's take a look at a slight variation on that theme.

Let's start with a table called "LOC", which will be our parent table in this example. Note that it is an IOT (index-organized table). We'll also have a child table, "LOC_CHILD", which is also an IOT.

SQL> CREATE TABLE LOC
  2  (
  3    LOC_ID              NUMBER(4)            NOT NULL,
  4    DATA                     NUMBER(6),
  5    CONSTRAINT LOC_PK
  6    PRIMARY KEY
  7    ( LOC_ID)
  8    ENABLE VALIDATE
  9  )
 10  ORGANIZATION INDEX
 11  /

Table created.

SQL> CREATE TABLE LOC_CHILD
  2  (
  3    CHILD_SEQ                  NUMBER(12)          NOT NULL,
  4    LOC_ID               NUMBER(4)           NOT NULL,
  5    CHILD_DATA  NUMBER(15,6),
  6    CONSTRAINT LOC_CHILD_PK
  7    PRIMARY KEY
  8    (CHILD_SEQ, LOC_ID)
  9    ENABLE VALIDATE
 10  )
 11  ORGANIZATION INDEX
 12  /

Table created.

SQL> insert into LOC
  2  select rownum,50
  3  from dual
  4  connect by level <= 5
  5  /

5 rows created.

Now, being a good DBA :-), we've read all the "gloom and doom" stories about unindexed foreign keys, so just to be careful, we'll add an index on the foreign key column of our child table before adding our foreign key back to LOC.

SQL> CREATE INDEX LOC_CHILD_IX ON LOC_CHILD
  2  (LOC_ID)
  3  /

Index created.

SQL> ALTER TABLE LOC_CHILD ADD (
  2    CONSTRAINT LOC_CHILD_FK
  3    FOREIGN KEY ( LOC_ID)
  4    REFERENCES LOC (LOC_ID)
  5    ENABLE VALIDATE)
  6  /

Table altered.

SQL> insert into LOC_CHILD
  2  select rownum,mod(rownum,5)+1,dbms_random.value(1000,5000)
  3  from dual
  4  connect by level <= 2000
  5  /

2000 rows created.

SQL> commit;

Commit complete.

So the scene is set: we've got our tables seeded with some data, and we're ready to go.

Let's update a row in the parent table LOC:

SQL> UPDATE loc
  2  SET    DATA = 99
  3  WHERE  LOC_ID = 2;

1 row updated.

Now we’ll pop into a new session and update the child table LOC_CHILD:

SQL> MERGE 
  2       INTO  LOC_CHILD
  3       USING (SELECT 500 CHILD_SEQ,
  4                     2 LOC_ID,
  5                     1000 CHILD_DATA
  6                FROM DUAL) M
  7          ON (    LOC_CHILD.CHILD_SEQ = M.CHILD_SEQ
  8              AND LOC_CHILD.LOC_ID = M.LOC_ID)
  9  WHEN MATCHED
 10  THEN
 11     UPDATE SET
 12        LOC_CHILD.CHILD_DATA =  NVL (LOC_CHILD.CHILD_DATA, 0) + M.CHILD_DATA
 13  WHEN NOT MATCHED
 14  THEN
 15     INSERT     (CHILD_SEQ,
 16                 LOC_ID,
 17                 CHILD_DATA)
 18         VALUES (M.CHILD_SEQ,
 19                 M.LOC_ID,
 20                 M.CHILD_DATA);

[stuck]

And splat… we're stuck. Without any evidence to support it, I'd hypothesize that it's due to the flexibility of the MERGE command. A single MERGE can insert, update and even delete rows, so I'm guessing that to handle this flexibility (in particular, the DELETE option) the locking errs on the side of safety.
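One quick way to probe that hypothesis (a sketch of a test, not part of the original demo): from the second session, issue the equivalent plain UPDATE instead of the MERGE. Since it touches only the non-key CHILD_DATA column, I'd expect it to sail through even while the parent row is locked:

UPDATE LOC_CHILD
SET    CHILD_DATA = NVL(CHILD_DATA, 0) + 1000
WHERE  CHILD_SEQ = 500
AND    LOC_ID    = 2;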

Monitoring a Microsoft OS Failover Cluster

I know, I know-  none of you are using Microsoft Windows.  This is why I get so many questions on this topic and why there is so much interest in a white paper that no one thought I needed to write.  Well, while that pesky ol’ white paper is in review, I’m going to go onto a secondary topic of how to monitor a Microsoft Active/Passive cluster with Enterprise Manager 12c, release 4.

There are some disclaimers I’m going to put in this blog post-

1.  Monitoring a clustered host is not explicitly stated as unsupported with EM12c, but there is also some documentation that I'm still poring over that is a bit contradictory… :)

2.  I am only covering MS Windows 2008 R2 server with this article.  It will be quite simple to cover MS Windows 2012 server, but no guarantees on 2003, as the security model for clustering, etc. is very different in the earlier releases.

3.  This is a blog post.  It’s not a MOS note, it’s not a white paper, so remember, this is just me blogging on how I accomplished this and there is no official support documentation behind this topic.

And so here we go!

The concept of OS level clustering can be very foreign to DBAs who've spent much of their time with application level clustering, (i.e. RAC/Real Application Clusters) but I believe it is important for us to understand different clustering models, along with their benefits and drawbacks.

Microsoft OS level clustering in an Active/Passive configuration includes the following basic, physical design-

  • Two hosts, identified by a physical name and physical IP Address
  • A Virtual Cluster name, along with Virtual Cluster address.
  • A quorum disk, (similar to a voting disk in RAC)
  • Shared storage

The overall design appears similar to the following when in an active/passive mode with the Management agent installed with two databases monitored:

[Diagram: failover_agent - active/passive cluster with the management agent and two monitored databases on shared storage]

The most important thing you need to recognize is that no matter what the HOSTS are named or their IP addresses, the databases and the management agent, (along with the listener, which I didn't include in my diagram!) are all configured with the VIRTUAL CLUSTER NAME.  They do not recognize the host names at all and are only monitoring and running on the ACTIVE host via the VIRTUAL cluster name.

The installation is performed through the same method as for any other Windows host.  I recommend a silent installation, even with EM12c Release 4, (12.1.0.4), using a response file to enter the commands and ensure you are completing all information as you normally would, but with the cluster information.

1.  The first requirement is your cluster MUST BE STABLE.

If a Windows Server cluster is not stable, don’t proceed.  Correct whatever DNS, Active Directory or shared storage issues exist before proceeding to install Oracle or the agent.  This will save you a lot of headaches.

2.  A Failover Group should exist to add the Management Agent to.

If an application failover group doesn't already exist in the Microsoft Failover Cluster Administration Manager, then you will need to request a virtual host name and virtual IP address to be used for the Central Access Point, (CAP).  This is the group service that will manage the failover for the databases, etc.  (If databases already exist, this group should exist too, and all you will do is add the agent to it.)

Test out all cluster connectivity via Name Server (nslookup) commands, resolving both physical hosts, the virtual cluster name and the virtual IP address:

nslookup <physical host 1>
nslookup <physical host 2>
nslookup <virtual cluster name>
nslookup <virtual IP address>

Install Software

The management agent is going to be installed on the shared storage, which means it will only know about the active host in the cluster, (disclaimer alert!).  To perform the installation, we are going to use a new method of PsExec with Windows Servers.

Download Agent Software

Check your software library for the correct version of the agent and download it via EM CLI commands:

emcli get_supported_platforms
emcli get_agentimage -destination=G:/swlib/ -platform="Microsoft Windows x64 (64-bit)"

Exit from EM CLI, unzip the file and prep for installation.

Setting up PsExec on the OMS

Create the folder you wish to install to: C:\agent12c

Download the agentDeployPsExec.bat file and the PsExec utility to a folder, C:\psexec, following the instructions in DOC ID 1636851.1.

Create the psexec.rsp response file with the following information and save it to the C:\psexec directory with the other files:

HOST_NAMES=<physical host name>
USER_NAME=<user with admin rights on the target>
PASSWORD=<password>
AGENT_IMAGE_PATH=C:\agent12c_ins
AGENT_BASE_DIR=C:\agent12c
OMS_HOST=<OMS host name>
EM_UPLOAD_PORT=<upload port>
AGENT_REGISTRATION_PASSWORD=<agent registration password>
PSEXEC_DIR=C:\psexec

If you are unsure of the values for your upload port, etc., run the following on the OMS:

emctl status oms -details

It requires the SYSMAN password and will return all information pertinent to your EM environment.

Once you have this all filled in, you are ready to deploy from the OMS to the 1st node of the OS clustered server.

Run the following:

agentDeployPsExec.bat PROPERTIES_FILE=<path to response file>
C:\psexec>agentDeployPsExec.bat PROPERTIES_FILE=C:\psexec\psexec.rsp

C:\psexec>echo off
===================================================================
Agent deployment started on host : host1.us.oracle.com

Creating installation base directory ...

Note:  You must have remote admin privileges on the target host to perform this successfully.  If the account in the response file does NOT have privileges granted to create directories, start remote services and such, it will fail.

It will take some time to deploy the agent; once complete, it will show the following:

Agent deployed successfully.

Duplicate registry and services to second host

On the first host of the failover cluster, click on Start, then Run, and type in regedit.exe.

Go to HKEY_LOCAL_MACHINE\SOFTWARE\oracle and right click on the folder and choose Export.  Save the registry file in a secure location.
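If you prefer the command line, the same export can be done with the built-in reg.exe utility (a sketch; the destination path here is just an example):

reg export "HKLM\SOFTWARE\Oracle" C:\temp\oracle_agent.reg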

FTP the registry file to the second host.

Log into the second host and double click on the registry file.  It will ask you if you are sure you want to copy the registry key, click OK.

Create the service

The service for the agent must now be duplicated on the second host.  This is OS level clustering, so no application level service creation should be performed, (emctl, emcli, oradim…).  Use sc commands, (Windows Service Control) to create the service.

Open up a command prompt in administrator mode and then duplicate the OracleAgent12c1 service.

The syntax for creating the service is as follows:

sc create <service name> binPath= "<path to service executable>" start= auto

By opening up Windows services on the first host, you can go to the OracleAgent12c1 service and double click on it to view the values you need for the command above:

[Screenshot: ora_agent - OracleAgent12c1 service properties]

Run the sc command on the second host to create the support service for the installation performed on the first host.
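To sanity-check the result, sc can also display the configuration and state of the new service (standard Windows Service Control syntax; the service name matches the example below):

sc qc Oracleagent12c1Agent
sc query Oracleagent12c1Agent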

Add the Agent to the Failover Group

In the Server Manager, go to the Failover Management and open up the Failover group:

[Screenshot: failover_cluster_1 - Failover Cluster Manager showing the failover group]

Right click on the group, choose Add a resource, and then Generic Service.  You can then choose the agent service listed, (for the example above, Oracleagent12c1Agent) and follow through with the defaults to finish.

The service for the agent is now set to be cluster aware and will fail over if the first host it was installed on becomes unavailable for some reason.

Failover Validation

The installation is now complete on the shared storage of the failover cluster for the first host, and you've now copied over the registry settings and duplicated the service, so you are ready to test the failover and ensure the agent is successful.

There are a couple ways to test the failover:

1.  Reboot the first host-  this will cause a failover.

2.  Right click on the Failover group and click on More Actions, then Simulate failure of this resource.

You should now see the drives and services, including the agent, fail over and start on the second host.  Verify that all services come online and log into the EMCC, (EM12c console) to verify uploading occurs to the OMS.  Verify that all the targets you added are showing correctly for the virtual hostname.

Test failover multiple times to both hosts.  If a failure occurs on the second host, check the services, comparing the second host to the first, and/or check the startup dependencies of your Failover group.

Drawbacks

The clear drawback of OS level clustering through the agents is that only one host is being monitored at a time.  As the targets, (MSSQL database, applications, etc.) that are being monitored are active on only one host at a time, there would be manual intervention required if dual agents were deployed.

A workaround, in the form of a monitoring script that pings the hosts at all times and alerts only if no response is received, provides a second level of host availability monitoring.

I'm also investigating the option of Failsafe with EM12c, (not currently supported) and the new Partner Agents to see if there are more opportunities for monitoring OS level clustering.

 

 

 

 




In-memory limitation

I’ve been struggling to find time to have any interaction with the Oracle community for the last couple of months – partly due to workload, partly due to family matters and (okay, I’ll admit it) I really did have a few days’ holiday this month. So making my comeback with a bang – here’s a quick comment about the 12.1.0.2 in-memory feature, and how it didn’t quite live up to my expectation; but it’s also a comment about assumptions, tests, and inventiveness.

One of the 12.1.0.2 manuals tells us that the optimizer can combine the in-memory columnar storage mechanism with the “traditional” row store mechanisms – unfortunately it turned out that this didn’t mean quite what I had hoped; I had expected too much of the first release. Here’s a quick demo of what doesn’t happen, what I wanted to happen, and how I made it happen, starting with a simple definition (note – this is running 12.1.0.2 and the inmemory_size parameter has been set to enable the feature):


create table t1 nologging
as
select	*
from	all_objects
where	rownum <= 50000
;

alter table t1 inmemory
no inmemory (object_id, object_name)
inmemory memcompress for query low (object_type)
-- all other columns implicitly inmemory default
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

rem
rem	Needs select on v_$im_column_level granted
rem

select
	table_name,
	column_name,
	inmemory_compression
from
	v$im_column_level
where	owner = user
and	table_name = 'T1'
order by
	segment_column_id
;

explain plan for
select
	last_ddl_time, created
from
	t1
where	t1.created > trunc(sysdate)
and	t1.object_type = 'TABLE'
and	t1.subobject_name is not null
;

select * from table(dbms_xplan.display);

All I’ve done at this point is create a table with most of its columns in-memory and a couple excluded from the columnar store. This is modelling a table with a very large number of columns where most queries are targeted at a relatively small subset of the data; I don’t want to have to store EVERY column in-memory in order to get the benefit of the feature, so I’m prepared to trade lower memory usage in general against slower performance for some queries. The query against v$im_column_level shows me which columns are in-memory, and how they are stored. The call to explain plan and dbms_xplan then shows that a query involving only columns that are declared in-memory could take advantage of the feature. Here’s the resulting execution plan:

-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |     1 |    27 |    73   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| T1   |     1 |    27 |    73   (9)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - inmemory("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))
       filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

Note that the table access full includes the inmemory keyword, and the predicate section shows the predicates that have taken advantage of in-memory columns. The question is – what happens if I add the object_id column (which I’ve declared as no inmemory) to the select list? Here’s the resulting plan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    32 |  1818   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    32 |  1818   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

There’s simply no sign of an in-memory strategy – it’s just a normal full tablescan (and I didn’t stop with execution plans, of course, I ran other tests with tracing, snapshots of dynamic performance views etc. to check what was actually happening at run-time).
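(As a quick cross-check, something like the following can be used: v$im_segments confirms the segment really is populated in the column store, and the session statistics whose names start with “IM scan rows” show whether a query actually scanned it.)

select segment_name, populate_status, bytes_not_populated
from   v$im_segments
where  segment_name = 'T1';

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name like 'IM scan rows%';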

In principle there’s no reason why Oracle couldn’t use the in-memory columns that appear in the where clause to determine the rowids of the rows that I need to select and then visit the rows by rowid but (at present) the optimizer doesn’t generate a plan to do that. There’s no reason, though, why we couldn’t try to manipulate the SQL to produce exactly that effect:


explain plan for
select
        /*+ no_eliminate_join(t1b) no_eliminate_join(t1a) */
        t1b.object_id, t1b.last_ddl_time, t1b.created
from
        t1 t1a, t1 t1b
where   t1a.created > trunc(sysdate)
and     t1a.object_type = 'TABLE'
and     t1a.subobject_name is not null
and     t1b.rowid = t1a.rowid
;

select * from table(dbms_xplan.display);

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    64 |    74   (9)| 00:00:01 |
|   1 |  NESTED LOOPS               |      |     1 |    64 |    74   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |     1 |    31 |    73   (9)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1   |     1 |    33 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))
       filter("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))

I’ve joined the table to itself by rowid, hinting to stop the optimizer from getting too clever and eliminating the join. In the join I’ve ensured that one reference to the table can be met completely from the in-memory columns, isolating the no inmemory columns to the second reference to the table. It is significant that the in-memory tablescan is vastly lower in cost than the traditional tablescan – and there will be occasions when this difference (combined with the knowledge that the target is a relatively small number of rows) means that this is a very sensible strategy. Note – the hints I’ve used happen to be sufficient to demonstrate the method, but I’d be much more thorough in a production system (possibly using an SQL baseline to fix the execution plan).

Of course, this method is just another example of the “visit a table twice to improve the efficiency” strategy that I wrote about a long time ago; and it’s this particular variant of the strategy that allows you to think of the in-memory columnar option as an implementation of OLTP bitmap indexes.

Oracle Database 12c In-Memory Feature – Part V. You Can’t Use It If It’s Not “Enabled.” Not Being Able To Use A Feature Is An Important “Feature.”

This is part 5 in a series: Part I, Part II, Part III, Part IV, Part V.

Synopsis

This blog post is the last word on the matter.

Enabled?  It’s About Usage!

You don’t get charged for Oracle feature usage unless you use the feature. So why does Oracle inconsistently use the word enabled when we care about usage? If enabled precedes usage then enabled is a sanctified term. Please read on…

It’s All About Getting The Last Word? No, It’s About Taking Care Of Customers.

On August 6, 2014, Oracle shared their last word and official statement on the matter of the bug-ridden tracking of Oracle Database 12c In-Memory feature usage in a quote to the press at CBR. I’ll paraphrase first and then quote the article. Here is what I hear when I read the words of Oracle’s spokesman:

Yeah, my bad, we have a bug. The defective code erroneously tracks feature usage for an Enterprise Edition additional cost option priced at $23,000 per processor core. Don’t worry. When we track this particular feature usage we’ll ignore it should you be audited. You have our spoken word that we’ll just shine this one on. Here, let me trade a few confusing words about usage without using the word enabled or disabled since those are taboo.

My paraphrase probably draws a more serene picture than the visions of tip-toeing and side-stepping conjured up by the following words I’ll quote from the CBR article. Bear in mind the fact that the bug spoken of in the quote is 19308780–a bug, by the way, that is not readable by maintenance contract holders. Now I’ll quote the article:

Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October.

Yes, we knew it was a bug. I merely had to do the hard work of getting Oracle to acknowledge it. The article continued with the following quote. Please ignore the fact that Oracle’s spokesman refers to me simply as “Kevin.” Focus instead on the fact that throughout parts 1 through 4 in my series I suffered erroneous feature usage reporting because of a bug (software defect). I quote:

Kevin initially claimed that feature tracking could report In-Memory usage, and therefore impact licensing, without the end-user doing anything. This was and is still not the case. Customer licensing of Oracle Database In-Memory is not impacted by the bug that Maria notes in her blog. When an end-user explicitly undertakes actions to set the INMEMORY attribute on a table but the In-Memory column store has not been allocated (by setting the inmemory_size parameter to a non zero value), the bug results in feature tracking incorrectly reporting In-Memory ‘in use’. However as no column store has been allocated, the feature is not in use and therefore there is no licensing impact.

 

Ah yes. The old “it’s not in use but it reports it’s in use” situation. That could have been conveyed in very short sentences…could have.

Since the bug spoken of in the above quote is not visible to contract holders I’m just going to let you mull over the circular logic.  This whole situation could be a lot simpler if Oracle would either a) make the bug description visible to contract holders so customers know what is broken and how to test whether it got fixed when the patch is eventually applied and/or b) add this defect to MOS 1309070.1, the note that tracks all the other bugs in feature usage reporting. Yes, indeed, there are other bugs of this sort with other features. All software has bugs.
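In the meantime, if you want to see what your own database is reporting, usage tracking is visible in DBA_FEATURE_USAGE_STATISTICS. Something like the following works (the exact feature name varies by release, and the view is only refreshed periodically, roughly weekly, so recent activity may not show immediately):

select name, version, detected_usages, currently_used
from   dba_feature_usage_statistics
where  name like 'In-Memory%';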

Last Word On The Matter

My last word on the matter has to do with the fact that the feature cannot be unlinked. It is a very expensive – and very useful, important – feature. As I pointed out in Part II, the feature cannot be absolutely disabled at the executable level the way other high cost options like Real Application Clusters and Partitioning can. I think Oracle is trying to tell us it is impossible computer science to make it an unlinkable feature – at least that’s how I interpret the following words in a blog post at Oracle.com:

Oracle Database In-Memory is not a bolt on technology to the Oracle Database. It has been seamlessly integrated into the core of the database as a new component of the Shared Global Area (SGA). When the Oracle Database is installed, Oracle Database In-Memory is installed. They are one and the same. You can’t unlink it or choose not to install it.

Now maybe this is not saying there is no way to code the feature as unlinkable. Maybe it’s saying the choice was made not to make it unlinkable. I don’t know. If, however, we are to believe that the mere fact the feature uses the SGA makes it some sort of atomic-level symbiotic parasite, well, that argument doesn’t hold water. Indeed, Real Application Clusters is massively integrated with the SGA. Ever heard of Cache Fusion? With Cache Fusion data blocks get shuttled from one SGA to another across hosts in a cluster! Real Application Clusters is unlinkable–that’s unthinkable!

 

What Is Unlinkable Anyway

There might be folks who don’t know what we mean when we say a feature is unlinkable. This doesn’t mean all the code for the feature is yanked out of the binary. It simply means that a single binary object–or perhaps a few–is linked into the Oracle executable to enable the feature. If unlinked, there is absolutely no way to use the feature–as is the case with, for instance, Real Application Clusters.

And not being able to use the feature is an important feature!

So let’s ponder the insurmountable computer science that must surely be involved in implementing the In-Memory Column Store feature as unlinkable.

Oracle has told us the INMEMORY_SIZE initialization parameter is the on/off button for the feature. That means there is a single, central on/off button that is, indeed, able to be manipulated even by the user. Can you imagine how difficult it must be to implement a global variable–even a simple boolean–that gets linked in and checked when one boots the database? Not hard to grasp. What if the variable had a silly name like inmemory_deactivated? What if the feature activation module–let’s call it inmem.o–had inmemory_deactivated=TRUE but an alternate module called inmemON.o had inmemory_deactivated=FALSE? In much the same way we relink Real Application Clusters, the link scripts manipulate the file name so that the default (with feature deactivated) gets replaced with the activated module–only if the user wants the possibility of using the feature. How would all this deep, dark, complex code come together? Well, when the database instance is booted inmemory_deactivated is evaluated and, regardless of the user’s setting of INMEMORY_SIZE, the In-Memory feature is really, truly disabled–and most importantly not usable. No possibility for confusion. Would that be better than a game of Licensed-Feature Usage Prevention Twister(tm)?

[Image: 1966 Twister game box cover]

Intensely Deep Engineering Difficulty

Now, imagine that. We didn’t even have to use the back of a cocktail napkin to draw out a solution to the mysteries behind how utterly unlinkable the In-Memory Database feature must surely be. We simply a) drew upon our understanding of other SGA-integrated features like Real Application Clusters, b) recalled how unlinking works for other features and c) drew upon our basic level understanding of the C programming language vis a vis global variables and object linking.

Let me summarize all that: There is a single user-modifiable boot-time parameter that disables In-Memory Database as per Oracle’s blog and spokesman assertions. Um, that’s a pretty simple focal point to make the feature unlinkable.
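For completeness, here is that focal point in action: a sketch of the only documented off switch. The parameter is static, so the change takes effect at the next instance restart, and with a zero-sized column store nothing can be populated (though the code remains linked into the executable):

alter system set inmemory_size = 0 scope = spfile;
shutdown immediate
startup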

Summary

Yes, Oracle could implement a method for making the In-Memory Column Store feature an unlinkable option just like they did for Real Application Clusters. I can only imagine why they chose not to (visions of USD $23,000 per processor core).


Exadata Zone Maps

Just a quick post on a new Exadata feature called Zone Maps. They’re similar to storage indexes on Exadata, but with more control (you can define the columns and how the data is refreshed, for example). People have complained for years that storage indexes provided no control mechanisms, but now we have a way to exert our God given rights as DBAs to control yet another aspect of the database. Here’s a link to the 12.1.0.2 documentation, which resides in the Data Warehousing Guide: Zone Map Documentation
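As a taste of that control, the basic creation syntax lets you name the columns yourself. A sketch on a hypothetical SALES table, using the same form as the real examples later in this post:

create materialized zonemap sales_zmap on sales (cust_id, prod_id);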

Zone Maps are restricted to Exadata storage, by the way (well, probably they work on ZFS and Pillar too). Have a look at the Oracle error messages file:

 
>grep -i "storage type" $ORACLE_HOME/rdbms/mesg/oraus.msg | grep -i "not supported"
 
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/mesg/oraus.msg:31969, 00000, "ZONEMAP not supported for table stored in tablespace of this storage type"
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/mesg/oraus.msg:64307, 00000, " Exadata Hybrid Columnar Compression is not supported for tablespaces on this storage type" 
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/mesg/oraus.msg:64309, 00000, " Hybrid Columnar Compression with row-level locking is not supported for tablespaces on this storage type."
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/mesg/oraus.msg:65425, 00000, "CLUSTERING clause not supported for table stored in tablespace of this storage type"
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/mesg/oraus.msg:65451, 00000, "Advanced index compression is not supported for tablespaces on this storage type."

So according to the messages file, there are a handful of features that are restricted in this fashion (Zone Maps, HCC, Attribute Clustering and Advanced Index Compression).

As a bit of totally irrelevant history, zone maps were actually included in the 12.1.0.1 release, but the documentation on them was removed. So they worked, but they were undocumented.

Here’s an example on a 12.1.0.1 DB on a non-Exadata platform.

 
SQL*Plus: Release 12.1.0.1.0 Production on Wed Aug 13 15:41:46 2014
 
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
 
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
LAB1211          13-AUG-2014 09:54 13-AUG-2014 15:41     .24      20820
 
SYS@LAB1211> create table kso.junk1 (col1 number, col2 number) clustering by linear order (col1,col2);
create table kso.junk1 (col1 number, col2 number) clustering by linear order (col1,col2)
*
ERROR at line 1:
ORA-65425: CLUSTERING clause not supported for table stored in tablespace of this storage type
 
 
SYS@LAB1211> create table kso.junk1 (col1 number, col2 number);
 
Table created.
 
SYS@LAB1211> create materialized zonemap skew_zonemap on kso.junk1(col1);
create materialized zonemap skew_zonemap on kso.junk1(col1)
                                                          *
ERROR at line 1:
ORA-31969: ZONEMAP not supported for table stored in tablespace of this storage type

Note that both zone maps and attribute clustering were disallowed with the “not supported for table stored in tablespace of this storage type” error message.

By the way, attribute clustering is another interesting new feature of 12c that allows you to declaratively instruct Oracle to store data on disk in a sorted order. This physical ordering can have a big benefit for storage indexes or zone maps (or any btree index where clustering factor is important, for that matter). Oracle’s new In-Memory column store also has a min/max pruning feature (storage indexes) which means physical ordering on disk is important with that feature as well.

Anyway, here’s a link to the 12.1.0.2 documentation on attribute clustering which also resides in the Data Warehousing Guide: Attribute Clustering Documentation

And here’s another example using 12.1.0.2 on an Exadata.

 
SQL*Plus: Release 12.1.0.2.0 Production on Wed Aug 13 15:42:18 2014
 
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
 
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
INMEM            24-JUL-2014 18:35 13-AUG-2014 15:42   19.88    1717600
 
Elapsed: 00:00:00.00
SYS@INMEM> @test_zonemap
SYS@INMEM> create table kso.junk1 (col1 number, col2 number) clustering by linear order (col1,col2);
 
Table created.
 
Elapsed: 00:00:00.01
SYS@INMEM> create materialized zonemap skew_zonemap on kso.junk1(col1);
 
Materialized zonemap created.
 
Elapsed: 00:00:00.15
SYS@INMEM> 
SYS@INMEM> -- so as expected, we're able to create an attribute clustered table and a zone map on Exadata
SYS@INMEM> 
SYS@INMEM> -- Let's try creating a tablespace that is not on Exa storage (even though the DB is on EXA platform)
SYS@INMEM> 
SYS@INMEM> create tablespace KSO_NON_EXA datafile '/home/oracle/KSO_NON_EXA.dbf' size 100M;
 
Tablespace created.
 
Elapsed: 00:00:00.38
SYS@INMEM> @tablespaces
 
TABLESPACE_NAME STATUS    CONTENTS  LOGGING   EXTENT_MGT ALLOC_TYP SPACE_MGT BLOCK_SIZE PREDICA
--------------- --------- --------- --------- ---------- --------- --------- ---------- -------
CLASS_DATA      ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    AUTO            8192 STORAGE
EXAMPLE         ONLINE    PERMANENT NOLOGGING LOCAL      SYSTEM    AUTO            8192 STORAGE
KSO_NON_EXA     ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    AUTO            8192 HOST      <=== 
SYSAUX          ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    AUTO            8192 STORAGE
SYSTEM          ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    MANUAL          8192 STORAGE
TEMP            ONLINE    TEMPORARY NOLOGGING LOCAL      UNIFORM   MANUAL          8192 STORAGE
UNDOTBS1        ONLINE    UNDO      LOGGING   LOCAL      SYSTEM    MANUAL          8192 STORAGE
USERS           ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    AUTO            8192 STORAGE
 
8 rows selected.
 
Elapsed: 00:00:00.02
SYS@INMEM> 
SYS@INMEM> -- note that tablespace KSO_NON_EXA is on local disk, not Exadata storage servers, so PREDICATE_EVALUATION is set to HOST.
SYS@INMEM> 
SYS@INMEM> drop table kso.junk1;
 
Table dropped.
 
Elapsed: 00:00:00.01
SYS@INMEM> create table kso.junk1 (col1 number, col2 number) clustering by linear order (col1,col2) tablespace kso_non_exa;
 
Table created.
 
Elapsed: 00:00:00.01
SYS@INMEM> select owner, table_name, tablespace_name from dba_tables where table_name like 'JUNK1';
 
OWNER                TABLE_NAME                     TABLESPACE_NAME
-------------------- ------------------------------ ---------------
KSO                  JUNK1                          KSO_NON_EXA
 
Elapsed: 00:00:00.01
SYS@INMEM> 
SYS@INMEM> -- wow - that's a bit of a surprise, clustered table create worked on non-Exa storage
SYS@INMEM> -- maybe the check is done on some other level than the tablespace
SYS@INMEM> 
SYS@INMEM> create materialized zonemap skew_zonemap on kso.junk1(col1);
create materialized zonemap skew_zonemap on kso.junk1(col1)
                                                          *
ERROR at line 1:
ORA-31969: ZONEMAP not supported for table stored in tablespace of this storage type
 
 
Elapsed: 00:00:00.00

So as you can see, attempting to create the zone map on non-Exa storage failed as expected. But I was able to create a clustered table on non-Exa storage, which is a little weird. So while the error message for attribute clustering exists in the messages file, it doesn’t appear that there is a check in the code, at least at the tablespace level. I don’t have a 12.1.0.2 install on a non-Exadata platform at the moment to test it out, but if you do, please let me know.

That’s it for now. I hope to do some more detailed posts on In-Memory, Zone Maps, Attribute Clustering in the near future. As always, your comments are welcomed.

Can you justify your data?

People ask me to justify the use of Delphix. I can understand. Delphix is pretty new, and often my friends who are DBAs respond with “I can copy a database, so what, I can do it a little faster with Delphix.” Well, that’s missing the whole boat. The question won’t be why you should use Delphix, but “can you justify working without Delphix?”

I see Delphix as amazingly positioned at the nexus of data concerns right now – the right place at the right time:

  • Data is exploding
  • Big Data is hyped up
  • Data management is getting harder
  • Compliance issues are mounting
  • Security issues are mounting
  • Consumerization of IT is pivotal

Delphix makes managing data easy and efficient, which means people get access to data faster, more easily and more cheaply – which is what everyone wants from the big data hype but rarely gets from big data “solutions”. Before people try to harness “big data”, which in my mind is the data that people haven’t even tried to harness yet, it would make more sense for them to harness the data they already understand. As an analyst I want fresh copies of data immediately. I don’t want it weeks old. I also want to work freely on it without worrying about corrupting it for others or impacting others’ usage of the data. With Delphix, analysts can get data for themselves, fresh data, immediately, do what they want with it and then throw it away.

This has enormous implications for compliance where companies are under more and more compliance pressure and may be called upon at any time to supply past versions of data for audits. Delphix makes it easy, cheap and fast to keep old versions of data and then to provision those versions in minutes.

This has implications in financial closes where identifying and correcting general ledger errors on live systems can be prohibitive. Delphix allows fast easy sandboxes.

Security is a huge issue. With all the copies of production databases in Dev, QA, UAT, Reporting, testing, sandboxes, etc., it can be extremely difficult to track them all. On the other hand, if all those copies point back to the data on Delphix, it’s easy to guarantee that it is all masked and to track the chain of custody.

As UIs, hardware, and software languages and frameworks advance, and as more and more people have access to technology, technology is exploding in areas where the UI is user friendly. The Web browser itself is a powerful example of that. People are demanding more and more consumerized interfaces. It makes sense. Why have arcane, obfuscated, difficult interfaces that are the domain of experts only and that, because they are clunky, lead to human error? Instead we are demanding and benefiting from the power of clear, fast, task oriented UIs that are easy to use and that guide and protect the user during the interaction. On the UI front, Delphix has just been added to http://usercentricit.com/.

Boys Telling Secrets

New Oracle Bug alert (Bug 19384287)

Heads up to all the folks running 11.2.0.4 and above if you're using function-based indexes! There's a new Oracle bug 19384287. I'll fill you in with a complete post over at Toad World.

Styling the Squarespace Flatiron

Squarespace's Flatiron template makes eye-grabbing use of image-intensive index pages. Figure 1 shows such an index page, and Figure 2 shows another view of the same page after clicking the top/middle index image. Is it possible to write CSS style rules to affect the two views differently? The answer is yes. 

Figure 1. Home view of a Flatiron index page

Figure 2. The same page after making a selection

I refer to Figure 1 as the home view, and to Figure 2 as the detail view. You can distinguish between the two by a class in the body tag. The following code shows that tag and the many class names that are mentioned within it in the home view. Squarespace adds a class named index-detail to denote the detail view shown in Figure 2.

<body class="page-index mobile-style-available site-alignment-center 
             site-titlelogo-position-right header-position-fixed 
             project-hover-zoom project-hover-panning project-squares 
             blog-borders show-author  social-icon-style-normal 
             show-category-navigation product-list-titles-overlay 
             product-list-alignment-center product-item-size-11-square 
             product-image-auto-crop product-gallery-size-11-square 
             product-gallery-auto-crop show-product-price 
             show-product-item-nav product-social-sharing   
             event-thumbnails event-thumbnail-size-32-standard 
             event-date-label event-date-label-time event-excerpts 
             event-list-date event-list-time event-list-address 
             event-icalgcal-links      opentable-style-light 
             newsletter-style-dark small-button-style-solid 
             small-button-shape-square medium-button-style-solid 
             medium-button-shape-square large-button-style-solid 
             large-button-shape-square button-style-default 
             button-corner-style-square native-currency-code-usd 
             collection-layout-default collection-type-index 
             collection-53dfdf00e4b0f730baf5c6df view-list 
             homepage" 
      id="collection-53dfdf00e4b0f730baf5c6df">

What's tricky is that you'll see the exact same body tag for both views when you view the HTML source for the pages behind Figures 1 and 2. The index-detail class is added by JavaScript when you click an index image, and is not visible when you view the page source. It is visible, however, when you inspect the body element using your browser's object inspector. Figure 3 shows the view from the inspector, and you can see the index-detail class highlighted in red.

Figure 3. The index-detail class added to the body element

Knowing about the index-detail class allows you to style your detail view differently from your home view. For example, you can inhibit the display of the index grid in the detail view by adding the following rule to your Custom CSS Editor:

.index-detail #grid {display: none !important}

You can also work things in reverse by making changes to the style of the home view without affecting the detail view. For example, you can inhibit the display of your site's footer from the home view while leaving the footer visible from all the other views and pages in your site. Do that by writing the following rule:

.homepage:not(.index-detail) #bottomBar {display: none;}

Figure 4 shows the effect of the rule to inhibit the display of the index grid from the detail view. (Compare with Figure 2 earlier). Figure 5 shows the effect of the rule to inhibit the display of the footer. The footer is visible in Figure 4's detail view, but is now omitted from Figure 5's home view.

Figure 4. The detail view with no grid underneath

Figure 5. The home view with no footer

The rule inhibiting the footer uses a CSS selector you may not be familiar with. Here's a step-by-step description of how that rule is put together:

  1. .homepage refers to a class identifying my home page, which in this case is an index page. Display of the footer will be inhibited only on that page.
  2. :not(.index-detail) further restricts the rule to the home page when not in detail view. The rule thus applies to the specific case of the home page in home view.
  3. The rest of the rule is straightforward. #bottomBar is the identifier of Flatiron's footer. Its display attribute is set to none when the rule is triggered.

Writing CSS for Flatiron sites can be tricky because clicking an image from an index page's home view never really takes you to a new page. You're always on the same page, and seeing different views of that page's content. Knowing about the index-detail class gives you what you need to be able to style the two views differently from one another.

Tip: The design of Flatiron index pages has implications when posting links to Facebook. Read my post on Facebooking the Squarespace Flatiron to learn what to do when you want to post a link to an individual gallery item and have the correct image and summary text display on your Facebook wall. 

Learn CSS for Squarespace


Learn CSS for Squarespace is a short, seven-chapter introduction to working with CSS on the Squarespace platform. The book is delivered as a mobi file that is readable from the Kindle Fire HDX or similar devices. A complimentary sample chapter is available. Try before you buy.


Gmail and the Mystery of Auntie Spam

Gmail is sometimes too aggressive about sending messages to spam. This can be especially frustrating when you're a member of an email discussion group and find that you're missing out because some of those discussions are bypassing your inbox and going straight to your spam folder. Fret no longer! There is a solution.

The solution lies in creating what is termed a filter. You can prevent list messages from going to spam, and you can label and organize them at the same time.

Begin by reading a list message from Gmail's web interface. Then select Filter messages like these from the More menu at the top of the message. Figure 1 shows that menu, and the item you want to choose is the last one in the list. 

Figure 1. Selecting to add a filter

A dialog will open like that in Figure 2. Here is where you tell Gmail how to identify incoming messages as being list messages. Many email discussion lists are configured to specify a list-specific address in the To line. Look at your list messages. See whether they are all sent to the same address. Type that address into the To field in Figure 2's dialog. 

Figure 2. Identifying messages as being list messages

You've given Gmail the criteria by which to identify incoming messages as being from the list. Now click the Continue link to reach the dialog in Figure 3. Here is where you tell Gmail how to respond when a list message is received.

Figure 3. Specifying Gmail's response to each incoming message from the list

Figure 3 shows the settings that I personally use to manage one of my own lists. These settings do the following:

  • Skip the Inbox reduces clutter in my inbox by archiving the list mail upon receipt. I don't see the messages until I specifically sit down to read them. 
  • Apply the label tags each message with an identifying label. That label appears as a link to the left side of the Gmail interface. I can see all the list messages anytime I want by clicking on that label link. 
  • Never send it to Spam is the magic sauce for never missing a discussion. Check this box to prevent list mail from ever being sent to your spam folder. 

Never send it to Spam is the one option you want to be sure to select so that you never lose a message to spam. What you do with the other two options depends upon your approach to organizing your email. 

Caution! If you are a Gmail user who reads his email through Microsoft Outlook, then be aware that skipping the inbox may mean that Outlook does not download those messages. In that case, avoid selecting Skip the Inbox when creating a filter. 

You can make different choices for different lists to accommodate your preferences. I have one list set to skip my inbox, and another list that I allow to go into my inbox so that I am alerted immediately to each incoming message. I make different choices for different lists. You can too, and that level of control helps make your inbox a lot more manageable.