Oakies Blog Aggregator

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]

Sqlplus is my second home, part 8: Embedding multiple sqlplus arguments into one variable

I’ve updated some of my ASH scripts to use these 4 arguments in a standard way:

  1. What ASH columns to display (and aggregate by)
  2. Which ASH rows to use for the report (filter)
  3. Time range start
  4. Time range end

So this means that whenever I run ashtop (or dashtop), for example, I need to type in all 4 parameters. The example below shows the top SQL_IDs for user SOE sessions, using the last hour of ASH samples:

SQL> @ashtop sql_id username='SOE' sysdate-1/24 sysdate

  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
     2271      .6   21% | 56pwkjspvmg3h 2015-03-29 13:13:16 2015-03-29 13:43:34               145
     2045      .6   19% | gkxxkghxubh1a 2015-03-29 13:13:16 2015-03-29 13:43:14               149
     1224      .3   11% | 29qp10usqkqh0 2015-03-29 13:13:25 2015-03-29 13:43:32               132
      959      .3    9% | c13sma6rkr27c 2015-03-29 13:13:19 2015-03-29 13:43:34               958
      758      .2    7% |               2015-03-29 13:13:16 2015-03-29 13:43:31                 1

When I want more control and specify a fixed time range, I can just use the ANSI TIMESTAMP (or TO_DATE) syntax:

SQL> @ashtop sql_id username='SOE' "TIMESTAMP'2015-03-29 13:00:00'" "TIMESTAMP'2015-03-29 13:15:00'"

  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      153      .2   22% | 56pwkjspvmg3h 2015-03-29 13:13:29 2015-03-29 13:14:59                 9
      132      .1   19% | gkxxkghxubh1a 2015-03-29 13:13:29 2015-03-29 13:14:59                 8
       95      .1   14% | 29qp10usqkqh0 2015-03-29 13:13:29 2015-03-29 13:14:52                 7
       69      .1   10% | c13sma6rkr27c 2015-03-29 13:13:31 2015-03-29 13:14:58                69
       41      .0    6% |               2015-03-29 13:13:34 2015-03-29 13:14:59                 1

Note that arguments 3 & 4 above are in double quotes as there’s a space within each timestamp value. Without the double quotes, sqlplus would think the script had a total of 6 arguments because of those spaces.
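A rough illustration of the splitting (plain whitespace splitting in Python, not sqlplus itself, but the effect is the same):

```python
# Without double quotes, the space inside each timestamp splits it in two,
# so sqlplus sees 6 positional arguments instead of 4.
unquoted = "sql_id username='SOE' TIMESTAMP'2015-03-29 13:00:00' TIMESTAMP'2015-03-29 13:15:00'"
print(len(unquoted.split()))  # 6 pieces - each timestamp broke in two

# With double quotes, each timestamp survives as a single argument:
quoted = ["sql_id", "username='SOE'",
          "TIMESTAMP'2015-03-29 13:00:00'", "TIMESTAMP'2015-03-29 13:15:00'"]
print(len(quoted))            # 4 arguments, as the script expects
```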

I don’t like to type too much though (every character counts!), so I was happy to see that the following sqlplus hack works. I just defined pairs of arguments as sqlplus DEFINE variables, as seen below (also in init.sql now):

  -- geeky shortcuts for producing date ranges for various ASH scripts
  define      min="sysdate-1/24/60 sysdate"
  define   minute="sysdate-1/24/60 sysdate"
  define     5min="sysdate-1/24/12 sysdate"
  define     hour="sysdate-1/24 sysdate"
  define   2hours="sysdate-1/12 sysdate"
  define  24hours="sysdate-1 sysdate"
  define      day="sysdate-1 sysdate"
  define    today="TRUNC(sysdate) sysdate"
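The fractional-day arithmetic behind those shortcuts is easy to sanity-check (a quick Python sketch, since Oracle date arithmetic treats 1 as one day):

```python
# Each DEFINE above is just "sysdate minus a fraction of a day".
DAY = 24 * 60 * 60  # seconds per day

shortcuts = {
    "min":    1 / 24 / 60,  # one minute
    "5min":   1 / 24 / 12,  # five minutes
    "hour":   1 / 24,       # one hour
    "2hours": 1 / 12,       # two hours
    "day":    1,            # one day
}

for name, fraction in shortcuts.items():
    print(f"{name:>7}: {fraction * DAY:6.0f} seconds")
```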

And now I can type just 3 arguments instead of 4 when I run some of my scripts and want some predefined behavior like seeing last 5 minutes’ activity:

SQL> @ashtop sql_id username='SOE' &5min

  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      368     1.2   23% | gkxxkghxubh1a 2015-03-29 13:39:34 2015-03-29 13:44:33                37
      241      .8   15% | 56pwkjspvmg3h 2015-03-29 13:40:05 2015-03-29 13:44:33                25
      185      .6   12% | 29qp10usqkqh0 2015-03-29 13:39:40 2015-03-29 13:44:33                24
      129      .4    8% | c13sma6rkr27c 2015-03-29 13:39:35 2015-03-29 13:44:32               129
      107      .4    7% |               2015-03-29 13:39:34 2015-03-29 13:44:33                 1

That’s it, I hope this hack helps :-)

By the way – if you’re a command line & sqlplus fan, check out the SQLCL command line “new sqlplus” tool from the SQL Developer team! (you can download it from the SQL Dev early adopter page for now).


Video Tutorial: XPLAN_ASH Active Session History - Part 4

The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.

In-memory – can you REALLY drop those indexes?

It’s not a hard sell to convince anyone that a whopping huge great chunk of compressed column-based memory is going to assist with those analytic-style queries.  The In-memory option seems a natural fit, especially for those people where the bottleneck in their infrastructure is the capacity to consume data from storage.

What is perhaps a more interesting area of investigation is the claim that In-memory will also be of benefit in OLTP systems, where (to paraphrase many presenters at OpenWorld 2014) we won’t be needing all those indexes on transactional tables.

So I thought I’d have an initial play around in that space, and see what happens.

We’ll start with a simple test bed – a transactional table "T", being a child of parent table "PAR",  with a few indexes representing those "additional" OLTP indexes.

SQL*Plus: Release Production on Sat Mar 28 11:58:03 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> drop table PAR cascade constraints purge;

Table dropped.

SQL> create table PAR ( p int primary key);

Table created.

SQL> insert into PAR
  2  select rownum-1 from dual
  3  connect by level <= 100;

100 rows created.


SQL> commit;

Commit complete.


SQL> exec dbms_stats.gather_table_stats('','PAR');

PL/SQL procedure successfully completed.


SQL> drop table T purge;

Table dropped.


SQL> create table T (
  2    pk int,
  3    dist50  int,
  4    dist1000 int,
  5    dist100000 int,
  6    stuff char(30),
  7    numeric_stuff number(10,2)
  8  );

Table created.


SQL> drop sequence seq;

Sequence dropped.

SQL> create sequence seq cache 1000;

Sequence created.


SQL> insert /*+ APPEND */ into T
  2  select
  3    seq.nextval,
  4    trunc(dbms_random.value(1,50)),
  5    trunc(dbms_random.value(1,1000)),
  6    trunc(dbms_random.value(1,100000)),
  7    'x',
  8    dbms_random.value(1,1000)
  9  from
 10  ( select 1 from dual connect by level <= 1000 ),
 11  ( select 1 from dual connect by level <= 1000 )
 12  /

1000000 rows created.


SQL> commit;

Commit complete.


SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.


SQL> alter table T add primary key ( pk );

Table altered.

SQL> create index IX1 on T ( dist50 );

Index created.

SQL> create index IX2 on T ( dist1000 );

Index created.

SQL> create index IX3 on T ( dist100000 );

Index created.


SQL> alter table T add constraint FK foreign key ( dist50 )
  2  references PAR ( p );

Table altered.

So at this point, we’ve got our table pre-populated with 1 million rows, with some columns of varying levels of cardinality, each indexed to support adhoc query.

Now we’ll create our "application" that sits on top of this table.  We have a routine for logging a transaction (this is the "OLTP" part of our application):

SQL> create or replace
  2  procedure TRANS_LOGGER is
  3  begin
  4  insert into T values (
  5    seq.nextval,
  6    trunc(dbms_random.value(1,50)),
  7    trunc(dbms_random.value(1,1000)),
  8    trunc(dbms_random.value(1,100000)),
  9    'x',
 10    dbms_random.value(1,1000)
 11  );
 12  commit work write wait immediate;
 13  end;
 14  /

Procedure created.

We’re using the extended commit syntax because, when we run this routine in a PL/SQL loop, we don’t want to ‘cheat’ via the PL/SQL optimization that avoids waiting for commits to complete. Now we’ll add the other half of our application: the support for adhoc queries on those non-primary-key columns.

SQL> create or replace
  2  procedure adhoc_query is
  3    x int;
  4    res number;
  5  begin
  6    x := trunc(dbms_random.value(1,50));
  7    select sum(numeric_stuff) into res
  8    from t where dist50 = x;
  9
 10    x := trunc(dbms_random.value(1,1000));
 11    select sum(numeric_stuff) into res
 12    from t where dist1000 = x;
 13
 14    x := trunc(dbms_random.value(1,100000));
 15    select sum(numeric_stuff) into res
 16    from t where dist100000 = x;
 17
 18  end;
 19  /

Procedure created.

So there we have it. Our "application" is ready to go :-) Now we need to make it "popular", that is, have lots of people use it. To do this, I’ll create a routine which simulates a moderately busy application server thread. We’ll be firing off lots of these later to crank up the load.

SQL> create or replace 
  2  procedure app_server_thread is 
  3    think_time number := 0.05; 
  4    iterations int := 600 * ( 1 / think_time ); 
  5  begin 
  6    for i in 1 .. iterations loop 
  7      dbms_application_info.set_client_info(i||' of '||iterations); 
  8      trans_logger; 
  9      dbms_lock.sleep(think_time); 
  10    end loop; 
  11  end ; 
  12  / 

Procedure created. 

So the "app_server_thread" procedure will iterate a number of times, calling TRANS_LOGGER with a little bit of "think time", and will run for roughly 10 minutes (the 600 seconds in the iterations expression above).
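The arithmetic behind that loop count, as a small sketch (PL/SQL NUMBER arithmetic is decimal, so the expression is exact there):

```python
# iterations = 600 * (1 / think_time), so total sleep time is ~600 seconds
# regardless of the think time chosen.
def iterations_for(think_time, target_seconds=600):
    # round() guards against binary floating point; PL/SQL gets this exactly.
    return round(target_seconds * (1 / think_time))

txn_iters   = iterations_for(0.05)  # transactional threads
query_iters = iterations_for(0.25)  # query threads

print(txn_iters, query_iters)                             # 12000 2400 calls
print(f"{txn_iters * 0.05:.0f} {query_iters * 0.25:.0f}")  # ~600 s of sleep each
```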

We’ll also have another procedure doing a similar thing, but for adhoc query. It will have a slightly longer think time, reflecting that our app is mainly transaction-focused with intermittent query.

SQL> create or replace
  2  procedure app_server_thread_query is
  3    think_time number := 0.25;
  4    iterations int := 600 * ( 1 / think_time );
  5  begin
  6   for i in 1 .. iterations loop
  7     dbms_application_info.set_client_info(i||' of '||iterations);
  8     adhoc_query;
  9     dbms_lock.sleep(think_time);
 10   end loop;
 11  end ;
 12  /

Procedure created.

We’re pretty much ready to go now.  I’m running this on a Linux VM which has 12 "virtual" cores and 128G of RAM, hooked up to a NetApp filer for storage.  Using some shell script, we can fire off concurrent sessions of our "app server" routines.  I’m taking an AWR snapshot before and after so we can poke around in the results.


#!/bin/ksh

function AWR {
print "
connect / as sysdba
exec dbms_workload_repository.create_snapshot
exit" | sqlplus /nolog 1>awr.out 2>&1
}

function TxnThread {
print "
connect xxx/yyy
set timing on
exec app_server_thread
exit" | sqlplus /nolog 1>txnthread.out.$1 2>&1
}

function QueryThread {
print "
connect xxx/yyy
set timing on
exec app_server_thread_query
exit" | sqlplus /nolog 1>querythread.out.$1 2>&1
}

echo AWR
AWR

echo TxnThreads
for i in 0 1 2 3 4 5 6 7 8 9; do
  TxnThread $i &
done

echo QueryThreads
for i in 1 2 3 4 5; do
  QueryThread $i &
done

wait

echo AWR
AWR

echo Done

and wait 10 minutes for it to finish.

The simplest analysis is simply how long it took – we know that a good chunk of the time is sleep time, but we’re going to compare this with an in-memory test later.

# grep Elapsed *out*
querythread.out.1:Elapsed: 00:12:09.42
querythread.out.2:Elapsed: 00:12:08.45
querythread.out.3:Elapsed: 00:12:08.20
querythread.out.4:Elapsed: 00:12:09.04
querythread.out.5:Elapsed: 00:12:08.74
txnthread.out.0:Elapsed: 00:10:20.91
txnthread.out.1:Elapsed: 00:10:20.92
txnthread.out.2:Elapsed: 00:10:21.01
txnthread.out.3:Elapsed: 00:10:21.11
txnthread.out.4:Elapsed: 00:10:20.90
txnthread.out.5:Elapsed: 00:10:21.00
txnthread.out.6:Elapsed: 00:10:21.06
txnthread.out.7:Elapsed: 00:10:21.10
txnthread.out.8:Elapsed: 00:10:20.86
txnthread.out.9:Elapsed: 00:10:20.95
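Subtracting the ~600 seconds of deliberate sleep from those elapsed times gives the "work" time per thread, a quick sketch:

```python
# Parse an "HH:MM:SS.ss" elapsed string and subtract the built-in sleep time.
def work_seconds(elapsed, sleep=600):
    h, m, s = elapsed.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s) - sleep

print(round(work_seconds("00:12:09.42")))  # ~129 s of work in a query thread
print(round(work_seconds("00:10:20.91")))  # ~21 s of work in a txn thread
```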

So that’s about 130 seconds over the 10 minutes for the query threads, and about 21 seconds over the 10 minutes for the transactional threads. Now I’ve re-run the exact same setup script above, and added a few more steps:

SQL> alter table T inmemory priority critical;

Table altered.

SQL> alter index ix1 unusable;

Index altered.

SQL> alter index ix2 unusable;

Index altered.

SQL> alter index ix3 unusable;

Index altered.

SQL> select segment_name, bytes from v$im_segments;

SEGMENT_NAME                                  BYTES
---------------------------------------- ----------
T                                          67108864

So now our table is loaded into the in-memory store, and the non-primary-key indexes have been tossed out. Now we re-run our benchmark and see what pops out:

# grep Elapsed *out*
querythread.out.1:Elapsed: 00:10:53.10
querythread.out.2:Elapsed: 00:10:53.10
querythread.out.3:Elapsed: 00:10:53.10
querythread.out.4:Elapsed: 00:10:53.07
querythread.out.5:Elapsed: 00:10:53.09
txnthread.out.0:Elapsed: 00:10:20.55
txnthread.out.1:Elapsed: 00:10:21.02
txnthread.out.2:Elapsed: 00:10:20.82
txnthread.out.3:Elapsed: 00:10:20.77
txnthread.out.4:Elapsed: 00:10:20.92
txnthread.out.5:Elapsed: 00:10:20.82
txnthread.out.6:Elapsed: 00:10:20.86
txnthread.out.7:Elapsed: 00:10:20.91
txnthread.out.8:Elapsed: 00:10:20.75
txnthread.out.9:Elapsed: 00:10:20.75

Well, that’s certainly showing some promise. Our transaction times don’t seem to have been adversely affected, and our query performance is better. We can also look at the AWR reports and see how in-memory has influenced things:

Without in-memory

[AWR workload comparison – screenshot not preserved]

With in-memory

[AWR workload comparison – screenshot not preserved]
One of the positive outcomes is that redo generation shrinks; fewer indexes to update means less redo.  If you’ve got crappy storage and you’re suffering with redo write issues, then perhaps in-memory is an avenue worth exploring (although you’d want to carefully consider license costs versus storage upgrade costs!)

Taking a look at the wait events is also interesting

Without in-memory

[Top wait events – screenshot not preserved]

With in-memory

[Top wait events – screenshot not preserved]
With in-memory enabled, buffer busy waits have dropped…but they have been "replaced" with a new event "IM buffer busy".  Always remember – Memory is memory – if multiple people want to manipulate it, then some sort of serialisation is going to occur.  Similarly, note the appearance of the "latch free" event once in-memory comes into play.  But (in this simple demo), the enabling of in-memory has not hindered the transactional characteristics of the app, and looks to have yielded some redo and query benefits.

This is by no means a definitive "yes, you are fine to drop those indexes" and similarly, not a definitive "you must keep those indexes".  Your own application will have its own particular idiosyncrasies.  But the results are promising – if you’ve got the license dollars for in-memory, you may as well look at exploiting that expenditure in every way possible.

ANSI expansion

Here’s a quirky little bug that appeared on the OTN database forum in the last 24 hours, which (in 12c, at least) produces an issue I can best demonstrate with the following cut-n-paste:

SQL> desc purple
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 G_COLUMN_001                        NOT NULL NUMBER(9)
 P_COLUMN_002                                 VARCHAR2(2)

SQL> select p.*
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;
  join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
ERROR at line 4:
ORA-01792: maximum number of columns in a table or view is 1000

SQL> select p.g_column_001, p.p_column_002
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;

no rows selected

A query that requires “star-expansion” fails with ORA-01792, but if you explicitly expand the ‘p.*’ to list all the columns it represents, the optimizer is happy. (The posting also showed the same difference in behaviour when changing “select constant from {table join}” to “select (select constant from dual) from {table join}”.)

The person who highlighted the problem supplied code to generate the tables so you can repeat the tests very easily; one of the quick checks I did was to modify the code to produce tables with a much smaller number of columns and then expanded the SQL to see what Oracle would have done with the ANSI. So, with only 3 columns each in table RED and GREEN, this is what I did:

set serveroutput on
set long 20000

variable m_sql_out clob

declare
        m_sql_in    clob := '
                select p.*
                from GREEN g
                  join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                  join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
        ';
begin
        dbms_utility.expand_sql_text(m_sql_in, :m_sql_out);
end;
/

column m_sql_out wrap word
print m_sql_out

The dbms_utility.expand_sql_text() function is new to 12c, and you’ll need the execute privilege on the dbms_utility package to use it; but if you want to take advantage of it in 11g you can also find it (undocumented) in a package called dbms_sql2.

Here’s the result of the expansion (you can see why I reduced the column count to 3):

SELECT "A1"."G_COLUMN_001_6" "G_COLUMN_001","A1"."P_COLUMN_002_7" "P_COLUMN_002"
FROM  (SELECT "A3"."G_COLUMN_001_0" "G_COLUMN_001","A3"."G_COLUMN_002_1"
"G_COLUMN_002","A3"."G_COLUMN_003_2" "G_COLUMN_003","A3"."G_COLUMN_001_3"
"G_COLUMN_001","A3"."R_COLUMN__002_4" "R_COLUMN__002","A3"."R_COLUMN__003_5"
"R_COLUMN__003","A2"."G_COLUMN_001" "G_COLUMN_001_6","A2"."P_COLUMN_002"
"P_COLUMN_002_7" FROM  (SELECT "A5"."G_COLUMN_001"
"G_COLUMN_001_0","A5"."G_COLUMN_002" "G_COLUMN_002_1","A5"."G_COLUMN_003"
"G_COLUMN_003_2","A4"."G_COLUMN_001" "G_COLUMN_001_3","A4"."R_COLUMN__002"
"R_COLUMN__002_4","A4"."R_COLUMN__003" "R_COLUMN__003_5" FROM
"TEST_USER"."GREEN" "A5","TEST_USER"."RED" "A4" WHERE
"A5"."G_COLUMN_001"="A4"."G_COLUMN_001") "A3","TEST_USER"."PURPLE" "A2" WHERE
"A3"."G_COLUMN_001_0"="A2"."G_COLUMN_001") "A1"

Tidying this up:

SELECT
        A1.G_COLUMN_001_6 G_COLUMN_001,
        A1.P_COLUMN_002_7 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_002_1 G_COLUMN_002,
                A3.G_COLUMN_003_2 G_COLUMN_003,
                A3.G_COLUMN_001_3 G_COLUMN_001,
                A3.R_COLUMN__002_4 R_COLUMN__002,
                A3.R_COLUMN__003_5 R_COLUMN__003,
                A2.G_COLUMN_001 G_COLUMN_001_6,
                A2.P_COLUMN_002 P_COLUMN_002_7
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A5.G_COLUMN_002 G_COLUMN_002_1,
                        A5.G_COLUMN_003 G_COLUMN_003_2,
                        A4.G_COLUMN_001 G_COLUMN_001_3,
                        A4.R_COLUMN__002 R_COLUMN__002_4,
                        A4.R_COLUMN__003 R_COLUMN__003_5
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001 = A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0 = A2.G_COLUMN_001
        ) A1

As you can now see, the A1 alias lists all the columns in GREEN, plus all the columns in RED, plus all the columns in PURPLE – totalling 3 + 3 + 2 = 8. (There is a little pattern of aliasing and re-aliasing that turns the join column RED.g_column_001 into G_COLUMN_001_3, making it look at first glance as if it has come from the GREEN table).

You can run a few more checks, increasing the number of columns in the RED and GREEN tables, but essentially when the total number of columns in those two tables goes over 998 then adding the two extra columns from PURPLE makes that intermediate inline view break the 1,000 column rule.
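The column arithmetic of that intermediate view, as a quick sketch:

```python
# With star expansion, the intermediate inline view carries every column of
# GREEN and RED plus the 2 columns of PURPLE; ORA-01792 fires past 1,000.
def inline_view_columns(green_cols, red_cols, purple_cols=2):
    return green_cols + red_cols + purple_cols

print(inline_view_columns(3, 3))      # 8 - the reduced test case above
print(inline_view_columns(499, 499))  # 1000 - still legal
print(inline_view_columns(500, 499))  # 1001 - ORA-01792
```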

Here’s the equivalent expanded SQL if you identify the columns explicitly in the select list (even with several hundred columns in the RED and GREEN tables):

SELECT
        A1.G_COLUMN_001_2 G_COLUMN_001,
        A1.P_COLUMN_002_3 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_001_1 G_COLUMN_001,
                A2.G_COLUMN_001 G_COLUMN_001_2,
                A2.P_COLUMN_002 P_COLUMN_002_3
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A4.G_COLUMN_001 G_COLUMN_001_1
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001 = A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0 = A2.G_COLUMN_001
        ) A1

As you can see, the critical inline view now holds only the original join columns and the columns required for the select list.

If you’re wondering whether this difference in expansion could affect execution plans, it doesn’t seem to; the 10053 trace file includes the following (cosmetically altered) output:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT
        P.G_COLUMN_001 G_COLUMN_001,
        P.P_COLUMN_002 P_COLUMN_002
FROM
        TEST_USER.GREEN   G,
        TEST_USER.RED     R,
        TEST_USER.PURPLE  P
WHERE
        G.G_COLUMN_001 = R.G_COLUMN_001
AND     G.G_COLUMN_001 = P.G_COLUMN_001

So it looks as if the routine to transform the syntax puts in a lot of redundant text, then the optimizer takes it all out again.

The problem doesn’t exist with traditional Oracle syntax, by the way; it’s an artefact of Oracle’s expansion of the ANSI syntax – the database is otherwise quite happy to handle the text generated by the ANSI transformation when there are well over 1,000 columns in the inline view.


There was a little conversation on Oracle-L about ASH (active session history) recently which I thought worth highlighting – partly because it raised a detail that I had got wrong until Tim Gorman corrected me a few years ago.

Once every second the dynamic performance view v$active_session_history copies information about active sessions from v$session. (There are a couple of exceptions to this rule – for example, if a session has called dbms_lock.sleep() it will appear in v$session with state = ‘ACTIVE’, but it will not be recorded in v$active_session_history.) Each of these snapshots is referred to as a “sample” and may hold zero, one, or many rows.

The rows collected in every tenth sample are flagged for copying into the AWR where, once they’ve been copied into the underlying table, they can be seen in the view dba_hist_active_sess_history.  This is where a common misunderstanding occurs: it is not every 10th row in v$active_session_history, it’s every 10th second; and if a sample happens to be empty, that’s still the sample that is selected (which means there will be a gap in the output from dba_hist_active_sess_history). In effect, dba_hist_active_sess_history holds copies of the information you’d get from v$session if you sampled it once every 10 seconds instead of once per second.
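A sketch of that flagging rule (illustrative Python, not Oracle's code; the offset of the flagged second is arbitrary here):

```python
# One sample per second; every 10th *second* is flagged for the AWR, whether
# or not that sample captured any rows. An empty flagged sample simply
# leaves a gap in dba_hist_active_sess_history.
samples = [{"sample_id": sid, "is_awr_sample": "Y" if sid % 10 == 2 else "N"}
           for sid in range(3435324, 3435354)]

flagged = [s["sample_id"] for s in samples if s["is_awr_sample"] == "Y"]
print(flagged)  # [3435332, 3435342, 3435352] - exactly 10 seconds apart
```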

It’s possible to corroborate this through a fairly simple query, as the rows from v$active_session_history that are going to be dumped to the AWR are flagged as they are created:

select
        distinct case is_awr_sample when 'Y' then 'Y' end flag,
        sample_id,
        sample_time
from
        v$active_session_history
where
        sample_time > sysdate - 1/1440
order by
        sample_id
;

F  SAMPLE_ID SAMPLE_TIME
- ---------- --------------------------------
     3435324 26-MAR-15 PM
     3435325 26-MAR-15 PM
     3435326 26-MAR-15 PM
     3435327 26-MAR-15 PM
     3435328 26-MAR-15 PM
     3435329 26-MAR-15 PM
     3435330 26-MAR-15 PM
     3435331 26-MAR-15 PM
Y    3435332 26-MAR-15 PM
     3435333 26-MAR-15 PM
     3435334 26-MAR-15 PM
     3435335 26-MAR-15 PM
     3435336 26-MAR-15 PM
     3435337 26-MAR-15 PM
     3435338 26-MAR-15 PM
     3435339 26-MAR-15 PM
     3435340 26-MAR-15 PM
     3435341 26-MAR-15 PM
Y    3435342 26-MAR-15 PM
     3435343 26-MAR-15 PM
     3435344 26-MAR-15 PM
     3435345 26-MAR-15 PM
     3435346 26-MAR-15 PM
     3435347 26-MAR-15 PM
     3435348 26-MAR-15 PM
     3435349 26-MAR-15 PM
     3435350 26-MAR-15 PM
     3435351 26-MAR-15 PM
Y    3435352 26-MAR-15 PM
     3435355 26-MAR-15 PM
     3435358 26-MAR-15 PM
     3435361 26-MAR-15 PM
     3435367 26-MAR-15 PM
     3435370 26-MAR-15 PM
     3435371 26-MAR-15 PM
     3435373 26-MAR-15 PM
     3435380 26-MAR-15 PM
     3435381 26-MAR-15 PM
Y    3435382 26-MAR-15 PM
     3435383 26-MAR-15 PM

40 rows selected.

As you can see, at the beginning of the output the samples have a sample_time that increases one second at a time (with a little slippage), and the flagged samples appear every 10 seconds at 5.53.01, 5.53.11 and 5.53.21; but then the instance becomes fairly idle and there are several samples taken over the next 20 seconds or so where we don’t capture any active sessions; in particular there are no rows in the samples for 5.53.31 and 5.53.41. But eventually the instance gets a little busy again, we see active sessions in consecutive samples for the last few seconds, and we can see that the sample at 5.53.51 has been flagged for dumping into the AWR.

You’ll notice that I seem to be losing about 1/100th second every few seconds; this is probably a side effect of virtualisation and having a little CPU-intensive work going on in the background. If you see periods where the one second gap in v$active_session_history or 10 second gap in dba_hist_active_sess_history has been stretched by several percent you can assume that the CPU was under pressure over that period. The worst case I’ve seen to date reported gaps of 12 to 13 seconds in dba_hist_active_sess_history.  The “one second” algorithm is “one second since the last snapshot was captured” so if the process that’s doing the capture doesn’t get to the top of the runqueue in a timely fashion the snapshots slip a little.
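The amount of gap stretching can serve as a rough CPU-starvation indicator; a small sketch using the worst-case numbers mentioned above (hypothetical arithmetic, not an Oracle metric):

```python
# Compare the observed gap between flagged samples with the nominal
# 10-second target; the excess is the "slip" caused by CPU pressure.
nominal = 10.0    # expected gap in dba_hist_active_sess_history (seconds)
observed = 13.0   # worst case mentioned: gaps of 12-13 seconds

print(f"{(observed - nominal) / nominal:.0%} slip")  # 30% slip
```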

When the AWR snapshot is taken, the flagged rows from v$active_session_history are copied to the relevant AWR table. You can adjust the frequency of sampling for both v$active_session_history and dba_hist_active_sess_history, of course – there are hidden parameters to control both: _ash_sampling_interval (1,000 milliseconds) and _ash_disk_filter_ratio (10). There’s also a parameter controlling how much memory should be reserved in the shared pool to hold v$active_session_history: _ash_size (1048618 bytes per session in my case).  The basic target is to keep one hour’s worth of data in memory, but if there’s no pressure for memory you can find that v$active_session_history holds more than the hour; conversely, if there’s heavy demand for memory and lots of continuously active sessions, you may find that Oracle does “emergency flushes” of v$active_session_history between the normal AWR snapshots. I have heard of people temporarily increasing the memory and reducing the interval and ratio – but I haven’t yet felt the need to do it myself.


PowerPoint 2013 Always Starts Minimized

This post has nothing to do with Oracle or Enterprise Manager at all, so if that’s all you’re interested in you can stop reading now. :)

Yesterday I ran into the situation where PowerPoint would only open minimized on the task bar and nothing I could do would get it to budge from there. The only way I could get PowerPoint to start and be displayed on any monitor was to start it in safe mode. [As an aside, to start PowerPoint (or indeed any Office product) in Safe Mode you just need to hold down the Control key while double-clicking either the PowerPoint program or any PowerPoint document.] After starting PowerPoint successfully in Safe Mode, I could see PowerPoint on my screen, but when I closed it down and tried to restart in normal mode, it would always start minimized.

So one answer is simply to start always in Safe Mode, which of course means a variety of things are disabled or don’t work fully. Not a very good answer. :) I searched a bit further afield, and found some Microsoft notes that basically explained that the screen X-Y coordinates for PowerPoint may be corrupt (actually the notes were talking about Microsoft Word, not PowerPoint, but the explanation applies in either case). The support note suggests 3 different methods to fix the issue:

  1. Maximize the program
  2. Use the move feature to reposition the program into view
  3. Edit the registry

Obviously you should work through these from top to bottom, but I had already found that the first two did nothing in my situation, so I was left with editing the registry.

IMPORTANT: As always, as the Microsoft note says, “Before you modify the registry, make sure to back it up and make sure that you understand how to restore the registry if a problem occurs. For information about how to back up, restore, and edit the registry, click the following article number to view the article in the Microsoft Knowledge Base: 256986 Description of the Microsoft Windows Registry”. In other words, do the rest of this at your own risk!!!

The note that I found from Microsoft talked about removing a registry key that for me didn’t exist. Of course, it was somewhat dated and there may be a more up to date version that I hadn’t found. In any case, here’s how I fixed the problem:

  • Run regedit.exe to start the Registry Editor
  • Navigate through the tree to HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\PowerPoint
  • Right-click on “Options” and select “Export” to save a backup in case I needed it (which in fact I did – see below for the reason)
  • Delete the “Options” key completely

Voila! PowerPoint now started and displayed on the screen as I expected.

NOTE: I might also have been able to simply delete the entries Bottom, Left, Right and Top in the Options key, but I didn’t try that directly. Maybe if someone else that has this situation arise can test this and put a comment back on this post, we can see if that solution works as well. There were other options listed such as PersonalTemplates, ToolbarConfigSaved and so on that it could have been useful to keep. I added those back by editing the export I had taken beforehand to remove the Bottom, Left, Right and Top entries, then opening the Registry Editor and importing them back by choosing File -> Import, which merged the entries from my backup into the Options key.

Who Plans The Content of UKOUG Tech15?

<..Who are the Tech15 committee and my role
….submitting an abstract..>

When you go to a conference like UKOUG Tech15 there are hundreds of talks given over several days and a dozen or so streams. Who decides what is presented and how do they decide?

You do. Well, I’d say you have about 60-70% of the input, if you are a member of the UKOUG (and I know many reading this are not – but you are probably members, via your employer if not personally, of other user groups. And if you are not, you can probably benefit from joining one.) The point is, the members of the UK Oracle User Group have a fair say in what gets talked about at the UKOUG conferences. And, though not all are run in the same way, I know several of the large Oracle conferences run on similar principles. You also provide the raw material, the proposed talks. That is open to each and every one of you, member or not. Anyone can offer a talk.

What about the other 30-40% of the input? Well, that would be me :-). {Note, British ironic humour}. As I mentioned in my first post about organising Tech15, I am the Lead for the database area this year, and some people did blame me last year for the content – but being the Lead does not put me in charge. There is a technical committee that decides what they feel should be the overall structure of the conference and has the final 30-40% say in what talks are given.

I’ll go into more details about aspects of the paper selection process in future posts, but the general structure is thus:

  • The steering committee meet for a kick-off meeting and decide on:
    • Who is in which committee (though this is pretty much sorted out before the meeting).
    • The general structure of the event – the major areas (Database, Middleware, Development, Business Analytics and Hardware/OS/Engineered), the number of streams each major area gets each day, the length of sessions, and whether anything is happening outside the main 3 days of the conference.
    • How we handle the labeling of topics in our streams (endless discussions there!).
    • Topics and considerations that we feel are important to our streams that should be mentioned in the call for papers.
    • How we will run the sub-committees and overall committee – again, this is generally known but we look at what we learnt the prior year and change accordingly.
  • The call for papers goes out (it will be the 13th April to 10th May this year). This is advertised by the UKOUG: it is sent to previous paper submitters and to User Group members, announced in the UKOUG mailings, tweeted, and promoted through several other avenues. The committee will have suggested areas to submit for, but what is submitted is up to the presenting community – and this can alter our thoughts on content.
  • Judging – From 20th April to close to the Agenda Planning Day, volunteers and members of UKOUG are asked to judge the paper abstracts. These scores are important for the next step…
  • Agenda Planning Day – the steering committee members get together and spend pretty much a whole day reviewing the abstracts, the judging scores, the slots available, what we know of the speakers and presentations, the spread of topics, the percentage of established and new speakers and half a dozen other things to come up with the rough agenda. It’s a bit of a bun fight, but we get there in the end. Every abstract is looked at along with its judging score.
  • Speakers are informed if their papers are accepted, rejected or we would like them as reserves – and the speakers confirm or decline acceptance or reserves (and occasionally question rejections). Sometimes a speaker will be asked if they would modify a submission.
  • The technical committees may well try and source some papers where we feel a topic is under-represented or to fit with some other aim (like a stream at a given level).
  • Reserves are slotted in to replace any speakers who decline, and any clashes, alterations and agenda tweaks are dealt with as they arise.
  • The agenda is launched (ie we say what is on it) mid July.
  • From the agenda launch to the start of the conference, any paper changes are handled as they come up – usually a speaker pulling out or needing to change dates but occasionally other issues.

Why is it called “Paper Selection” when people are talking? Why do we talk about abstracts? Well, conferences pretty much started off as scientific conferences and you would submit your scientific paper – and then read it out to the conference. The abstract is a brief “why you should read my 35 page argument with long, impressive words for why I think hyaenas are more closely related to cats than dogs” {they are}. We inherit those terms.

So you can see that the steering committee has a fair input, so how do WE get chosen? Fundamentally, it is via a call for volunteers from the UKOUG community. The UKOUG ask people to volunteer in their regular emails to members/volunteers. (Volunteers have to be members of the UKOUG but the membership may well belong to a company. The UKOUG keeps track of the nominated contacts for an organisation, who are responsible for the membership management, but also the individuals who have helped out at any time under that membership. As an example, someone in purchasing or HR may be the nominated contact for the memberships a company has with UKOUG, but it is members of the technical IT staff who come to the events and may start helping out).
The office UKOUG staff/board members may well ask one or two of the experienced volunteers known to them to take a lead and help choose which volunteers to accept. Or, more commonly, to go and pester people they know to step up and volunteer! New volunteers are always part of the mix; we recognise that without new people and perspectives we will stagnate, and they challenge us when we say “we always do it this way”.

I have not mentioned Oracle Corporation involvement. Strictly speaking, people from Oracle are not volunteers and are certainly not members. They are Oracle Liaisons. The UKOUG gets good support from Oracle, we have talks from them, we have some SIG meetings in their offices. Oracle Corporation of course is happy to talk about the latest/greatest aspects of Oracle and if they can get us all fired up for an extra cost option, so much the better for them. But the relationship is generally balanced and varies over the years – and is influenced by individuals. Some people who work for Oracle will push to be allowed to help out the UKOUG, some product managers are more than happy to come and give talks about free, standard or old features as well as the shiny new stuff. Others I am sure see us as an annoyance. The input we get from the Oracle Liaisons is very helpful and appreciated – but don’t think it buys acceptance of whatever Oracle Corp want. I had to help deal with an Oracle product manager last year who was upset that their area had very few talks. It got as far as them almost demanding some slots. However, the number of talks submitted and the poor judging scores for those few that were told us on the committee that the user community were not currently interested in that topic. So no talks. Faye and I talked it over, I gave the logic and reason and she was good enough to then deal with Upset Product Manager.

I have helped with the agenda planning day a couple of times – I think I got pestered to help way back in 2007 or 8! – and I have been a SIG chair and deputy chair as well as a regular presenter, so I am a known soft-touch for helping the UKOUG. A key aspect to my being the Lead is simply that I have more free time than most other volunteers, so I can be got hold of and can spend a bit of time thinking about things and making decisions. This can be important on the run-up to the actual event, as you sometimes need to make decisions quickly and a group discussion may not be the best way to do it. I might check with a couple of others (and I usually do) but the key thing is to make a decision in the timeframe allowed.

So that is who the Agenda Planning committee are and where we fit in. We are volunteers, filtered and guided by some old hands but with new blood each year. We aim to guide and give structure, but the talks submitted are whatever anyone wants to submit. Judging scores by the community are key to paper selection, and though Oracle Corp supports us, it doesn’t get to dictate.

And if all else fails, blame the committee Leads.

12c MView refresh

Some time ago I wrote a blog note describing a hack for refreshing a large materialized view with minimum overhead by taking advantage of a single-partition partitioned table. This note describes how Oracle 12c now gives you an official way of doing something similar – the “out of place” refresh.

I’ll start by creating a materialized view and creating a couple of indexes on the resulting underlying table; then show you three different calls to refresh the view. The materialized view is based on all_objects so it can’t be made available for query rewrite (ORA-30354: Query rewrite not allowed on SYS relations), and I haven’t created any materialized view logs so there’s no question of fast refreshes – but all I intend to do here is show you the relative impact of a complete refresh.

create materialized view mv_objects nologging
build immediate
refresh on demand
as
select * from all_objects;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'mv_objects',
		method_opt 	 => 'for all columns size 1'
	);
end;
/
create index mv_obj_i1 on mv_objects(object_name) nologging compress;
create index mv_obj_i2 on mv_objects(object_type, owner, data_object_id) nologging compress 2;

This was a default install of 12c, so there were about 85,000 rows in the view. You’ll notice that I’ve created all the objects as “nologging” – this will have an effect on the work done during some of the refreshes.

Here are the three variants I used – all declared explicitly as complete refreshes:

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> true
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false,
		out_of_place		=> true
	);
end;
/

The first one (atomic_refresh=>true) is the one you have to use if you want to refresh several materialized views simultaneously and keep them self consistent, or if you want to ensure that the data doesn’t temporarily disappear if all you’re worried about is a single view. The refresh works by deleting all the rows from the materialized view then executing the definition to generate and insert the replacement rows before committing. This generates a lot of undo and redo – especially if you have indexes on the materialized view as these have to be maintained “row by row” and may leave users accessing and applying a lot of undo for read-consistency purposes. An example at a recent client site refreshed a table of 6.5M rows with two indexes, taking about 10 minutes to refresh, generating 7GB of redo as it ran, and performing 350,000 “physical reads for flashback new”. This strategy does not take advantage of the nologging nature of the objects – and as a side effect of the delete/insert cycle you’re likely to see the indexes grow to roughly twice their optimal size and you may see the statistic “recursive aborts on index block reclamation” climbing as the indexes are maintained.

The second option (atomic_refresh => false) is quick and efficient – but may result in wrong results showing up in any code that references the materialized view (whether explicitly or by rewrite). The session truncates the underlying table, sets any indexes on it unusable, then reloads the table with an insert /*+ append */. The append means you get virtually no undo generated, and if the table is declared nologging you get virtually no redo. In my case, the session then dispatched two jobs to rebuild the two indexes – and since the indexes were declared nologging the rebuilds generated virtually no redo. (I could have declared them with pctfree 0, which would also have made them as small as possible).
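If you want to compare the cost of these variants on your own system, a simple approach is to check the session statistic “redo size” before and after each refresh call. A sketch, using the standard v$mystat / v$statname views:

```sql
-- Sketch: capture this session's "redo size" statistic before and
-- after a refresh call; the difference is the redo that call generated.
select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	ms.statistic# = sn.statistic#
and	sn.name = 'redo size'
;
```

Run it once before the refresh and once after, and subtract; the same query with name = ‘undo change vector size’ gives you the undo figure.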

The final option is the 12c variant – the setting atomic_refresh => false is mandatory if we want  out_of_place => true. With these settings the session will create a new table with a name of the form RV$xxxxxx where xxxxxx is the hexadecimal version of the new object id, insert the new data into that table (though not using the /*+ append */ hint), create the indexes on that table (again with names like RV$xxxxxx – where xxxxxx is the index’s object_id). Once the new data has been indexed Oracle will do some name-switching in the data dictionary (shades of exchange partition) to make the new version of the materialized view visible. A quirky detail of the process is that the initial create of the new table and the final drop of the old table don’t show up in the trace file  [Ed: wrong, see comment #1] although the commands to drop and create indexes do appear. (The original table, though it’s dropped after the name switching, is not purged from the recyclebin.) The impact on undo and redo generation is significant – because the table is empty and has no indexes when the insert takes place the insert creates a lot less undo and redo than it would if the table had been emptied by a bulk delete – even though the insert is a normal insert and not an append; then the index creation honours my nologging definition, so produces very little redo. At the client site above, the redo generated dropped from 7GB to 200MB, and the time dropped to 200 seconds which was 99% CPU time.
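You can watch the mechanism in action from the dictionary: while the out-of-place refresh is running (queried from another session) the interim RV$ objects should be visible. A sketch, assuming you own the materialized view:

```sql
-- Sketch: look for the interim RV$ objects created by an
-- out-of-place refresh (names are RV$ followed by the hex object_id).
select	object_name, object_type, object_id
from	user_objects
where	object_name like 'RV$%'
order by
	object_id
;
```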

Limitations, traps, and opportunities

The manuals say that the out of place refresh can only be used for materialized views that are joins or aggregates and, surprisingly, you actually can’t use the method on a view that simply extracts a subset of rows and columns from a single table.  There’s a simple workaround, though – join the table to DUAL (or some other single row table if you want to enable query rewrite).
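As a sketch of that workaround (the table t1 and its status column are hypothetical stand-ins): instead of defining the view as a straight subset of one table, add a join to DUAL so that it counts as a join view:

```sql
-- Hypothetical example: t1 stands in for your single source table.
-- The join to dual (a one-row table) turns a single-table subset
-- into a "join" view, making it eligible for out-of-place refresh.
create materialized view mv_t1 nologging
build immediate
refresh on demand
as
select	t1.*
from	t1, dual d
where	t1.status = 'ACTIVE'
;
```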

Because the out of place refresh does an ordinary insert into a new table the resulting table will have no statistics – you’ll have to add a call to gather them. (If you’ve previously been using a non-atomic refresh this won’t be a new problem, of course). The indexes will have up to date statistics, of course, because they will have been created after the table insert.
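So a scripted refresh might pair the two calls in one block – a sketch (adjust the method_opt and other parameters to taste):

```sql
begin
	dbms_mview.refresh(
		list		=> 'MV_OBJECTS',
		method		=> 'C',
		atomic_refresh	=> false,
		out_of_place	=> true
	);
	-- the new table arrives with no statistics, so gather them here
	dbms_stats.gather_table_stats(
		ownname		=> user,
		tabname		=> 'MV_OBJECTS',
		method_opt	=> 'for all columns size 1'
	);
end;
/
```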

The big opportunity, of course, is to change a very expensive atomic refresh into a much cheaper out of place refresh – in some special cases. My client had to use the atomic_refresh=>true option in 11g because they couldn’t afford to leave the table truncated (empty) for the few minutes it took to rebuild; but they might be okay using the out_of_place => true with atomic_refresh=>false in 12c because:

  • the period when something might break is brief
  • if something does go wrong the users won’t get wrong (silently missing) results, they’ll get an Oracle error (probably ORA-08103: object no longer exists)
  • the application uses this particular materialized view directly (i.e. not through query rewrite), and the query plans are all quick, light-weight indexed access paths
  • most queries will probably run correctly even if they run through the moment of exchange

I don’t think we could guarantee that last statement – Oracle Corp. may not officially confirm it, and it doesn’t matter how many times I show queries succeeding – but it’s true. Thanks to “cross-DDL read-consistency”, as it was called in 8i when partition-exchange appeared, and because the old objects still exist in the data files, a running query can keep on using the old location for an object after it has been replaced by a newer version – provided the query doesn’t hit a block that has been overwritten by a new object, or request a space management block that was zeroed out on the “drop”. If you want to make the mechanism as safe as possible you can help: put each relevant materialized view (along with its indexes) into its own tablespace, so that the only thing that is going to overwrite an earlier version of the view is the stuff you create on the next refresh.
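A sketch of that isolation step (the tablespace name and sizing are illustrative, and the create tablespace statement assumes Oracle-managed files):

```sql
-- Hypothetical tablespace dedicated to one materialized view and its
-- indexes, so old blocks are only ever overwritten by the next refresh.
create tablespace mv_objects_ts
datafile size 512m autoextend on next 64m;

create materialized view mv_objects
tablespace mv_objects_ts
build immediate
refresh on demand
as
select * from all_objects;

create index mv_obj_i1 on mv_objects(object_name)
tablespace mv_objects_ts compress;
```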


Try Oracle 12c VM with Delphix download

photo by Jose Maria Cuellar (CC 2.0)

Thanks to Leighton Nelson who pointed out that:

Oracle has a pre-installed Linux VM with 12c

Delphix as well has a pre-installed trial version

Download both of these and start them up in VirtualBox and you can start virtualizing your PDBs.
The Oracle pre-installed VM has a few eccentricities that have to be addressed before using it. There is no IP address, there is no history of the install, and it’s missing a glibc library. After we address these in the instructions below you can link to the PDB and then provision that PDB back as a virtual PDB (vPDB), or provision that PDB to another 12c instance on another machine as a vPDB.

Here is a video of linking a PDB to Delphix and provisioning  a vPDB

Before doing the above with the Oracle pre-installed VM, follow these steps:

After importing and starting the VM, change the network to “Bridged Adapter” and restart the network services.
Get the IP address with “ifconfig -a” and edit /etc/hosts to add the hostname “oraclevm” against that IP address. Also increase /dev/shm, otherwise you will get “MEMORY_TARGET not supported on this system”:
su -
service network restart
# note the IP of the machine
ifconfig -a

# add hostname "oraclevm" and current IP to hosts
vi /etc/hosts
hostname oraclevm
echo oraclevm > /proc/sys/kernel/hostname

# umount tmpfs might give errors, but seems to do the trick anyway
umount tmpfs
mount -t tmpfs shmfs -o size=1500m /dev/shm

mkdir /mnt/provision
chmod 755 /mnt
chmod 755 /mnt/provision

yum install glibc.i686

yum might get an error:

Existing lock /var/run/ another copy is running as PID ….

I waited a few minutes and was able to run it. Oracle VM must run some yum stuff just after starting up. After one try, the running yum process never seemed to exit, so I rebooted the VM and upon reboot was able to run yum.




Back as the oracle user, unset TWO_TASK as it blocks “/ as sysdba” connections, and get rid of the .bashrc output as it messes up scp and Delphix trying to put the toolkit onto the box:

unset TWO_TASK
cp .bashrc .bashrc.orig
cat .bashrc.orig | grep -v cat > .bashrc

Set up the Delphix c##delphix container user and the delphix PDB user:

sqlplus / as sysdba << EOF
create user c##delphix identified by delphix;
grant create session to c##delphix;
alter user c##delphix set container_data=all container=current;
grant select any dictionary to c##delphix;
create or replace view v_x\$kccfe as select * from x\$kccfe;
grant select on v_x\$kccfe to c##delphix;
create synonym c##delphix.x\$kccfe for v_x\$kccfe;
alter session set container=orcl;
create user delphix identified by delphix;
grant create session to delphix;
grant select any dictionary to delphix;
create or replace view v_x\$kccfe as select * from x\$kccfe;
grant select on v_x\$kccfe to delphix;
create synonym delphix.x\$kccfe for v_x\$kccfe;
EOF

make a toolkit directory

mkdir ~/toolkit

Add the inventory directory and the inventory file with version info that Delphix uses

mkdir $ORACLE_HOME/inventory
mkdir $ORACLE_HOME/inventory/ContentsXML
cat << EOF >  $ORACLE_HOME/inventory/ContentsXML/comps.xml
<?xml version="1.0" standalone="yes" ?>

Sep.19 15:31:29 EDT" INST_LOC="/home/oracle/base/oracle12102/oracle.server">
EOF

Put database in archive log mode

sqlplus / as sysdba << EOF
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
alter system switch logfile;
alter system switch logfile;
EOF

Use the machine name in listener.ora and tnsnames.ora instead of the IP address, as in:
(ADDRESS = (PROTOCOL = TCP)(HOST = oraclevm )(PORT = 1521))

vi $ORACLE_HOME/network/admin/listener.ora
lsnrctl stop
lsnrctl start
sqlplus / as sysdba << EOF
alter system register;
EOF

Then in Delphix UI, go to

Manage -> Environments
click “+” sign
add host
after host is added click on the database tab  (far right disk icon)
on database tab for environment, click on “discover”
for login/password give  c##delphix / delphix  (the CDB login we created)
this will discover the PDBs
now click on Delphix or Manage-> Databases -> My databases
click “+” sign
select the orcl PDB on cdb1
for login/password give delphix / delphix (the PDB login we created)


After creating the  vPDB it shows up for the listener

lsnrctl status

Service “vorc_fd7” has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service…

I add an entry to $ORACLE_HOME/network/admin/tnsnames.ora

DELPHIX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclevm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = vorc_fd7)
    )
  )

and then connect with SQL*Plus

sqlplus delphix/delphix@DELPHIX



photo by Yannis (CC 2.0)