Oakies Blog Aggregator

With Modern Storage the Oracle Buffer Cache is Not So Important.

With Oracle’s move towards engineered systems we all know that “more” is being done down at the storage layer and modern storage arrays have hundreds of spindles and massive caches. Does it really matter if data is kept in the Database Buffer Cache anymore?

Yes. Yes it does.

Time for a cool beer

With much larger data sets and the still-real issue of fewer disk spindles per GB of data, the Oracle database buffer cache is not less important than it was. It is even more important.

I could give you some figures but let’s put this in a context most of us can easily understand.

You are sitting in the living room and you want a beer. You are the Oracle database; the beer is the block you want. Going to the fridge in the kitchen to get your beer is like going to the Buffer Cache to get your block.

It takes 5 seconds to get to the fridge, 2 seconds to pop it open with the always-to-hand bottle opener and 5 seconds to get back to your chair. 12 seconds in total. Ahhhhh, beer!!!!

But – what if there is no beer in the fridge? The block is not in the cache. So now you have to get your car keys, open the garage, get the car out and drive to the shop to get your beer. And then come back, pop the beer in the fridge for half an hour and now you can drink it. That is like going to storage to get your block. It is that much slower.

Well, it is only that much slower if you live a 6-hour drive from your beer shop. Think taking the scenic route from New York to Washington DC.

The difference in speed really is that large. If your data happens to be in the memory cache in the storage array, that’s like the beer already being in a fridge – in that shop 6 hours away. Your storage is SSD-based? OK, you’ve moved house to Philadelphia, 2 hours closer.

Let’s go get beer from the shop

To back this up, some rough (and I mean really rough) figures. Access time to memory is measured in microseconds (“us” – millionths of a second) down to hundreds of nanoseconds (“ns” – billionths of a second); somewhere around 500ns seems to be an acceptable figure. Access to disc storage is more like milliseconds (“ms” – thousandths of a second). Go check an AWR report, statspack, OEM or whatever you use, and you will see that db file scattered reads take anywhere from the low teens of ms down to, say, 2 or 3ms, depending on what your storage and network are. For most sites, that speed has hardly altered in years: though hard discs get bigger, they have not got much faster – and often you end up with fewer spindles holding your data, as you get allocated space, not spindles, from storage (and the total sustainable speed of hard disc storage is limited to the total speed of all the spindles involved). Oh, the storage guys tell you that your data is spread over all those spindles? So is the data for every other system then – you have maximum contention.
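If you do not have an AWR report to hand, a rough instance-wide average can be pulled straight from V$SYSTEM_EVENT. This is a minimal sketch (single-block and multi-block read waits, averaged over the life of the instance), not a substitute for a proper AWR interval:

select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_wait_ms
from   v$system_event
where  event in ('db file sequential read', 'db file scattered read');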

However, memory speed has increased over that time, and so has CPU speed (though single-core CPU speed has largely stopped improving now; it is more a case of more CPUs).

Even allowing for latching and pinning and messing around, accessing a block in memory is going to be at the very least 1,000 times faster than going to disc, maybe 10,000 times. Sticking to a conservative 2,000 times faster for memory than disc, that 12-second trip to the fridge equates to 24,000 seconds of driving. That’s 6.66 hours.

This is why you want to avoid physical IO in your database if you possibly can. You want to maximise the use of the database buffer cache as much as you can, even with all the new Exadata-like tricks. If you can’t keep all your working data in memory, in the database buffer cache (or in-memory, or use the results cache), then you will have to do that achingly slow physical IO, and then the intelligence-at-the-hardware comes into its own, true Data Warehouse territory.

So the take-home message is – avoid physical IO, design your database and apps to keep as much as you can in the database buffer cache. That way your beer is always to hand.

Cheers.

Update. Kevin Fries commented to mention this wonderful little latency table. Thanks Kevin.

“Here’s something I’ve used before in a presentation. It’s from Brendan Gregg’s book – Systems Performance: Enterprise and the Cloud”

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link: Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data when they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, no tuning nor configuration involved when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in such publications as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators, with prior ACO experience, are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…

 


Smart Scan enhancements in Oracle Exadata 12c – inline LOBs

As part of the 12c release of the database and cell software, Oracle introduced the ability to perform Smart Scans on inline LOBs. The enhancement is for Secure Files only. And as always, if you would like to read up on Secure Files you can head over to Tim Hall’s website for the details.

To demonstrate the new behaviour I have used an 11.2.0.3 BP 22 database on Exadata 11.2.3.3.1 and a 12.1.0.2.2 database on Exadata 12.1.2.1.0. The test case revolves around the following table:

CREATE TABLE loboffload (
 id number primary key,
 comments clob)
enable row movement 
tablespace users
lob (comments) store as securefile (
 enable storage in row
);

This is the short version; Oracle fills in the blanks and converts the table DDL to:

SQL> select dbms_metadata.get_ddl('TABLE','LOBOFFLOAD') from dual;

DBMS_METADATA.GET_DDL('TABLE','LOBOFFLOAD')
--------------------------------------------------------------------------------

  CREATE TABLE "MARTIN"."LOBOFFLOAD"
   (    "ID" NUMBER,
        "COMMENTS" CLOB,
         PRIMARY KEY ("ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
 LOB ("COMMENTS") STORE AS SECUREFILE (
  TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192
  NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES
  STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))  ENABLE ROW
 MOVEMENT

The idea is that I am using the comments LOB column to simulate a common approach in application design, where you define a free-text field on a GUI screen for the user to provide additional information about the record to be stored in the database. The design is not very practical though.

The important information is in the section about the LOB. Here you find the request to store the LOB as a SECUREFILE inline with the table. Out-of-line LOBs are not offloadable as of 12.1.2.1.0 as far as I know.
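You can double-check how a LOB column is actually stored by querying USER_LOBS; a quick sanity check along these lines (the SECUREFILE and IN_ROW columns are available from 11g onwards):

select table_name, column_name, securefile, in_row
from   user_lobs
where  table_name = 'LOBOFFLOAD';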

To feed some data into the table I generated a bit of seed data and then used the insert … append technique to grow it:

alter session enable parallel dml;

-- with nods to Jonathan Lewis who published this in one of his presentations on generating data
insert /*+ append parallel(4) */ into loboffload
with v1 as (
 SELECT rownum n FROM dual CONNECT BY level <= 10000
)
SELECT  rownum id,
  CASE
    WHEN mod(rownum,100000) = 0     THEN 'THIS WAS EXCEPTIONAL' 
    WHEN mod(rownum,10000) = 0      THEN 'THIS WAS VERY GOOD'
    WHEN mod(rownum,1000) = 0       THEN 'THIS WAS GOOD'
    WHEN mod(rownum,100) = 0        THEN 'OK'
    ELSE 'NO COMPLAINTS'
  END 
FROM v1,
  v1
WHERE rownum <= 1e6;

create sequence loboffload_s cache 10000 start with 1000001;


insert /*+ append parallel(4) */ into loboffload
select /*+ parallel(4) */ loboffload_s.nextval, comments from loboffload;

I ended up with 16 million rows. I used impdp over a network link to move the table to the other database.
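For reference, a network-mode import along these lines does the trick; the database link, directory and log file names here are hypothetical:

impdp martin directory=DATA_PUMP_DIR network_link=ora11203_link tables=MARTIN.LOBOFFLOAD logfile=loboffload_imp.log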

Test with 11.2.0.3

The test in 11.2.0.3 is used to confirm that inline LOBs are not offloadable. Here is the query and result:

MARTIN@ora11203> select /*+ monitor noparallel full(t) */ count(*) from loboffload t where comments like '%GOOD%';

  COUNT(*)
----------
     15840

Elapsed: 00:01:33.48

MARTIN@ora11203> -- removing irrelevant statistics from the output
MARTIN@ora11203> @mystats stop t=1
==========================================================================================
MyStats report : 25-MAY-2015 02:33:24
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  95.28
TIMER   CPU time used (seconds)                                                      93.00


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                     9,303
...
STAT    DB time                                                                      9,353
...
STAT    cell flash cache read hits                                                   9,454
STAT    cell physical IO interconnect bytes                                  1,233,436,672
...
STAT    consistent gets                                                            150,794
STAT    consistent gets - examination                                                    3
STAT    consistent gets direct                                                     150,558
STAT    consistent gets from cache                                                     236
STAT    consistent gets from cache (fastpath)                                          231
STAT    cursor authentications                                                           3
...
STAT    execute count                                                                   13
...
STAT    logical read bytes from cache                                           14,393,344
STAT    no work - consistent read gets                                             150,558
...
STAT    parse count (hard)                                                               1
STAT    parse count (total)                                                             13
STAT    parse time cpu                                                                   1
STAT    parse time elapsed                                                               1
STAT    physical read IO requests                                                    9,459
STAT    physical read bytes                                                  1,233,436,672
STAT    physical read requests optimized                                             9,454
STAT    physical read total IO requests                                              9,459
STAT    physical read total bytes                                            1,233,436,672
STAT    physical read total bytes optimized                                  1,233,395,712
STAT    physical read total multi block requests                                     9,255
STAT    physical reads                                                             150,566
STAT    physical reads cache                                                             8
STAT    physical reads direct                                                      150,558
...
STAT    table scan blocks gotten                                                   150,558
STAT    table scan rows gotten                                                  16,000,000
STAT    table scans (direct read)                                                        1
STAT    table scans (long tables)                                                        1
...

------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

So none of the essential cell% statistics were recorded except for cell flash cache read hits. With the cell scans statistic not being incremented by 1 (for the table segment) you can conclude that no Smart Scan happened here.

Test on 12.1.0.2/12.1.2.1.0

The first result was not too surprising, as it is the documented behaviour. The second test should hopefully reveal some offloading.

MARTIN@ORA12c> select /*+ monitor noparallel full(t) */ count(*) from loboffload t where comments like '%GOOD%';

  COUNT(*)
----------
     15840

Elapsed: 00:00:01.65

MARTIN@ORA12c> @mystats stop t=1
==========================================================================================
MyStats report : 25-MAY-2015 02:29:46
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  14.61
TIMER   CPU time used (seconds)                                                       0.03


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                         9
...
STAT    cell IO uncompressed bytes                                           1,234,296,832
STAT    cell blocks helped by minscn optimization                                  150,666
STAT    cell blocks processed by cache layer                                       150,671
STAT    cell blocks processed by data layer                                        150,671
STAT    cell blocks processed by txn layer                                         150,671
STAT    cell flash cache read hits                                                   1,143
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                1,234,296,832
STAT    cell physical IO interconnect bytes                                      6,273,368
STAT    cell physical IO interconnect bytes returned by smart scan               6,273,368
STAT    cell scans                                                                       1
...
STAT    consistent gets                                                            151,053
STAT    consistent gets direct                                                     150,671
STAT    consistent gets examination                                                      3
STAT    consistent gets examination (fastpath)                                           3
STAT    consistent gets from cache                                                     382
STAT    consistent gets pin                                                            379
STAT    consistent gets pin (fastpath)                                                 379
...
STAT    parse count (total)                                                             13
STAT    physical read IO requests                                                    1,205
STAT    physical read bytes                                                  1,234,296,832
STAT    physical read requests optimized                                             1,143
STAT    physical read total IO requests                                              1,205
STAT    physical read total bytes                                            1,234,296,832
STAT    physical read total bytes optimized                                  1,171,423,232
STAT    physical read total multi block requests                                     1,189
STAT    physical reads                                                             150,671
STAT    physical reads direct                                                      150,671
...
STAT    table scan blocks gotten                                                    15,744
STAT    table scan disk non-IMC rows gotten                                      1,670,692
STAT    table scan rows gotten                                                   1,670,692
STAT    table scans (direct read)                                                        1
STAT    table scans (short tables)                                                       1
...

------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

Good news: what Oracle said is true. The same query that did not offload in 11.2.0.3 is now offloaded in 12.1.0.2 with Exadata 12.1.2.1.0. You find proof in the existence of all the cell% statistics, especially cell scans plus the “cell blocks processed by … layer” statistics.
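To check quickly for a Smart Scan from your own session, the relevant counters can be read from V$MYSTAT; a minimal sketch:

select n.name, s.value
from   v$statname n
join   v$mystat   s on s.statistic# = n.statistic#
where  n.name in ('cell scans',
                  'cell physical IO bytes eligible for predicate offload',
                  'cell physical IO interconnect bytes returned by smart scan');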

Temp Table Transformation Cardinality Estimates - 1

Having recently published two notes about the Temp Table Transformation highlighting the heuristics-based decision and other weaknesses, for example regarding the projection of columns, it's time to publish some more notes about it. The transformation can also have a significant impact on cardinality estimates, both join and single table cardinality. Looking at the difference in the join cardinality estimates of the following simple example:


create table t1
as
select
rownum as id
, mod(rownum, 10) + 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

alter session set tracefile_identifier = 'temp_trans_join_card';

alter session set events '10053 trace name context forever, level 1';

explain plan for
with
cte as (
select /* inline */ id + 1 as id from t1 t
where 1 = 1
)
select /*+
--opt_estimate(@"SEL$2" join("A"@"SEL$2" "B"@"SEL$2") rows=1000)
no_merge(a) no_merge(b)
*/ * from cte a, cte b
where a.id = b.id
;

alter session set events '10053 trace name context off';

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 |
| 1 | TEMP TABLE TRANSFORMATION | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D660C_27269C | | |
| 3 | TABLE ACCESS FULL | T1 | 1000 | 4000 |
|* 4 | HASH JOIN | | 1 | 26 |
| 5 | VIEW | | 1000 | 13000 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
| 7 | VIEW | | 1000 | 13000 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("A"."ID"="B"."ID")

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 10000 | 253K|
|* 1 | HASH JOIN | | 10000 | 253K|
| 2 | VIEW | | 1000 | 13000 |
| 3 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
| 4 | VIEW | | 1000 | 13000 |
| 5 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("A"."ID"="B"."ID")

the following becomes obvious:

  • There are vastly different cardinality estimates possible depending on whether the transformation gets used or not. In principle, due to the NO_MERGE hints used, the transformation shouldn't have any impact on the estimates, but it does
  • Looking at the optimizer trace file, some information seems to get lost when the transformation gets used, in particular column related statistics
  • This information loss, like in the example here, can lead to join cardinality estimates of 0 (rounded to 1 in the plan output)
  • And even worse, at least in pre-12c versions, such a 0 cardinality estimate can't be corrected via OPT_ESTIMATE hints, since somehow the correction gets ignored/lost although being mentioned in the optimizer trace:


11.2.0.1:
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Join Card: 0.000000 = = outer (1000.000000) * inner (1000.000000) * sel (0.000000)
>> Join Card adjusted from 0.000000 to: 1000.000000, prelen=2
Adjusted Join Cards: adjRatio=1.00 cardHjSmj=0.000000 cardHjSmjNPF=0.000000 cardNlj=0.000000 cardNSQ=1000.000000 cardNSQ_na=0.000000
Join Card - Rounded: 1 Computed: 0.00

The behaviour regarding the OPT_ESTIMATE hint changes in 12c, but then there are other oddities introduced in 12c that are not there in pre-12c: have a look at the "Query Block" section when using the INLINE variant of the query. There are two identical fully qualified object names, clearly a bug, making hinting using global hint syntax impossible for that query block.

Although my simple example here can be corrected via extended statistics on the join column expression used in the CTE query, my point here is that, depending on whether the transformation gets used or not, vastly different and extreme cardinality estimates are possible – and those extreme cases can't even be corrected in pre-12c versions.

For example, I recently had a real life case where two columns were joined that had a significant number of NULL values, one coming from a temp table transformation row source. Without the transformation the join cardinality estimates were reasonable, but the transformation again led to such a 0 cardinality estimate (that couldn't be corrected via a (correctly specified) OPT_ESTIMATE hint), ruining the whole plan.
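For completeness, this is roughly how the extended statistics on the join expression mentioned above could be created; a sketch using DBMS_STATS (the expression must match the one used in the CTE):

select dbms_stats.create_extended_stats(null, 't1', '(id + 1)') from dual;

exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size auto')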

Datapump TTS quick tip

I was trying to transport a large number of tablespace datafiles from one database to another, and impdp kept crashing with “PL/SQL: numeric or value error: character string buffer too small.”

It turns out there is a limit on what you can put on a single line in a parameter file.  Hence if you have something like:

transport_datafiles=/long/path/file1,/long/path/file2,/long/path/file3,....

then you might run into trouble. It’s easily fixed however – just put the entries one per line:

transport_datafiles=/long/path/file1
transport_datafiles=/long/path/file2
transport_datafiles=/long/path/file3
....
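For what it’s worth, keeping the list in a parameter file also keeps the command line short; a sketch with hypothetical file names, the datafile entries listed one per line in tts.par as above:

impdp system parfile=tts.par

Here tts.par would hold the usual directory and dumpfile parameters along with the transport_datafiles lines.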

Oracle 12c – PL/SQL Implicit Result Sets

Implicit Result Sets

Beginning with Oracle 12c it is possible for SELECT statements to send results back to a client from PL/SQL without the need for REF CURSOR definitions. Previously, PL/SQL required definition of a REF CURSOR as an OUT parameter, adding complexity (and precision) not found in other products (argh… the older I get the more I believe that a module should do exactly one thing (and do it well)). However, it is common in some database products to use a Stored Procedure to return multiple result sets.

Using Oracle 12c’s newly implemented implicit result sets allows reuse of SQL Server (and other database) stored procedures that return implicit result sets. Implicit result sets were originally targeted at JDBC clients but will probably work well in other environments like C#.NET or SQL Server T-SQL.

Oracle 12c PL/SQL’s DBMS_SQL.RETURN_RESULT procedure makes result cursors available to the client easily (procedures do not need an OUT parameter to return cursor results). The general process is as follows:

  • In PL/SQL Procedure
    • Open cursorname FOR SELECT
    • Call DBMS_SQL.RETURN_RESULT(cursorname)
  • In Java JDBC
    • Call stored procedure returning implicit result sets
    • Locate next result set from procedure with getMoreResults() or getMoreResults(nn)
    • Retrieve result set using getResultSet()
  • In PL/SQL
    • Supported in PL/SQL with DBMS_SQL.GET_NEXT_RESULT (a sketch follows this list)
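The PL/SQL consumption side is not shown in the steps above, so here is a minimal sketch using DBMS_SQL.GET_NEXT_RESULT against the procedure defined in the next section. Both cursors in that procedure happen to return the same column shape (number, varchar2, varchar2, number), which keeps the FETCH simple:

declare
  l_cur    pls_integer;
  l_status pls_integer;
  l_rc     sys_refcursor;
  l_num1   number;
  l_text1  varchar2(100);
  l_text2  varchar2(100);
  l_num2   number;
begin
  l_cur := dbms_sql.open_cursor;
  dbms_sql.parse(l_cur, 'begin my_implicit_tables(:dept); end;', dbms_sql.native);
  dbms_sql.bind_variable(l_cur, ':dept', 20);
  l_status := dbms_sql.execute(l_cur);
  loop
    begin
      -- raises NO_DATA_FOUND once all result sets have been consumed
      dbms_sql.get_next_result(l_cur, l_rc);
    exception
      when no_data_found then exit;
    end;
    loop
      fetch l_rc into l_num1, l_text1, l_text2, l_num2;
      exit when l_rc%notfound;
      dbms_output.put_line(l_num1 || ' ' || l_text1 || ' ' || l_text2 || ' ' || l_num2);
    end loop;
    close l_rc;
  end loop;
  dbms_sql.close_cursor(l_cur);
end;
/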

Example PL/SQL Procedure using Implicit Result Set

The procedure below creates two implicit result sets that may be read by client software such as Java, C#, TSQL, or PL/SQL:

create or replace PROCEDURE MY_IMPLICIT_TABLES
  (in_deptid in emp.deptno%type)
as
  mycursor  sys_refcursor;
  mycursor2 sys_refcursor;
begin
  if in_deptid is not null then
    open mycursor for
      select empno, ename, job, sal
        from emp
       where deptno = in_deptid
       order by ename;
    dbms_sql.return_result(mycursor);
    /* now for 2nd cursor */
    open mycursor2 for
      select employee_id, last_name, job_id, salary
        from hr.employees
       where department_id = in_deptid
       order by last_name;
    dbms_sql.return_result(mycursor2);
  end if;
END MY_IMPLICIT_TABLES;

The following JDBC code calls the stored procedure, then processes each of the result sets returned. Note that returning two or more result sets means that the client-side code must have some intelligence about how to process each result set in turn; this is pretty fragile design, so be careful. The sample program is shown below, followed by its output.

package samples;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

/** Sample use of Oracle Procedure with Implicit Results
 *
 * Note: Exception handling omitted to simplify example
 *
 * @version 1.01 04/01/2015
 * @author  John Jay King
 */
public class OraThinJDBCImplicitResults {

    public OraThinJDBCImplicitResults() throws SQLException {
        // Load the Oracle JDBC driver - not needed for JDBC 4 Direct Connection
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());

        // Direct connection to database; "real applications" should
        // probably use a Connection Pool for best system performance
        Connection conn =
            DriverManager.getConnection("jdbc:oracle:thin:@192.168.0.61:1539/pdb1.localdomain",
                "john", "king");

        // Create a Statement
        CallableStatement stmt1 = conn.prepareCall("{CALL my_implicit_tables(?)}");
        ResultSet rs1 = null;
        int rsetCount = 0;
        stmt1.setInt(1, 20);
        stmt1.executeQuery();
        while (stmt1.getMoreResults()) {
            System.out.println("Processing Data From Result Set " + ++rsetCount);
            rs1 = stmt1.getResultSet();
            processResults(rs1);
        }

        if (rs1 != null) {
            rs1.close();
        }
        if (stmt1 != null) {
            stmt1.close();
        }
        if (conn != null) {
            conn.close();
        }
    }

    private void processResults(ResultSet rset) throws SQLException {
        ResultSetMetaData myRSMetaData = rset.getMetaData();
        System.out.println("ResultSet");
        System.out.println("\tColumn Count:" + myRSMetaData.getColumnCount());
        int ctr = 0;
        int coltype;
        int colcount = myRSMetaData.getColumnCount();
        while (rset.next()) {
            for (ctr = 1; ctr < (colcount + 1); ctr++) {
                System.out.println("\t** Column " + ctr + " ****************");
                System.out.println("\t\tName: " + myRSMetaData.getColumnName(ctr));
                System.out.println("\t\tColtype: " + myRSMetaData.getColumnType(ctr));
                coltype = myRSMetaData.getColumnType(ctr);
                System.out.print("\t\tValue is ");
                switch (coltype) {
                    case Types.VARCHAR:
                        System.out.println(rset.getString(ctr));
                        break;
                    case Types.DOUBLE:
                        System.out.println(rset.getDouble(ctr));
                        break;
                    case Types.INTEGER:
                        System.out.println(rset.getInt(ctr));
                        break;
                    case Types.NUMERIC:
                        System.out.println(rset.getDouble(ctr));
                        break;
                    case Types.TIMESTAMP:
                        System.out.println(rset.getDate(ctr));
                        break;
                    default:
                        System.out.println(rset.getString(ctr));
                }
            }
        }
    }

    public static void main(String args[]) throws SQLException {
        OraThinJDBCImplicitResults myir = new OraThinJDBCImplicitResults();
    } // end main
} // end OraThinJDBCImplicitResults

Processing Data From Result Set 1

ResultSet
Column Count:4
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7876.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is ADAMS
** Column 3 ****************
Name: JOB
Coltype: 12
Value is CLERK
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 1100.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7902.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is FORD
** Column 3 ****************
Name: JOB
Coltype: 12
Value is ANALYST
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 3000.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7566.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is JONES
** Column 3 ****************
Name: JOB
Coltype: 12
Value is MANAGER
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 2975.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7788.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is SCOTT
** Column 3 ****************
Name: JOB
Coltype: 12
Value is ANALYST
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 3000.0
** Column 1 ****************
Name: EMPNO
Coltype: 2
Value is 7369.0
** Column 2 ****************
Name: ENAME
Coltype: 12
Value is SMITH
** Column 3 ****************
Name: JOB
Coltype: 12
Value is CLERK
** Column 4 ****************
Name: SAL
Coltype: 2
Value is 800.0

Processing Data From Result Set 2

ResultSet
Column Count:4
** Column 1 ****************
Name: EMPLOYEE_ID
Coltype: 2
Value is 202.0
** Column 2 ****************
Name: LAST_NAME
Coltype: 12
Value is Fay
** Column 3 ****************
Name: JOB_ID
Coltype: 12
Value is MK_REP
** Column 4 ****************
Name: SALARY
Coltype: 2
Value is 6000.0
** Column 1 ****************
Name: EMPLOYEE_ID
Coltype: 2
Value is 201.0
** Column 2 ****************
Name: LAST_NAME
Coltype: 12
Value is Hartstein
** Column 3 ****************
Name: JOB_ID
Coltype: 12
Value is MK_MAN
** Column 4 ****************
Name: SALARY
Coltype: 2
Value is 13000.0

Oracle’s implicit result set capability will allow reuse of existing stored procedure designs (if desired); just be prepared for the complexity of client code that needs to know about the number of result sets and their contents.

From Product X to SQL Developer

I recently worked at a company that used "Product X" for all of its SQL and PL/SQL activities.  There’s no real need to reveal what "Product X" is, because this isn’t a post about whether as a product it was good, bad or somewhere in between.  "Product X" met the needs of the Oracle developers in the company, and that’s all that matters.  There was just one issue – for Product X, the company was paying a little under $100k annually in license fees.

Enter SQL Developer.  Similarly, it’s not particularly relevant whether I think, or anyone else at the company thinks, it’s better or worse than "Product X".  What does matter is – can it do the required tasks of the developers?  And the answer to that is a resounding yes.  And once that question is answered, paying any kind of annual license fee for Product X is no longer an intelligent business choice.  So the organisation is in the process of moving to SQL Developer.  The trick now is how to enforce… er, assist :-) the developers with moving to SQL Developer.  And the reality is – once you show developers how to re-map the shortcut keys in SQL Developer to match what they were used to in Product X, the task is virtually done.

But some developers still clung to Product X… and hence were threatening a violation of the no-longer-current license agreement.  So the task then became – how to stop… er, dissuade :-) them from still using Product X.  This post gives you a few ideas on how you might want to tackle it.

Version 1

I assumed the best of our developers :-) and put in a simple login trigger to check if the program they’d connected with was "ProductX.exe".  If it was, then a job was submitted (via dbms_job) to pop an email to them to let them know that Product X was no longer the company standard.
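This is a minimal sketch of the idea, not the actual trigger from the post; notify_product_x_user is a hypothetical procedure that sends the email, and the trigger owner needs SELECT on V$SESSION:

create or replace trigger check_client_tool
after logon on database
declare
  l_program v$session.program%type;
  l_job     binary_integer;
begin
  select program
    into l_program
    from v$session
   where sid = sys_context('userenv', 'sid');

  if upper(l_program) like '%PRODUCTX.EXE%' then
    -- notify_product_x_user is hypothetical: it emails the connected user
    dbms_job.submit(l_job, 'notify_product_x_user(''' || user || ''');');
  end if;
end;
/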

Version 2

But old habits die hard… and within a few days, some developers had renamed ProductX.exe to SQL Developer.exe and were back on the database, thrilled with their ingenuity :-) So the trigger was adjusted to also check the MODULE, as Product X also set its session module value to "Product X".  But you can’t keep a good developer down, and with a little assistance from a hex editor, the module was now cleared.

Version 3

Most development tools, when they connect to the database, run a series of queries to check things like privileges and the like.  So by tracing a connection made by Product X, a list of cursors that it executed could be identified.  The login trigger was then altered to always submit a job.  The job would check the program and module as before, but it would also check V$OPEN_CURSOR for the new session.  If it contained a selection of the statements that Product X would execute, then the trap had been sprung :-)
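A sketch of the kind of check the job could run; the signature SQL text here is of course hypothetical and would come from the traced Product X connection:

select count(*)
  from v$open_cursor
 where sid = :target_sid
   and sql_text like 'SELECT PRIVILEGE FROM SESSION_PRIVS%';  -- hypothetical Product X startup query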

 

So Version 3 has been active for a couple of weeks now, and plenty of emails have been sent.  It will soon be amended to terminate connections, and thus, the migration to SQL Developer will be complete :-)

EM Express versus EM Cloud Control

Recently I was asked the question “What is the real difference between EM Cloud Control [NOTE: I’ll refer to this as EM12c through the rest of this post] and EM Database Express in 12c?” It was (for me) a pretty easy question to answer, but I wanted to provide the questioner with a link to the place in the Enterprise Manager documentation where it covers that in detail. Somewhat to my surprise, I wasn’t able to find such a link – well, not quickly anyway. I think the reason for that is the documentation for EM Express (as it’s more commonly abbreviated to) falls under the database documentation which is owned by a different group in Oracle than the Enterprise Manager documentation. Well, that’s my speculation anyway. It may just be there in the documentation and I couldn’t find it in my quick search. :) Be that as it may, if I couldn’t find it in a quick search then I suspect others would have the same issue, so I thought it was worthwhile to bash out a quick post on the subject. And of course, what was originally going to be a quick post turned into something much bigger, so if you want, use the links below to move through the post:

What is EM Express?
What is EM12c?
What are the Differences Between the Two?

What is EM Express?

Before I start on what the differences are, it’s probably worthwhile to spend a little time explaining what EM Express actually is. The first thing to note is what it replaces, and that’s the Database Control product. Database Control is no longer available as of Oracle Database 12c. In its place, the tool to monitor a single database is EM Express. EM Express – or to give it its full name, Oracle Enterprise Manager Database Express 12c (now you can see why it’s normally called just EM Express!) – is an integrated GUI management tool for database administration of a single Oracle Database 12c database. Let’s explain that in a bit more detail.

Integrated Installation

EM Express is preconfigured and installed when the database kernel is installed, so when you install your ORACLE_HOME, you have also installed EM Express. If you have used DBCA to create a database, one of the questions you will be asked is whether you want to manage this database with EM Express or EM12c. You can actually select both checkboxes, though I found the last time I did that (admittedly quite some time ago when Oracle Database 12c was relatively new) that the installer got a tad confused by selecting both and it was better to choose one and then set up the other later. EM Express runs inside the database and has no extra middleware components (this is one area of differentiation between EM Express and the earlier Database Control product). EM Express leverages the XDB server inside the database. It supports both single instance and Real Application Clusters (RAC) databases.
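Since EM Express rides on the XDB HTTP(S) listener, a quick way to find (or set) the port it listens on is via DBMS_XDB_CONFIG; a sketch, assuming the default HTTPS setup:

-- find the current EM Express HTTPS port (0 means not configured)
select dbms_xdb_config.gethttpsport() from dual;

-- set it, for example to the conventional 5500
exec dbms_xdb_config.sethttpsport(5500)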

Small Footprint

EM Express has a fairly small footprint in terms of disk storage, memory and CPU usage. The disk storage added to the kernel by EM Express is only around 20 MB. It requires minimal CPU and memory – in fact, none until you actually connect to it. All the UI rendering is done in the browser being used, reducing the footprint even more, and the database server is only used to run the SQL needed to retrieve the information being displayed in the GUI.

Comprehensive Administration

OK, comprehensive may be optimistic. :) This is one area where the tool doesn’t quite measure up as yet compared to Database Control, in that EM Express provides only basic administration support for configuration management, storage management, and security management. No doubt as new releases of the tool come out, this coverage will expand. Currently, the tool provides the following functionality:

  • Configuration Management – initialization parameters, memory, database feature usage, and current database properties
  • Storage Management – tablespaces, undo management, redo log groups, archive logs, and control files
  • Security Management – users, roles, and profiles
  • Performance Management – Performance Hub and SQL Tuning Advisor

One area where EM Express far outstrips Database Control (and in fact, even surpasses similar areas in EM12c) is the performance side. EM Express has quite advanced performance diagnostics and tuning capabilities. However, you do need to be licensed for the Diagnostics and Tuning packs to use these pages.

You can find a little bit more on managing Oracle Database 12c with EM Express in my presentation from the APAC OTN Tour back in 2013, located here.

What is Enterprise Manager Cloud Control 12c?

OK, so that’s what EM Express handles. Now let’s have a quick overview of the EM12c product.

To start with, EM12c is more of a product line or product family than a single product, but it still allows you to manage your entire Oracle infrastructure from a single pane of glass, from application to disk. It even allows you to manage your non-Oracle infrastructure, with a variety of plugins and connectors to manage everything from non-Oracle databases (like SQL Server and DB2) to non-Oracle middleware (like WebSphere, Microsoft Active Directory and JBoss Application Server) and so on. And if there’s something it doesn’t yet have a plugin or connector for, you can use the Extensibility capabilities to write your own.

If you look at the key capabilities of EM12c, it really boils down to three main areas:

  • Cloud Lifecycle Management – EM12c helps enterprises discover their IT assets and plan their cloud by providing consolidation recommendations, and has a guided cloud setup wizard to help you define services such as Infrastructure as a Service, Middleware as a Service, and Database as a Service. With later releases, you also have Schema as a Service, Pluggable Database as a Service, Testing as a Service, and Snap Clone functionality. It also includes an out of the box self service portal capability, along with APIs that can be used to create your own self service portal. Finally, it has chargeback capability that allows you to provide business users a comprehensive report of how much computing resources they are using, and even to invoice them for the costs of those resources if you want to.
  • Complete Application to Disk Management – EM12c can be used to monitor and manage every layer of your infrastructure from the application all the way down to the storage, including:
    • Application Management – EM12c provides an application management solution for packaged, third party and custom applications, including E-Business Suite, Siebel, PeopleSoft, JD Edwards, Fusion Apps, and Java or SOA based custom and third party apps.
    • Middleware Management – EM12c also manages the many Fusion Middleware components, such as WebLogic Server, SOA Suite, Coherence, Identity Management, Business Intelligence, GlassFish, Jolt and WebCenter.
    • Database Management – this is the area EM12c has been well known for over many years. Indeed, the first version of Enterprise Manager I ever saw, the 0.76 beta release, already contained a lot of database management capabilities. Obviously, those have all expanded over the years as the database capabilities also have, and EM12c now includes additional functionality on top of the native database capabilities in database lifecycle management areas like automating mass database upgrades and patching.
    • Hardware and Virtualization Management – Traditionally, Enterprise Manager has been focused on software management. EM12c expands that to provide management capabilities like provisioning, patching, monitoring, administration and configuration management for both hardware and virtualization layers too. This is particularly true in the case of Oracle’s Sun hardware line.
    • Heterogeneous Management – Even though Oracle’s footprint has been expanding into so many different areas over the years, there are still going to be some non-Oracle products at most customer sites that need managing as well. Oracle has a number of plugins and management connectors that can be used to monitor and manage these third-party products. EM12c also includes an Extensibility Development Kit (EDK) to allow you to write your own management plugins. Many of these are available for other customers to use via the Extensibility Exchange.
    • Engineered Systems Management – EM12c is tightly integrated with a range of Oracle’s Engineered Systems, such as Exadata, Exalogic, Exalytics, SuperCluster, Virtual Compute Appliance (VCA), Big Data Appliance, and Oracle Database Appliance (ODA). That list just keeps expanding as more engineered systems are released, so customers can take advantage of integrated hardware, software and management capabilities that are engineered together to form a complete solution.
    • Application Quality Management – EM’s AQM products provide an end-to-end testing solution, including Application Testing Suite (OATS) which has test management, functional testing and load testing capabilities, Application Replay and Real Application Testing to provide production-scale testing of the application and database infrastructure, and test data management capabilities like data masking and subsetting
    • Application Performance Management – APM delivers business driven application management with end to end monitoring that includes user experience management (which includes both real user monitoring via RUEI and synthetic transaction monitoring via SLM beacons), business transaction management (which allows you to monitor and trace transactions from a business perspective), Java and database monitoring and diagnostics, and application performance analytics.
  • Enterprise-grade management – Not only is it important that you have the ability to manage your Oracle infrastructure with EM12c, but it’s also important that EM12c itself can support mission critical workloads. Way back when, it really didn’t matter most of the time if your management tool went down every so often, but now people are realizing it’s as important to have mission critical management for your mission critical workloads. It’s not unusual to find Enterprise Manager installations protected by RAC databases with Data Guard as well, along with software load balancers and the whole enchilada. EM12c has been thoroughly tested for scalability (see here for one of my very early posts on this, and just recently we saw that very same environment hit 2 million targets!). EM12c’s Self Update framework ensures you can easily incorporate management of the latest and greatest products from Oracle. And in addition, the security model introduced with EM12c allows tightly controlled access to the EM12c product itself. So all of these points allow you to be sure EM12c can manage the needs of even the largest environments.
So What are the Differences Between the Two?

Now you know what the two products are capable of, you can probably pick out some of the differences. :) But let’s be 100% clear on these.

The most obvious difference is that EM Express is a management tool that manages a single database, while EM12c manages your entire Oracle (and non-Oracle) data center. If all you need to manage is a few Oracle databases, then EM Express will be more than capable of meeting your needs. You will have to connect to each database separately to manage it though. However, as soon as you need to manage more than a few databases, or you want to manage more than just databases (i.e. you want to manage middleware, apps and so on), EM12c is your tool of choice. We have customers with literally thousands of database targets they need to manage. Try doing that with EM Express! :)

The second difference is that EM Express is an agentless architecture. Everything you need to connect to EM Express is built into the database kernel. You just need to point a browser to the correct page, and away you go. EM12c, on the other hand, uses an agent to both discover and then manage targets. For each host that you add to an EM12c environment, you add an agent to that host and it then discovers and promotes all the targets (such as databases, listeners, WLS environments and so on) that are on that host. When you need to perform a task on those targets (such as shutting down a database), EM12c communicates with the agent on the host containing that database and it sends the message to the database to shut down. If you think about it, using the agents like this is really the only way to ensure a scalable solution in large environments.

The third difference is functionality. As I mentioned above, EM Express provides a specific set of functionality, and if you want more you need to step outside of EM Express to either a tool like SQL Developer (I haven’t touched on that here because the article would be even longer, but if you want more on that go and visit Jeff Smith’s site for more details) or EM12c. Just a couple of examples – metric extensions, Data Guard management, and chargeback are all only available from EM12c rather than EM Express.

The fourth difference is security. To log in to EM Express, you need to be a user that has the DBA, EM_EXPRESS_BASIC or EM_EXPRESS_ALL roles. The EM_EXPRESS_BASIC role provides read-only access to EM Express, so users with that role can view the UI but not make any changes. This role also includes the SELECT_CATALOG_ROLE role. The EM_EXPRESS_ALL role obviously grants a lot more access. :) It has full read / write access to all EM Express features, and includes the EM_EXPRESS_BASIC role. Compare that to the security model for EM12c, where there are quite a number of roles and over 100 fine-grained privileges, and you can see that EM12c has a much more granular security model.
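As a small illustration of the EM Express side of that model, giving someone read-only access is just a role grant; the user name and password here are hypothetical:

-- hypothetical read-only EM Express user
create user em_viewer identified by "ChangeMe#2015";
grant create session, em_express_basic to em_viewer;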

There are probably other more minor differences between the products that I haven’t covered here, but to my mind these are the main differences. Hope that helps explain the differences between the two!

Friday Philosophy – Why I Volunteer for User Groups

I’ve just noticed a new page about me popping up on the UKOUG web site – it’s in the section about volunteer case studies, alongside people like Joel Goodman, Simon Haslam, Carl Dudley, Jason Arneil, Brendan Tierney and others who have been stupid… er, good enough to give time and effort to the UKOUG.
{You can get to the page by going to the UKOUG home page (www.ukoug.org) and clicking the Membership or Member Activities tab, then Case Studies & Testimonials under that, and finally Volunteer Case Studies. Phew. Or follow the link I gave at the start and click on the other names.}

I’m not sure how long I’ve been up there, but only a couple of days I think.

Anyway, why DO I volunteer for user groups?

The little bio covers most of it but I thought I would put some words here on my blog too. I volunteer because, fundamentally, I am a socialist (with a small ‘s’) – I feel that we are all better off if we all help each other. I’ve been helped by people in my career (presenting stuff I don’t know, giving advice), and I guess I feel that I should return that favour. Many of the people who have helped me (and continue to do so) stand to gain nothing personally by helping me. In fact, one or two have helped me when, strictly speaking, they are helping create a rival for work opportunities. I try to do the same for those around me. I know, it sounds a bit “Disney film teaching the kids to do right” goody-two-shoes, but that is the core of it. And there are some other aspects to it too…

Why do I volunteer for the UKOUG specifically? Because they are THE main user group in my geographic area and provide the most support to the Oracle user community here in the UK. Most of the people involved in the UKOUG are just nice people too. But I also support and volunteer for smaller user groups, mostly by either promoting their meetings, going to them or presenting. I started presenting at the main UKOUG conference back when Dido, Eminem and Christina Aguilera were in their heyday. I also went to the RDBMS and similar SIGs, and before long I was presenting at them, and then got sucked into chairing one of them – the Management and Infrastructure SIG. I’ve been slowly sucked in more & more as the years roll by.

That has led on to me presenting at other user groups in different countries. Actually, I used to do quite a bit of presenting abroad (mostly in the US) around 10 years ago, but that was part of the role I had at the time and my employer paid the bills. There is no employer to pay the bills now, but as it is my own time, I try to make presenting abroad also a chance to have a short holiday – I try to take a day or two either side of the event to look around. And actually, it is nice spending time with other people who present at or attend user group meetings.

Another part of it is I just like presenting. This is not quite so Disney Nice Guy; there is an aspect that is more selfish, in that standing up, being listened to and telling people stuff that maybe they don’t know makes me feel better about myself. Better about myself? OK, I’ll let that stand for now, but it is more that it makes me feel I am achieving something and having an impact. That I am useful. Fundamentally it is still a desire to help, and presenting does not scare me (I know it is scary for a lot of people, but then a lot of people are not scared of heights and I am – it all balances out). But with a slice of “look at me!!!” thrown in.

There are also rewards for the effort. I’ve got to know a lot more people as a result of presenting, blogging (and now tweeting) than I would have had I stayed just one of the audience. For me it has helped me make more friends. As I said above, part of what is now nice about user group meetings for me is meeting friends I’ve made who are also on the speaker circuit, and there are inevitably a few drinks in the evening whenever there is a user group meeting. It also gives you more exposure in the community and helps lead to job opportunities – or at least that is the theory. No one has yet offered me a job because they liked my blog post or presentation!

That leads me to the last aspect of volunteering. Some people volunteer primarily for selfish reasons: to get bragging rights, get it on their CVs, to help them get sales contacts or better jobs. The odd thing is, people who do it for those reasons tend not to last – as volunteering for user groups is a lot of hard work for those rewards. You can usually spot them, as they are the ones who don’t actually do a lot or complain all the time about the coffee being bad (actually, usually the coffee IS bloody terrible) and other things. Don’t get me wrong, some of those rewards do come with the volunteering, but if someone is volunteering primarily to get them, it does not seem to work out for them. Or maybe that is my socialism coming out again :-). Fundamentally, I think volunteering only works if, at the core of it, you want to help other people. Maybe that is why other volunteers are such nice people to hang around with.

Why do you do it? (Or not?)

EM12c : Login to GUI with the correct password causes authentication failure

So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error:

“Authentication failed. If problem persists, contact your system administrator”

I was quite sure that the password I had was correct, so I tried with the SYSMAN user and got the same error. I still wanted to verify that I had the correct password, so I used the SYSMAN user to log in to the repository database, which was successful, so I knew something was wrong there.

SQL> connect sysman/
Enter password:
Connected.

So I went to the /em/EMGC_OMS1/sysman/log/emoms.log file and saw the following error:

2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
ORA-06512: at line 1

This led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn’t the case, since it was set to 50. That value is too low, though, so I bumped it up to 1000 and reran the EM12c repository DBMS Scheduler jobs as per the documentation in MOS note 1498456.1:

    
SQL> show parameter JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     50
SQL> alter system set JOB_QUEUE_PROCESSES=1000 scope = both;

System altered.

SQL> show parameter both
SQL> show parameter job

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     1000
SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 0;

System altered.

SQL> connect sysman/alyarog1605
Connected.
SQL> exec emd_maintenance.remove_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> exec gc_interval_partition_mgr.partition_maintenance;

PL/SQL procedure successfully completed.

SQL> @$OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql SYSMAN
old  11: AND owner = upper('&RECOMPILE_REPOS_USER')
new  11: AND owner = upper('SYSMAN')
old  26: dbms_utility.compile_schema(upper('&RECOMPILE_REPOS_USER'),FALSE);
new  26: dbms_utility.compile_schema(upper('SYSMAN'),FALSE);
old  41: WHERE owner = upper('&RECOMPILE_REPOS_USER')
new  41: WHERE owner = upper('SYSMAN')
old  84: AND owner = upper('&RECOMPILE_REPOS_USER')
new  84: AND owner = upper('SYSMAN')
old 104: AND ds.table_owner = upper('&RECOMPILE_REPOS_USER')
new 104: AND ds.table_owner = upper('SYSMAN')

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 1000;

System altered.

SQL> connect sysman/
Enter password:
Connected.
SQL> exec emd_maintenance.submit_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.
    
    

After this I bounced the OMS, but still kept getting the same error. And though it fixed the scheduler jobs, I was now seeing the following error in the emoms.log:

2015-05-18 22:29:09,573 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN auth.EMRepLoginFilter doFilter.450 - InvalidEMUserException caught in EMRepLoginFilter: Failed to login using repository authentication for user: SSA_ADMIN
oracle.sysman.emSDK.sec.auth.InvalidEMUserException: Failed to login using repository authentication for user: SSA_ADMIN

So what I did was update the SYSMAN.MGMT_AUDIT_MASTER table and run the procedure MGMT_AUDIT_ADMIN.ADD_AUDIT_PARTITION, as stated in document id 1493151.1:

oracle $ sqlplus

Enter user-name: sysman
Enter password:

SQL> update mgmt_audit_master set prepopulate_days=5 where prepopulate_days is null;

1 rows updated.

SQL> select count(1) from mgmt_audit_master where prepopulate_days is null;

  COUNT(1)
----------
         0

SQL> exec mgmt_audit_admin.add_audit_partition;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Once I did this, I was able to log in with all my EM12c administrators without any issues:

oracle@em12cr4.localdomain [emrep] /home/oracle
oracle $ emcli login -username=ssa_admin
Enter password

Login successful

Conclusion

Even though JOB_QUEUE_PROCESSES was not set to 0, it was still the cause of the failure, as its value was too low. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

Note – This was originally published on rene-ace.com