Oakies Blog Aggregator

Server Problems : Update

This is a follow-on from my server problems post from yesterday…

Regarding the general issue, misiaq came up with a great suggestion, which was to use watchdog. It’s not going to “fix” anything, but if I get a reboot when the general issue happens, that would be much better than having the server sit idle for 5 hours until I wake up.
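
For anyone who wants to try the same thing, here’s a minimal sketch of a watchdog setup on Linux. It assumes the watchdog package and the softdog kernel module; the thresholds are illustrative, not recommendations:

# /etc/watchdog.conf -- illustrative values only
watchdog-device = /dev/watchdog    # kernel watchdog device (softdog or hardware)
interval        = 10               # seconds between keep-alive writes
max-load-1      = 24               # reboot if 1-minute load average exceeds this
#ping           = 192.168.1.1      # optionally require a reachable gateway

# load the software watchdog and start the daemon
modprobe softdog
systemctl enable --now watchdog

If the daemon stops feeding /dev/watchdog because the box has hung, the kernel reboots it after the timeout, which is exactly the “don’t sit idle for 5 hours” behaviour wanted here.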

Enterprise Manager Support Files 101 – The EMOMS files

Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager?  I mean, it’s a GUI (graphical user interface), so the logs should be unnecessary to the administrator.”


You just explained why we receive so many emails from database experts stuck on issues with EM, thinking it’s “just a GUI”.

Log Files

Yes, there are a lot of logs involved with Enterprise Manager.  The introduction of the agent back in EM10g brought more, and the WebLogic tier in EM11g added more again.  EM12c added functionality never dreamed of before and, with it, MORE logs – but don’t despair, because we’ve also tried to streamline those logs, and where we weren’t able to streamline, we at least came up with a directory path naming convention that saves you from having to search for information so often.

The most important EM logs live in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.
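
For orientation, a listing of that directory typically looks something like this (the exact file names vary by release, so treat the list as illustrative):

$ ls $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log
emctl.log  emctl.msg  emoms.log  emoms.trc  emoms_pbs.log  emoms_pbs.trc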

In many threads on Oracle Support and in blogs, you’ll hear about the emctl.log, but today I’m going to spend some time on the emoms properties, trace and log files.  The EMOMS naming convention is just what you would think it’s about – the Enterprise Manager Oracle Management Service, aka EMOMS.

The PROPERTIES File

After all that talk about logs, we’re going to jump into the configuration files first.  The emoms.properties file is over in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.

In EM12c, this file, along with the emomslogging.properties file, was central to the configuration of the OMS and its logging; without it, we wouldn’t have any trace or log files – or at least the OMS wouldn’t know what to do with the output data it collected!  If you look in the emoms.properties/emomslogging.properties files for EM13c, you’ll see the following header:

#NOTE
#----
#1. EMOMS(LOGGING).PROPERTIES FILE HAS BEEN REMOVED

Yes, the file is simply a placeholder, and you now use EMCTL commands to configure the OMS and logging properties.

There are, actually, very helpful commands listed in the property file to tell you HOW to update your EM OMS properties!  If you can’t remember an emctl property command, this is a good place to look to find the command and its usage, and a sketch follows below.
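
As a sketch of that emctl route – the property name and value here are illustrative (they match what the old emomslogging.properties used to hold), so check the header of your own placeholder file for the exact list:

# check a property's current value
$OMS_HOME/bin/emctl get property -name "log4j.rootCategory"

# change it -- emctl prompts for the SYSMAN password
$OMS_HOME/bin/emctl set property -name "log4j.rootCategory" -value "WARN, emlogAppender, emtrcAppender"

# list the properties that have been explicitly set on the OMS
$OMS_HOME/bin/emctl list properties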

The TRACE Files

Trace files are recognized by any DBA – these files trace a process, and the emoms*.trc files are the trace files for EM OMS processes, including the one for the Oracle Management Service itself.  Know that a “warning” isn’t always a thing to be concerned about.  Sometimes it’s just letting you know what’s going on in the system, (yeah, I know, shouldn’t they just classify that as INFO then?)

2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started

These files do contain more information than the standard log file, but it may be more than a standard EM administrator wants to search through.  They’re most helpful when working with MOS, and I recommend uploading the corresponding trace files if there is a log that support has narrowed in on.

The LOG Files

Most of the time, you’re going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well.  If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you’re investigating.

The format of the logs is important to understand, and I know I’ve blogged about this in the past, but we’ll just do a quick, high-level review.  Take the following entry:

2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info

We can see that the log entry starts with the timestamp, then the thread, the status, (ERROR, WARN, INFO) the module, the code location, and finally the message itself.  This simplifies reading these logs, and shows how one would parse them into a log analysis program, (a sketch follows below).
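
Here’s a minimal parsing sketch, assuming every entry follows the format above (the field handling is illustrative, not production-ready):

# pull timestamp, severity and message from entries of the form
# "<date> <time> [<thread>] <SEVERITY> <module> <location> - <message>"
awk -F' - ' '/ (ERROR|WARN) / {
    split($1, head, "]")                 # head[1] = timestamp + "[thread"
    ts  = substr(head[1], 1, 23)         # e.g. "2016-01-12 14:54:56,702"
    sev = (index($1, " ERROR ") ? "ERROR" : "WARN")
    print ts "|" sev "|" $2              # messages containing " - " get truncated
}' emoms.log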

There are other emoms log files too, specializing in loader processing and startup.  Each of these logs commonly has a corresponding trace file with more detailed information about the data it’s in charge of tracing.

If you want to learn more, I’d recommend reading up on EM logging from Oracle.

 

 







Copyright © DBA Kevlar [Enterprise Manager Support Files 101 – The EMOMS files], All Rights Reserved. 2016.

Stats History

From time to time we see a complaint on OTN about the stats history tables being the largest objects in the SYSAUX tablespace and growing very quickly, with requests about how to work around the (perceived) threat. The quick answer is – if you need to save space then stop holding on to the history for so long, and then clean up the mess left by the history that you have captured; on top of that you could stop gathering so many histograms because you probably don’t need them, they often introduce instability to your execution plans, and they are often the largest single component of the history (unless you are using incremental stats on partitioned objects***).

For many databases it’s the histogram history – using the default Oracle automatic stats collection job – that takes the most space; here’s a sample query that the sys user can run to get some idea of how significant this history can be:


SQL> select table_name , blocks from user_tables where table_name like 'WRI$_OPTSTAT%HISTORY' order by blocks;

TABLE_NAME                           BLOCKS
-------------------------------- ----------
WRI$_OPTSTAT_AUX_HISTORY                 80
WRI$_OPTSTAT_TAB_HISTORY                244
WRI$_OPTSTAT_IND_HISTORY                622
WRI$_OPTSTAT_HISTHEAD_HISTORY          1378
WRI$_OPTSTAT_HISTGRM_HISTORY           2764

5 rows selected.

As you can see the “histhead” and “histgrm” tables (histogram header and histogram detail) are the largest stats history tables in this (admittedly very small) database.

Oracle gives us a couple of calls in the dbms_stats package to check and change the history setting, demonstrated as follows:


SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

1 row selected.

SQL> execute dbms_stats.alter_stats_history_retention(7)

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                          7

1 row selected.

Changing the retention period doesn’t reclaim any space, of course – it simply tells Oracle how much of the existing history to eliminate in the next “clean-up” cycle. This clean-up is controlled by a “savtime” column in each table:

SQL> select table_name from user_tab_columns where column_name = 'SAVTIME' and table_name like 'WRI$_OPTSTAT%HISTORY';

TABLE_NAME
--------------------------------
WRI$_OPTSTAT_AUX_HISTORY
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_OPTSTAT_HISTHEAD_HISTORY
WRI$_OPTSTAT_IND_HISTORY
WRI$_OPTSTAT_TAB_HISTORY

5 rows selected.
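
As a quick sanity check before and after purging, dbms_stats will also tell you the oldest timestamp for which history is still available:

SQL> select dbms_stats.get_stats_history_availability from dual;

Anything older than the timestamp returned has already been purged.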

If all you wanted to do was stop the tables from growing further you’ve probably done all you need to do. From this point onwards the automatic Oracle job will start deleting the oldest saved stats and re-using space in the existing table. But you may want to be a little more aggressive about tidying things up, and Oracle gives you a procedure to do this – and it might be sensible to use this procedure anyway at a time of your own choosing:


SQL> execute dbms_stats.purge_stats(sysdate - 7);

Basically this issues a series of delete statements (including a delete on the “stats operation log” table, wri$_optstat_opr, that I haven’t previously mentioned) – here’s an extract from an 11g trace file of a call to this procedure (output from a simple grep command):


delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_tab_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_ind_history h        where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_aux_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_opr                  where start_time < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history     where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histgrm_history      where savtime < :1 and rownum <= NVL(:2, rownum)

Two points to consider here: although the appearance of the rownum clause suggests that there’s a damage limitation strategy built into the code, I only saw one commit after the entire delete cycle, and I never saw a limiting bind value being supplied. If you’ve got a large database with very large history tables you might want to delete one day (or even just a few hours) at a time – see the sketch below. The potential for a very long, slow delete is also why you might want to do a manual purge at a time of your choosing rather than letting Oracle do the whole thing on auto-pilot during some overnight operation.
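
A minimal sketch of that incremental approach – the 31-day starting point and the one-day step are illustrative, so pick values that match your own retention:

begin
        -- work backwards from the oldest history, one day per call,
        -- so that each pass deletes (and commits) a smaller batch
        for i in reverse 8 .. 31 loop
                dbms_stats.purge_stats(sysdate - i);
        end loop;
end;
/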

Secondly, even though you may have deleted a lot of data from these tables, you still haven’t reclaimed the space – so if you’re trying to find space in the sysaux tablespace you’re going to have to rebuild the tables and their indexes. Unfortunately a quick check of v$sysaux_occupants tells us that there is no official “move” procedure:


SQL> execute print_table('select occupant_desc, move_procedure, move_procedure_desc from v$sysaux_occupants where occupant_name = ''SM/OPTSTAT''')

OCCUPANT_DESC                 : Server Manageability - Optimizer Statistics History
MOVE_PROCEDURE                :
MOVE_PROCEDURE_DESC           : *** MOVE PROCEDURE NOT APPLICABLE ***

So we have to run a series of explicit calls to alter table move and alter index rebuild. (Preferably not when anyone is trying to gather stats on an object.) Coding that up is left as an exercise to the reader – though a starting point appears below – and it may be best to move the tables in order of smallest table first, rebuilding indexes as you go.
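
As that starting point, here’s a sketch that generates the statements from the dictionary rather than hard-coding them – run as sys, spool the output and review it before executing, and remember that each move leaves the table’s indexes unusable until they’ve been rebuilt:

select 'alter table ' || table_name || ' move;' ddl
from   user_tables
where  table_name like 'WRI$_OPTSTAT%HISTORY'
order by blocks
;

select 'alter index ' || index_name || ' rebuild;' ddl
from   user_indexes
where  table_name like 'WRI$_OPTSTAT%HISTORY'
;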

Footnote:

*** Incremental stats on partitioned objects: I tend to assume that sites which use partitioning are creating very large databases and have probably paid a lot more attention to the details of how to use statistics effectively and successfully; that’s why this note is aimed at sites which don’t use partitioning and therefore think that the space taken up by the stats history is significant.

Server Problems : Any ideas?

I’m pretty sure last night’s problem was caused by a disk failure in the RAID array. The system is working now, but it might go down sometime today to get the disk replaced. Hopefully they won’t do what they did last time and wipe the bloody lot!

OTN Yathra – scenes from Bangalore and Hyderabad

It’s sad that I could not capture in pictures the incredible sights and sounds from both inside the conference and outside in the streets of these amazing places.

I was too busy just absorbing it myself and neglected to take enough pictures.

But again, a truly wonderful couple of days.

Please be patient!

It’s extremely nice to have a big audience. It’s very flattering that people care enough about what I say to be bothered to read it. The problem with having a large audience is people can get very demanding at times.

EM13c Proxy Setup, MOS and CSI Setup

The change in EM13c is that it supports multiple proxies, but you may still not know how to set up a proxy, use it with your MOS credentials, and then assign your CSIs to targets.


Proxies

To do this, click on Settings, Proxy Settings, My Oracle Support.  Click on Manual Proxy Setting and then type in your proxy host entry (sans the HTTPS, that’s already provided for you) and the port to be used:

[screenshot: proxy1]

Once entered, click on Test and, if successful, click on Apply.  If it fails, check the settings with your network administrator and test the new ones offered.  Once you have a proxy that works, you’ll receive the following message:

[screenshot: proxy2]

My Oracle Support Credentials

Next, you’ll need to submit your MOS credentials to be used with the EM environment.  Keep in mind, the credentials used for this account (let’s say you’re logged in as SYSMAN) will be identified with this EM login unless updated or removed.

Click on Settings, My Oracle Support, My Credentials.  Enter the credentials to be used with this login and click Apply.

[screenshot: proxy3]

You’ve now configured MOS credentials to work with the main features of EM13c.

Support Identifier Assignment

In the same location where you set up your MOS credentials, you’ll notice the following drop down:  Support Identifier Assignment.

This option allows you to verify and assign CSIs to the targets in Oracle Enterprise Manager.  It’s a nice inventory feature in EM that can save you time as you work with MOS and SR support, too.

[screenshot: proxy4]

As you can see from my setup above, I only have a few targets in this EM environment, and I was able to search for the CSI that is connected to my MOS credentials and then assign it to each of these targets (whited out).  If you have more than one CSI, you can assign the appropriate one to the targets it belongs to, after searching by the target names or target types you wish to locate.

And that’s the 101 on Proxy, MOS and CSI Setup in EM13c!

 







Copyright © DBA Kevlar [EM13c Proxy Setup, MOS and CSI Setup], All Rights Reserved. 2016.

Evolution of Masking Vendors


Masking with Delphix (where duplicate blocks are shared, making a new copy almost free storage-wise and almost instantaneous) has 4 big advantages:

  1. Instant data, no copying
  2. Ease of Use
  3. Consistent across data centers and databases vendors
  4. Master/Slave

Virtual Data Masking

Delphix masking and virtualization is the most advanced solution in the market place, because Delphix doesn’t provision data. Instead of provisioning data, Delphix sets up pointers back to existing data for a new clone. When that new clone tries to modify data, the existing data stays unchanged, and the changed data is stored elsewhere and only visible to the clone that made the change. This allows Delphix to mask once and provision many masked copies in minutes for almost no storage.

  • Some tools require you to subset data. Imagine writing code to subset data from a medium-sized (5,000 objects) custom database, and then maintaining it.
  • Some tools require 1.5X disk in the target, because they create temp tables to copy and mask data.
  • Delphix, by contrast, masks in memory, with no need for extra disk, and virtualizes the data.


Ease of use saves money

The largest cost in data masking is the personnel needed to develop and maintain masking code.

Most tools require significant programming skills and dedicated administrators.

  • DELPHIX:
    • Users with no programming background can use the product in 4 hours.
    • Web-based interface with profiling integrated into masking: you can profile and start masking in minutes without any programming knowledge.

Mask data consistently

Delphix masks data consistently across different types of data sources and across different data centers, automatically.

Some tools either mask different data sources differently, breaking referential integrity, or require the user to manually maintain relationships across all attributes and all data sources using a ‘Translation Matrix’.  Other tools, built on a specific database, require the user to import data into that proprietary database in order to mask it, after which the data needs to be copied back out into the location where it is used.

  • DELPHIX:
    • The module which identifies sensitive data (Profiler), also assigns the masking algorithms, so no need to manually define relationships.
    • Delphix masking algorithms are deterministic, so based on the input we create a consistent output, regardless of the data source, (a toy illustration follows this list)
    • Delphix architecture separates transformation from a data source
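
To illustrate just the determinism point – this is not Delphix’s algorithm, merely a toy example of the property – a hash-based transformation maps a given input to the same masked output every time, so a value masked in two different data sources still joins correctly:

-- toy illustration of deterministic masking (not Delphix's algorithm):
-- the same input always produces the same masked output
select 'Smith' original_value,
       'CUST_' || substr(rawtohex(standard_hash('Smith', 'SHA256')), 1, 12) masked_value
from   dual;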

Master/Slave configuration

Delphix provides a central interface to configure/manage users, metadata and algorithms, and execute masking in a consistent and distributed manner for each department, entity, or data center. Without this, each entity would have masked data differently, and aggregation of data would be useless.


Next Steps

Pete Finnigan recently did a paper reviewing Delphix and data masking, where he points out some of the challenges to masking and their solutions.

Pete goes into ways of securing the source database such that the cloned copy benefits from the security in the source. Pete also shares some of the top reasons he has heard at customer sites for why people don’t mask even though they want to.

The top 5 reasons people don’t mask when they should

  1. Fear of not locating all data to mask
  2. Referential integrity
  3. Data distribution
  4. Testing may not be valid with masked data
  5. Time, cost and skills needed

Pete has done a second paper specifically on how to secure data in non-production areas. We will be publishing this paper soon.

Pete’s first paper with Delphix on masking is available here.

 

DDL logging

I was presenting at the UKOUG event in Manchester on Thursday last week (21st April 2016), and one of the sessions I attended was Carl Dudley’s presentation of some New Features in 12c. The one that caught my eye in particular was “DDL Logging” because it’s a feature that has come up fairly frequently in the past on OTN and other Oracle forums.

So today I decided to write a brief note about DDL Logging – and did a quick search of my blog to see if I had mentioned it before: and I found this note that I wrote in January last year but never got around to publishing – DDL Logging is convenient, but doesn’t do the one thing that I really want it to do:

DDL Logging – 12c

One of the little new features that should be most welcome in 12c is the ability to capture all DDL executed against the database. All it takes is a simple command (if you haven’t set the relevant parameter in the parameter file):

alter system set enable_ddl_logging = true;

All subsequent DDL will be logged to two different places (in two formats)

  • $ORACLE_BASE/diag/rdbms/{database}/{instance}/log/ddl/log.xml
  • $ORACLE_BASE/diag/rdbms/{database}/{instance}/log/ddl_{instance}.log

Unfortunately the one thing I really wanted to see doesn’t appear – probably because it doesn’t really count as DDL –  it’s the implicit DDL due to inserting into not-yet-existing partitions of an interval partitioned table.

Note: If you’re using a container database with pluggable databases then the DDL for all the pluggable databases goes into the same log file.

Update – Licensing

The following text in the Oracle 12c Database Licensing document has just been brought to my attention:

Licensed Parameters
The init.ora parameter ENABLE_DDL_LOGGING is licensed as part of the Database Lifecycle Management Pack when set to TRUE. When set to TRUE, the database reports schema changes in real time into the database alert log under the message group schema_ddl. The default setting is FALSE.

The licensing document is also linked to from the 12c online html page for the parameter.

The 11g parameter definition makes no mention of licensing, and the 11g “New Features” manual doesn’t mention the feature at all, but the parameter does get a special mention in the 11g licensing document, where it is described as being part of the Change Management Pack.

init.ora Parameters
The use of the following init.ora parameter is licensed under Oracle Change Management Pack:
■ ENABLE_DDL_LOGGING: when set to TRUE (default: FALSE)

 

OTN Yathra Chennai

Rather than try to convey in words the first couple of days in India, I thought I’d share some of the images from the first days here, covering the Chennai conference. Enjoy.