Oakies Blog Aggregator

Setting Up Chargeback in EM 12.1.0.4

Some time back, I posted a blog entry about Metering and Chargeback in Enterprise Manager, where I covered the basic capabilities of the Chargeback application. It’s taken me some time to get back to the promised follow-up post on setting up Chargeback, what with Oracle Open World getting in the way of things, but that’s what I’m finally doing now.

Chargeback can be set up in EM12c by any user with the EM_CBA_ADMIN role, or of course by SYSMAN. Once you are logged in as a user with the relevant privileges, follow the path “Enterprise → Chargeback”:

Image00001
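As an aside, if the user who will administer Chargeback doesn’t have the EM_CBA_ADMIN role yet, a super administrator can grant it, either in the EM console under “Setup → Security” or from the command line. A hedged EMCLI sketch, assuming the grant_roles verb is available in your EMCLI release, and using a hypothetical user name:

[oracle@emhost ~]$ emcli login -username=SYSMAN
[oracle@emhost ~]$ emcli grant_roles -name="CBA_ADMIN_USER" -roles="EM_CBA_ADMIN"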

The default currency symbol displayed in the Chargeback application is the dollar sign ($). Note this is just a symbol and does not have any impact on chargeback calculations, but you can change it if you want by clicking on the “Settings” tab:

Image00002

You can enter a new currency symbol here, but for this environment we will leave it at the default and click “Charge Plans”:

Image00003

The first thing we want to do is set rates for the universal charge plan, which covers the CPU Usage, Memory Allocation and Storage Allocation metrics. Click “Set Rates”:

Image00004

Set the rates appropriately and click “Save”:

Image00005

While the universal charge plan is useful, there are situations where you want to apply charges to other entity types. That’s where an extended charge plan can be used. To create an extended charge plan, click “Create” then “Plan”:

Image00006

Provide a meaningful name for the charge plan, then click “Add” to select an entity type for which you can set rates:

Image00007

You can add multiple entity types to the charge plan at once, so choose “Host” and “Oracle Pluggable Database” then click “OK”:

Image00008

You can now set up specific configurations for each entity type by selecting the entity and then clicking “Setup Configurations …”:

Image00009

Click the “Add” button:

Image00010

In this example, we want to charge different rates for different machine architectures. This is the sort of thing you would do if you wanted to charge more for maintaining machines that are either outdated or where you have fewer skills and may need to bring in consultants. To do this, set the “Condition Item” to “Machine Architecture”, then click the “Search” button to search for the different values you can choose from:

Image00011

In this scenario, we want to set up three different configurations:
• Intel x86_64 as our default
• Intel i686 as more expensive because it’s outdated
• PA-RISC 64-bit as more expensive since we have fewer skills in that area.
You can select each in turn from the list and click “OK”, like this:

Image00012

Click “OK”:

Image00013

On the “Setup Configurations” pop-up, click “OK” again:

Image00014

You can now specify multipliers for each architecture. In this example, we’re leaving our preferred architecture (Intel x86_64) at 1, setting Intel i686 to 2x, PA-RISC 64-bit to 1.5x and other architectures to 3x. Next we can click “Oracle Pluggable Database” to set up configurations for that:

Image00015

Click “Setup Configurations”:

Image00017

Click “Add…”:

Image00018

On the “Add Configuration” pop-up, select “Version” for the “Condition Item”:

Image00019

In this scenario, we want to charge more for clients that have not switched to the latest version of the database, as a way of encouraging migration, so we set the “Condition Operator” and “Condition Value” to validate that. We could also have chosen to charge more for the use of paid options like Partitioning and so on. Click “OK”:

Image00020

On the “Setup Configurations” pop-up, click “OK”:

Image00021

Back on the “Create Plan: HR Chargeback Plan” page, we can increase the multiplier for configurations that aren’t on the latest version and click “Save”:

Image00022

Now that we have the charge plans configured the way we want, we can define the cost centers that the charges will be assigned to. Cost centers are normally configured in a business hierarchy, so let’s walk through the process of building that. We start by clicking the “Cost Centers” tab:

Image00023

Cost centers can either be imported from an existing LDAP configuration (done via the “Action” menu) or defined in the Chargeback application itself. In this example, we’ll walk through adding them to the Chargeback application. To do this, click “Add”:

Image00024

On the “New Cost Center” pop-up, enter a cost center name and display name, then click “OK”:

Image00025

You can do this as many times as you like. To add a cost center in a business hierarchy, first create a cost center at the top level (as shown above) then create a cost center using the “Member of” radio button and select the relevant top level cost center, as shown here:

Image00026

Once you have defined all the cost centers you need, the next step is to add the relevant entities. An entity is either a target in Enterprise Manager, or a custom resource that has been added to Enterprise Manager. Custom resources can include target types for which there is no current out-of-the-box Chargeback support (covered in more detail in the documentation). To add entities, click the “Entities” tab:

Image00027

Click “Add Entities”:

Image00028

This will start the “Add Entities” wizard. Click “Add” to select targets for chargeback:

Image00029

Firstly, we want to add hosts. To search for hosts to add, deselect “All” and select “Host” from the “Target Type” dropdown:

Image00030

From the list of hosts, select one or more hosts to add as an entity and click “Select”:

Image00031

On the “Make Assignments” wizard step, select the entity name and click “Assign Plan”:

Image00032

Select “HR Chargeback Plan” and click “OK” to add the chargeback plan to this host:

Image00033

To add a cost center to this host, click “Assign Cost Center”:

Image00034

Select the relevant cost center and click “OK”:

Image00035

Back on the “Make Assignments” step, click “Next”:

Image00036

Review the settings and if they are correct, click “Submit”:

Image00037

You can walk through the same process to add one of the container databases and a PaaS Infrastructure Zone as well. Once that has been done, you can schedule an on-demand data collection to start seeing the information for these entities in the Chargeback application. To do that, select “On-demand data collection” from the “Action” dropdown menu:

Image00038

Click “Yes” on the “Confirmation” pop-up:

Image00039

In the “Information” message, click “here” to view the job status:

Image00040

Once the job status changes to “1 Succeeded”, click on “Enterprise” then “Chargeback”:

Image00041

You should see information being collected both in the “Summary” and “Usage Trends” regions:

Image00042

It might seem like a very long-winded setup, but in reality Chargeback only takes you a short time to configure. Of course, once it’s configured you then have all the joys of getting money out of the end clients for their resource usage – good luck with that one, I can’t help you there! :)

Roll up! Roll up! Nothing new to see here! (Database Web Services)

Over the years I’ve written loads of stuff about consuming and publishing web services directly from the database. I’ve been doing quite a bit of this at work recently and I realised how difficult it is to find all the pieces, since they are spread across multiple articles, spanning multiple database versions. In an attempt to give a single point of entry I’ve written this very brief article.

It’s really more of a links page. :)

If you are new to the idea of using the database for web services, it might come as a surprise what you can do without having to turn to the dark side (middleware). :)

Cheers

Tim…

Update: This is new on my website though. :)



DevOps and Databases?

I’m super looking forward to next week’s DevOps Enterprise Summit in San Francisco. You might ask “Why?!” since I’m a database guy and not a DevOps guy. I can understand that reaction.

I don’t know about you, but when I hear the term DevOps I have to roll my eyes and think “oh, the latest greatest tech industry buzzword.” So why would I as a DBA care about DevOps? With DevOps, as with most tech industry buzzwords, there is actually a worthy idea at the kernel. The kernel of DevOps has two parts.

  1. communication
  2. automation

The first part is changing company culture: improving communication, understanding and empathy, and building bridges between teams siloed in development and operations, all in the name of efficiency. The second part of DevOps is using the best methods and tools to enable automation. With better communication between groups, including fast feedback loops to track the impact of changes quickly, and automated tools to roll out changes quickly, DevOps can profoundly improve the efficiency of companies.

How can a DBA help bring DevOps culture to a company?

It’s true, though, that the best tools will have no effect if there is not a culture in place that can adopt them. On the other hand, the best culture can only go so far without the best tools. Introducing tools and methods is often the best way to facilitate changes in culture. So what is the best tool a DBA can bring to the organization, one that facilitates not only automation but also improved communication and culture? That tool is data virtualization.

Why is data virtualization key to DevOps and databases?

First, what is the goal of DevOps? It’s to improve efficiency by bringing development closer to operations, i.e. IT. Bringing Dev and Ops closer together means better communication, better understanding and more efficient interactions. What kind of interactions do Dev and Ops have? Well, developers create new applications and features that have to be deployed into production, and in order to create and deploy applications, developers require copies of the production environment. The hardest part of creating a copy of the production environment is creating a copy of the production database: the database is the largest body of data in the environment and also the most complicated part to copy, since it requires special tools and procedures. The copies often require masking, which adds further complexity. All of this means that making copies of databases becomes the bottleneck in development and production deployment. On top of that, deploying applications into production requires thorough QA testing, not only of the code but of the actual rollout process, and all of that QA and testing requires still more environments, making the bottleneck worse.

As Gene Kim, author of The Phoenix Project, said, the number one bottleneck in application development is supplying development and QA environments.

What if, with one simple stroke, you could provide all the environments you wanted at the push of a button, for almost no storage, through an interface so simple (with secure logons) that developers and QA could provision, roll back, refresh, bookmark and rewind full environments themselves? That’s what Delphix does. Simply bringing in Delphix makes a paradigm shift easy. Yes, Dev and Ops still have to work together to adopt the processes that Delphix provides, but when the processes are so powerful and so easy, shifting company culture becomes possible. Shifting Dev and Ops to DevOps becomes possible even in the largest enterprise companies, which have the most daunting cultures to bring into the DevOps movement.

After bringing Delphix into 100 of the Fortune 500, we’ve seen case after case of application development teams doubling output, reducing bugs and creating higher quality code.

If you are at the DevOps Enterprise Summit next week, stop by the Delphix booth, say “hi”, and find out how Delphix can eliminate the biggest constraints in application development and put a jet pack on your projects.

Kicking off the summit will be Gene Kim (IT Revolution) + Steve Brodie (Electric Cloud)!


Patching Time

Just a quick note to point out that the October PSU was just released. The database has a few more vulnerabilities than usual (31), but they are mostly related to Java, and the high CVSS score of 9 only applies to people running Oracle on Windows. (On other operating systems, the highest score is 6.5.)

I did happen to glance at the announcement on the security blog, and I thought this short blurb was worth repeating:

In today’s Critical Patch Update Advisory, you will see a stronger than previously-used statement about the importance of applying security patches. Even though Oracle has consistently tried to encourage customers to apply Critical Patch Updates on a timely basis and recommended customers remain on actively-supported versions, Oracle continues to receive credible reports of attempts to exploit vulnerabilities for which fixes have been already published by Oracle. In many instances, these fixes were published by Oracle years ago, but their non-application by customers, particularly against Internet-facing systems, results in dangerous exposure for these customers. Keeping up with security releases is a good security practice and good IT governance.

The Oracle Database was first released in a different age than we live in today. Ordering physical parts involved navigating paper catalogs and faxing order sheets to the supplier. Physical inventory management relied heavily on notebooks and clipboards. Mainframes were processing data but manufacturing and supply chain had not yet been revolutionized by technology. Likewise, software base installs and upgrades were shipped on CDs through the mail and installed via physical consoles. The feedback cycle incorporating customer requests into software features took years.

Today, manufacturing is lean and the supply chain is digitized. Inventory is managed with the help of scanners and real-time analytics. Customer communication is more streamlined than ever before and developers respond quickly to the market. Bugs are exploited maliciously as soon as they’re discovered and the software development and delivery process has been optimized for fast response and rapid digital delivery of fixes.

Here’s the puzzle: cell phones, web browsers and laptop operating systems all get security updates installed frequently. Even the Linux OS running on your servers is easy to update with security patches. Oracle is no exception – they have streamlined the delivery of database patches through the quarterly PSU program. So why do so many people simply ignore the whole area of Oracle database patches? Are we stuck in the old age of infrequent patching even though Oracle themselves have moved on?

Repetition

For many, it just seems overwhelming to think about patching. And honestly – it is. At first. The key is actually a little counter-intuitive: it’s painful, so you should in fact do it a lot! Believe it or not, it will actually become very easy once you get over the initial hump.

In my experience working at one small org (two DBAs), the key is doing it regularly. Lots of practice. You keep decent notes, set up scripts and tools where it makes sense, and you get a lot faster after several times around. By the way, my thinking here has been influenced quite a bit by the DevOps movement (like Jez Humble’s 2012 Berlin talk and John Allspaw’s 2009 Velocity talk). I think they have a nice articulation of this basic repetition principle. And it is very relevant to people who have Oracle databases.
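One small example of the notes/scripts habit: a query I keep on hand to confirm what has actually been applied to a database (a minimal sketch; dba_registry_history is populated by the PSU post-install steps):

SQL> select action_time, action, version, comments
  2    from dba_registry_history
  3   order by action_time;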

So with all that said, happy patching! I know that I’ll be working with these PSUs over the next week or two. I hope that you’ll be working with them too!


Oracle fanboy and blind to the truth?

I had a little exchange with someone on Twitter last night, which was initiated by him complaining about the cost of Oracle and predicting their demise. Once that was over I spent a little time thinking about my “fanboy status”.

If you know anything about me, you will know I’m an Oracle fanboy. I’ve spent nearly 20 years doing this stuff and the last 14+ years writing about it on the internet. If I wasn’t into it, it would be a pretty sorry state of affairs. So does that mean I’m totally blinded like all those Apple fanboys and fangirls? No. I just don’t choose to dwell on a lot of the negative and instead focus on the positive, like the cool bits of tech. The common topics I hear are:

  • Oracle costs too much : I could bleat on about the cost of Oracle and what features are missing from specific editions, but quite frankly that is boring. Unless you’ve been under a rock for the last 35+ years you should know the score. If it’s got the name Oracle associated with it, it’s probably going to be really expensive. That’s why people’s jaws drop when they find out Oracle Linux is free. They are just not used to hearing the words Oracle and free in the same sentence. If you want free or cheap, you can find it. What people often don’t consider is total cost of ownership. Nothing is ever free. The money just gets directed in different ways.
  • The cheap/free RDBMS products will kill Oracle : This talk has been going on since I started working with Oracle 20 years ago. It used to worry me. It doesn’t any more. So far it hasn’t materialized. Sure, different products have eaten into the market share somewhat and I’m sure that will continue to happen, but having a head start over the competition can sometimes be a significant advantage. I work with other RDBMS products as well and it is sometimes infuriating how much is missing. I’m not talking about those headline Oracle features that 3 people in the world use. I’m talking about really simple stuff whose absence makes being a DBA a total pain in the ass. Typically, these gaps have to be filled by separate products or tools, which just complicates your environment.
  • It’s just a bit bucket : If your company is just using the database as a bit bucket and you do all the “cool” stuff in the middle tier, then Oracle databases are probably not the way to go for you. Your intellectual and financial focus will be on the middle tier. Good luck!
  • But company X use product Y, not Oracle : I’m so bored of this type of argument. Facebook use MySQL and PHP. Yes, but they wrote their own source code transformer (HipHop) to turn PHP into C++ and they use so much stuff in front of MySQL (like Memcached) that they could probably do what they do on top of flat files. Companies talk about their cool stuff and what makes them different. They are not so quick to talk about what is sitting behind the ERP that is running their business…
  • NoSQL/Hadoop/Document Stores will kill RDBMS : Have you ever had a real job in industry? Have you ever done anything other than try to write a twitter rip-off in Ruby for your school project? Do you know how long it took COBOL to die? (it still isn’t dead by the way). There is a massive investment in the I.T. industry around relational databases. I’m not saying they are the perfect solution, but they aren’t going anywhere in the near future. Good luck running your ERP on any of these non-RDBMS data stores! What has changed is that people now realise RDBMS is not the right solution for every type of data store. Using the right product for the right job is a good thing. There are still plenty of jobs where an RDBMS is the right tool.
  • The cloud will kill Oracle : The cloud could prove to be the biggest spanner in the works for many IT companies. If we start using cloud-based services for everything in the Software as a Service (SaaS) model, who cares what technology sits behind it? Provided our applications work and they meet our SLAs, who cares how many bodies are running around like headless chickens in the background to keep the thing running? For Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), I don’t think cloud makes so much of a difference. In these cases, you are still picking the type of database or the type of OS you need. They are not hidden from you like in the SaaS model. I guess the impact of cloud will depend on your definition of cloud and the route the market eventually takes. What people also seem to forget is that the big winners in the cloud game will be the big companies. When the world is only using SaaS, you are going to have to work for Amazon, Oracle, Microsoft etc. if you want to be a techie. The ultimate goal of cloud is consolidation and centralisation, so you will have to work for one of these big players if you want to be anything other than a user. I find it interesting that people are betting on the cloud as a way of punishing the big companies, when actually it is likely to help them and put us folks out of business…

The post has got a bit long and tedious, so I’m going to sign off now.

In conclusion, yes I’m a fanboy, but I’m not oblivious to what’s going on outside Oracle. I like playing with the tech and I try to look on the positive side where my job-related technology is concerned. If I focussed on the negative I would have to assume that Oracle is doomed and we will all die of Ebola by the end of the week…

Cheers

Tim…

What’s more sexist: Time’s question or Sandberg’s response?

So I was reading Time’s September 22, 2014 edition and came across “10 Questions” with Facebook COO Sheryl Sandberg, where I reached this absolute gem:

Why do you think women are so afraid of making mistakes?

WOW. So in the question, women are defined as being afraid of making mistakes. Now, I don’t know whether this comes from Charlotte Alter (the byline for the 10 questions) out of whole cloth, or whether it was inspired by something Sheryl Sandberg wrote or said previously of which I’m not aware.

In my book, attributing qualitative faults (like fearing making a mistake) to one sex as opposed to the other is pretty much the definition of sexism. I’m not saying the sexes cannot be compared. On average, men are taller than women. More women have wombs than men. Calling out objective differences of fact is not sexist in and of itself.

So the question itself as posed seems to me a pretty bad thing. Maybe we should excuse Sheryl Sandberg for her completely sexist response (as printed by Time – she should sue if they got that wrong):

“When men make mistakes, they don’t internalize it is their fault, so it doesn’t hurt them as much. Because gender makes us over-estimate male performance and underestimate female performance, we have more tolerance for men’s mistakes.”

Holy cow! Did I miss a memo? Has a study been done showing that men, as a whole gender, don’t internalize mistakes as their own fault? Even as a tendency for the gender?

But what might be worse is the illogic, at least as I see it: my tolerance for mistakes IS related to my estimation of what a person is capable of. So if I am in fact over-estimating male performance, then a mistake is going to be regarded more harshly, as something they should not have let happen. Under Sandberg’s apparent vision of reality, men should be more afraid of making mistakes.

So I call on all y’all to reject these sexist notions. The correct response to mistakes is some amount of disappointment relative to the individual’s capabilities, current run of overwork and personal distractions, and the difficulty of the task at hand. Often the correct response is: “Sorry I put you in that tough of a spot.”

Over the long haul you learn which individuals you can trust with what and gender should play no role, even with physical tasks, because you work with individuals, not gender averages.

Uncharacteristic mistakes are worthy of examination because you need to know whether something was an aberrant loss of focus rather than some problem, acute or chronic, that has developed. Then comes the far tougher question of whether or not any problem is your business.

But questions like Time’s and answers like Sandberg’s have no place in the workplace.

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker does that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example involving the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use so that the broker (the DMON background process, to be precise) can access them:

Data Guard Broker: Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to SYNC for the standby database. It helps to think of the log transport mode as an attribute (respectively, a property) of a certain database in your configuration, because that is how the broker sees it too.

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows us to raise the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with SYNC.
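If you want to double-check a property without digging through the config files, you can ask the broker directly; a quick sketch (property names are case-insensitive):

DGMGRL> show database prima 'LogXptMode';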

Configuration for prima

Now let’s do that switchover and see how the broker ensures automatically that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was issue the switchover command, and without me specifying any LOG_ARCHIVE_* parameter, the broker did it all, as this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now has the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it unnecessary to specify any of the LOG_ARCHIVE_* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain about it with warning messages. As an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because the parameter setting is no longer in line with what is in the broker config files. Accordingly, that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

In order to resolve that inconsistency, I will set the value with a broker command too – which is what I should have used instead of the alter system command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, are LOG_ARCHIVE_* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
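To see the broker-side values of all these properties for one database in a single listing, a verbose show does the job (output omitted for brevity):

DGMGRL> show database verbose prima;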


Exadata and Virtual Private Database: will it offload my query?

During one of the classes I taught about Exadata optimisations I had an interesting question:

If I am using VPD, will Exadata still offload the query?

The background is that we discussed function offloading and the meta-view v$sqlfn_metadata. It turned out that SYS_CONTEXT() is not offloadable in 11.2.0.4.

SQL> select name,datatype,analytic,aggregate,offloadable,descr
  2  from v$sqlfn_metadata where name = 'SYS_CONTEXT';

NAME                                               DATATYPE ANA AGG OFF DESCR
-------------------------------------------------- -------- --- --- --- ------------------------------
SYS_CONTEXT                                        UNKNOWN  NO  NO  NO  SYS_CONTEXT

Since I’m a great fan of the Tom Kyte method (don’t say it is so, show it!), I needed a quick example. Of all the Oracle books I have read, “Effective Oracle by Design” was among the most inspiring.

Where to check?

My first idea was to check v$sqlfn_metadata to see if the ever-present SYS_CONTEXT() was offloadable:

SQL> select name,offloadable
  2  from v$sqlfn_metadata
  3  where name = 'SYS_CONTEXT';

NAME                    OFF
----------------------- ---
SYS_CONTEXT             NO

SYS:dbm011> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE	11.2.0.4.0	Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

OK, so it does not appear to be offloadable. Does it matter? As you will see, it does not, although it could startle you at first.

Reproduce a test case: 11.2.0.4/11.2.3.3.1.140529.1

After finishing the example from the Oracle documentation I decided that I needed a few more rows in the table to get a more realistic data distribution for an Exadata system: you wouldn’t see a smart scan on a table with < 10 rows. This can be done quite easily, and the end result was:

SYS> select count(*) from scott.orders_tab;

   COUNT(*)
-----------
   12000003

1 row selected.

I had to ensure that the query is offloaded first – simplez! Kerry Osborne has a script for this:

SYS> select /*+ gather_plan_statistics rowcount_sys_stats */
  2  count(*) from scott.orders_tab;

   COUNT(*)
-----------
   12000003

1 row selected.

Elapsed: 00:00:00.81

SYS:OGGSRC> @scripts/fsx
Enter value for sql_text: %rowcount_sys_stats%
Enter value for sql_id:

SQL_ID	       CHILD   PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ----------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
8bv5b04mjku08	   0  1093340548      1        .81	0 Yes	       38.47 select /*+ gather_plan_statistics rowcount_sys_stats */ count(*) from

1 row selected.

SYS:OGGSRC> @scripts/dplan
Enter value for sql_id: 8bv5b04mjku08
Enter value for child_no:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	8bv5b04mjku08, child number 0
-------------------------------------
select /*+ gather_plan_statistics rowcount_sys_stats */ count(*) from
scott.orders_tab

Plan hash value: 1093340548

---------------------------------------------------------------------------------
| Id  | Operation                  | Name       | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |            |       |  7521 (100)|          |
|   1 |  SORT AGGREGATE            |            |     1 |            |          |
|   2 |   TABLE ACCESS STORAGE FULL| ORDERS_TAB |    12M|  7521   (1)| 00:01:31 |
---------------------------------------------------------------------------------

15 rows selected.

So I guess it is.
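As an aside, fsx.sql essentially reads the offload-related columns of v$sql; a minimal sketch of the same idea (not Kerry’s actual script):

SQL> select sql_id, child_number,
  2         case when io_cell_offload_eligible_bytes > 0
  3              then 'Yes' else 'No' end as offload,
  4         io_cell_offload_eligible_bytes, io_interconnect_bytes
  5    from v$sql
  6   where sql_text like '%rowcount_sys_stats%'
  7     and sql_text not like '%v$sql%';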

Enter VPD
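I won’t repeat the full policy setup from the documentation here, but for orientation, here is a hedged sketch of the kind of policy implied by the storage predicate you will see further down (names follow the documentation example; creating and populating the orders_ctx application context is omitted):

SQL> create or replace function orders_policy (
  2    p_schema in varchar2,
  3    p_object in varchar2
  4  ) return varchar2 as
  5  begin
  6    -- each session only sees rows for its own customer number
  7    return 'cust_no = to_number(sys_context(''orders_ctx'',''cust_no''))';
  8  end;
  9  /

Function created.

SQL> begin
  2    dbms_rls.add_policy(
  3      object_schema   => 'SCOTT',
  4      object_name     => 'ORDERS_TAB',
  5      policy_name     => 'orders_pol',
  6      function_schema => 'SYS',
  7      policy_function => 'ORDERS_POLICY',
  8      statement_types => 'SELECT');
  9  end;
 10  /

PL/SQL procedure successfully completed.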

With all the VPD bells and whistles turned on, I connected as one of the users for which an explicit context is set and ran my query:

TBROOKE> select /*+ gather_plan_statistics tbrooke_query_001 */
  2  count(*) from scott.orders_tab;

   COUNT(*)
-----------
          4

1 row selected.

Elapsed: 00:00:00.15

1 row selected.

Which shows that the VPD policy works. What do my tools indicate?

SYS> @scripts/fsx
Enter value for sql_text: %tbrooke_query_001%
Enter value for sql_id:

SQL_ID	       CHILD   PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ----------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
bs9gq0bdqazzu	   0  1093340548      1        .15	0 Yes	       99.98 select /*+ gather_plan_statistics tbrooke_query_001 */ count(*) from s

1 row selected.

SYS> @scripts/dplan
Enter value for sql_id: bs9gq0bdqazzu
Enter value for child_no: 0

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	bs9gq0bdqazzu, child number 0
-------------------------------------
select /*+ gather_plan_statistics tbrooke_query_001 */ count(*) from
scott.orders_tab

Plan hash value: 1093340548

-----------------------------------------------------------------------------------------
| Id  | Operation                  | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |            |       |       |  7651 (100)|          |
|   1 |  SORT AGGREGATE            |            |     1 |     6 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDERS_TAB |     1 |     6 |  7651   (3)| 00:01:32 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("CUST_NO"=TO_NUMBER(SYS_CONTEXT('orders_ctx','cust_no')))
       filter("CUST_NO"=TO_NUMBER(SYS_CONTEXT('orders_ctx','cust_no')))


21 rows selected.

So using the script it is easily visible that a smart scan has happened, and in fact it saved 99.98% of the IO. That is not surprising, given that only 4 rows out of the whole result set have been returned. The Real Time SQL Monitor report confirmed the finding, by the way. I had to sneak in another hint (monitor), otherwise the statement wouldn’t have been captured in SQL Monitor (only “long running” statements are captured by default).
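For reference, a text report like the one below can be pulled from SQL*Plus with DBMS_SQLTUNE.REPORT_SQL_MONITOR; a minimal sketch, using the SQL ID shown in the Global Information section:

SQL> set long 1000000 pagesize 0
SQL> select dbms_sqltune.report_sql_monitor(
  2           sql_id => '8ydqam3fuwt2z',
  3           type   => 'TEXT')
  4    from dual;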

SQL Monitoring Report

SQL Text
------------------------------
select /*+ gather_plan_statistics monitor tbrooke_query_002 */ count(*) from scott.orders_tab

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  TBROOKE (33:239)
 SQL ID              :  8ydqam3fuwt2z
 SQL Execution ID    :  16777216
 Execution Started   :  10/14/2014 04:28:24
 First Refresh Time  :  10/14/2014 04:28:24
 Last Refresh Time   :  10/14/2014 04:28:24
 Duration            :  .100139s
 Module/Action       :  SQL*Plus/-
 Service             :  SYS$USERS
 Program             :  sqlplus@enkdb01.enkitec.com (TNS V1-V3)
 Fetch Calls         :  1

Global Stats
====================================================================================
| Elapsed |   Cpu   |    IO    | PL/SQL  | Fetch | Buffer | Read | Read  |  Cell   |
| Time(s) | Time(s) | Waits(s) | Time(s) | Calls |  Gets  | Reqs | Bytes | Offload |
====================================================================================
|    0.10 |    0.01 |     0.09 |    0.00 |     1 |  27352 |  228 | 214MB |  99.98% |
====================================================================================

SQL Plan Monitoring Details (Plan Hash Value=1093340548)
=========================================================================================================================================================================
| Id |          Operation           |    Name    |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity | Activity Detail |
|    |                              |            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |   (# samples)   |
=========================================================================================================================================================================
|  0 | SELECT STATEMENT             |            |         |      |         1 |     +0 |     1 |        1 |      |       |         |       |          |                 |
|  1 |   SORT AGGREGATE             |            |       1 |      |         1 |     +0 |     1 |        1 |      |       |         |       |          |                 |
|  2 |    TABLE ACCESS STORAGE FULL | ORDERS_TAB |       1 | 7651 |         1 |     +0 |     1 |        4 |  228 | 214MB |  99.98% |    2M |          |                 |
=========================================================================================================================================================================

This confirms that a full table scan happened, and it must have been executed as a direct path read. A DPR on Exadata most often transforms into a smart scan. As you can see, I didn’t specify any other predicate, and yet the VPD predicate was offloaded.
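If you don’t have fsx or SQL Monitor handy, the session statistics tell the same story; a quick sketch, run in the same session right after the query:

SQL> select sn.name, ms.value
  2    from v$statname sn, v$mystat ms
  3   where sn.statistic# = ms.statistic#
  4     and sn.name in ('cell scans',
  5                     'cell physical IO interconnect bytes returned by smart scan');

Non-zero values for both confirm that the statement went down the smart scan code path.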

Reproduce a test case: 12.1.0.2.0/12.1.1.1.1.140712

Interestingly in 12.1.0.2.0 the function SYS_CONTEXT is offloadable:

SQL> select name, offloadable
  2  from v$sqlfn_metadata where name = 'SYS_CONTEXT';

NAME                           OFF
------------------------------ ---
SYS_CONTEXT                    YES

1 row selected.

For completeness’ sake I have repeated my test here. I copied the table via network link and created the same users and VPD. Not too much of a difference. Information is shown here without comments:

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production


TBROOKE> select /*+ gather_plan_statistics tbrooke_query_013 */ count(*) from scott.orders_tab;

  COUNT(*)
----------
         4

Elapsed: 00:00:00.02

SQL> @scripts/fsx
Enter value for sql_text: %tbrooke_query_013%
Enter value for sql_id:

SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
bf0s5hzr7x9r5      0 1093340548      1        .02      0 Yes          99.97 select /*+ gather_plan_statistics tbrooke_query_013 */ count(*) from s

1 row selected.

SQL> @scripts/dplan
Enter value for sql_id: bf0s5hzr7x9r5
Enter value for child_no: 0

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  bf0s5hzr7x9r5, child number 0
-------------------------------------
select /*+ gather_plan_statistics tbrooke_query_013 */ count(*) from
scott.orders_tab

Plan hash value: 1093340548

-----------------------------------------------------------------------------------------
| Id  | Operation                  | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |            |       |       |     2 (100)|          |
|   1 |  SORT AGGREGATE            |            |     1 |    13 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDERS_TAB |     1 |    13 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("CUST_NO"=TO_NUMBER(SYS_CONTEXT('orders_ctx','cust_no')))
       filter("CUST_NO"=TO_NUMBER(SYS_CONTEXT('orders_ctx','cust_no')))


21 rows selected.

SQL Monitoring Report

SQL Text
------------------------------
select /*+ gather_plan_statistics monitor tbrooke_query_014 */ count(*) from scott.orders_tab

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  TBROOKE (1042:32766)
 SQL ID              :  55yy67scgw2sf
 SQL Execution ID    :  16777216
 Execution Started   :  10/14/2014 05:13:30
 First Refresh Time  :  10/14/2014 05:13:30
 Last Refresh Time   :  10/14/2014 05:13:30
 Duration            :  .016025s
 Module/Action       :  SQL*Plus/-
 Service             :  SYS$USERS
 Program             :  sqlplus@enkdb03.enkitec.com (TNS V1-V3)
 Fetch Calls         :  1

Global Stats
==================================================================================================
| Elapsed |   Cpu   |    IO    | Application | PL/SQL  | Fetch | Buffer | Read | Read  |  Cell   |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Time(s) | Calls |  Gets  | Reqs | Bytes | Offload |
==================================================================================================
|    0.02 |    0.01 |     0.01 |        0.00 |    0.00 |     1 |  27441 |  223 | 214MB |  99.99% |
==================================================================================================

SQL Plan Monitoring Details (Plan Hash Value=1093340548)
=========================================================================================================================================================================
| Id |          Operation           |    Name    |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity | Activity Detail |
|    |                              |            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |   (# samples)   |
=========================================================================================================================================================================
|  0 | SELECT STATEMENT             |            |         |      |         1 |     +0 |     1 |        1 |      |       |         |       |          |                 |
|  1 |   SORT AGGREGATE             |            |       1 |      |         1 |     +0 |     1 |        1 |      |       |         |       |          |                 |
|  2 |    TABLE ACCESS STORAGE FULL | ORDERS_TAB |       1 |    2 |         1 |     +0 |     1 |        4 |  223 | 214MB |  99.99% |    3M |          |                 |
=========================================================================================================================================================================

In the last example, flash cache and storage indexes contributed greatly to the quick execution time. Using Adrian Billington’s mystats I can see more detail. I have removed what’s not needed from the report.

SQL> @scripts/mystats stop t=1

==========================================================================================
MyStats report : 14-OCT-2014 05:27:44
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  13.59
TIMER   CPU time used (seconds)                                                       0.01


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                         7
STAT    CPU used when call started                                                       7
STAT    DB time                                                                          9
...
STAT    cell IO uncompressed bytes                                               3,031,040
STAT    cell blocks helped by minscn optimization                                      370
STAT    cell blocks processed by cache layer                                           370
STAT    cell blocks processed by data layer                                            370
STAT    cell blocks processed by txn layer                                             370
STAT    cell flash cache read hits                                                      10
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                  224,763,904
STAT    cell physical IO bytes saved by storage index                          221,732,864
STAT    cell physical IO interconnect bytes                                          5,360
STAT    cell physical IO interconnect bytes returned by smart scan                   5,360
STAT    cell scans                                                                       1
...
STAT    consistent gets                                                             27,816
STAT    consistent gets direct                                                      27,437
STAT    consistent gets examination                                                      3
STAT    consistent gets examination (fastpath)                                           3
STAT    consistent gets from cache                                                     379
STAT    consistent gets pin                                                            376
STAT    consistent gets pin (fastpath)                                                 376
STAT    db block changes                                                               787
STAT    db block gets                                                                2,879
STAT    db block gets from cache                                                     2,879
...
STAT    logical read bytes from cache                                           26,689,536
...
STAT    physical read bytes                                                    224,763,904
STAT    physical read requests optimized                                               223
STAT    physical read total IO requests                                                223
STAT    physical read total bytes                                              224,763,904
STAT    physical read total bytes optimized                                    224,763,904
STAT    physical read total multi block requests                                       215
STAT    physical reads                                                              27,437
STAT    physical reads direct                                                       27,437
...
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                         3
STAT    table scan disk non-IMC rows gotten                                          1,685
STAT    table scan rows gotten                                                       1,685
STAT    table scans (direct read)                                                        1
STAT    table scans (long tables)                                                        1
...

------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

So in the case of VPD, whether or not SYS_CONTEXT() is offloadable did not play a major role: the query was offloaded either way.