Oakies Blog Aggregator

ORAchk / EXAchk questions

Yesterday I wrote a post on the ORAchk / EXAchk plug-in in Enterprise Manager Cloud Control 13c, and I promised I would write another post covering some of the more frequently asked questions we’ve been receiving on the plug-in. That’s what this post is, so the rest of it will follow a Q&A format.

Question: What are the benefits of EXAchk integration with the Enterprise Manager compliance framework, i.e., what can we do with this that we could not do in EM12c?

Answer: In EM12c, we ask customers to set up EXAchk on the target themselves; we just bring the results into EM and show them on the EXAchk target home page. In 13c, these are our main features:

  • Install & Setup ORAchk utility from Enterprise Manager Cloud Control
  • Convert check results into Compliance Standards (CS) violations
  • Associate the ORAchk results (CS violations) with appropriate targets
  • Upgrade of ORAchk utility from Enterprise Manager
  • Release new checks as compliance standards at the same time ORAchk releases a new version.

Question: Can I do a “Create Like” of a standard based on EXAchk, i.e., include my own rules in a standard?

Answer: No. The actual checks are still done within the EXAchk utility; we take the results and show violations against the appropriate targets, so our CS rule only looks for Pass/Fail.

Question: Does EM rely on the EXAchk executable at all? Does the EXAchk executable (not the rules) need to be updated periodically?

Answer: We fully rely on the EXAchk executable and rules. Yes, it needs to be updated periodically; we have added Self Update entity types to deliver the EXAchk binaries.

Question: Are the rules evaluated on the agent side or the repository side?

Answer: We bring the results of EXAchk into the repository, and within our repository rule we check for Pass/Fail only.

Question: How will Enterprise Manager keep up with new revisions of the rules? Are the new revisions automatically applied?

Answer: Our current plan is to deliver two entities – one for the EXAchk/ORAchk binaries and one for the compliance standards and rules.

Question: I don’t see the Enterprise Manager ORAchk Healthchecks Plug-in in my Enterprise Manager 12c environment. Does it exist?

Answer: Yes. In the 12.1.0.5 environment I just checked, the plug-in is called “Oracle Engineered Systems Healthchecks” and it’s found under the Engineered Systems plug-ins on the Plug-ins page (go to Setup -> Extensibility -> Plug-ins). If you can’t find it there, it probably means you didn’t select it from the list of plug-ins to install when you were installing Enterprise Manager. In that case, click “Plug-in” in the list of types and then either click the “Check Updates” button or choose “Check Updates” from the Actions menu. You will need to download and deploy the plug-in as per normal after that.

Hopefully that helps clarify a few areas for you! If you have any questions on ORAchk / EXAchk that haven’t been answered above, feel free to add the question as a comment on this post. I will then update the blog with your question and the answer!

The post ORAchk / EXAchk questions appeared first on PeteWhoDidNotTweet.com.

Messed-Up App of the Day: Tables of Numbers

Quick, which database is the biggest space consumer on this system?

Database             Total Size   Total Storage
----------------  -------------  --------------
SAD99PS               635.53 GB         1.24 TB
ANGLL                   9.15 TB         18.3 TB
FRI_W1                  2.14 TB         4.29 TB
DEMO                    6.62 TB        13.24 TB
H111D16                 7.81 TB        15.63 TB
HAANT                    1.1 TB          2.2 TB
FSU                     7.41 TB        14.81 TB
BYNANK                  2.69 TB         5.38 TB
HDMI7                 237.68 GB       476.12 GB
SXXZPP                598.49 GB         1.17 TB
TPAA                    1.71 TB         3.43 TB
MAISTERS              823.96 GB         1.61 TB
p17gv_data01.dbf       800.0 GB         1.56 TB

It’s harder than it looks.

Did you come up with ANGLL? If you didn’t, then you should look again. If you did, then what steps did you have to execute to find the answer?

I’m guessing you did something like I did:

  1. Skim the entire list. Notice that HDMI7 has a really big value in the third column.
  2. Read the column headings. Parse the difference in meaning between “size” and “storage.” Realize that the “storage” column is where the answer to a question about space consumption will lie.
  3. Skim the “Total Storage” column again and notice that the wide “476.12” number I found previously has a GB label beside it, while all the other labels are TB.
  4. Skim the table again to make sure there’s no PB in there.
  5. Do a little arithmetic in my head to realize that a TB is 1000× bigger than a GB, so 476.12 is probably not the biggest number after all, in spite of how big it looked.
  6. Re-skim the “Total Storage” column looking for big TB numbers.
  7. The biggest-looking TB number is 15.63 on the H111D16 row.
  8. Notice the trap on the ANGLL row: only three significant digits are showing in the “18.3” figure, which looks physically the same size as the three-digit figures “1.24” and “4.29” directly above and below it, yet 18.3 (which should have been rendered “18.30”) is an order of magnitude larger.
  9. Skim the column again to make sure I’m not missing another such number.
  10. The answer is ANGLL.

That’s a lot of work. Every reader who uses this table to answer that question has to do it.

Rendering the table differently makes your readers’ (plural!) job much easier:

Database           Size (TB)  Storage (TB)
----------------  ---------  ------------
SAD99PS                 .64          1.24
ANGLL                  9.15         18.30
FRI_W1                 2.14          4.29
DEMO                   6.62         13.24
H111D16                7.81         15.63
HAANT                  1.10          2.20
FSU                    7.41         14.81
BYNANK                 2.69          5.38
HDMI7                   .24           .48
SXXZPP                  .60          1.17
TPAA                   1.71          3.43
MAISTERS                .82          1.61
p17gv_data01.dbf        .80          1.56

This table obeys an important design principle:

The amount of ink it takes to render each number is proportional to its relative magnitude.

I fixed two problems: (i) all the units are now consistent (I have guaranteed this by moving the unit label into the header and deleting the labels from the rows); and (ii) every number shows the same number of significant digits. Now you don’t have to do arithmetic in your head, and you can see more easily that the answer is ANGLL, at 18.30 TB.

Let’s go one step further and finish the deal. If you really want to make it as easy as possible for readers to understand your space consumption problem, then you should sort the data, too:

Database           Size (TB)  Storage (TB)
----------------  ---------  ------------
ANGLL                  9.15         18.30
H111D16                7.81         15.63
FSU                    7.41         14.81
DEMO                   6.62         13.24
BYNANK                 2.69          5.38
FRI_W1                 2.14          4.29
TPAA                   1.71          3.43
HAANT                  1.10          2.20
MAISTERS                .82          1.61
p17gv_data01.dbf        .80          1.56
SAD99PS                 .64          1.24
SXXZPP                  .60          1.17
HDMI7                   .24           .48

Now, your answer comes in a glance. Think back to the comprehension steps I described above. With this table, you only need to:

  1. Notice that the table is sorted in descending numerical order.
  2. Comprehend the column headings.
  3. The answer is ANGLL.

As a reader, you have executed far less code path in your brain to completely comprehend the data that the author wants you to understand.
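If the table comes out of a query, all three fixes can live in the SQL itself. Here is a hedged sketch, assuming a hypothetical db_space table that stores raw sizes in bytes (the table and column names are invented for illustration):

-- Hypothetical db_space(name, size_bytes, storage_bytes) table.
-- One unit (decimal TB), two decimals everywhere, sorted by the
-- column readers actually care about.
select name                                       as "Database",
       to_char(size_bytes    / 1e12, '9,990.00')  as "Size (TB)",
       to_char(storage_bytes / 1e12, '9,990.00')  as "Storage (TB)"
from   db_space
order  by storage_bytes desc;

The fixed-width format mask (no FM modifier) pads the numbers with leading spaces, so the digits line up by magnitude and the proportional-ink principle holds.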

Good design is a matter of consideration, and even of conservation. If spending 10 extra minutes formatting your data better saves 1,000 readers 2 minutes each, then you’ve saved the world 1,990 minutes of wasted effort.

But good design is also a very practical matter for you personally, too. If you want your audience to understand your work, then make your information easier for them to consume—whether you’re writing email, proposals, reports, infographics, slides, or software. It’s part of the pathway to being more persuasive.

A cool thing with EXCHANGE PARTITION (part 2)

In the previous post, I showed that even though a partition was “removed” (i.e., exchanged out) from a table, a query running against the table could still run to completion.

However, of course, once that partition is exchanged out, it is a table in its own right… and is subject to the whims of whatever a DBA may wish to do with it. If that table is dropped, or truncated, then as you might expect, our query is going to struggle to find that data! :)

Here’s an example of what happens when the query cannot successfully run:
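A minimal sketch of the scenario (made-up object names rather than the author's original script):

create table t ( x int, y char(100) )
partition by range ( x )
( partition p1 values less than (10000),
  partition p2 values less than (20000) );

insert into t select rownum, 'x' from dual connect by level < 20000;
commit;

-- a table with a matching structure to exchange with
create table t_exch ( x int, y char(100) );

-- Session 1: start a long-running full scan of T, for example
--   select max(y) from t;
-- Session 2, while that query is still running:
alter table t exchange partition p1 with table t_exch;
drop table t_exch purge;

-- Session 1's query now typically fails with something along the lines of
-- ORA-08103: object no longer exists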

Engineered Systems Healthchecks Plug-in (ORAchk)

In Enterprise Manager Cloud Control 12c Release 12.1.0.3, we released the Oracle Engineered System Healthchecks plug-in, which processes the XML output from the EXAchk tool as part of Oracle Enterprise Manager system monitoring. The EXAchk tool lets system administrators automate the assessment of Engineered Systems for known configuration problems and best practices.

Over the years since that first release, we have increased the scope and functionality of the tool to the stage where it now has its own documentation guide, the Enterprise Manager ORAchk Healthchecks Plug-in User’s Guide. Notice there’s no mention of the word “EXAchk” in that title. That’s because the plug-in has expanded so far that it now includes two health check tools:

  • EXAchk – the scope of EXAchk now includes Oracle Exadata Database Machine, Oracle SuperCluster, Oracle Private Cloud Appliance, Oracle Database Appliance, Oracle Big Data Appliance, Oracle Exalogic Elastic Cloud, Oracle Exalytics In-Memory Machine, Oracle Zero Data Loss Recovery Appliance, and Oracle ZFS Storage Appliance. EXAchk contains hundreds of checks, such as:
    • Software checks (firmware, operating system, clusterware, ASM, database, and Exadata)
    • Hardware checks (database server, InfiniBand, Exadata cells, disks)
    • Configuration best practices (operating system, clusterware, ASM, RAC, database, Exadata, InfiniBand)
    • Consolidated RAC, Exadata MAA, and some performance configuration best practices
  • ORAchk – ORAchk covers just about everything else, including:
    • Systems – Oracle Solaris, cross stack checks, Solaris Cluster, and OVN
    • Middleware – Application Continuity, and Oracle Identity and Access Management Suite (Oracle IAM)
    • Oracle Database – Standalone database, Grid Infrastructure and RAC, Maximum Availability Architecture (MAA) scorecard, Upgrade Readiness Validation, and GoldenGate
    • E-Business Suite – Payables, Workflow, Purchasing, Order Management, Process Manufacturing, Receivables, Fixed Assets, HCM, CRM, and Project Billing
    • Enterprise Manager Cloud Control – Repository, agent, and OMS
    • Siebel – Database best practices
    • PeopleSoft – Database best practices
    • … and more to come!

The health checks are based on the most impactful recurring problems across the Oracle stack. You can schedule automated health checks in your environment and receive an email report of findings. Importantly for some secured sites, there is no need to send any data to Oracle. As you can see, there’s a lot of functionality covered in the tool now. For more details, visit this Support note:

ORAchk/EXAchk Master Reference (Doc ID 1969085.1)
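As an aside, both tools can also be run stand-alone, outside Enterprise Manager. A hedged sketch (flags vary between versions, so check ./orachk -h for your release):

# Run the complete set of health checks interactively; an HTML report
# is produced at the end of the run:
./orachk -a

# Daemon mode for scheduled, unattended runs:
./orachk -d start
./orachk -d status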

In my next post on the tool, I’ll answer some of the questions we’ve been asked most frequently about it, so stay tuned for that!

The post Engineered Systems Healthchecks Plug-in (ORAchk) appeared first on PeteWhoDidNotTweet.com.

Changes to Configuration Management in EM13c

Change is difficult for technical folks.  Our world is always moving at blinding speed, so if you start changing things that we don’t think need to be changed, even if you improve upon them, we’re not always appreciative.


Configuration Management, EM12c to EM13c

As requests came in for me to write on the topic of Configuration Management, I found the EM13c documentation very lacking and had to fall back to the EM 12.1.0.5 documentation to fill in a lot of missing areas.  There have also been changes to the main interface that you use to work with the product.

When comparing the drop downs, you can see the changes.

Now I’m going to explain to you why this change is good.  In Enterprise Manager 12.1.0.5 (on the left), you can see that the Comparison feature of Configuration Management has a different drop down option than in Enterprise Manager 13.1.0.0.

EM12c Configuration Management

You might think it is better to have direct access to Compare, Templates and Job Activity via the drop downs, but all of it really is *still directly* accessible; only the interface has changed.

When you accessed Configuration Management in EM12c, you would click on Comparison Templates and reach the following window:


You can see all the templates and access them quickly, but what if you then want to perform a comparison?  Intuition would tell you to click on Actions and then Create.  This, unfortunately, only allows you to create a Comparison Template, not a One-Time Comparison.

To create a one-time comparison in EM12c, you would have to start over, click on the Enterprise menu, Configuration and then Comparison.  This isn’t very user friendly and can be frustrating for the user, even if they’ve become accustomed to the user interface.

EM13c Configuration Management Overview

EM13c has introduced a new interface for Configuration Management.  The initial interface dashboard is the Overview:


You can easily create a One-Time Comparison, a Drift Management definition or a Consistency Management definition right from the main Overview screen.  All interfaces for Configuration Management now include tab icons on the left so that you can easily navigate from one feature of the utility to another.

In EM13c, if you are in the Configuration Templates, you can easily see the tabs to take you to the Definitions, the Overview or even the One-Time Comparison.


No more returning to the Enterprise drop down and starting from the beginning to simply access another aspect of Configuration Management.

See?  Not all change is bad…

The best idea since 1992: Putting the C into ACID (We need your vote)

Oracle Rdb (only available on the VMS platform) supports SQL-92 assertions (http://community.hpe.com/hpeb/attachments/hpeb/itrc-149/22979/1/15667.doc), so why not Oracle Database? Let’s put the “C” into “ACID.”
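For readers who have not met them: a SQL-92 assertion is a declarative, schema-level CHECK constraint that can span multiple tables, something triggers can only approximate. A sketch in standard SQL-92 syntax (table and constraint names invented for illustration):

-- SQL-92 syntax, supported by Oracle Rdb but not by Oracle Database:
-- the DBMS must guarantee this cross-table condition holds after every
-- transaction, which is exactly the consistency promise in ACID.
CREATE ASSERTION no_overdrawn_accounts
  CHECK (NOT EXISTS (SELECT * FROM accounts WHERE balance < 0));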

Another reason why you should use the Data Guard Broker for your #Oracle Standby

The Data Guard Broker is recommended for various reasons; this one is less obvious: it prevents a split-brain problem that may otherwise occur in certain situations. Let me show you:

[oracle@uhesse ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

This is my setup with 12c, but the demonstrated behavior is the same with 11g. I will now cause a crash of the primary database, without damaging any files – like a power outage on the primary site:

[oracle@uhesse ~]$ ps -ef | grep smon
oracle    6279     1  0 08:30 ?        00:00:00 ora_smon_prima
oracle    6786     1  0 08:32 ?        00:00:00 ora_smon_physt
oracle    7168  3489  0 08:43 pts/0    00:00:00 grep --color=auto smon
[oracle@uhesse ~]$ kill -9 6279

Don’t do that at home :-) Now the primary is gone, but of course I can fail over to the standby:

[oracle@uhesse ~]$ dgmgrl sys/oracle@physt
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> failover to physt;
Performing failover NOW, please wait...
Failover succeeded, new primary is "physt"

So far, so good: my end users can continue to work on the new primary. But what happens when the power outage is over and the ex-primary comes back up again?

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed May 18 08:47:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1258291200 bytes
Fixed Size		    2923920 bytes
Variable Size		  452985456 bytes
Database Buffers	  788529152 bytes
Redo Buffers		   13852672 bytes
Database mounted.
ORA-16649: possible failover to another database prevents this database from
being opened

The DMON background process of the new primary communicates with the DMON on the ex-primary, telling it that there cannot be two primary databases within the same Data Guard Broker configuration! Try the same scenario without the broker and you will observe the ex-primary coming up to status OPEN. Just wanted to let you know :-)
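A hedged footnote to the demo: assuming Flashback Database was enabled on prima before the crash, the broker can even turn the ex-primary into the new standby instead of forcing you to rebuild it from scratch:

[oracle@uhesse ~]$ dgmgrl sys/oracle@physt
DGMGRL> reinstate database prima;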


Experimenting with the ZFSSA’s snapshot capability using the simulator

Recently I have been asked how Copy-on-Write cloning works on the ZFS Storage Appliance. More specifically, the question was about the “master” copy: did it have to be static, or could it be rolled forward? What better way to find out than a test? Unfortunately I don’t have an actual system available to me at home, so I had to revert to the simulator, hoping that it represents the real appliance accurately.

Setup

First I downloaded the ZFS Storage Appliance simulator from the Oracle website and created a nice, new, shiny (albeit virtual) storage system. Furthermore I have an Oracle Linux 7 system with UEK3 that attaches to the ZFSSA using dNFS. The appliance has an IP address of 192.168.56.101 while the Linux system is reachable at 192.168.56.20. This is of course a virtual toy environment; a real-life setup would be quite different, using IPMP and multiple paths, preferably over InfiniBand.

Configuration

Configuring the system is a two-step process. The first step is to create a storage pool on the ZFSSA that will host database backups, snapshots and clones. The second is the configuration of the database server to use dNFS. I have written about that in detail in a previous blog post: https://martincarstenbach.wordpress.com/2014/07/09/setting-up-direct-nfs-on-oracle-12c/.

Step 1: Configure the ZFSSA

Again, this is the simulator, and I can only wish I had the real thing :) I created a mirrored pool across all disks (that’s possible at this point because I skipped pool creation during the initial appliance configuration). Navigating to Configuration -> Storage, I clicked on the + button to create the pool and assigned all disks to it. I used a mirrored configuration, which again is owing to my lab setup. Depending on your type of (Exadata) backup you would probably choose something else; there are white papers that explain the best data profile based on the workload.

Next I created a new project, named NCDB_BKP, to have a common location for setting attributes. I tend to set a different mount point, in this case /export/ncdb_bkp, to group all the shares about to be created. Set the other attributes (compression, record size, access permissions etc.) according to your workload. Following the recommendation in the white paper listed in the reference section, I created 4 shares under the NCDB_BKP project:
– data
– redo
– alert
– archive

You probably get where this is heading… With those steps, taken from the white paper listed in the reference section, the setup of the ZFSSA simulator is done, at least for now. Head over to the database server.

Step 2: Configure the database server

On the database server I customarily create a /zfssa/<project>/ mount point where I intend to mount the project’s shares. In other words I have this:

[oracle@oraclelinux7 ~]$ ls -l /zfssa/ncdb_bkp
total 0
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 alert
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 archive
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 data
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 redo

These will be mounted from the ZFSSA – edit your fstab to mount the shares from the appliance (simulator). When mounted, you’d see something like this:

[root@oraclelinux7 ~]# mount | awk '/^[0-9].*/ {print $1}'
192.168.56.101:/export/ncdb_bkp/alert
192.168.56.101:/export/ncdb_bkp/archive
192.168.56.101:/export/ncdb_bkp/data
192.168.56.101:/export/ncdb_bkp/redo

The next step is to add those to the oranfstab, which I covered in the previous post I referred to. That should be it for now! Time to take a backup of the source database in preparation for the cloning. Be careful: adding image copies to an existing backup strategy might have adverse side effects. As always, make sure you understand the implications of this technique and its impact, and test thoroughly!
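For reference, the fstab and oranfstab entries might look roughly like this. Treat it as a sketch: the mount options are abridged and should be checked against Oracle's NFS recommendations for your platform, and the post linked above covers oranfstab in detail.

# /etc/fstab - one line per share (two of the four shown, options abridged)
192.168.56.101:/export/ncdb_bkp/data  /zfssa/ncdb_bkp/data  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600  0 0
192.168.56.101:/export/ncdb_bkp/redo  /zfssa/ncdb_bkp/redo  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600  0 0

# $ORACLE_HOME/dbs/oranfstab - the matching dNFS mappings
server: zfssa
path: 192.168.56.101
export: /export/ncdb_bkp/data mount: /zfssa/ncdb_bkp/data
export: /export/ncdb_bkp/redo mount: /zfssa/ncdb_bkp/redo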

Creating a backup

Let’s have a look at the database before taking a backup:

[oracle@oraclelinux7 ~]$ NLS_DATE_FORMAT="dd.mm.yyyy hh24:mi:ss" rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 3 10:12:28 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: NCDB (DBID=3358649481)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name NCDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     +DATA/NCDB/DATAFILE/system.279.905507757
3    610      SYSAUX               NO      +DATA/NCDB/DATAFILE/sysaux.273.905507723
4    280      UNDOTBS1             YES     +DATA/NCDB/DATAFILE/undotbs1.259.905507805
5    1243     EXAMPLE              NO      +DATA/NCDB/DATAFILE/example.283.905507865
6    5        USERS                NO      +DATA/NCDB/DATAFILE/users.266.905507803

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       +DATA/NCDB/TEMPFILE/temp.282.905507861

RMAN>

Nothing special, just a standard DBCA-created General Purpose database … Time to take the image copy.

RMAN> @/u01/app/oracle/admin/NCDB/scripts/imagecopy.rman

RMAN> run {
2>   allocate channel c1 device type disk format '/zfssa/ncdb_bkp/data/%U';
3>   allocate channel c2 device type disk format '/zfssa/ncdb_bkp/data/%U';
4>   backup incremental level 1 for recover of copy with tag 'zfssa' database ;
5>   recover copy of database with tag 'zfssa';
6> }

allocated channel: c1
channel c1: SID=36 device type=DISK

allocated channel: c2
channel c2: SID=258 device type=DISK

Starting backup at 03.03.2016 14:48:25
no parent backup or copy of datafile 5 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 6 found
channel c1: starting datafile copy
input datafile file number=00005 name=+DATA/NCDB/DATAFILE/example.283.905507865
channel c2: starting datafile copy
input datafile file number=00001 name=+DATA/NCDB/DATAFILE/system.279.905507757
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q 
  tag=ZFSSA RECID=36 STAMP=905525356
channel c2: datafile copy complete, elapsed time: 00:00:55
channel c2: starting datafile copy
input datafile file number=00003 name=+DATA/NCDB/DATAFILE/sysaux.273.905507723
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p 
  tag=ZFSSA RECID=37 STAMP=905525386
channel c1: datafile copy complete, elapsed time: 00:01:23
channel c1: starting datafile copy
input datafile file number=00004 name=+DATA/NCDB/DATAFILE/undotbs1.259.905507805
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d
  tag=ZFSSA RECID=39 STAMP=905525414
channel c1: datafile copy complete, elapsed time: 00:00:25
channel c1: starting datafile copy
input datafile file number=00006 name=+DATA/NCDB/DATAFILE/users.266.905507803
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k
  tag=ZFSSA RECID=38 STAMP=905525409
channel c2: datafile copy complete, elapsed time: 00:00:51
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57
  tag=ZFSSA RECID=40 STAMP=905525416
channel c1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 03.03.2016 14:50:18

Starting recover at 03.03.2016 14:50:18
no copy of datafile 1 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
Finished recover at 03.03.2016 14:50:19
released channel: c1
released channel: c2

RMAN> **end-of-file**

The Oracle white paper I just mentioned proposes using the %b flag as part of the RMAN formatSpec in the backup command – which interestingly does not work. The second time I ran it, it failed like this:

RMAN> @/u01/app/oracle/admin/NCDB/scripts/imagecopy.rman

RMAN> run {
2>   allocate channel c1 device type disk format '/zfssa/ncdb_bkp/data/%b';
3>   allocate channel c2 device type disk format '/zfssa/ncdb_bkp/data/%b';
4>   backup incremental level 1 for recover of copy with tag 'zfssa' database ;
5>   recover copy of database with tag 'zfssa';
6> }

allocated channel: c1
channel c1: SID=36 device type=DISK

allocated channel: c2
channel c2: SID=258 device type=DISK

Starting backup at 03.03.2016 14:42:04
channel c1: starting incremental level 1 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/NCDB/DATAFILE/example.283.905507865
input datafile file number=00006 name=+DATA/NCDB/DATAFILE/users.266.905507803
input datafile file number=00004 name=+DATA/NCDB/DATAFILE/undotbs1.259.905507805
channel c1: starting piece 1 at 03.03.2016 14:42:04
RMAN-03009: failure of backup command on c1 channel at 03/03/2016 14:42:04
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn
continuing other job steps, job failed will not be re-run
channel c2: starting incremental level 1 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/NCDB/DATAFILE/system.279.905507757
input datafile file number=00003 name=+DATA/NCDB/DATAFILE/sysaux.273.905507723
channel c2: starting piece 1 at 03.03.2016 14:42:04
released channel: c1
released channel: c2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on c2 channel at 03/03/2016 14:42:04
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn

RMAN> **end-of-file**

This makes sense: Oracle tries to create an incremental backup with the same name as the data file copy, which would erase the copy and replace it with the incremental backup. Thankfully RMAN does not allow that to happen. This is why I chose the %U flag in the formatSpec, as it allows the incremental backup to be created successfully in addition to the datafile image copies. I am conscious of the fact that the image copies have somewhat ugly names.

After this little digression it’s time to back up the archived logs. When using copies of archived logs you need to make sure that you don’t have overlapping backups. The Oracle white paper has the complete syntax; I’ll spare you the detail as it’s rather boring.
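For a flavour of the shape of such a command (a sketch only; the white paper's version is authoritative, and the format mask merely mimics the thread_sequence_resetlogsid.arc names in the listing below):

run {
  allocate channel c1 device type disk
    format '/zfssa/ncdb_bkp/archive/%h_%e_%r.arc';
  # copy only logs that have not been copied yet, to avoid overlaps
  backup as copy archivelog all not backed up 1 times;
}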

After some time the result is a set of image copies of the database plus the archived logs:

[oracle@oraclelinux7 ~]$ ls -lR /zfssa/ncdb_bkp/
/zfssa/ncdb_bkp/:
total 14
drwxr-xr-x. 2 oracle dba 2 Mar  3 10:32 alert
drwxr-xr-x. 2 oracle dba 4 Mar  3  2016 archive
drwxr-xr-x. 2 oracle dba 7 Mar  3  2016 data
drwxr-xr-x. 2 oracle dba 2 Mar  3 10:32 redo

/zfssa/ncdb_bkp/alert:
total 0

/zfssa/ncdb_bkp/archive:
total 2821
-r--r-----. 1 oracle asmdba 2869760 Mar  3  2016 1_117_905507850.arc
-r--r-----. 1 oracle asmdba    1024 Mar  3  2016 1_118_905507850.arc

/zfssa/ncdb_bkp/data:
total 3001657
-rw-r-----. 1 oracle asmdba 1304174592 Mar  3  2016 data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p
-rw-r-----. 1 oracle asmdba  639639552 Mar  3  2016 data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k
-rw-r-----. 1 oracle asmdba  828383232 Mar  3  2016 data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q
-rw-r-----. 1 oracle asmdba  293609472 Mar  3  2016 data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d
-rw-r-----. 1 oracle asmdba    5251072 Mar  3  2016 data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57

/zfssa/ncdb_bkp/redo:
total 0

[oracle@oraclelinux7 ~]$

Create a Clone

Now that the database is backed up in the form of an image copy and I have archived redo logs, I can create a clone of it. This requires jumping back to the ZFSSA interface (either CLI or BUI) and creating snapshots, followed by clones, of the shares used.

One way is to log in to the BUI, select the NCDB_BKP project, navigate to “Snapshots” and create one by clicking on the + button. I named it snap0. If you plan on doing this more regularly it can also be scripted.
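Scripted, the snapshot could be taken over ssh against the appliance CLI. The following is from memory of the simulator's CLI tree, so verify the exact syntax on your appliance:

# Hypothetical one-liner: snapshot the whole NCDB_BKP project as snap0
ssh root@192.168.56.101 'shares select NCDB_BKP snapshots snapshot snap0'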

A clone is a writeable snapshot and is just as easy to create. Add a new project – for example NCDB_CLONE1 – and set it up as required for your workload. In the next step, switch back to the backup project and create a clone for each of the 4 shares. To do so, navigate to the list of shares underneath NCDB_BKP and click on the pencil icon. This takes you to the share settings. Click on Snapshots and you should see snap0. Hovering the mouse over the snapshot’s name reveals additional icons on the right, one of which allows you to create a clone (“clone snapshot as a new share”). Hit that plus sign, change the project to your clone project (NCDB_CLONE1) and assign a name to the clone. I tend to use the same name as the source. The mount point should automatically be updated to /export/ncdb_clone1/sharename.

Now you need to get back to the database server and add the mount points for the recently created clones: /zfssa/ncdb_clone1/{data,redo,archive,alert}. Edit the fstab and oranfstab files, then mount the new shares.

Finishing the clone creation

The following procedure is most likely familiar to DBAs who have created databases as file system copies. The steps are roughly these:
– register the database in oratab
– create an initialisation file
– create a password file
– back up the source controlfile to trace in order to create a “create controlfile” statement for the clone
– start the clone
– create the controlfile
– recover the clone using the newly created backup controlfile
– open the clone with the resetlogs option
– add temp file(s)

Based on my source database’s parameter file, I created the following for the clone database which I’ll call CLONE1:

*.audit_file_dest='/u01/app/oracle/admin/CLONE1/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/zfssa/ncdb_clone1/redo/control1.ctl'
*.db_block_size=8192
*.db_create_file_dest='/zfssa/ncdb_clone1/data'
*.db_domain=''
*.db_name='CLONE1'
*.db_recovery_file_dest='/zfssa/ncdb_clone1/archive'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLONE1XDB)'
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1024m
*.undo_tablespace='UNDOTBS1'

Notice how some of the filesystem-related parameters changed to point to the mount points exported via NFS from the ZFSSA. I refrained from changing diagnostic_dest to the ZFSSA simulator, as doing so prevented the database from starting (perf told me that the sqlplus session spent all its time trying to use the network).

I’ll spare you the details of the create controlfile command; just make sure you change the paths to point to the data files on the cloned shares (/zfssa/ncdb_clone1/data/*) and NOT to the ones where the master copy resides. After the controlfile is created, recover the database using the backup controlfile, then open it. Voila! You have just created CLONE1. Just add a temp file and you are almost good to go.
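For orientation, a skeleton of those final steps might look like the following. It is a sketch, not the author's exact script: the pfile path, redo log sizes and character set are assumptions you must adapt to your source database.

-- Skeleton only: every path points at the CLONED shares, never the master copy
STARTUP NOMOUNT PFILE='/u01/app/oracle/admin/CLONE1/pfile/initCLONE1.ora'

CREATE CONTROLFILE SET DATABASE "CLONE1" RESETLOGS ARCHIVELOG
  LOGFILE GROUP 1 '/zfssa/ncdb_clone1/redo/redo01.log' SIZE 200M,
          GROUP 2 '/zfssa/ncdb_clone1/redo/redo02.log' SIZE 200M
  DATAFILE
    '/zfssa/ncdb_clone1/data/data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q',
    '/zfssa/ncdb_clone1/data/data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k',
    '/zfssa/ncdb_clone1/data/data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d',
    '/zfssa/ncdb_clone1/data/data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p',
    '/zfssa/ncdb_clone1/data/data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57'
  CHARACTER SET AL32UTF8;

-- Apply redo from the copied archived logs, then open with RESETLOGS
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;

-- db_create_file_dest is set, so OMF places the tempfile automatically
ALTER TABLESPACE TEMP ADD TEMPFILE SIZE 60M;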

In the next part

The post has already become a bit too long, so I’ll stop here and split it into this one and a second part. In the next part you will read about changes to the source database (NCDB) and how I’ll roll the image copies forward.

References

http://www.oracle.com/technetwork/articles/systems-hardware-architecture/cloning-solution-353626.pdf

Configuration Management Searches in EM13c

With the addition of Configuration Management from OpsCenter to Enterprise Manager 13c, there are some additional features to ease the management of changes and drift in Enterprise Manager, but I’m going to take these posts in baby steps, as the feature can be a little daunting.  We want to make sure that you understand this well, so we’ll start with configuration searches and search history first.


To access the Configuration Management feature, click on Enterprise and Configuration.

Click on Search to begin your journey into Configuration Management.


From the Search Dashboard, click on Actions, Create and History.  You’ll be taken to the History wizard and you’ll need to fill in the following information:


And then click on Schedule and Notify to build out a schedule to check the database for configuration changes.


For our example, we’ve chosen to run our job once every 10 minutes and set up a grace period; once satisfied, click on Schedule and Notify.  Once you’ve returned to the main screen, click on Save.

Now when we click on Enterprise, Configuration, Search, we see the search we created in the list of Searches.  The one we’ve created is both runnable AND MODIFIABLE.  The ones that come with EM13c are locked down and should be considered templates to be used with the Create Like option.

The job runs every 10 minutes, so if we wait long enough after a change, we can click on the search in the list and then click on Run from the menu above the list:


As I’ve made a change to the database, it shows up immediately in the job, and if I had set this up to notify, it would email me via the settings for the user who owns the configuration:


If you highlight a row and click on See Real-Time Observations, you will be taken to reports showing that the pluggable databases weren’t brought back up to an open mode after maintenance and need to be returned to an open status before they will match the original historical configuration.

We can quickly verify that the PDBs aren’t open.  In fact, one is read only and the other is only mounted:

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDBKELLY  READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDBKELLYN                      MOUNTED
PDBK_CL1                       READ ONLY

So let’s open our PDBs and then we’ll be ready to go:

ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
ALTER PLUGGABLE DATABASE PDBK_CL1 CLOSE;
ALTER PLUGGABLE DATABASE PDBK_CL1 OPEN;

Ahhhh, much better.




Copyright © DBA Kevlar [Configuration Management Searches in EM13c], All Right Reserved. 2016.