Silent installation: Oracle Restart 19c, ASM Filter Driver, UEK 5 edition

As promised in an earlier post, here are my notes about installing Oracle Restart 19c with ASM Filter Driver (ASMFD) on Oracle Linux 7 using UEK 5.

Since the approach you are about to read isn’t explicitly covered in the documentation, I suggest you ask Oracle Support whether it is supported before using it outside a playground/lab environment.

This post also sat in my drafts folder longer than intended; it should have gone out in early April. Some of the components I used to put it together aren’t the latest and greatest, so please adjust accordingly.
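In a nutshell, the process consists of labelling the future ASM disk with AFD before any Grid Infrastructure exists, followed by a silent gridSetup.sh call with ASMFD enabled. Here is a minimal sketch; the grid home location, disk device and disk group name are assumptions for illustration, and a real response file contains many more parameters:

# as root: label the future ASM disk (--init is required because
# no Grid Infrastructure stack is configured yet)
export ORACLE_HOME=/u01/app/19.0.0/grid
export ORACLE_BASE=/tmp
$ORACLE_HOME/bin/asmcmd afd_label DATA1 /dev/vdb1 --init

# as the grid owner: silent installation with ASMFD enabled
/u01/app/19.0.0/grid/gridSetup.sh -silent \
  -responseFile /u01/app/19.0.0/grid/install/response/gridsetup.rsp \
  oracle.install.asm.configureAFD=true \
  oracle.install.asm.diskGroup.name=DATA \
  oracle.install.asm.diskGroup.disks=/dev/vdb1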

Creating a new disk group for use with ASM Filter Driver on the command line in Oracle 19c

In my previous post I shared my surprise when I learned that calling the 19c gridSetup.sh for use with Oracle ASM Filter Driver (ASMFD) required me to specify the names of the native block devices. This is definitely different from installing ASM with ASMLib, where you pass ASM disks to the installer as “ORCL:diskname”.

Um, that’s great, but why did I write this post? Well, once the installation/configuration steps are completed you most likely need to create at least a second disk group. In my case that’s going to be RECO, for use with the Fast Recovery Area (FRA). This post details the necessary steps to get there, as they are different compared to the initial call to gridSetup.sh.
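In outline, it boils down to labelling the new device with AFD and then referencing the label, rather than the block device, in the create diskgroup statement. A minimal sketch; the device name, label and redundancy level are assumptions:

# as root: put an AFD label on the new block device
asmcmd afd_label RECO1 /dev/vdc1

# as the grid owner: verify the label is visible
asmcmd afd_lsdsk

# create the disk group using the AFD label
sqlplus -s / as sysasm <<'EOF'
create diskgroup RECO external redundancy
  disk 'AFD:RECO1';
EOF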

Silent installation: Oracle Restart 19c, ASM Filter Driver, RHCK edition

As promised in the earlier post, here are my notes about installing Oracle Restart 19c on Oracle Linux 7.7 using the Red Hat Compatible Kernel (RHCK). Please also consult the ACFS/ASMFD compatibility matrix, My Oracle Support DocID 1369107.1, for the latest information about ASMFD compatibility with various kernels.
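In addition to the MOS note, the driver state utilities shipped with the grid home report whether the running kernel is supported; a minimal sketch, the grid home location being an assumption:

# as root, using the utilities shipped in the grid home
export ORACLE_HOME=/u01/app/19.0.0/grid
$ORACLE_HOME/bin/afddriverstate supported
$ORACLE_HOME/bin/acfsdriverstate supported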

Why am I starting the series with a seemingly “odd” kernel, at least from the point of view of Oracle Linux? If you try to install the Oracle Restart base release with UEK 5, you get strange error messages back from gridSetup telling you about invalid ASM disks. While that’s probably true, it’s a secondary error. The main cause of the problem is this:

Using virt-install/KVM and kickstart to create lots of identical VMs quickly

In preparation for an upcoming talk about Oracle 12.2 I am planning on getting my feet wet with the Oracle 12.2 Sharding option. It seems to be a very interesting subject to get into but, like all new subjects, it can be intimidating at first due to its complexity. So why not use the instructions someone has already written down?

Oracle has a nice series of articles about Sharding. I found the Sharding landing page in the Maximum Availability Architecture section on OTN. For the purpose of this blog post I intend to follow the “Oracle Sharded Database Deployment with Active Data Guard using CREATE SHARD method for On-Premises – Cookbook”, available from the aforementioned landing page.
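To give you an idea, here is a minimal sketch of the kind of virt-install invocation I have in mind. The VM name, resource sizes, ISO location, os-variant and kickstart file name are all assumptions you will have to adapt:

# create an Oracle Linux 7 VM non-interactively, injecting a kickstart file
virt-install \
  --name shard1 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=50 \
  --location /var/lib/libvirt/images/OracleLinux-R7-U3-Server-x86_64-dvd.iso \
  --initrd-inject /tmp/ol7.ks \
  --extra-args "inst.ks=file:/ol7.ks console=ttyS0" \
  --graphics none \
  --os-variant ol7.3

Repeat with a different --name (and a suitably adjusted kickstart file) for every VM you need.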

New Events for Data Guard and Synchronous Redo Transport in 12c (2)

After the baseline was established in the first part of this series, it’s time to measure the effect of the network. The second test introduces an interesting feature: using Linux’s own traffic shaper/Quality of Service (QoS) module, I will add a delay of 100ms to the Data Guard network interface card (NIC) to slow things down a little.

WARNING: this is of course a lab/VM-only situation. I can’t exert control over wire quality in my (own) switches, hence some software magic is needed on my virtual ones. This post is intended for educational purposes only, not for use at work.
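For reference, adding and removing such a delay with the netem queueing discipline looks roughly like this; the interface name is an assumption:

# add 100ms of delay to outbound packets on the Data Guard interface
tc qdisc add dev eth1 root netem delay 100ms

# verify the queueing discipline is in place
tc qdisc show dev eth1

# remove the delay again once the test is complete
tc qdisc del dev eth1 root netem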

I am continuing to use the two-node RAC 12.1.0.2.170117 primary database on Oracle Linux 7 with UEK 4, and an identical RAC to host my standby database.

New Events for Data Guard and Synchronous Redo Transport in 12c (1)

I may have said it before, but I consider presenting and teaching a great way to expand one’s knowledge: first of all, it requires me to really understand a subject; secondly, presenting generates lots of interesting questions that can turn into blog posts like this one.

Lately I have been asked about the impact of synchronous log shipping to a physical standby database. I was sure there was an effect to be observed, most likely depending on the network latency between the systems, but I didn’t have any evidence I could pull out of the hat to back up my thoughts. So what better way to find out than to try! I also read that some of the wait events have changed in 12c, and I wanted to make them visible. My environment is based on the two-node RAC primary/two-node RAC standby configuration I wrote about in my previous posts.

Since their initial setup I upgraded the cluster to 12.1.0.2.170117 for Clusterware and RDBMS.
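To get an overview of the redo-transport related wait events a given release ships with, v$event_name can be queried; a minimal sketch, with the LIKE patterns being assumptions to narrow down the output:

sqlplus -s / as sysdba <<'EOF'
set pagesize 50 linesize 120
select name, wait_class
  from v$event_name
 where lower(name) like '%redo transport%'
    or lower(name) like '%sync remote write%'
 order by name;
EOF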

Creating a RAC 12.1 Data Guard Physical Standby environment (4)

In the previous three parts of this series a lot of the preparation work needed for the configuration of Data Guard was performed. In this part of the mini-series it all comes to fruition: using the Data Guard broker, a switchover operation will be performed. A couple of new features in 12c make this easier. According to the “Changes in This Release for Oracle Data Guard Concepts and Administration” chapter of the 12.1 Data Guard Concepts and Administration guide:

When [you, ed.] perform a switchover from an Oracle RAC primary database to a physical standby database, it is no longer necessary to shut down all but one primary database instance.

I have always wanted to test that in a quiet moment…
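The switchover itself is remarkably compact on the broker command line. A minimal sketch of such a session, with NCDBB as the assumed name of the standby database in the broker configuration:

$ dgmgrl sys@NCDBA
DGMGRL> show configuration
DGMGRL> validate database 'NCDBB'
DGMGRL> switchover to 'NCDBB'

validate database, new in the 12.1 broker, is a handy pre-flight check before attempting the switchover.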

Creating a RAC 12.1 Data Guard Physical Standby environment (3)

In the previous two parts of this series you read about my lab environment and the preparations on the network side as well as on the database side. In this part I’ll cover the database duplication. Again, this won’t be a short post …

NOTE: As always, this is just a demonstration using VMs in my lab, based on my notes. Your system is most likely different, so in real life you might take a different approach. The techniques I am using here were suitable for me and my own small-scale testing. I tried to make sure they are valid, but you may want to allocate more resources in your environment. Test, test, test in your own environment on test kit first!

Step 1: Create an initialisation file

The next step is the preparation of an initialisation file. I am taking NCDBA as the sample and transferring it over to rac12sec1:
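A minimal sketch of how that can look, assuming the primary uses an spfile and password-less ssh between the hosts has been set up (both assumptions):

# on the primary: create a pfile from the spfile
sqlplus -s / as sysdba <<'EOF'
create pfile='/tmp/initNCDBA.ora' from spfile;
EOF

# transfer it to the first node of the standby cluster
scp /tmp/initNCDBA.ora oracle@rac12sec1:/tmp/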

Little things worth knowing: when a transient ASM disk failure cannot be fixed in time

In the previous blog post I used libvirt and KVM in my lab environment to simulate a transient disk failure and showed how to recover from it. This post takes that example a step further: I am simulating another disk failure, but this time I won’t pretend I can fix the issue and put the disk back. In other words, I simulate the effect of disk_repair_time hitting zero.

Most of what I am covering here is an extension of the previous post. I’ll mention the main details here for your benefit, but I’d like to invite you to refer back to the previous post for more detail.

The idea is to show you the output in the ASM alert.log and the effect of the lost disk on the V$ views.

As with the previous post, the code examples in this one are for demonstration purposes only!
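While the scenario unfolds, the state of the failed disk can be followed in the V$ views; a minimal sketch:

sqlplus -s / as sysasm <<'EOF'
set pagesize 50 linesize 120
select name, path, state, mode_status, repair_timer
  from v$asm_disk
 order by name;
EOF

Once repair_timer reaches zero, ASM drops the disk and the rebalance operation starts.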

Little things worth knowing: transient ASM disk failures in 12c

For quite some time ASM has treated us nicely when it comes to transient disk failures. Since 11.1 (if memory serves me right), a transient failure won’t cause an ASM disk to be dropped immediately. This is good, because it can potentially save a lot of time! When a disk is dropped from an ASM disk group, a rebalance operation is unavoidable, and chances are that another one will follow, because ultimately you want to add the disk back to the disk group. Well, to be fair, this only applies to normal and high redundancy disk groups.

The feature I just described very briefly is referred to as ASM Fast Mirror Resync, and it is documented in the ASM Administrator’s Guide, in the section “Administering Oracle ASM Disk Groups”.
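The length of the grace period is controlled by the disk_repair_time disk group attribute. A minimal sketch of inspecting and changing it; the disk group name and the 8h value are assumptions:

sqlplus -s / as sysasm <<'EOF'
-- current setting of the attribute for disk group DATA
select a.name, a.value
  from v$asm_attribute a, v$asm_diskgroup dg
 where a.group_number = dg.group_number
   and a.name = 'disk_repair_time'
   and dg.name = 'DATA';

-- extend the grace period to 8 hours
alter diskgroup DATA set attribute 'disk_repair_time' = '8h';
EOF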