Automatic Storage Management

12.2 New Feature: the FLEX ASM disk group part 3

In the previous two parts of this mini-series I introduced the Flex ASM disk group and two related concepts, the Quota Group and the File Group. In what should have been the final part (but isn’t), I am interested in checking whether quotas are actually enforced.

(Un)fortunately I have uncovered a few more things worth investigating and blogging about, which is why a) this isn’t the last post and b) it is a bit shorter than the previous two. Had I combined parts 3 and 4 it would certainly have been too long … BTW, you can navigate all posts using the links at the very bottom of the page.

Are quotas enforced?

The purpose of the Quota Group is … to enforce quotas within a disk group, much like on a file system. This is quite interesting, because you now have a hard limit up to which databases can grow within a disk group, even for non-CDBs.
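To give you an idea what that looks like in SQL, here is a minimal sketch from my lab notes – the disk group FLEXDG and quota group QG_DB1 are made-up names, so please double-check the syntax against the documentation:

```sql
-- Assumed names: flex disk group FLEXDG, quota group QG_DB1 (illustrative only).
-- Raise or lower the hard limit for everything assigned to the quota group:
ALTER DISKGROUP FLEXDG MODIFY QUOTAGROUP QG_DB1 SET 'quota' = 10G;

-- Compare the space used against the limit:
SELECT name, quota_limit_mb, used_quota_mb
  FROM v$asm_quotagroup;
```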

12.2 New Feature: the FLEX ASM disk group part 2

In the first part of this series I explained the basics and some potential motivation behind the use of ASM Flex disk groups. In this part I would like to complete the description of new concepts.

New Concepts related to FLEX ASM Disk Groups

With the Flex disk group mounted, the next steps are to create a few new entities. First, I want to create a Quota Group. The Quota Group – as the name implies – will enforce quotas for entities residing within it. Adding one yourself is optional; Oracle creates a default Quota Group for you that does not enforce storage limits. As you will see later, the default Quota Group will be assigned to all new databases in the Flex ASM disk group.
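As a rough sketch – with made-up names FLEXDG, QG_NCDB1, FG_NCDB1 and NCDB1, and not necessarily matching the exact commands later in this post – creating a Quota Group and hanging a File Group off it looks like this:

```sql
-- Create a quota group with a 10 GB limit (name and size are illustrative)
ALTER DISKGROUP FLEXDG ADD QUOTAGROUP QG_NCDB1 SET 'quota' = 10G;

-- Create a file group for the database and place it under the quota group
ALTER DISKGROUP FLEXDG ADD FILEGROUP FG_NCDB1 DATABASE NCDB1
  SET 'quota_group' = 'QG_NCDB1';

-- V$ASM_FILEGROUP and V$ASM_QUOTAGROUP show the resulting assignments
```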

12.2 New Feature: the FLEX ASM disk group part 1

I knew about the 12.2 FLEX ASM disk group type from other presenters, but until now – when researching the feature for the upcoming DOAG HA Day – I hadn’t been able to appreciate how cool it is. And I really do think it is pretty cool and worth sharing! There is a lot to be said about the feature and these tests, which is why I am splitting the write-up into multiple parts.

Please be aware that this post is about my lab experiments; I have no production experience with FLEX ASM disk groups. As with all new features it might take a while to mature, so test, test, test…
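For context, creating such a disk group looks roughly like this; the name and disk string are made up, and as far as I recall flex redundancy requires compatible.asm and compatible.rdbms at 12.2 plus at least three failure groups:

```sql
-- Name and disk string are made up for this example
CREATE DISKGROUP FLEXDG FLEX REDUNDANCY
  DISK '/dev/vd[c-e]1'
  ATTRIBUTE 'compatible.asm'   = '12.2.0.1',
            'compatible.rdbms' = '12.2.0.1';
```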

Little things worth knowing: when a transient ASM disk failure cannot be fixed in time

In the previous blog post I used libvirt and KVM in my lab environment to simulate a transient disk failure and show how to recover from it. This post takes that example a step further: I am simulating another disk failure, but this time I won’t pretend I can fix the issue and put the disk back. In other words, I simulate the effect of disk_repair_time hitting zero.

Most of what I am covering here is an extension of the previous post. I’ll mention the main details here for your benefit, but I would like to invite you to refer to the previous post for more detail.

The idea is to show you the output in the ASM alert.log and the effect of the lost disk on the V$ views.
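A query along these lines – columns are from V$ASM_DISK and V$ASM_ATTRIBUTE – is what I use to watch the repair timer count down:

```sql
-- Watch the failed disk: repair_timer counts down towards zero, after which
-- ASM drops the disk and rebalances
SELECT name, path, mount_status, mode_status, state, repair_timer
  FROM v$asm_disk
 ORDER BY group_number, name;

-- The grace period itself is the disk group attribute disk_repair_time
SELECT name, value
  FROM v$asm_attribute
 WHERE name = 'disk_repair_time';
```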

As with the previous post, the code examples in this one are for demonstration purposes only!

Fixing a problem with the ASM spfile preventing RAC 12c from starting

This is a little note to myself on how to fix a corrupt spfile in clustered ASM. I hope you find it useful, too.

Let’s assume you made a change to the ASM (server) parameter file that causes an issue. You are most likely to notice this once CRS is restarted and parts of the stack fail to come up. If “crsctl check crs” reports any component as not started, you can try to find out where in the bootstrap process you are stuck. Here is the output from my system.
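If the spfile itself turns out to be the culprit, one possible way out – sketched here with made-up paths, and not necessarily the exact procedure described in this post – is to bring the ASM instance up with a corrected pfile and recreate the spfile:

```sql
-- Start the ASM instance manually with a corrected, minimal pfile, e.g.:
--   STARTUP PFILE='/tmp/asm_init.ora';
-- Then, connected as SYSASM, write the spfile back into the disk group
-- (the disk group name +DATA is illustrative):
CREATE SPFILE='+DATA' FROM PFILE='/tmp/asm_init.ora';
-- In a clustered setup, make sure the GPnP profile points at the
-- spfile location (asmcmd spset) if it changed.
```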

UKOUG post conference geek update part 1 – ACFS for Oracle databases

One of the many interesting things I heard at the conference this time around was that Oracle’s future direction includes the use of database files on ACFS. When ACFS came out this was strictly ruled out, but it has been possible for a little while now, I believe since 12.1.0.1.0. With the Oracle Database Appliance (ODA) using this deployment option, and having heard about it at the conference, a little further investigation was in order. During one of the presentations, @OracleRACPM Markus Michalewicz referenced a script on his slides that I didn’t know about. The script is called gDBClone, and I wanted to see how it works. The idea is that the script can be used to create a snap clone of a database, provided the source is on ACFS and in archivelog mode.

As it turned out there were a few hurdles along the way and I will point them out so you don’t run into the same issues.

Can you have high redundancy files in a normal redundancy diskgroup?

One of the perks of teaching classes is that I get to research the questions asked. In the last Exadata Administration Class I taught, someone asked: can you have your disk groups in Exadata on normal redundancy yet have certain databases use high redundancy? This would be a good interview question …

The answer is yes, which I remembered from researching material for the 11g RAC book, but I wanted to prove that this is the case.
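One way to achieve this – a minimal sketch with made-up names (DATA, triple_mirror, demo_ts) – is an ASM template that mirrors extents three ways:

```sql
-- In a NORMAL redundancy disk group, define a template that mirrors
-- file extents three ways (names are illustrative)
ALTER DISKGROUP DATA ADD TEMPLATE triple_mirror ATTRIBUTES (HIGH);

-- Reference the template when creating the data file
CREATE TABLESPACE demo_ts DATAFILE '+DATA(triple_mirror)' SIZE 100M;

-- The per-file redundancy is then visible in V$ASM_FILE.REDUNDANCY
```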

Update: I had planned a second blog post where I wanted to test the effect, but Alex Fatkulin was quicker, and I promise I didn’t see his post when I wrote mine. Otherwise there probably wouldn’t have been one :) In summary, you aren’t really any better protected. The disk group remains at normal redundancy, even with the data files mirrored at high redundancy. Looking at Alex’s results (and I encourage you to do so), I concur with his summary: although you have a third copy of each extent protecting you from corruption, you don’t gain higher resilience.

DBMS_FILE_TRANSFER potentially cool but then it is not

This post is interesting for all those of you who plan to transfer data files between database instances. Why would you consider this? Here’s an excerpt from the official 12.1 package documentation:

The DBMS_FILE_TRANSFER package provides procedures to copy a binary file within a database or to transfer a binary file between databases.

But it gets better:

The destination database converts each block when it receives a file from a platform with different endianness. Datafiles can be imported after they are moved to the destination database as part of a transportable operation without RMAN conversion.

So that’s a way not only to copy data files from one database to another, but also to get a file from SPARC and make it available on Linux!
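For reference, the call in question looks roughly like this; the directory objects, database link and file names are made up for the example:

```sql
-- Pull a data file from the remote (e.g. SPARC) database over a database
-- link into a directory on the local (Linux) side. All names are illustrative.
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    source_database              => 'SPARC_SOURCE',   -- database link name
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'users01.dbf');
END;
/
```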

Adding a node to a 12c RAC cluster

This post is not in chronological order; I probably should have written something about installing RAC 12c first. I didn’t want to write the tenth installation guide for Clusterware, so I’m focusing on extending my two-node cluster to three nodes to test the new Flex ASM feature. If you care about installing RAC 12c, head over to RAC Attack for instructions, or to Tim Hall’s site. The RAC Attack instructions are currently being updated for 12c; you can follow or participate in the work on this free mailing list.

The cluster I installed is based on KVM on my lab server. I used Oracle Linux 6.4 with UEK2 for the host OS. It is a standard cluster, i.e. not a Flex Cluster, but with Flex ASM configured. My network configuration is as shown:

Performance testing with Virident PCIe SCM

Thanks to a kind introduction from Kevin Closson I was given the opportunity to benchmark Virident’s PCIe flash cards. I have written a little review of the testing conducted, mainly using SLOB. To my great surprise Virident gave me access to a top-of-the-line Westmere-EP system (2 sockets, 12 cores, 24 threads) with lots of memory.

In summary the testing shows that the “flash revolution” is happening, and that there are lots of vendors out there building solutions for HPC and Oracle database workloads alike. Have a look at the attached PDF for the full story if you are interested. When looking at the numbers please bear in mind that it was a two-socket system! I’m confident the server could not max out the cards.

Full article: Virident testing martin bach consulting (PDF)