I was recently involved with an upgrade project to move an Exadata V2 from one 11gR2 patch set to the next. We hit some snags during the upgrade, specifically related to OEM 12c Cloud Control. We performed an out-of-place upgrade and OEM 12c Cloud Control had some difficulty dealing with this.
12c Cloud Control is supposed to run a daily check which looks for new targets on each server. When it finds something new, it places this in a queue to wait for admin approval. With a single click you can promote the newly discovered target into an OEM managed object.
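Once a target has been promoted, you can confirm the OMS is managing it from the command line. This is a minimal sketch using the documented `emcli get_targets` verb; the sysman login and the `oracle_database` type filter are just illustrative values:

```shell
# Log in to the OMS with the EM command line interface (user is an example).
emcli login -username=sysman

# List all managed database targets -- a newly promoted database
# should appear in this output.
emcli get_targets -targets="oracle_database"
```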
Oracle Cloud Control 12cR2 is installed and merrily monitoring one of the test 11gR2 databases running on HP-UX. I’ll probably leave it like that until I come back from Oracle OpenWorld. I don’t want to change the entire administration and monitoring infrastructure just as I leave for a couple of weeks.
As I’m re-familiarizing myself with the 12c way of doing things, I’ve been wondering if this really is a full “Release 2” product, or just 12cR1 with Bundle Patch 2. Not surprisingly, one of my readers asked the same question, pointing out that the version number (12.1.0.2) does not look consistent with a “Release 2” product, which would typically be 12.2.0.x.
I did an EM Cloud Control 12cR2 installation at work yesterday. The database repository was an 11gR2 database on HP-UX and the middle tier was installed on RHEL 5.8. The installation was pretty much the same as the 12cR1 version. Over the next few days I’ll be testing out some of the features to decide if we can move across to it permanently.
Today I did two run-throughs of single server installations on Oracle Linux 5.8 and 6.3. There are a couple of minor differences, but nothing to worry about. You can see what I did here:
The installations are a little on the small side, so they’re not too fast, but they’re good enough to test things out.
Enterprise Manager Cloud Control 12c Release 2 was released a couple of days ago for all the major platforms. That in itself is not news any more, but the fact we are going to trial it at work as a replacement for our 11g Grid Control installation is.
It’s on my low-priority task list, so I’m not sure I’ll get it all sorted before OOW12, but it is something to look forward to. I know it’s tragic, but I’m quite excited.
Yesterday I presented at UKOUG’s Availability, Infrastructure and Management Special Interest Group (hey, say that 3 times in a row, quickly!) about Oracle Enterprise Manager 12c and my experience with it. As my good friend Piet de Visser pointed out, I had way too much to say for the 45 minute slot allocated. But then Piet always tells me that. Sadly, he is also often right :) That’s why I like seeing him during my talks!
In summary, I would have liked to do a different presentation, for two reasons: 1) I overran, and 2) I didn’t manage to show the patching part, which is hugely interesting, at least to me.
Now here’s the reason for the blog post. I haven’t done online seminars yet, and was wondering if people were interested in a 1-1.5 hour UKOUG-like presentation from myself, broadcast via Goto Meeting or similar to an audience. Would that be of interest? The topics to be covered are:
One of the promises from Oracle for OEM 12c was improved support for Oracle RAC One Node. I have spent quite a bit of time researching RAC One Node, and wrote a little article in 2 parts about it, which you can find here:
One of my complaints with it was the limited support in OEM 11.1. At the time I was on a major consolidation project, which would have used OEM for management of the database.
Question: What happens when 12c Cloud Control runs out of disk space?
Answer: It doesn’t work very well.
I have a 12c Cloud Control installation on an Oracle Linux 6.1 VM, and I was pushing an agent to both nodes of an 11gR2 RAC, also on OL6.1 VMs. The agent installation seemed to go fine and the agent upload to Cloud Control was fine, but when I tried to discover the databases on the nodes it went a bit loopy. After a little messing about I noticed my disk was maxed out on the 12c Cloud Control server. Bummer!
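A simple space check on the Cloud Control server would have flagged this much earlier. Here’s a minimal sketch; the helper name and the 90% threshold are my own inventions, and the mount point is just an example:

```shell
#!/bin/bash
# Hypothetical helper, not part of Cloud Control: warn when a filesystem
# passes a given percent-used threshold.
check_fs_usage () {
  local mount="$1" threshold="$2"
  local used
  # df -P gives a stable, single-line-per-filesystem format; field 5 is Use%.
  used=$(df -P "$mount" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $mount is ${used}% full"
  else
    echo "OK: $mount is ${used}% full"
  fi
}

# e.g. check the filesystem holding the middleware home (path is an example)
check_fs_usage / 90
```

Something like this could be cron’d on the OMS host, although Cloud Control can of course monitor its own host’s filesystems once it’s healthy.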
So I turned off the VM, added another virtual disk, turned it back on and added the new disk to the existing volume. Bob’s your uncle!
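For the record, the volume extension went roughly like this. This is just a sketch assuming LVM, with example device and volume names; yours will differ:

```shell
# The new virtual disk appeared as /dev/sdb after restarting the VM.
pvcreate /dev/sdb                            # initialise the disk for LVM
vgextend vg_cloud /dev/sdb                   # add it to the existing volume group
lvextend -l +100%FREE /dev/vg_cloud/lv_root  # grow the logical volume into the new space
resize2fs /dev/vg_cloud/lv_root              # grow the ext3/ext4 filesystem online
```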
One of the questions I have always asked myself is: “Why doesn’t Oracle package certain software as an RPM on Linux?” Well, this question has recently been answered in the form of the Oracle 12c agent. It IS possible to use an RPM-based installation, although it doesn’t make 100% use of RPM. I have written this post to give you an idea of what happens.
The procedure is described in the OEM 12c Cloud Control Advanced Installation and Configuration Guide, chapter 6. The process is very similar to the non-RPM based agent deployment. Let’s have a look at it in detail.
Operations to be performed on the OMS
Log in to the OMS host and log in to OEM using emcli, as shown in these examples (which are taken from the official Cloud Control documentation):
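The documented flow looks roughly like this. A sketch only; the login user, destination directory, platform string and version are all assumed values, so check the guide for your own setup:

```shell
# On the OMS host: log in with emcli and sync it with the OMS.
emcli login -username=sysman
emcli sync

# Check which platforms have agent software staged in the software library.
emcli get_supported_platforms

# Generate the agent RPM for the required platform.
emcli get_agentimage_rpm -destination=/tmp/agentrpm \
      -platform="Linux x86-64" -version=12.1.0.2.0
```

The resulting RPM is then transferred to the target host and installed with `rpm -ivh`, with the remaining configuration steps performed as described in the guide.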
I mentioned the day before Open World I put a Virtual RAC on Oracle Linux 6.1 article live. Although the procedure was complete, some of the screen shots were from an old article as I didn’t have time to redo them before my flight. I’ve just run through the procedure again and taken new screen shots. As a result, I’ve allowed the article to display on the front page of the website, which is why you will see it listed as a new article there.
This kinda rounds out the whole Oracle on OL6.1 stuff, as there has been a single instance installation guide out for ages and, more recently, the Cloud Control installation guide, which references it.
Remember, it’s still not certified yet, but it’s coming.
While I was at Open World I tried a few times to get hold of the new Cloud Control software, but the hotel network wasn’t up to the job, so I had to wait until I got home.
The installation is pretty simple compared to previous versions of Grid Control and it installs fine on both Oracle Linux 5.x and 6.x. As always, it’s a little greedy on the memory front, with the recommendation for a small installation being 4G for Cloud Control and 2G for the repository database. That’s not including the OS requirement. On the subject of the repository database, you can use a number of 10g and 11g versions, but anything before 11.2.0.3 requires additional patches, so I stayed with 11.2.0.3.
You can see what I did here.
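Given those memory recommendations, it’s worth a quick sanity check on the server before launching the installer. A minimal sketch; the 6G figure is just the 4G + 2G recommendation added together, ignoring the OS overhead:

```shell
#!/bin/bash
# Report physical memory and compare it against the rough 4G (OMS) + 2G
# (repository database) recommendation for a small installation.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))
echo "Physical memory: ${mem_gb}G"
if [ "$mem_gb" -lt 6 ]; then
  echo "Below the rough 6G combined recommendation (excluding OS overhead)"
fi
```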