Oakies Blog Aggregator

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !!

I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019. The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin). Because of venue […]

Azure Automation of A-to-Z, Part I

DevOps deployments and automation have numerous tools at their disposal, but most often, scripting is required. Although I’m a Microsoft Azure fanatic, I’m also a strong advocate of Linux, and with my two decades on Unix, I strongly prefer BASH over PoSH. I find the maturity of BASH and KSH highly attractive compared to PoSH, and with my experience, I’m simply more skilled with shells native to the Linux OS.

Before we get too far in, I know some of the text can be difficult to read, you may want to copy and paste, or you may want some of the scripts for yourself. All of this can be found in the following GitHub repository.

Do the (Shell) Thing, Julie

As you learn to write shell scripts, no matter the choice in scripting, it’s critical to learn how to reuse code to:

  • Lessen the amount of code to maintain.
  • Help establish standards in coding practices.
  • Establish centralized locations for scripts, as reusing existing scripts often encourages this.

Taking a project that’s near and dear to my heart, we’ll use the Higher Education Solution that was first built out by my team with Patrick LeBlanc, Steve Pontello, Jason Morales and Dustin Ryan. The goal is to have an end-to-end solution that can be deployed, (originally on-premises, subsequently I’ve been working to move to Azure) for any customer as part of a POC. It provides the customer with the following:

  1. PaaS databases- staging and data warehouse.
  2. Firewall rules to connect the Azure CLI to the PaaS databases.
  3. Analysis Services Semantic Model for scalability and governance.
  4. Azure Data Factory to manage SSIS packages via pipelines, scheduling and management.
  5. Sample data via CSV files, which can also serve as examples for the customer to create versions of the files for their own data loads.
  6. ELT process to load data into staging and then to the data warehouse.
  7. Logical objects in staging and the data warehouse.
  8. On-premises Data Gateway to connect Power BI Desktop to the Azure environments.
  9. Three starter Power BI dashboards to grant initial insight into the data and a starting point for customers to work from.

This is a large undertaking, as we’re not talking about the standard “deploy some servers and turn it over to the DBAs to do their thang” or “import some data and then turn it over to the developers” kind of thang. This is a beginning-to-end solution: the customer should be able to run a script, answer a few questions asked interactively, and then have everything deployed completely to their Azure environment and onto their desktop. The customer should then be able to publish the Power BI reports up to the web to let others get a taste of what they can do with just the flip of a switch.

If you’ve looked at pulling automation scripts from Azure, you’ll notice they can be PoSH or BASH with LARGE JSON template and parameter files. These files aren’t built with the goal of dynamic builds, nor are they easy to maintain. Due to this, I looked towards the Azure CLI and BASH and created my own. I also deployed everything into a single resource group to simplify management, scaling out and removal if, after the POC, the customer isn’t ready yet to deploy into their environment.

Part A, Introductions are Important

This was rather simple, as it’s common practice to deploy VMs with SQL Server and/or Azure SQL Database in PaaS. The goal was to remove the heavy dependence on JSON files. If you were to log into the Azure Portal, you would see the amount of JSON that is created as part of the automation scripts.

Now the script under the CLI tab is sufficient for recreating the environment “as is”, but remember, I wanted this to be interactive and to offer customers the ability to meet their naming conventions, etc. Keeping as close as possible to the format and design of the script that Microsoft offers as part of the automation scripts, while enhancing it to meet my goals, is the right thing to do. It results in something that is recognizable by Microsoft support and still keeps to our focus.

Their script begins with a usage check and argument handling.

My Turn

My script doesn’t veer off course from the original. I’ve added clear comments to help the user understand the goal, usage and steps, but notice the similarities.

So, let’s talk about all these additions and why the introduction to a script is as important as an introduction to a paper or story.

As with the script from Microsoft, I’m starting with a reminder to the user executing the script that there are requirements for its usage; if the execution doesn’t meet those, the script simply exits via the USAGE() step, protecting against missing arguments.

We then declare the variables that the passed arguments will populate. What’s interesting is that we will ask the user questions interactively to fulfill these arguments as we proceed into the beginning of the script execution, using the WHILE GETOPTS step.
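Here’s a minimal BASH sketch of that pattern. The option letters and variable names are placeholders of mine, not the actual script’s (which lives in the GitHub repository):

#!/bin/bash
# Sketch: a usage() step that exits on misuse, and getopts to populate
# the variables the rest of the script relies on.

usage() {
    echo "Usage: $0 [-s subscription_id] [-g resource_group] [-n sql_server_name] [-l location]"
    exit 1
}

# Any argument not supplied here is prompted for interactively later on.
while getopts "s:g:n:l:" opt; do
    case "$opt" in
        s) subscriptionID="$OPTARG" ;;
        g) groupname="$OPTARG" ;;
        n) servername="$OPTARG" ;;
        l) location="$OPTARG" ;;
        *) usage ;;
    esac
done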

I’m going to skip over the minimal JSON template and parameters used by this script in this post, as I want to focus on the physical deployment outside of the Azure Data Factory and the Azure Analysis Services (we’ll come back to those in another post).

Dynamic Pull and Requests

In the next step of the script, we locate the customer’s Azure subscription ID using the AZ ACCOUNT SHOW command. The current subscription ID is returned, and the customer is asked to copy and paste it into the prompt. The reason for this is that customers may want to use a different subscription than the one they are currently logged into, and this gives them the ability to view the current subscription and place a different one into this step.
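Something along these lines, assuming the az CLI is already logged in (a sketch, not the script’s exact code):

# Show the subscription currently in use, then let the customer choose.
az account show --query id --output tsv
read -p "Paste the subscription ID to deploy to: " subscriptionID
az account set --subscription "$subscriptionID"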

Next, I ask the customer to create a unique resource group to own all the resources we’re about to deploy. This simplifies the monitoring, maintenance and management of the deployment. The next step is to create two passwords, one for the SQL Server and the other for the Analysis Server. I give examples and let the user know that if they don’t follow password requirements, the deployment will fail.
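The prompts might look like this (the complexity example is mine; substitute whatever examples you prefer):

read -p "Enter a unique resource group name: " groupname
echo "Passwords must meet Azure complexity rules, e.g. P@ssw0rd123!"
read -s -p "SQL Server admin password: " sqlpassword; echo
read -s -p "Analysis Services admin password: " aaspassword; echo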

The last two initial questions ask the customer to create a name for their SQL Server host and to choose, from a list, the Azure location zone for the resources to be created in. I use AZ ACCOUNT LIST-LOCATIONS, and this is a step where BASH comes in handy, as I have a simple GREP and AWK to filter what is returned into a prompt to choose from.
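A sketch of that filtering, parsing the CLI’s JSON output down to just the location names before creating the resource group (the grep/awk here are my guess at the approach, not the script’s exact commands):

# List just the location names, then create the resource group there.
az account list-locations | grep '"name"' | awk -F'"' '{print $4}'
read -p "Choose a location from the list above: " location
read -p "Enter a name for the SQL Server host: " servername
az group create --name "$groupname" --location "$location"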

Last of the Introduction

Finishing up today’s investigation into this script is the firewall rule. Most tools, including SQL Server Management Studio (SSMS) and Visual Studio, offer to create a firewall rule for your workstation upon logging into the Azure resource, but for the Azure Cloud Shell, this rule needs to be created before it can log into the SQL Server environment. My script uses ifconfig.me with a CURL command to capture the IP address and uses it as both the beginning and end IP for a firewall rule it creates, allowing the Azure Cloud Shell to run the subsequent scripts that we’ll begin to walk through in the next post.
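A sketch of that step, assuming the SQL Server has already been created in the resource group:

# Capture the Cloud Shell's outbound IP and open it on the SQL Server.
myip=$(curl -s ifconfig.me)
az sql server firewall-rule create \
    --resource-group "$groupname" \
    --server "$servername" \
    --name cloudshell \
    --start-ip-address "$myip" \
    --end-ip-address "$myip"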

Notice that the last lines I’ve pasted here let you know we’re not going to fool around with our login user: it’s going to be SQLADMIN that’s created, and they also display the LOGFILE name that will be used. As customers will be creating these entries as they execute the script, it should only be expected that they’ll forget the entries or make a typo. Having the output entered into a log file, along with the success of each step, is a huge benefit for them. Also note that although there have been a lot of enhancements to the original script from Microsoft, there’s also a lot about it that’s very similar in format and steps. This is important as we begin to build out, in the end, a wrapper script for a number of deployment options for my group.
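That logging could be as simple as the following (the file naming is my own convention here):

# Fixed admin login, plus a timestamped log of everything the user chose.
adminlogin=sqladmin
logfile="deploy_$(date +%Y%m%d_%H%M%S).log"
echo "Group: $groupname  SQL host: $servername  Login: $adminlogin" | tee -a "$logfile"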




Copyright © DBAKevlar [Azure Automation of A-to-Z, Part I], All Right Reserved. 2019.

Where to check Data Guard gap?

At work, we had a discussion with well-known colleagues, Luca Canali and Ludovico Caldara, about where we check that Data Guard recovery works as expected, without a gap. Several views can be queried, depending on the context. Here are a few comments about them.

v$database

This is my preferred one because it relies on the actual state of the database, whatever the recovery process is:

SQL> select scn_to_timestamp(current_scn) 
from v$database;
SCN_TO_TIMESTAMP(CURRENT_SCN)
----------------------------------------------------------
22-JAN-19 03.08.32.000000000 PM

This reads the current System Change Number (DICUR_SCN from X$KCCDI) and maps it to a timestamp (using the SMON_SCN_TIME mapping table).

However, because this relies on a function which is available only when the database is opened, it is easy only with Active Data Guard. When the database is in mount state, you will get ORA-00904: “SCN_TO_TIMESTAMP”: invalid identifier.

Note that I’ve also seen cases where, in case of a gap, SMON_SCN_TIME was not up-to-date and I got ORA-08181: specified number is not a valid system change number. So this is not suitable for automatic monitoring.

Without Active Data Guard, you need to do the SCN-to-timestamp conversion on the primary. Or read the SCN from the datafiles, but this is not the latest applied SCN, only the latest checkpointed one:

SQL> alter session set nls_date_format='dd-mm-yy hh24:mi:ss';
Session altered.
SQL> select max(checkpoint_time) from v$datafile_header;
MAX(CHECKPOINT_TI
-----------------
26-01-19 17:45:04
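For the conversion on the primary, something along these lines works from a shell script (a sketch: the primary connection is a placeholder, and the standby must be at least mounted):

# Read the current SCN on the standby, convert it on the open primary.
scn=$(sqlplus -s / as sysdba <<'SQL'
set heading off feedback off
select current_scn from v$database;
SQL
)
sqlplus -s sys@primary as sysdba <<SQL
select scn_to_timestamp($scn) from dual;
SQL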

Reading from V$DATABASE does not rely on Data Guard, and is therefore also available when the MRP is not started, and also on a Standard Edition non-managed standby.

gv$recovery_progress

This is Luca’s favored one (see his adg.sql script, among many other interesting ones, in https://github.com/LucaCanali/Oracle_DBA_scripts):

SQL> select inst_id, max(timestamp) 
from gv$recovery_progress group by inst_id;
INST_ID MAX(TIMESTAMP)
---------- --------------------
1 22-JAN-2019 15:08:51

Where does this information come from? If you look at the execution plan, you will see that it reads X$KSULOP, which is the X$ behind V$SESSION_LONGOPS. You can get the same timestamp from it:

SQL> alter session set nls_date_format='dd-mm-yy hh24:mi:ss';
Session altered.
SQL> select inst_id,opname,timestamp from gv$session_longops 
where opname='Media Recovery' and target_desc='Last Applied Redo';
INST_ID OPNAME               TIMESTAMP
---------- -------------------- -----------------
1 Media Recovery 26-01-19 18:56:39
1 Media Recovery 26-01-19 19:40:35

As this information comes from what the MRP (Managed Recovery Process) logs, this view is available only when the recovery is running (APPLY-ON).

v$managed_standby

Talking about what is logged by the MRP, Ludo goes to the MRP status in v$managed_standby to see the sequence# and block# increase. This is very interesting, as we can compare the remaining work to do with what has been received by the RFS:

19:59:46 SQL> select inst_id,process,status,client_process,thread#,sequence#,block#,blocks from gv$managed_standby;
I PROCESS   STATUS       CLIENT_P THREAD# SEQUENCE# BLOCK# BLOCKS
- --------- ------------ -------- ------- --------- ------ ------
1 ARCH      CLOSING      ARCH           1        34  32768    481
1 DGRD      ALLOCATED    N/A            0         0      0      0
1 DGRD      ALLOCATED    N/A            0         0      0      0
1 ARCH      CLOSING      ARCH           1        41  28672   2046
1 ARCH      CLOSING      ARCH           1        37  30720   1804
1 ARCH      CLOSING      ARCH           1        40  26624   1608
1 RFS       IDLE         Archival       1         0      0      0
1 RFS       IDLE         UNKNOWN        0         0      0      0
1 RFS       IDLE         LGWR           1        42    121      1
1 RFS       IDLE         UNKNOWN        0         0      0      0
1 MRP0      APPLYING_LOG N/A            1        42    121  40960
11 rows selected.
19:59:50 SQL> /
I PROCESS   STATUS       CLIENT_P THREAD# SEQUENCE# BLOCK# BLOCKS
- --------- ------------ -------- ------- --------- ------ ------
1 ARCH      CLOSING      ARCH           1        34  32768    481
1 DGRD      ALLOCATED    N/A            0         0      0      0
1 DGRD      ALLOCATED    N/A            0         0      0      0
1 ARCH      CLOSING      ARCH           1        41  28672   2046
1 ARCH      CLOSING      ARCH           1        37  30720   1804
1 ARCH      CLOSING      ARCH           1        40  26624   1608
1 RFS       IDLE         Archival       1         0      0      0
1 RFS       IDLE         UNKNOWN        0         0      0      0
1 RFS       IDLE         LGWR           1        42    124      1
1 RFS       IDLE         UNKNOWN        0         0      0      0
1 MRP0      APPLYING_LOG N/A            1        42    124  40960
11 rows selected.

v$archived_log

V$ARCHIVED_LOG has an ‘APPLIED’ flag, but it is not really helpful here as it does not consider real-time apply. In my test, the changes up to 20:35:46 in sequence# 55 had been applied, but V$ARCHIVED_LOG still showed sequence# 52 as not applied.
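If you want to look at the flag anyway, a query along these lines shows it (wrapped in a shell here-document, as I would run it from a monitoring script):

# The APPLIED flag lags behind real-time apply, so treat it with care.
sqlplus -s / as sysdba <<'SQL'
select sequence#, applied, first_time, next_time
  from v$archived_log
 order by sequence#;
SQL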

This view is about archived logs. But before being archived, the redo stream is received into the standby redo logs.

v$standby_log

With real-time apply (RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE), the redo is applied as soon as it is received in the standby redo logs. The gap should be small and is visible in v$managed_standby (the number of blocks between the RFS from LGWR and the MRP apply). I’ve seen some monitoring queries on V$STANDBY_LOG. The idea is to read the actual state of the transport, in the same way that I read V$DATABASE for the actual state of apply, without relying on what is logged by the processes:

SQL> select max(last_time) 
from gv$standby_log;
MAX(LAST_TIME)
--------------------
22-JAN-2019 15:08:55

However, this query reads X$KCCSL, which is not very efficient as it reads the standby redo log files. This can take a long time when they are large and full.

So… be careful with this one.

v$dataguard_stats

The DG Broker ‘show database’ displays the gap information. This comes from V$DATAGUARD_STATS, as Data Guard checks the state at regular intervals and stores the latest values here:

SQL> select name||' '||value ||' '|| unit
||' computed at '||time_computed
from v$dataguard_stats;
NAME||''||VALUE||''||UNIT||'COMPUTEDAT'||TIME_COMPUTED
------------------------------------------------------
transport lag +00 00:00:00 day(2) to second(0) interval computed at 01/27/2019 22:08:33
apply lag +00 00:00:01 day(2) to second(0) interval computed at 01/27/2019 22:08:33
apply finish time +00 00:00:06.493 day(2) to second(3) interval computed at 01/27/2019 22:08:33

You must always check when the value was calculated (TIME_COMPUTED), and you may add this to the lag to estimate the gap from the current time, as with DGMGRL:

Role:               PHYSICAL STANDBY
Intended State:     APPLY-ON
Transport Lag:      0 seconds (computed 1 second ago)
Apply Lag:          1 second (computed 1 second ago)
Average Apply Rate: 0 Byte/s
Real Time Query:    OFF
Instance(s):
  CDB2

How Not to Perform a Difficult Update in SQL Server/Azure

I learned a long time ago, that the quickest way to do something was to not do it at all.

A friend of mine asked me if I could review an update statement that was confounding their group looking for ways to optimize it.

The table has just short of 50 columns and a considerable number of indexes. The column in question has a datatype of NVARCHAR(8) and consists of some numbers, combinations of letters and numbers, etc.

The process would take 7 ½ hours to run this update, as it parses through 100K rows at a time. There is one index that includes the CN1 column in it, but no index on just the CN1 column. This is what the update statement looks like:


UPDATE TOP (@BatchSize) [dbo].[Table_b4]
SET CN1 = (CASE
               WHEN TRY_CAST(CN1 AS INT) IS NULL THEN CN1
               ELSE FORMAT(CAST(CN1 AS INT), 'val1')
           END)
WHERE CN1 NOT LIKE '[0-x][0-x][0-x][0-x][0-x][0-x][0-x][0-x]'

Now as we reviewed this update statement and tried to optimize it, the question came to me: “Why am I updating this at all?” We have to cast the NVARCHAR data as an integer, then format the data and verify that the existing values aren’t like the lovely pattern in the WHERE clause.

Back in Oracle 10g, I had numerous wide SAS tables that required extensive CPU and IO to update. I introduced the guys to CTAS followed by an exchange partition, but there’s also a version of this in SQL Server, called Swap Schema (ALTER SCHEMA … TRANSFER).

The idea behind it is this: instead of updating an existing table, create a new table as a select from the original one, building out the data on the insert as you would for the update, then swap the new table with the original, skipping the update entirely.

The new process would look like the following:

Two Schemas:

  • DBO, (Database Owner)
  • STAGING, (For the build and switch)

First, create the table in the STAGING schema, based on a select from the original table in the DBO schema, but with the data written out as the update would have left it (in SQL Server this is SELECT … INTO, rather than Oracle’s CREATE TABLE AS SELECT):

-- Rows whose CN1 casts to an integer are written with the new format
SELECT FORMAT(CAST(CN1 AS INT), 'val1') AS CN1
       -- , ...plus the remaining columns of DBO.LRG_Tbl...
INTO STAGING.LRG_Tbl
FROM DBO.LRG_Tbl
WHERE TRY_CAST(CN1 AS INT) IS NOT NULL;
GO

-- Rows that don't cast to an integer are inserted unchanged
INSERT INTO STAGING.LRG_Tbl
SELECT CN1 -- , ...plus the remaining columns...
FROM DBO.LRG_Tbl
WHERE TRY_CAST(CN1 AS INT) IS NULL;
GO

You get the idea… The goal here is to perform the CTAS-style build with the data pre-created in the updated format, and then insert the last of the data.

Once the table is built, we need to do the switch and update the statistics:

-- NB: the original DBO.LRG_Tbl must first be dropped, renamed or
-- transferred out of the DBO schema, or the transfer will fail with
-- an object name conflict.
ALTER SCHEMA DBO TRANSFER STAGING.LRG_Tbl
GO

GRANT SELECT ON DBO.LRG_Tbl TO f_read_only
GO

You will then grant all permissions on the object to match the previous object, and the process is complete. This is something that will need to be done during a quiet time, but if a process that takes just a matter of minutes replaces one that currently takes 7+ hours, it may be worth it.

If you decide to perform the CTAS naming the table differently than the original, you would need to use the SP_RENAME proc to correct the name, which could be a more complicated process. With this process, the privileges aren’t impacted.

If you select from the table, you’ll now see that it has the data updated without ever having performed an update.


Copyright © DBAKevlar [How Not to Perform a Difficult Update in SQL Server/Azure], All Right Reserved. 2019.

Oracle numbers in K/M/G/T/P/E

Oracle has been very well instrumented for decades, since a time when measuring memory in bytes was OK. But today, we spend a lot of time converting bytes into KB, GB and TB to read them easily. I would love to see a human-readable format for TO_CHAR, but there isn’t one. Here is a workaround that doesn’t require creating a new function.

DBMS_XPLAN does this when displaying execution plans, and we can access the functions it uses internally. The metrics can be plain numbers, where Kilo, Mega and Giga apply to powers of 1000. Or they can be a size in bytes, where we prefer powers of 1024. Or they can be a time in seconds, where we use base 60. And so we have three sets of functions:

  • FORMAT_SIZE for base 2 numbers where we use powers of 1024 
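As a quick illustration of the first one (assuming your version exposes these functions in the DBMS_XPLAN package specification):

# Format a byte count with powers-of-1024 units.
sqlplus -s / as sysdba <<'SQL'
select dbms_xplan.format_size(123456789) from dual;
SQL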

Oracle Linux 7 and a new YUM configuration since January 2019

For quite some time I used to have a configuration file, /etc/yum.repos.d/public-yum-ol7.repo, managing my package sources in lab VMs. Today I noticed that this configuration file is deprecated and has been replaced by a new model. This is explained in the Oracle Linux 7 administrator guide and a few other sources I totally missed. I thought I’d show you the full story in this post before I go and change all my Ansible scripts :)

State of play before

To repeat the scenario I just went through, I created a new machine, server3, based on the stock Oracle Linux 7.6 image. After its initial boot I log in to the console to perform a yum upgrade.

This is of course only acceptable in a lab environment, anything deployed in an enterprise environment would use a local, change-controlled mirror based on Spacewalk or comparable software.

Prior to starting the actual upgrade, let’s have a look at how things were before the new model was rolled out. There is only a single repository configuration file present after the initial reboot and without any calls to yum whatsoever:

[ansible@server3 ~]$ ls -l /etc/yum.repos.d/
total 16
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo
[ansible@server3 ~]$

Upgrading

Next I run the upgrade command; I have removed a lot of output for clarity:

[ansible@server3 ~]$ sudo yum upgrade
Loaded plugins: ulninfo
ol7_UEKR5 | 1.2 kB 00:00
ol7_latest | 1.4 kB 00:00
(1/5): ol7_UEKR5/x86_64/updateinfo | 27 kB 00:00
(2/5): ol7_UEKR5/x86_64/primary | 2.4 MB 00:00
(3/5): ol7_latest/x86_64/group | 659 kB 00:00
(4/5): ol7_latest/x86_64/updateinfo | 767 kB 00:00
(5/5): ol7_latest/x86_64/primary | 11 MB 00:00
ol7_UEKR5 120/120
ol7_latest 11799/11799
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
---> Package NetworkManager.x86_64 1:1.12.0-8.el7_6 will be an update

[ ... more output ... ]

Updating : 1:grub2-pc-2.02-0.76.0.3.el7.x86_64 15/85
Updating : cronie-1.4.11-20.el7_6.x86_64 16/85
Updating : cronie-anacron-1.4.11-20.el7_6.x86_64 17/85
Installing : python-chardet-2.2.1-1.el7_1.noarch 18/85
Installing : python-kitchen-1.1.1-5.el7.noarch 19/85
Installing : yum-utils-1.1.31-50.0.1.el7.noarch 20/85

IMPORTANT: A legacy Oracle Linux yum server repo file was found.
Oracle Linux yum server repository configurations have changed which
means public-yum-ol7.repo will no longer be updated. New repository
configuration files have been installed but are disabled. To complete
the transition, run this script as the root user:

/usr/bin/ol_yum_configure.sh

See https://yum.oracle.com/faq.html for more information.

Installing : oraclelinux-release-el7-1.0-3.el7.x86_64 21/85
Updating : rhn-client-tools-2.0.2-24.0.5.el7.x86_64 22/85
Updating : ipset-libs-6.38-3.el7_6.x86_64 23/85
Updating : selinux-policy-3.13.1-229.0.1.el7_6.6.noarch 24/85

[ ... more output ...]

The message that “A legacy Oracle Linux yum server repo file was found” is what prompted this blog post. So what is there to be done? The upgrade created a few more files in /etc/yum.repos.d/:

[ansible@server3 ~]$ ls -l /etc/yum.repos.d/
total 28
-rw-r--r--. 1 root root 3354 Jan 22 16:14 oracle-linux-ol7.repo.disabled
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo
-rw-r--r--. 1 root root 2116 Jan 22 16:14 uek-ol7.repo.disabled
-rw-r--r--. 1 root root 226 Jan 15 21:31 virt-ol7.repo.disabled
[ansible@server3 ~]$

The files ending in .disabled are not considered eligible during any execution of “yum”.

Transitioning to the new model

According to yum.oracle.com/getting-started.html, the following steps are only needed if you have public-yum-ol7.repo in use and the new configuration files – provided by oraclelinux-release-el7 in my case – are present as well. This applies to this VM, so I decided to go ahead and call the script /usr/bin/ol_yum_configure.sh to see what happens.

[root@server3 ~]# /usr/bin/ol_yum_configure.sh
Repository ol7_UEKR5 Fine
Repository ol7_latest Fine
[root@server3 ~]# ls -l /etc/yum.repos.d/
total 28
-rw-r--r--. 1 root root 3354 Jan 22 16:14 oracle-linux-ol7.repo
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo.sav
-rw-r--r--. 1 root root 2116 Jan 22 16:14 uek-ol7.repo
-rw-r--r--. 1 root root 226 Jan 15 21:31 virt-ol7.repo

It appears as if ol_yum_configure.sh switched the all-in-one configuration for the new, modular one.

Implications

In other posts of mine I described how I update public-yum-ol7.repo from Oracle’s yum server in my lab (and only in the lab!). Based on the monolithic file, I can call yum-config-manager to enable and disable any repository I need. With the new modular approach I might have to rethink this. The Administration Guide I referred to earlier has more details about the change.
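The enable/disable workflow itself still works against the new per-file configuration; the repository IDs below are just the ones from this post, check yours with “yum repolist all”:

# Enable or disable repositories regardless of which file defines them.
sudo yum repolist all
sudo yum-config-manager --disable ol7_UEKR5
sudo yum-config-manager --enable ol7_latest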

Building your own local Oracle Linux 7 Vagrant base box

I have been talking about Vagrant for a long time and use it extensively on my Ubuntu-powered laptop. I am using Oracle Linux 7.6 for most of my lab builds, and I like to have specific tools such as collectl, perf, and many others available when the VM boots. I also like to stay in control of things, especially when it comes to downloading otherwise unknown things from the Internet, so I decided to learn how to create a Vagrant box myself.

Using Vagrant with my custom images, all I need to do is run a single command and it will spin up a clean VM using the VirtualBox provider with the exact software configuration I want. I can also supply so-called provisioners to further configure my environment. I found this particularly useful when writing and testing Ansible scripts. Sometimes I just wanted to go back to my starting point but that can be tricky at times: imagine you just partitioned your block devices for use with the database and discovered you wanted to change the flow. Getting back to unpartitioned, unformatted block devices is possible, but I don’t think it’s terribly elegant. Plus I have to manually do it, and I prefer the Ansible approach.

Building a base box

The Vagrant documentation is pretty good, so this is mostly pulling together information from two sources: the starting point I used was “Creating a Base Box”, together with the specifics for the VirtualBox provider I’m using. I don’t claim to be an expert in this field.

Running Vagrant VMs can be inherently insecure as you will see in a bit. It’s fine for me because I’m creating/trashing short-lived VMs on a regular basis and all I do is play around with them whilst they remain perfectly isolated from the rest of the world. If you are ok with this limitation feel free to read on, otherwise please refrain from following the steps in this blog post.

The overall process isn’t too hard to follow:

  • Create your gold image
    • Install the Operating System in VirtualBox
    • Install/upgrade any software you want to have available
    • Configure the system for Vagrant specifics
  • Create a base box off your gold image
  • Add the box to your environment
  • Start the VM and enjoy

Creating the VM and installing the Operating System

The first step obviously is to create the VM and install the operating system. For quite some time now I have been creating VMs with sufficient RAM and a couple of block devices: the first one is used for the root volume group, the second block device will be used for Oracle. Plenty of articles have been written about installing Oracle Linux on VirtualBox; I won’t write the 42nd variation here ;)

There are only a few things to pay attention to. These can all be found in the documentation I referenced earlier. First of all, please ensure that your network adaptor uses NAT. You can use port forwarding to access a NAT device in VirtualBox (configured later). The documentation furthermore recommends removing any unnecessary components such as USB and audio from the VM. I have used a strong password for “root” as I have no intention at all of sharing my VM. Apply security hardening at this stage.

A common error is not enabling the network device to start automatically when the system boots. Vagrant uses port forwarding to the NAT device and SSH keys to authenticate; there doesn’t appear to be a mechanism circumventing the network stack. With the network interface down, it’s quite hard to connect via SSH.

Install/upgrade software

Once the operating system is installed and the VM rebooted, it’s time to configure it for your needs. I usually end up completing the pre-requisites for an Oracle database installation. This, too, has been covered so many times that I don’t feel like adding value by telling you how to complete the steps.

Configure the system for Vagrant

At this stage your VM should be properly configured for whichever purpose you have in mind. All that remains now is the addition of the specific configuration for Vagrant. There are a few steps to this, all to be completed on the guest.

Install VirtualBox Guest Additions

Vagrant offers the option of mounting a file system from your host on the guest VM. I very much like this feature, which is enabled by default. Please refer to the Vagrant documentation for security implications of sharing file systems between guest and host.

As with every VirtualBox VM, shared folders won’t work without installing the Guest Additions, so that’s what I do next. This is pretty straightforward, and on Oracle Linux 7 it generally requires tar, bzip2, gcc and the kernel-uek-devel package matching your current kernel-uek. If you just completed a “yum upgrade” and your kernel was upgraded, you need to reboot first. After VBoxLinuxAdditions.run has completed successfully (I am using VirtualBox 5.2.x), it’s time to move on to the next step.
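In shell terms, roughly (the ISO mount point is a placeholder; mount the Guest Additions image wherever suits you):

# Build prerequisites for the Guest Additions kernel modules.
sudo yum install -y tar bzip2 gcc kernel-uek-devel-$(uname -r)

# With the Guest Additions ISO inserted into the virtual CD drive:
sudo mount /dev/cdrom /mnt
sudo sh /mnt/VBoxLinuxAdditions.run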

Add a Vagrant user

Vagrant expects a user named vagrant to be present on the VM, and it uses SSH keys when connecting to it. The documentation mentions a so-called insecure key pair, which I decided not to use. Instead, I created a new key pair on the host specifically for use with Vagrant and added the public key to ~/.ssh/authorized_keys in the vagrant user’s home directory. If you are on MacOS or Linux it’s convenient to add the key to the SSH agent (ssh-add …); there are similar tools for Windows users.

Creating the user is easy and should be completed now unless you already created the user during the initial installation:

# useradd -c 'vagrant user' -m -s $(which bash) vagrant 

The user should have passwordless sudo enabled as well, as per the documentation. The Vagrant documentation also recommends assigning a weak password to the vagrant account, which I didn’t do: I never ran the passwd command to set a password for the vagrant user, and so far I seem to be doing OK.
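The passwordless sudo part is a single line in a sudoers drop-in, for example:

# As root on the guest: passwordless sudo for the vagrant user.
echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant
chmod 440 /etc/sudoers.d/vagrant
visudo -c   # sanity-check the sudoers configuration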

Create a base box

This concludes the preparations on the VM side. Next up you need to create the base box, which you can then refer to in your own Vagrantfile. The command to do so is just one line. Be careful though: it will create a compressed file named package.box in your current working directory. This file can be rather large, so make sure you have enough space to store it.

$ vagrant package --base 

Depending on how powerful your laptop is this can take a little while.

Add the box to your environment

The previous command will complete eventually. This is the moment where you add the box to Vagrant’s local inventory as shown here:

$ vagrant box add --name blogpost /home/martin/package.box 

This command shouldn’t take too long to complete. If you see a line “box: successfully added box ‘blogpost’ (v0) for ‘virtualbox’”, you are good. You can assign any name to the box you add; it will later show up under that designation when you run “vagrant box list”.

Start the VM and enjoy

The remaining tasks are identical to using Vagrant boxes from the official repository. Start off with vagrant init and make all the changes you normally make to the Vagrantfile. As I’m using my own SSH key, I have to tell Vagrant where to find it using a configuration option:

config.ssh.private_key_path = "/path/to/ssh/keyfile" 

Once you start the VM using “vagrant up” you are good to go!
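Putting the last steps together (the box name and key path are the examples from above):

# Initialize a new environment from the custom box and bring it up.
vagrant init blogpost
# edit the Vagrantfile, e.g. add:
#   config.ssh.private_key_path = "/path/to/ssh/keyfile"
vagrant up
vagrant ssh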

Patch conflicts

My last post was about patching my home databases from 18.3 to 18.5 on Windows, and how I encountered a patch conflict when I tried to patch the JVM. I thought I’d give a little bit of info for anyone who runs into patch conflicts from time to time. They can be stressful, especially if unforeseen, or if you are in the middle of a limited-time outage window.

So before you jump into applying a patch, a nice little tool you might like to explore is the patch conflict checker on My Oracle Support. You can get it via:

https://support.oracle.com/epmos/faces/PatchConflictCheck

It is straightforward to use, you simply fill in the platform and your current patch inventory details, and then list out the patches you intend to apply.


One caveat – the online tool does not work with Windows environments :( – but you can get around that by downloading the patches you intend to apply to your local machine. Then you can use opatch itself to perform that check:


opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir path

where “path” is where you have unzipped the patch to.

Like all patching activities, the more preparation work you can do before actually commencing the work, the more likely your chances of not encountering a sticky moment during the work.

From Database 18.3 to 18.5 (on Windows)

Contrary to wild rumours on the internet, it was not a fear of the number 13 that led to a numbering jump from version 12c to version 18c. The jump was part of our new, more flexible release mechanism so that we can get fixes and enhancements to customers on a more frequent and predictable schedule. In a nutshell, smaller bundles of features and fixes, more frequently.

I won’t dwell on that – if you’re unfamiliar with the new strategy, the best place to start is MOS Note 2285040.1, which has a description and a FAQ. But in terms of (as the saying goes) eating one’s own dog food, I downloaded the 18.5 release update which came out this week, applied it to my 18.3 installation, and thought I’d share the process.

You might be wondering: “Why am I skipping 18.4?” Well, that can be summarised simply as ignorance on my part – I missed the email a couple of months back that told me 18.4 was available :) You might also be wondering: “I read that there were 4 release updates per year, how come there is an 18.5?” That is a common misinterpretation. We aim to provide a release update each quarter, but we never said that this means there would only ever be 4.

Now I stress – I’m just sharing my experiences here, not claiming that this is your go-to guide for applying the release update. I was just updating 18c on a Windows laptop, not a server. You should always follow the installation documents that come with the release update. For example, I had to reboot my Windows machine a couple of times, because it transpired that some webcam software I’d installed had opted to use the Microsoft C++ shared libraries in my 18c Oracle Home! I think we can safely assume your servers probably don’t have webcams on them!

Step 1: Download a fresh OPatch

Seriously… just do this every time. As it turned out, my OPatch was already at a high enough level to apply the release update, but years ago I adopted the mindset: “If I am going to apply a patch, then download a new OPatch”. It just makes it a no-brainer.
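For what it’s worth, on Linux the refresh is typically just unzipping patch 6880880 over the existing OPatch directory (the zip file name below is an example and varies by platform and version):

# Refresh OPatch in place, then confirm the version.
cd $ORACLE_HOME
unzip -o /tmp/p6880880_180000_Linux-x86-64.zip
OPatch/opatch version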

Step 2: Shut everything down.

I am running single instance, no “rolling” patching for me.

Step 3: Unzip the patch, set the environment, and patch


C:\>set ORACLE_HOME=C:\oracle\product\18
C:\>set PATH=%ORACLE_HOME%\OPatch;%ORACLE_HOME%\perl\bin;%PATH%
C:\>set PERL5LIB=
C:\>cd \oracle\stage\29124511
C:\oracle\stage\29124511>opatch apply

Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\18
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : C:\oracle\product\18\cfgtoollogs\opatch\opatch2019-01-18_09-36-46AM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 29124511

        Conflict with 28267731
        Conflict details:
        C:\oracle\product\18\javavm\admin\classes.bin
        C:\oracle\product\18\javavm\admin\lfclasses.bin
        C:\oracle\product\18\javavm\jdk\jdk8\admin\classes.bin
        C:\oracle\product\18\javavm\jdk\jdk8\admin\lfclasses.bin
        C:\oracle\product\18\rdbms\admin\oracle.sym
        C:\oracle\product\18\bin\oracle.exe

        Bug Superset of 27783303
        Super set bugs are:
        27748954, 27604293, 27747869, 27984028, 27550341, 27389352, 27433163, 27538461, 27341181,....

Well… that didn’t go so well :) The first thing that confused me was: “How could there be any pre-existing patch to conflict with?” After all, this was a complete 18c installation that I had downloaded from OTN when it first became available. But then I remembered: this is 18.3. So while it’s true that I downloaded it as a standalone, complete piece of software, it is still a patched release of the database. This is so much better than the old days where, if you wanted (say) version 9.2.0.8, you had to download 9.2.0.1 and then apply the patch on top of it before using the software. A listing of the patch inventory showed that I already had some patches installed with my initial fresh download.


C:\oracle\stage\29124511>opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\18
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : C:\oracle\product\18\cfgtoollogs\opatch\opatch2019-01-18_09-39-23AM_1.log

Lsinventory Output file location : C:\oracle\product\18\cfgtoollogs\opatch\lsinv\lsinventory2019-01-18_09-39-23AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: gtx
ARU platform id: 233
ARU platform description:: Microsoft Windows (64-bit AMD)


Installed Top-level Products (1):

Oracle Database 18c                                                  18.0.0.0.0
There are 1 products installed in this Oracle Home.


Interim patches (3) :

Patch  28267731     : applied on Sat Aug 18 17:34:22 AWST 2018
Unique Patch ID:  22301563
Patch description:  "WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731)"
   Created on 8 Jul 2018, 06:48:38 hrs PST8PDT
   Bugs fixed:
     27642235, 27952586, 27304131, 27461740, 27636900, 27539876
   This patch overlays patches:
     27783303
   This patch needs patches:
     27783303
   as prerequisites

Patch  27908644     : applied on Sat Aug 18 17:31:26 AWST 2018
Unique Patch ID:  22299245
Patch description:  "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
   Created on 29 Jun 2018, 02:51:19 hrs PST8PDT
   Bugs fixed:
     27908644

Patch  27783303     : applied on Sat Aug 18 17:25:42 AWST 2018
Unique Patch ID:  22238986
Patch description:  "Windows Database Bundle Patch : 18.3.0.0.180717 (27783303)"
   Created on 16 Aug 2018, 07:58:43 hrs PST8PDT
   Bugs fixed:
     27026401, 27994333, 27680509, 27314206, 27345231, 24925863, 27447452
     26792891, 27236110, 27608669, 27670484, 27421101, 27240246, 27213140
     27740844, 27616657, 18701017, 27177852, 27697092, 27379956, 26598422
     27688099, 27580996, 27534509, 27602488, 27333978, 27098733, 27163313
     27551855, 27012915, 27603841, 27224987, 28165545, 27259983, 27919283
     28039953, 27357773, 27302730, 27263996, 27345498, 27517818, 27377219
     26882126, 27396377, 27701279, 27285557, 27779886, 27739006, 27585755
     27321834, 27748954, 27950708, 26646549, 26961415, 27061736, 27066519
     27498477, 28174926, 21806121, 24489904, 27570318, 27365139, 27028251
     27435537, 27869339, 27226913, 27192754, 27038986, 27537472, 27483974
     27329812, 27356373, 27714373, 27422874, 27334648, 27339115, 25035594
     27128580, 27952762, 27691717, 27534289, 27425622, 27434974, 27518227
     27426363, 27352600, 26450454, 27631506, 27143882, 27346949, 27181521
     27199245, 27252023, 27911160, 27365702, 27497950, 26586174, 12816839
     27389352, 25287072, 27215007, 27345190, 27475272, 25634405, 27813267
     27726269, 27463879, 27086406, 27283029, 27850736, 27338838, 27428790
     27395404, 27924147, 27284286, 27430254, 27262945, 27250547, 27346329
     27693713, 27347126, 27745220, 27341036, 27481765, 28174827, 27450355
     27214085, 27503413, 27451182, 27359178, 27625274, 27587672, 28320117
     27367194, 27782464, 27735534, 27396365, 27210872, 27501327, 27984028
     27309182, 27520070, 27999597, 27381383, 27302415, 27378103, 27861909
     27782339, 27254851, 27086821, 27101273, 27465480, 27232983, 27941514
     27486253, 27489719, 27222626, 27560562, 27244785, 27458829, 27262650
     27155549, 25743479, 27897639, 27615608, 27459909, 27267992, 27304936
     27663370, 27602091, 27448162, 27434486, 26933599, 26928317, 27586810
     27153755, 27348081, 27314390, 27786669, 27573408, 27532375, 26818960
     25824236, 27563767, 27060859, 27126666, 27284499, 27210038, 25726981
     9062315, 27301308, 27725967, 27452760, 28188330, 27834984, 27748321
     26990202, 27811439, 27376871, 27654039, 27080748, 27333664, 28059199
     27420715, 27315159, 27331374, 27398660, 27680162, 25035599, 27718914
     27599689, 27595801, 26615291, 27040560, 26476244, 27801774, 27450783
     27066451, 27935826, 28098865, 26226953, 27501465, 27558559, 27496806
     27381656, 27299455, 27124867, 27496224, 27707544, 27163928, 27147979
     27395416, 27532009, 21766220, 27727843, 27607805, 27271876, 26860285
     27997875, 28204423, 27204133, 27627992, 27592466, 27395794, 27430802
     27511196, 27302800, 27204476, 27941896, 27560702, 27053044, 24689376
     27447687, 27451049, 27302695, 26843664, 27181537, 27744211, 27445462
     26427905, 27106915, 27896388, 27812593, 27926113, 27487795, 27135647
     27496308, 28239335, 27452897, 26986173, 27434050, 27513114, 27114112
     27265816, 27274536, 27396624, 28090453, 27396666, 27573409, 27331069
     27379846, 27270197, 27016503, 27934468, 27595973, 27410595, 27833369
     27577758, 26966120, 27757979, 27434193, 27393421, 27032726, 27613080
     27208953, 27538461, 27581484, 27321179, 27263677, 26898279, 27573154
     27492916, 27961746, 27591842, 27868727, 27047831, 27783289, 27405242
     27970265, 27333693, 27506774, 27112686, 27379233, 27471876, 27425507
     28205874, 27544030, 27401637, 27740854, 27993298, 27193810, 27212208
     27184253, 27288230, 27399499, 27786772, 27978668, 26423085, 27873643
     27481406, 27182006, 27547732, 27889841, 27169796, 27501413, 27679488
     27518310, 27545630, 27346644, 27625010, 27360126, 27378959, 27772815
     27525909, 27945870, 27275136, 27321575, 28182503, 26336101, 27216224
     27375260, 27249544, 27174948, 27294480, 27957892, 27774539, 27664702
     27839732, 27166715, 27432355, 27257509, 27657920, 27190851, 27773602
     27774320, 27508936, 27457666, 27330161, 27339396, 27092991, 27101652
     27803665, 27396672, 27472969, 27579969, 27610269, 27302594, 27778433
     27339495, 25724089, 27739957, 28023410, 27433163, 27222121, 27851757
     28109698, 27732323, 27691809, 27503208, 26822620, 28264172, 26846077
     27189611, 27222423, 28132287, 27121566, 27282707, 27133637, 27451531
     27613247, 27560735, 27702244, 27341181, 28240153, 27479358, 27370933
     27396357, 27153641, 26827699, 27238258, 27364916, 27307868, 27791223
     27041253, 27090765, 27892488, 27034318, 27349393, 27412805, 27399762
     27302960, 27679664, 27815347, 27399985, 27241247, 26654411, 27077948
     26987877, 27354783, 27701795, 27304410, 27882176, 27119621, 26956033
     27300007, 27339165, 28106402, 27451187, 27058530, 21547051, 28025398
     27682288, 27398080, 27586895, 27679806, 27164122, 27243810, 13554903
     27993289, 27504190, 26587652, 27212837, 27274143, 27768034, 27550341
     27558861, 27060167, 27600706, 28022847, 27241221, 27131377, 26992964
     27690578, 27747407, 27305318, 27230645, 27818871, 27346709, 28057267
     27405696, 27523368, 27574335, 27526362, 27174938, 27931506, 27392187
     27221900, 27797290, 28184554, 27401618, 27410300, 26313403, 27864737
     27362190, 27439835, 24737581, 27368850, 27593587, 27751006, 23840305
     26996813, 27625050, 27657467, 27073066, 27302711, 27453225, 27984314
     27274456, 27522245, 27417186, 27469329, 27338946, 27396813, 27786699
     27285244, 27692215, 27519708, 23003564, 27339483, 27783059, 26882316
     27757567, 26527054, 27862636, 27563629, 27635508, 27508985, 26785169
     27472898, 27971575, 28413955, 27302681, 27609819, 27345450, 27788520
     27018734, 27766679, 27101527, 27906509, 27593389, 27625620, 27036408
     27358232, 27335682, 23698980, 27144533, 27585800, 27458164, 22734786
     27523800, 28169711, 27384222, 27723002, 27473800, 27310092, 27604293
     27731346, 27365993, 27005278, 27320985, 27445330, 27649707, 27709046
     27313687, 27249215, 25348956, 27065091, 26433972, 27339654, 27318988
     27333658, 27533819, 27403244, 27520900, 27534651, 27030974, 27427805
     27359368, 23310101, 27644757, 27314512, 27044575, 27223171, 27240570
     27156355, 27528204, 27989849, 27143756, 27679961, 27110878, 25943740
     27747869, 27734470, 27283960, 27682151, 27719187, 26894737, 27869283
     27652302, 27182064, 27467543, 27334353, 26843558, 27840386, 27457891
     27334316, 27372756, 27705761, 27484556, 27708711, 27753336, 27364891
     27964051, 27503318, 27423251, 27865439, 27780562, 26731697, 27358241
     27634676, 27726780, 27444727, 27441326, 27326204, 27812560, 27432338
     27577122, 27177551, 27275776, 27558557, 27375542, 26299684, 27301568
     27593263, 27258578, 27222938, 27703242, 27512439, 27508984, 27398223
     27330158, 27807441, 27079545, 27404599, 27259386, 27688692, 28039471
     27292213, 27392256, 27307934, 27617522, 27505603, 27450400, 27348707
     27460675, 27238077, 27346984, 27364947, 26943660, 27236052, 27338912
     27244337, 28021205, 28032758, 28033429, 27263276, 27579353, 27233563
     27220610, 28099592, 27433870, 27426277, 26647619, 27847259, 25929650
     27738679, 27502420, 25709124, 28045209, 27668379, 27318869, 27832643
     27302777, 28072130, 27442041, 27430219, 27614272, 27930478



--------------------------------------------------------------------------------

OPatch succeeded.

The conflict was with 28267731, which is the OJVM patch. I remembered from older 12c versions that the remedy for this was to roll back the older OJVM patch before applying the new one. So I ran:


C:\oracle\stage\29124511>opatch rollback -id 28267731

and then downloaded the 18.5 OJVM patch (28790647) as well, in readiness.

After that, everything went as planned. I applied the DB patch and then applied the 18.5 OJVM patch. The final step was to open my database (and all of the pluggable databases) and run in the database-level patch changes using datapatch:


Microsoft Windows [Version 10.0.17134.523]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\>set ORACLE_HOME=C:\oracle\product\18

C:\>set PATH=%ORACLE_HOME%\OPatch;%ORACLE_HOME%\perl\bin;%PATH%

C:\>set PERL5LIB=

C:\>cd C:\oracle\product\18\bin

C:\oracle\product\18\bin>sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 18 12:33:19 2019
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0


SQL> alter pluggable database all open;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0

C:\oracle\product\18\bin>cd %ORACLE_HOME%/OPatch

C:\oracle\product\18\OPatch>datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Fri Jan 18 12:33:32 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: C:\oracle\cfgtoollogs\sqlpatch\sqlpatch_16792_2019_01_18_12_33_32\sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 28267731 (WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731)):
  Binary registry: Not installed
  PDB CDB$ROOT: Applied successfully on 23-AUG-18 10.31.18.372000 AM
  PDB PDB$SEED: Applied successfully on 23-AUG-18 10.36.41.858000 AM
  PDB PDB1: Applied successfully on 23-AUG-18 10.36.41.858000 AM
Interim patch 28790647 (OJVM RELEASE UPDATE: 18.5.0.0.190115 (28790647)):
  Binary registry: Installed
  PDB CDB$ROOT: Not installed
  PDB PDB$SEED: Not installed
  PDB PDB1: Not installed

Current state of release update SQL patches:
  Binary registry:
    18.5.0.0.0 Release_Update 1812202039: Installed
  PDB CDB$ROOT:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.31.18.366000 AM
  PDB PDB$SEED:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.36.41.852000 AM
  PDB PDB1:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.36.41.852000 AM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    The following interim patches will be rolled back:
      28267731 (WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731))
    Patch 29124511 (Windows Database Bundle Patch : 18.5.0.0.190115 (29124511)):
      Apply from 18.3.0.0.0 Release_Update 1808132056 to 18.5.0.0.0 Release_Update 1812202039
    The following interim patches will be applied:
      28790647 (OJVM RELEASE UPDATE: 18.5.0.0.190115 (28790647))

Installing patches...
Patch installation complete.  Total patches installed: 9

Validating logfiles...done
Patch 28267731 rollback (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_CDBROOT_2019Jan18_12_34_14.log (no errors)
Patch 29124511 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_CDBROOT_2019Jan18_12_34_47.log (no errors)
Patch 28790647 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_CDBROOT_2019Jan18_12_35_21.log (no errors)
Patch 28267731 rollback (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_PDBSEED_2019Jan18_12_35_23.log (no errors)
Patch 29124511 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_PDBSEED_2019Jan18_12_35_37.log (no errors)
Patch 28790647 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_PDBSEED_2019Jan18_12_36_07.log (no errors)
Patch 28267731 rollback (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_PDB1_2019Jan18_12_35_24.log (no errors)
Patch 29124511 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_PDB1_2019Jan18_12_35_41.log (no errors)
Patch 28790647 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_PDB1_2019Jan18_12_36_12.log (no errors)
SQL Patching tool complete on Fri Jan 18 12:36:17 2019
C:\oracle\product\18\OPatch>
C:\oracle\product\18\OPatch>
C:\oracle\product\18\OPatch>sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 18 12:36:28 2019
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0


SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2936010432 bytes
Fixed Size                  9033408 bytes
Variable Size            1023410176 bytes
Database Buffers         1895825408 bytes
Redo Buffers                7741440 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open;

Pluggable database altered.

SQL> @?/rdbms/admin/utlrp.sql

Session altered.


TIMESTAMP
----------------------------------------------------------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN              2019-01-18 12:37:58

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
----------------------------------------------------------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END              2019-01-18 12:38:01

DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC>       logged into this table: they go into DBA_ERRORS instead.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.


PL/SQL procedure successfully completed.

SQL>

================================

And that was all there was to it. I’ve now picked up over 1,000 fixes and improvements to the software. If you are one of those people who likes to patch rarely, I strongly recommend you have a re-think and take a closer look at the release update cycle. We want to get you better software, faster.

Enjoy your 18.5 release!
