

Passing complex data types to Ansible on the command line

Earlier this year I wrote a post about passing JSON files as --extra-vars to ansible-playbook in order to simplify deployments and make them more flexible. Passing more complex data types to Ansible playbooks requires JSON syntax, and that is the topic of this post. Unlike last time, though, I’ll pass the arguments directly to the playbook rather than by means of a JSON file, so both methods of passing extra variables are covered.
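
To make the idea concrete, here is a sketch of what such an invocation might look like; the playbook name and variable names are made up for illustration. When the string passed to --extra-vars starts with a curly brace (or bracket), ansible-playbook parses it as JSON, which is what lets lists and dictionaries come through intact:

    $ ansible-playbook deploy.yml \
        --extra-vars '{"data_disks": ["/dev/sdb", "/dev/sdc"], "db": {"name": "ORCL", "memory_gb": 8}}'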

Ansible tips’n’tricks: defining --extra-vars as JSON

While I’m continuing to learn more about Ansible I noticed a nifty little thing I wanted to share: it is possible to specify --extra-vars for an Ansible playbook as a JSON document in addition to the space-separated list of key=value pairs I have used so often. This can come in handy if you have many parameters in your play and want to test changing them without having to modify the defaults stored in group_vars/*.yml or wherever else you keep them. If you do change your global variables, your version control system will almost certainly flag the modified file and want to commit it next time, which might not be exactly what you had in mind.
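
For comparison, a quick sketch of the notations involved (playbook and variable names are placeholders). Note that key=value pairs always arrive as strings, whereas the JSON form preserves booleans and numbers, and a JSON file can be passed with the @ prefix:

    # space-separated key=value pairs (values are treated as strings)
    $ ansible-playbook site.yml --extra-vars "oracle_home=/u01/app/oracle install_rdbms=true"

    # the same variables as an inline JSON document
    $ ansible-playbook site.yml --extra-vars '{"oracle_home": "/u01/app/oracle", "install_rdbms": true}'

    # or read the JSON document from a file
    $ ansible-playbook site.yml --extra-vars @vars.json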

For later reference, this article was composed using Ubuntu 18.04.4 LTS with all updates up to February 3rd, 2020.

Vagrant tips’n’tricks: changing /etc/hosts automatically for Oracle Universal Installer

Oracle Universal Installer, or OUI for short, doesn’t like it at all if the hostname resolves to an IP address in the 127.0.0.0/8 range. At best it complains, at worst it starts installing and configuring software only to abort and bury the real cause deep in the logs.

I am a great fan of HashiCorp’s Vagrant, as you might have guessed from reading some of my previous articles, and as such I wanted a scripted solution for changing the hostname to something more sensible before I begin provisioning software. I should probably add that I’m using my own base boxes; the techniques in this post should apply equally to other boxes.
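
As a rough sketch of the sort of thing I mean (the box name, hostname and IP address are placeholders, not the exact code from this post), a Vagrantfile can set the hostname and use a shell provisioner to rewrite /etc/hosts so the name resolves to a routable address rather than a loopback one:

    Vagrant.configure("2") do |config|
      config.vm.box      = "myorg/oracle-linux-7"     # custom base box (placeholder)
      config.vm.hostname = "server1.example.com"
      config.vm.network "private_network", ip: "192.168.56.10"

      # drop any loopback mapping for the hostname and pin it to the private IP
      config.vm.provision "shell", inline: <<-SHELL
        sed -i '/server1/d' /etc/hosts
        echo "192.168.56.10 server1.example.com server1" >> /etc/hosts
      SHELL
    end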

Automating SQL and PL/SQL Deployments using Liquibase


You’ll have heard me banging on about automation, but one subject that’s been conspicuous by its absence is the automation of SQL and PL/SQL deployments…

Tips’n’tricks: finding the (injected) private key pair used in Vagrant boxes

In an earlier article I described how you could use SSH keys to log into a Vagrant box created by the VirtualBox provider. The previous post emphasised my preference for using custom Vagrant boxes and my own SSH keys.

Nevertheless there are occasions when you can’t create your own Vagrant box and have to resort to Vagrant’s insecure-key-pair-swap procedure instead. If you are unsure about these security-related discussion points, review the documentation about creating your own Vagrant boxes (section “Default User Settings”) for some additional background information.
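
For reference, a few commands that help locate the keys involved (the machine name “default” and the virtualbox provider directory are assumptions that depend on your project):

    # the well-known insecure key pair ships with every Vagrant installation
    ls ~/.vagrant.d/insecure_private_key

    # after the swap, the per-machine replacement key lives inside the project directory
    ls .vagrant/machines/default/virtualbox/private_key

    # vagrant ssh-config shows which IdentityFile is actually in use
    vagrant ssh-config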

Tips’n’tricks: understanding “too many authentication failures” in SSH

VirtualBox VMs powered by Vagrant use SSH key authentication so you don’t have to provide a password each time vagrant up does its magic. Provisioning tools you run as part of the vagrant up command also rely on SSH key-based authentication to work properly. This is documented in the official Vagrant documentation set.

I don’t want to use unknown SSH keys with my own Vagrant boxes as a matter of principle. Whenever I create a new custom box I resort to a dedicated SSH key I use just for this purpose. This avoids the trouble with Vagrant’s “insecure key pair”; all I need to do is add config.ssh.private_key_path = "/path/to/key" to the Vagrantfile.
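
A minimal Vagrantfile sketch showing that setting in context; the box name and key path are placeholders, and config.ssh.insert_key = false is a companion setting I’m assuming here to stop Vagrant from swapping the key on first boot, not necessarily part of the original setup:

    Vagrant.configure("2") do |config|
      config.vm.box = "myorg/oracle-linux-7"        # custom base box (placeholder)

      # authenticate with a dedicated key instead of Vagrant's insecure key pair
      config.ssh.private_key_path = "/path/to/key"
      config.ssh.insert_key       = false
    end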

Ansible Tips’n’tricks: rebooting Vagrant boxes after a kernel upgrade

Occasionally I have to reboot my Vagrant boxes after kernel updates have been installed as part of an Ansible playbook during the “vagrant up” command execution.

I create my own Vagrant base boxes because that’s more convenient for me than pulling them from Vagrant Cloud. However they, too, need TLC and updates. Long story short, I run a yum upgrade via Ansible after spinning up Vagrant boxes to have access to the latest and greatest (and hopefully most secure) software.
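
The upgrade step itself is small; a sketch of what it might look like (the task name is mine, and privilege escalation details depend on your play):

    - name: upgrade all installed packages
      yum:
        name: '*'
        state: latest
      become: true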

To stay in line with Vagrant’s philosophy, Vagrant VMs are lab and playground environments I create quickly. And I can dispose of them equally quickly, because all that I’m doing is controlled via code. This isn’t something you’d do with Enterprise installations!

Vagrant and Ansible for lab VMs!

Now how do you reboot a Vagrant-controlled VM in Ansible? Here is how I’m doing this for VirtualBox 6.0.14 and Vagrant 2.2.6. Ubuntu 18.04.3 comes with Ansible 2.5.1.
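
Since Ansible 2.5 predates the built-in reboot module (which arrived in 2.7), one common pattern is an asynchronous shutdown followed by wait_for_connection. The sketch below illustrates that pattern rather than reproducing the exact tasks from the post; kernel_updated is a hypothetical flag you would set in the upgrade step:

    - name: reboot the box after a kernel update
      shell: sleep 2 && /sbin/shutdown -r now "kernel update"
      async: 1
      poll: 0
      become: true
      when: kernel_updated | default(false)

    - name: wait for the box to come back
      wait_for_connection:
        delay: 10
        timeout: 300
      when: kernel_updated | default(false)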

APEX 19.2 : Vagrant and Docker Builds

I’m sure anyone who cares knows that APEX 19.2 was officially released on Friday. I did an upgrade of one of our development instances straight away and it worked fine. It’s subsequently gone to a bunch of other development instances. I’ll be pushing to get this out to production as quickly as possible.

Over the weekend I worked through a bunch of my GitHub stuff.

Ansible tips’n’tricks: executing a loop conditionally

When writing playbooks, I occasionally add optional tasks. These tasks are only executed if a corresponding configuration variable is defined. I usually define configuration variables either in group_vars/* or alternatively in the role’s roleName/defaults/ directory.

The “when” keyword can be used to test for the presence of a variable and execute a task if the condition evaluates to “true”. However this isn’t always straightforward to me, and recently I stumbled across some interesting behaviour that I found worth mentioning. I would like to point out that I’m merely an Ansible enthusiast, and by no means a pro. In case there is a better way to do this, please let me know and I’ll update the post :)

Before showing you my code, I’d like to add a little bit of detail here in case someone finds this post via a search engine:
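
To give a flavour of the pattern in the meantime (a purely illustrative sketch, not the code from the post; extra_packages is a hypothetical optional list), the default filter keeps the loop itself from failing when the variable isn’t defined at all:

    - name: install optional packages only if the list is defined
      yum:
        name: "{{ item }}"
        state: present
      loop: "{{ extra_packages | default([]) }}"
      when: extra_packages is defined
      become: true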

Ansible tips’n’tricks: checking if a systemd service is running

I have been working on an Ansible playbook to update Oracle’s Tracefile Analyser (TFA). If you have been following this blog over the past few months you might remember that I’m a great fan of the tool! Using Ansible makes my life a lot easier: when deploying a new system I can ensure that I’m also installing TFA. Under normal circumstances, TFA should be present when the (initial) deployment playbook finishes. At least in theory.

As we know, life is what happens when you’re making other plans, and I’d rather check whether TFA is installed/configured/running before trying to upgrade it. The command to upgrade TFA is different from the command I use to deploy it.

I have considered quite a few different ways to do this but in the end decided to check for the oracle-tfa service: if the service is present, TFA must be as well. There are probably other ways, maybe better ones, but this one works for me.
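
As an illustration of that kind of check (a sketch rather than the exact tasks from the post), asking systemd directly and registering the result keeps the play going whether or not the service is there:

    - name: check whether the oracle-tfa service is active
      command: systemctl is-active oracle-tfa
      register: tfa_status
      changed_when: false
      failed_when: false        # a non-zero exit code just means "not running"

    - name: only proceed with the upgrade when TFA is already running
      debug:
        msg: "TFA is running, proceeding with the upgrade"
      when: tfa_status.rc == 0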