Oakies Blog Aggregator

Funny "ORA-01017: invalid username/password; logon denied" during DataGuard switchover operation from DG broker

Recently one of my customers encountered a problem when they tried to perform a switchover from the Data Guard broker command-line interface (DGMGRL).

$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - db12c

  Protection Mode: MaxPerformance
  Members:
  db12c  - Primary database
    sdb12c - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> validate database sdb12c;

  Database Role:     Physical standby database
  Primary Database:  db12c

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    db12c:   On
    sdb12c:  Off

DGMGRL> switchover to sdb12c;
Performing switchover NOW, please wait...
Operation requires a connection to instance "sdb12c" on database "sdb12c"
Connecting to instance "sdb12c"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

        connect to instance "sdb12c" of database "sdb12c"

This output is from a test environment where we could easily reproduce the problem.
The installation is a single-tenant 12.1 database with ASM and Oracle Restart, so we must also take into account that the databases are started and stopped through Oracle Clusterware.

Investigation:

1.) Check whether the password files on the primary and the standby are the same - shut down the standby database and copy the password file from the primary.
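
A minimal sketch of that copy, assuming a file-system password file in the default $ORACLE_HOME/dbs location (the file names and the host name below are placeholders; if the password file is kept in ASM, asmcmd pwcopy would be used instead). On the standby host:

$ srvctl stop database -d sdb12c
$ scp primary-host:$ORACLE_HOME/dbs/orapwdb12c $ORACLE_HOME/dbs/orapwsdb12c
$ srvctl start database -d sdb12c -o mount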

Let's check what we see in V$PWFILE_USERS on the primary database:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_ADM_JSENEGACNIK            FALSE TRUE  FALSE FALSE FALSE FALSE          1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
JOC                            TRUE  FALSE FALSE FALSE FALSE FALSE          3

After restarting the standby database to MOUNT state with the refreshed password file, we get the following:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          1
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_ADM_JSENEGACNIK            FALSE TRUE  FALSE FALSE FALSE FALSE          1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          1

The standby database presents the password file users slightly differently.
The important column to interpret here is CON_ID, which per the Oracle documentation has the following meaning:

    The ID of the container to which the data pertains. Possible values include:

        0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs.

        1: This value is used for rows containing data that pertain to only the root

        n: Where n is the applicable container ID for the rows containing data

As the standby database is only in MOUNT state, the value 1 in CON_ID is normal: the database is not yet open in read/write mode and therefore no operations on pluggable databases can be performed.

The message displayed when the error occurred was:

Operation requires a connection to instance "sdb12c" on database "sdb12c"
Connecting to instance "sdb12c"...
ORA-01017: invalid username/password; logon denied

This text tells us that the DG broker has a connection problem when it tries to connect to the standby database.
We are connected as the SYSDG user with the SYSDG privilege in the DG broker on the primary site, and we get the error when the broker tries to connect to the standby (remote) instance. As we have a high level of privileges and the parameter remote_login_passwordfile is set to EXCLUSIVE, the problem obviously lies in how we connect to DGMGRL.

Therefore, let us try to connect remotely with username and password from the DGMGRL utility on the primary, using the connection string (alias) of the standby database:

$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sysdg@sdb12c
Password:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
 
Oops, we get the same error. So when the SYSDG user tries to connect to the standby database via the password file, the username/password combination is wrong. We can connect as the SYSDG user to the DGMGRL utility using only "/" because we are authenticated via OS group authorization, but for stopping/starting remote databases via Oracle Clusterware we need to be authenticated via the password file.

From the 12c manuals, it looks like the only way to change the password for the SYSDG user in the password file is by revoking and re-granting the SYSDG privilege:

1. Find all users who have been granted the SYSDG privilege.

    SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSDG='TRUE';

2. Revoke and regrant the SYSDG privilege to these users.

    REVOKE SYSDG FROM non-SYS-user;
    GRANT SYSDG TO non-SYS-user;

Actually, it is enough to change the password via the "ALTER USER" command. The timestamp of the password file at the OS level reflects this change, so the password file is obviously updated with the new password for the SYSDG user.

SQL> alter user sysdg identified by Secret$124;
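
You can confirm this at the OS level by checking the modification time of the password file (the path below is the default file-system location and is an assumption here; your password file may live elsewhere, for example in ASM):

$ ls -l $ORACLE_HOME/dbs/orapwdb12c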

I have intentionally put a special character in the password in order to show a problem that occurs when you try to use such a password from the command line.

$ dgmgrl sys/Secret$124@db12c
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
DGMGRL>

Again we get an error, but this time because of the "$" character in the password: the shell expands "$124" before dgmgrl ever sees the connect string. Escaping it did not resolve the problem either, nor did putting the connection parameter into double quotes (the shell still expands "$" inside double quotes).

$ dgmgrl "sys/Secret$124@db12c"
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
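
A quick way to see why both attempts fail is to let the shell print what it actually passes to dgmgrl; assuming the positional parameter $1 is not set in the current shell, $124 expands to just 24, inside double quotes as well:

$ echo sys/Secret$124@db12c
sys/Secret24@db12c
$ echo "sys/Secret$124@db12c"
sys/Secret24@db12c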

So the right way to test the connection is:

$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sysdg@sdb12c
Password:
Connected as SYSDG.
DGMGRL>
DGMGRL> connect sysdg@db12c
Password:
Connected as SYSDG.

Now let us perform a switchover to the standby and afterwards back to the old primary:

$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sysdg@db12c
Password:
Connected as SYSDG.
DGMGRL> validate database sdb12c;

  Database Role:     Physical standby database
  Primary Database:  db12c

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    db12c:   On
    sdb12c:  Off

DGMGRL> switchover to sdb12c;
Performing switchover NOW, please wait...
Operation requires a connection to instance "sdb12c" on database "sdb12c"
Connecting to instance "sdb12c"...
Connected as SYSDG.
New primary database "sdb12c" is opening...
Oracle Clusterware is restarting database "db12c" ...
Switchover succeeded, new primary is "sdb12c"

DGMGRL>  validate database db12c;

  Database Role:     Physical standby database
  Primary Database:  sdb12c

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    sdb12c:  Off
    db12c:   On

DGMGRL> switchover to db12c;
Performing switchover NOW, please wait...
Operation requires a connection to instance "db12c" on database "db12c"
Connecting to instance "db12c"...
Connected as SYSDG.
New primary database "db12c" is opening...
Oracle Clusterware is restarting database "sdb12c" ...
Switchover succeeded, new primary is "db12c"

On the My Oracle Support site there are several notes regarding the ORA-01017 error, so it is a good idea to check them in case your problem is not the one described above.

It is also mandatory to test the static connect identifiers ("StaticConnectIdentifier") which are part of the DG configuration. Do this for both the primary and the standby database.

DGMGRL> show database verbose db12c;

Database - db12c

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    db12c

  Properties:
    DGConnectIdentifier             = 'db12c'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'MANUAL'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.110)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=db12c_DGMGRL)(INSTANCE_NAME=db12c)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> connect sysdg@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.110)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=db12c_DGMGRL)(INSTANCE_NAME=db12c)(SERVER=DEDICATED)))';
Password:
Connected as SYSDG.

DGMGRL> connect sysdg@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.110)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=db12c_DGMGRL)(INSTANCE_NAME=db12c)(SERVER=DEDICATED)))"
Password:
Connected as SYSDG.

It is mandatory to enclose the value of StaticConnectIdentifier in single or double quotes.

Of course, you can use the SYS user to perform the switchover operation. Again, connect via SQL*Net (as a remote connection) to be authenticated by the password file. Needless to say, you have to have the right setup in listener.ora and tnsnames.ora for both databases. I found that the easiest way is to have a static registration in the listener.ora file:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = olinux)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = KEY1521))
    )
  )

SID_LIST_LISTENER =
 (SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = db12c_DGMGRL)
    (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    (SID_NAME = db12c)
  )
  (SID_DESC =
    (GLOBAL_DBNAME = db12c)
    (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    (SID_NAME = db12c)
  )
 )

The setup is similar for the standby database. Then in tnsnames.ora:

DB12C =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = olinux)(PORT = 1521 ))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = db12c)
    )
  )

DB12C_DGMGRL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = olinux)(PORT = 1521 ))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = db12c_DGMGRL)
    )
  )

Conclusion:

Although one might think that the "validate database" command checks everything, it actually does not. Oracle could also check connectivity during "validate database", but it does not. So it is a big surprise when one encounters the ORA-01017 error due to this incomplete check performed by the DG broker.

Auditing Oracle database stopping and starting using the ELK stack

This blog post is about two things: one, how you can monitor who is bringing your database up and down (there is a twist at the end!), and two, how you can very conveniently do that with aggregated logs in a browser using a tool called ‘Kibana’, which is the K in ELK.

What is the ‘ELK stack’?
The ELK stack gets its name from Elasticsearch, Logstash and Kibana.
– Elasticsearch is an open source search engine based on Apache Lucene, which provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
– Logstash is a fully configurable open source data processing pipeline that can receive data from multiple sources simultaneously, transform it and output it based on the output plugin, which is the elasticsearch plugin in this blogpost but could be anything from STDOUT, a Unix pipe, a file, a file in CSV, HTTP, email, IRC, Jira, graphite, kafka, mongodb, nagios, S3, SolR, … really whatever you want.
– Kibana is an open source data visualisation plugin for Elasticsearch.
At first glance, Kibana looks quite a lot like the Splunk interface.

Installing the ELK stack.
Installing the ELK stack in a basic way is easy. In this blogpost I will install everything on the same host, everything being the ELK stack and an Oracle database installation. In reality you should have a log gatherer on every host (called ‘filebeat’) and a dedicated host which runs the rest of the stack (logstash, elasticsearch and kibana). The below install actions were executed on a Linux 64 bit host running Oracle Linux 6.8.
In order to make the installation really easy, I use the yum repository of the Elastic company; this is how to set that up (all done as root, ‘#’ indicates the root prompt):

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
# vi /etc/yum.repos.d/elastic.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install elasticsearch:

# yum install java-1.8.0-openjdk
# yum install elasticsearch
# chkconfig --add elasticsearch
# service elasticsearch start

Install logstash:

# yum install logstash

Configure logstash input and output:

# vi /etc/logstash/conf.d/input.conf
input {
  beats {
    port => 5044
  }
}
# vi /etc/logstash/conf.d/output.conf
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Verify the logstash config files:

# sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

If you see the ‘Configuration OK’ message, it means logstash could interpret the configuration files. It does not mean it will all work as desired; there could still be runtime issues.
Now let’s start logstash. Logstash uses upstart (meaning a startup script in /etc/init) instead of the legacy startup mechanism using the chkconfig and service utilities.

# initctl start logstash

The last part of the data pipeline is ‘filebeat’. There are multiple possible input products; in this blogpost I use ‘filebeat’, which keeps track of logfiles.

# yum install filebeat
# chkconfig --add filebeat

We are going to look into linux and oracle auditing. So we need to keep track of a couple of files:
– /var/log/secure: this is the default linux logfile which contains all kinds of authentication messages, as defined in /etc/rsyslog.conf (authpriv.* /var/log/secure).
– /u01/app/oracle/admin/*/adump/*.aud: this is the default place where the oracle database stores its audit files. These audit files provide what is called ‘mandatory auditing’, which includes at least connections to the instance with administrator privilege, database startup and database shutdown. The default is a normal text-based logfile; it can be set to XML.
– /var/log/audit/audit.log: this is the logfile of the linux kernel based audit facility. This is actually a lesser known hidden gem in Linux, and provides audit information from the Linux kernel.

These files need to be configured in filebeat, in the file /etc/filebeat/filebeat.yml. As the extension indicates, this file is organised in YAML syntax. The best way to configure it is to move the original file aside and create your own file with the desired configuration. First of all we add the output, which is logstash in our case. Please mind that the default configuration of filebeat is direct output to elasticsearch, which would mean we have no option to enrich the data!

# mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.orig
# vi /etc/filebeat/filebeat.yml
output.logstash:
  hosts: ["localhost:5044"]

Please mind the two spaces in front of ‘hosts’, which are mandatory in a YAML document!
Next up we add the files to monitor in the configuration file. The linux based logfiles are easy:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure
  document_type: secure

- input_type: log
  paths:
    - /var/log/audit/audit.log
  document_type: audit

One thing to notice is that a type is set for each file (which is really just a label for the file filebeat monitors); this makes it possible to find data from these specific files later on. Now the Oracle audit file:

- input_type: log
  paths:
    - /u01/app/oracle/admin/*/adump/*.aud
  document_type: oracle_audit
  multiline:
    pattern: '^[A-Za-z]{3} [A-Za-z]{3} [0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} [0-9]{4}'
    negate: true
    match: after

This looks a bit more complicated. The reason for the complication is the multiline specification. An Oracle database audit file contains a timestamp, after which the audit data is written; it looks like this:

Thu Jan 19 13:44:12 2017 +00:00
LENGTH : '198'
ACTION :[49] 'ALTER DATABASE OPEN /* db agent *//* {0:0:476} */'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'
DBID:[10] '2622783786'

The important things here: the ‘pattern’ keyword matches the timestamp line (you can see the pattern matches the timestamp in the example above), and all the data that follows needs to be processed together with it as a single record written over multiple lines. ‘negate: true’ means that anything that does not fit the pattern is appended to this piece of data, and ‘match: after’ means that it is appended after the line matching the pattern.

Now that filebeat is setup, we can start the filebeat daemon:

# service filebeat start

The last component is kibana:

# yum install kibana
# chkconfig --add kibana
# service kibana start

Now that we’ve set the entire pipeline up, the next thing to do is to configure logstash to enrich the data. Here’s how it’s done for the Oracle database audit file:

# vi /etc/logstash/conf.d/oracle-audit.conf
filter {
  if [type] == "oracle_audit" {
    grok {
      match => { "message" => "^%{DAY} %{MONTH:M} %{MONTHDAY:d} %{HOUR:h}:%{MINUTE:m}:%{SECOND:s} %{YEAR:y}" }
      add_tag => [ "grok", "oracle_audit" ]
    }
    grok {
      match => { "message" => "ACTION :\[[0-9]*\] '(?<ora_audit_action>.*)'.*DATABASE USER:\[[0-9]*\] '(?<ora_audit_dbuser>.*)'.*PRIVILEGE :\[[0-9]*\] '(?<ora_audit_priv>.*)'.*CLIENT USER:\[[0-9]*\] '(?<ora_audit_osuser>.*)'.*CLIENT TERMINAL:\[[0-9]*\] '(?<ora_audit_term>.*)'.*STATUS:\[[0-9]*\] '(?<ora_audit_status>.*)'.*DBID:\[[0-9]*\] '(?<ora_audit_dbid>.*)'" }
    }
    grok {
      match => { "source" => [ ".*/[a-zA-Z0-9_#$]*_[a-z0-9]*_(?<ora_audit_derived_pid>[0-9]*)_[0-9]*\.aud" ] }
    }
    mutate {
      add_field => { "ts" => "%{y}-%{M}-%{d} %{h}:%{m}:%{s}" }
    }
    date {
      locale => "en"
      match => [ "ts", "YYYY-MMM-dd HH:mm:ss" ]
    }
    mutate {
      remove_field => [ "ts", "y", "M", "d", "h", "m", "s" ]
    }
  }
}

It’s beyond the scope of this article to go through every detail, but as you can see we apply a filter. Everything in this filter takes place for the type “oracle_audit”, which is set by filebeat. The next thing we encounter a couple of times is ‘grok’. The term grok comes from the Robert Heinlein science-fiction novel ‘Stranger in a Strange Land’. Essentially, a grok in logstash means you specify a pattern, and the associated actions are applied if the specified pattern matches. The first grok looks for the date pattern in the field ‘message’, creates extra fields from it (y, M, d, h, m, s, the names after the colons), and adds a tag (a word in the tags field of the record that is created). The second grok also looks in the ‘message’ field; it matches literal text (ACTION, for example), some other characters, and then a construct like (?<ora_audit_action>.*). This is a custom pattern, in which the field name to be created is placed between < and > and is followed by a pattern. This grok line (including all the patterns) creates fields for all the Oracle audit fields in the audit file! The next grok picks up the PID from the filename of the logfile (the filename is in the field ‘source’). The two mutates create and later remove a temporary field ts that holds the parsed date, and the date filter sets the event timestamp of the data flowing through logstash to the date and time in the ts field, instead of the time filebeat picked up the data and sent it through logstash. Please mind that if you add (or change) configuration in a logstash configuration file, you need to restart logstash.

We are all set now! Last words on this configuration: kibana and elasticsearch by default do not require authentication. Do not expose the ports of these products to the internet! I am using a tunnel to the kibana website, which runs on port 5601. It’s very easy to ssh into the machine running the ELK stack using ssh user@machine -L 5601:localhost:5601, which creates a port on localhost:5601 on my machine at home (-L = local) whose traffic is tunnelled to localhost:5601 on the remote machine (the localhost in the ssh example is an address as seen from the machine you ssh into; it could also be another server which is only visible from that machine).
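
For example (the user and host names are placeholders), after which Kibana is reachable at http://localhost:5601 on the local machine:

$ ssh user@machine -L 5601:localhost:5601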

First let’s log in to the machine and see what information is revealed in /var/log/secure:
[Screenshot: kibana-secure-login]
You get this screen when you go to kibana at port 5601, enter ‘type: secure’ in the search bar to display data of the type secure (which is what is set with document_type: secure in filebeat.yml), and log in to the machine where filebeat is watching the /var/log/secure file. As you can see, you get two lines from the ssh daemon: one indicating something’s done with pam (pam_unix), and one line which indicates it authenticated via a public key for user ops from an ip address (which is anonymised) at port 39282 via ssh2.

With a lot of cloud providers you get a user which has public key authentication setup (which you saw above), after which you need to sudo to for example the oracle user. In a lot of companies, you get a personalised account to log on to servers, after which you need to sudo to oracle. In both cases you need to use sudo to become the user that you need to administer, for example oracle. This is what sudo generates in the /var/log/secure file:
[Screenshot: kibana-secure-sudo]
The secure log shows that sudo was invoked by the user opc on TTY pts/1, and that the command executed via sudo was ‘/bin/su - oracle’.

Now that I have become oracle using sudo, I set the environment of my database using oraenv and started up a database. Now go over to kibana and issue a search for ‘type: oracle_audit’. This is how that looks:
[Screenshot: kibana-oracle_audit]
Now if you look at what the audit record provides, the only things that are useful for investigating who stopped or started a database are ACTION and CLIENT TERMINAL (I assume the database is stopped and started by the ‘oracle’ user). Now change the ‘selected fields’ in kibana: add the (dynamically created!) fields ora_audit_action, ora_audit_term and ora_audit_derived_pid, and remove message. This is how that looks:
[Screenshot: kibana-oracle-audit-startup]
The important thing to look for here is the ora_audit_action ‘startup’; then look at the ora_audit_derived_pid, and two rows down we see that ‘pts/1’ was the terminal on which this was entered.

Now that we know the terminal, we can additionally search the message field of the secure type. Enter ‘type: oracle_audit OR (type: secure AND message: “*pts/1*”)’ in the search bar.
[Screenshot: kibana-secure-oracle_audit]
Okay, this works, but it’s far from perfect. In fact, it only works if the session doing the sudo is the only session with that username; if there is more than one session, it could be any of them doing the sudo, since there is nothing more to go on than the username. This also means that if there is a direct logon to the oracle user, there is no way to tie a session to a TTY, and thus database startup and shutdown are completely anonymous: there’s no way to link a specific session to that action beyond the oracle user and a TTY, which cannot be linked to anything like an ip address.

Is there a better way? Yes! We can also use the linux, kernel based, auditing service, which is on by default. This service keeps its log file at /var/log/audit/audit.log and provides far more granular auditing events than the /var/log/secure log. Linux audit generates a lot of diverse types of rows, so it’s actually not easy to grok them, but in order to understand which session executed a startup or shutdown, the only audit row type that is important for this specific use case is ‘CRED_ACQ’. The grok for this type looks like this:

# vi /etc/logstash/conf.d/linux-audit.conf
filter {
  if [type] == "audit" {
    grok {
        match => { "message" => "type=%{WORD:audit_type} msg=audit\(%{NUMBER:audit_epoch}:%{NUMBER:audit_counter}\): pid=%{NUMBER:audit_pid} uid=%{NUMBER:audit_uid} auid=%{NUMBER:audit_auid} ses=%{NUMBER:audit_ses} msg='op=%{NOTSPACE:audit_op} ((acct=\"%{GREEDYDATA:audit_acct}\")|(id=%{NUMBER:audit_id})|acct=%{BASE16NUM:audit_acct}) exe=\"%{GREEDYDATA:audit_exe}\" hostname=%{NOTSPACE:audit_hostname} addr=%{NOTSPACE:audit_addr} terminal=%{NOTSPACE:audit_terminal} res=%{NOTSPACE:audit_res}'" }
        add_tag => [ "grok", "audit" ]
    }
    date {
      locale => "en"
      match => [ "audit_epoch", "UNIX" ]
    }
  }
}

This grok matches the CRED_ACQ audit type, which we will use to trace back the session via the audit log. Another nicety of this logstash configuration: the audit records carry an epoch timestamp, which logstash can translate back to a human-readable timestamp. Once this is in place, log in again and use sudo to switch to oracle (or log in directly as oracle, it doesn’t really matter that much now!), and search in kibana for: ‘type: oracle_audit OR (type: audit AND audit_type: CRED_ACQ)’. Now get the relevant fields: remove ‘message’, and add audit_hostname, audit_acct, audit_ses, audit_terminal, ora_audit_term, ora_audit_derived_pid, ora_audit_action. This probably returns a lot of rows; now scroll (“back in time”) and search for the startup or shutdown command, and then follow the trail:
[Screenshot: kibana-oracle_audit-audit-raw]
Startup points to (oracle server process) PID 17748, which was instantiated by a session using pts/1 (two rows down). One row further down we see the audit information which shows pts/1, connected to audit_ses 4230. The audit_ses number sticks with a session, regardless of using sudo. If you follow number 4230 down, you see multiple rows with audit_ses 4230, some of them with root, which is typical for sudo switching from one user to another. The final row shows the user logging in with its ip address. In other words: using the linux kernel audit facility, you can get all available information!

Okay, all happy now? Are you sure? Now for the twist!

Whenever you use RAC, or ASM, or both, or you are using the single-instance grid infrastructure as a framework to track your listener(s) and database(s) and start and stop them automatically, you can still stop and start an instance directly using sqlplus, but in most cases you will be using the grid infrastructure crsctl or srvctl commands. When the grid infrastructure crsctl and srvctl commands are used, this is how the Oracle database audit information looks:
[Screenshot: kibana-oracle_audit-crs-shutdown]
As you can see, because the clusterware brought the database down, there is no terminal associated with the shutdown. So the above-mentioned way of first searching for startup and shutdown in the oracle audit information, finding the associated terminal, and then tracing it through the audit records can NOT be used whenever the Oracle clusterware is used, because a grid infrastructure daemon is actually stopping and starting the database, and the grid infrastructure does not keep any information (that I am aware of) about which client invoked a grid infrastructure command. I guess a lot of auditors will be very unhappy about this.

Now the good news: you can solve this issue very easily. The downside is that it requires additional configuration of the linux auditing. The solution is to put an ‘execution watch’ on srvctl and crsctl; this is how that is done:

# auditctl -w /u01/app/12.1.0.2/grid/bin/srvctl -p x -k oracrs
# auditctl -w /u01/app/12.1.0.2/grid/bin/crsctl -p x -k oracrs
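
To verify that the watches are in place, and to inspect matching audit events outside of Kibana, you could use the standard audit tools (the key ‘oracrs’ is the one set above):

# auditctl -l
# ausearch -k oracrs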

In order to validate that this works, I started a database using srvctl and searched for: ‘(type: oracle_audit AND ora_audit_action: STARTUP) OR (type: audit AND message: key=\”oracrs\”)’. This is how that looks:
[Screenshot: kibana-oracle_audit-audit-watch]
As you can see, there’s the Oracle database record indicating the startup of the database, and a little while back in time there’s the linux audit row indicating the execution of the srvctl executable. Once you are at that point, you can use the earlier mentioned audit_ses number to trace the session execution, including the sudo and the ip address at logon time.

Tagged: audit, auditctl, elasticsearch, ELK, filebeat, grok, kibana, linux, linux audit, logging, logstash, oracle

plsql promises - chaining and thenables

So the last blog entry, Do or do not., on the promises library covered the basics of what promises can give us.

This entry will cover more about chaining of promises and how you can easily run your code in parallel
while still making sure the pieces run in the right order when needed.

Promises by definition have a method called [then]. What this method does is take the value of the promise
once it is fulfilled and use that as input to a new function defined by us. The input to [then] is
either a function to run when fulfilled, a function to run if rejected, or both. Although not in the standard,
more and more libraries implement a method called [done]. It has the same input options as the [then]
method but it does not return a new promise, so only use this method when you do not care about the result
of the chained call. Since we are in Oracle I cannot name the method [then], because it is a reserved word,
so I had to add an _f to the name and it becomes [then_f].

So here is the first example. We create a promise based on the function p_multiplier, which takes a number and multiplies
it by 2. We then say that if this promise is fulfilled we want to execute the function called p_multi_convert. This function
takes the previous promise result and returns a text that tells us whether the output is less than or more than 50.
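
For reference, here is a minimal sketch of what those two functions could look like; the exact parameter and return types the promise library expects are an assumption here:

create or replace function p_multiplier(p_val number) return number as
begin
  -- multiply the input by 2, as described above
  return p_val * 2;
end;
/

create or replace function p_multi_convert(p_val number) return varchar2 as
begin
  -- turn the previous promise result into a text
  if p_val < 50 then
    return 'Value ' || p_val || ' is less than 50';
  else
    return 'Value ' || p_val || ' is 50 or more';
  end if;
end;
/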

declare
  test_promise promise;
begin
  test_promise := promise('p_multiplier', 42);
  test_promise := test_promise.then_f('p_multi_convert');
  promises_ninja.promise_wait(test_promise);
  dbms_output.put_line('Value of test_promise2 is: ' || test_promise.getanyvalue);
end;
/

I’m speaking at Advanced Spark Meetup & attending Deep Learning Workshop in San Francisco

In case you are interested in the “New World” and happen to be in the Bay Area this week (19 & 21 Jan 2017), there are two interesting events that you might want to attend (I’ll speak at one and attend the other):

Advanced Spark and TensorFlow Meetup

I’m speaking at the advanced Apache Spark meetup, showing different ways of profiling applications with the main focus on CPU efficiency. This is a free meetup in San Francisco hosted at AdRoll.

Putting Deep Learning into Production Workshop

This 1-day workshop is about the practical aspects of putting deep learning models into production use in enterprises. It’s a very interesting topic for me as enterprise-grade production-ready machine learning requires much more than just developing a model (just like putting any software in production requires much more than just writing it). “Boring” things like reliability, performance, making input data available for the engine – and presenting the results to the rest of the enterprise come to mind first (the last parts are where Gluent operates :)

Anyway, the speaker list is impressive and I signed up! I told the organizers that I’d promote the event and they even offered a 25% discount code (use GLUENT as the discount code ;-)

This will be fun!

Putting Deep Learning into Production

Saturday, Jan 21, 2017, 9:30 AM - 5 PM
Capital One, 201 3rd St, 5th Floor, San Francisco, CA
RSVP: https://conf.startup.ml/
15% off discount code before New Year's Eve: FREGLY

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Ozar Professional Salary Data- Post 1


So Brent Ozar’s group of geeks did something that I highly support - a survey of data professionals’ salaries.  Anyone who knows me knows I live by data and I’m all about transparency.  The data from the survey is available for download from their site, and they’re encouraging app developers to download the Excel spreadsheet of the raw data and work with it.

Now I’m a bit busy with work as the Technical Intelligence Manager at Delphix and a little conference that I’m the director for, called RMOUG Training Days, which is less than a month from now, but I couldn’t resist the temptation to load the data into one of my XE databases on a local VM and play with it a bit...just a bit.

It was easy to save the data as a CSV and use SQL*Loader to dump it into Oracle XE.  I could have used BCP and loaded it into SQL Server, too (I know, I’m old school), but I had a quick VM with XE on it, so I just grabbed that to give me a database to query from.  I did edit the CSV: I removed the “looking” column and took out the headers.  If you choose to keep them, make sure you add the column back into the control file and update “options ( skip=0 )” to “options ( skip=1 )” so the column headers are not loaded as a row in the table.

The control file to load the data has the following syntax:

--Control file for data --
options ( skip=0 )
load data
 infile 'salary.csv'
 into table salary_base
fields terminated by ','
optionally enclosed by '"'
 (TIMEDT DATE "MM-DD-YYYY HH24:MI:SS"
 , SALARYUSD
 , PRIMARYDB
 , YEARSWDB
 , OTHERDB
 , EMPSTATUS
 , JOBTITLE
 , SUPERVISE
 , YEARSONJOB
 , TEAMCNT
 , DBSERVERS
 , EDUCATION
 , TECHDEGREE
 , CERTIFICATIONS
 , HOURSWEEKLY
 , DAYSTELECOMMUTE
 , EMPLOYMENTSECTOR)

and the table creation is the following:

create table SALARY_BASE(TIMEDT TIMESTAMP not null,
SALARYUSD NUMBER not null,
COUNTRY VARCHAR(40),
PRIMARYDB VARCHAR(35),
YEARSWDB NUMBER,
OTHERDB VARCHAR(150),
EMPSTATUS VARCHAR(100),
JOBTITLE VARCHAR(70),
SUPERVISE VARCHAR(80),
YEARSONJOB NUMBER,
TEAMCNT VARCHAR(15),
DBSERVERS VARCHAR(50),
EDUCATION VARCHAR(50),
TECHDEGREE VARCHAR(75),
CERTIFICATIONS VARCHAR(40),
HOURSWEEKLY NUMBER,
DAYSTELECOMMUTE VARCHAR(40),
EMPLOYMENTSECTOR VARCHAR(35));

I used Excel to create some simple graphs from my results and queried the data from SQL Developer, (Jeff would be so proud of me for not using the command line… :))

Here’s what I queried and found interesting in the results.

We Are What We Eat, err Work On

The database flavors we work on may be a bit more diverse than most assume.  Now this one was actually difficult, as the field could be freely typed into and there were misspellings, combinations of capital and small letters, etc.  The person who wrote “postgress”, yeah, we’ll talk…
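
As an illustration of the kind of cleanup such a free-text field needs, a query along these lines (column names taken from the table definition above) at least folds the case and whitespace differences together before counting:

select lower(trim(primarydb)) as primary_db, count(*) as responses
from   salary_base
group  by lower(trim(primarydb))
order  by responses desc;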

Smart Home Update

I make some odd New Year’s Resolutions and mine for 2017 was to add some smart home solutions to our home that made sense.  I’ve seen what can happen if you don’t plan these types of projects out well, (looking at you, Mark Rittman!) and the insanity that ensues!

The following smart home goals were made:

  1. Main lights
  2. Prime power outlets
  3. Thermostat
  4. Music center
  5. Roku/TV

The following technology would be used to accomplish the goals:

  1. Echo Dot, (3)
  2. Philips Hue Bridge, (2) and lightbulbs, (8)
  3. TP-link smart plugs, (2)
  4. Ecobee Lite 3 Thermostat
  5. Raspberry Pi and Python Code

The Echo Dot is the newest member of the Alexa family.  They are small, powerful and don’t require a full-sized echo to function.  I chose three due to the size of our house-  two for the main floor and one for upstairs.

I chose the first four products due to reviews and ease of use.  Items 2-3 are all Alexa compatible and can communicate with the Echo Dot easily.  All of the smart home products can be controlled with either the Alexa voice app or mobile apps that can be downloaded to your phone.  As the Echo Dot has a smaller footprint, it’s easier to mount on the wall or the side of a cabinet, giving back valuable counter space.

For my lighting solution, I stuck with Philips Hue, whose only drawback is that it requires a bridge to work.  I was easily able to connect the Philips bridge to control all the lights, which was then easy to add to the Alexa “Smart Home Devices” app on my phone.  Each of the lights can be grouped as necessary (let’s say the family room light has three bulbs, so I group them and name the group “Family Room”), which makes configuration and turning them on/off easier.  A bridge is required in close proximity to each location so I can access the bulbs: one bridge was connected to the kitchen Echo Dot to control the main floor lights, and one upstairs to control the upstairs lights.

The library has a single lamp that made more sense (and was less expensive) to connect via a TP-Link smart plug.


The second smart plug was added upstairs to the towel warmer that I got Tim and me for Xmas.  Often, we’d get into the shower and forget to turn it on, but now we can just say, “Alexa, turn on towel warmer”, and that goes a long way!  It uses the same outlet as my hair straightener, so the idea that if I leave home after forgetting to unplug it, I can just shut it off from an app on my phone is pretty cool.  TP-Link smart plugs are a great way to smart-home almost anything in your house, which brings us to our tea kettle….

Yes, we do love our electric tea kettle and it works really well on its own, but what if we could just tell it to turn on from the other room?  The steps to do this are simple:

  1.  Plug in the TP-Link Smart Plug.
  2. Ensure you have the TP-Link Kasa app on your phone
  3. Connect to the wi-fi for the smart plug with the wi-fi on your phone.
  4. Once connected, tell the plug which wi-fi to connect to in your house.
  5. Name the smart plug device a unique name, in this case, “Tea Kettle” , choose an icon, (I chose a tea kettle for the icon) and finish the setup.
  6. Open the Alexa app and go to Smart Home in the drop down
  7. Click on Discover New Devices and Alexa will discover “Tea Kettle”.
  8. Now, all you have to say is “Turn tea kettle on.”

If you don’t believe me, here’s proof, (watch and weep, Mark… :))

Now a number of you might be saying, “Why didn’t you choose a Nest thermostat?”  It’s true that Nest has the market cornered pretty impressively, but the price point is relatively high and the product isn’t that revolutionary.  I did some research and found that the Ecobee Lite 3 received some great reviews and the price was significantly lower.

Music is a given and built in with the Echo Dot.  If you have a Spotify account, then you can get your music directly from the Dot, and all you need for some quality sound is to add a speaker, connected via Bluetooth or the audio jack.  You do have to have a premium Spotify account, and I think the only thing Spotify is missing is an easy way to upgrade your account from the Alexa app when you link it (hint, hint!), but there are also a lot of other streaming options if you don’t want to use Spotify.

Now for my fun project: the TV.  We love our Roku, and I’ve decided this is where I get to play with my Raspberry Pi and get it going with Alexa.  There’s already a project and code out there, so I just need to duplicate it and enhance it with the pieces that I’m interested in.  I’ll be saving this for another day, as my day job is demanding I work, and we won’t even talk about how many after-hours tasks I have for RMOUG Training Days, with the conference coming up this next month!

So stay tuned and I’ll let you know how I’m doing keeping my resolutions.  Let me know if you’ve done any cool smart home projects on your side of the fence!

 

 

 




Removing Redundant Indexes in PeopleSoft

This is the second of a two-part article that discusses how to identify and remove redundant indexes from a PeopleSoft system. 
This article describes a script on my website (psredundantfix.sql) that uses a similar query to that described in the previous article to identify redundant indexes from the metadata in the PeopleTools tables. It uses an anonymous block of PL/SQL so nothing is installed in the database. For each redundant index identified it:

  • Adds the record and redundant index into an Application Designer project called REDUNDANT INDEXES. The record must also be added, otherwise Application Designer will not generate a DDL script to remove the index.
  • Unsets the platform flag on the redundant index if the superset index is active on the same platform. Thus Application Designer will generate a script to drop redundant indexes when the project is built. The redundant index definition remains in the record definition in case there is a need to revert the change.
  • If the superset index is active on Oracle
    • Creates and collects extended optimizer statistics on the combination of columns in the redundant index. If error ORA-00054 occurs, the exception will be ignored and the script will continue.
    • Makes the index invisible. Note that indexes are not dropped; that can be done later by building an alter script for the project. It would be normal to test the consequences of making the indexes invisible for a while, and drop them later (see the SQL sketch below).
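
For illustration, the SQL equivalent of those last two actions for one of the indexes in the sample output below would be along these lines (object names are taken from that output; this is a sketch, not the script itself). The extended statistics can then be gathered with a normal DBMS_STATS.GATHER_TABLE_STATS call on the table.

SELECT dbms_stats.create_extended_stats(user, 'PS_WRK_XREF_CALC', '(PROCESS_INSTANCE,SEQ_NBR)') FROM dual;
ALTER INDEX PSAWRK_XREF_CALC INVISIBLE;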

Sample Output

Project REDUNDANT INDEXES already exists

Record PSPROJECTITEM, Redundant Index C already in project
Creating Extended Statistics on PSPROJECTITEM for OBJECTVALUE4,OBJECTTYPE
Making index PSCPSPROJECTITEM invisible
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Record PSTREENODE, Redundant Index C already in project

Record TL_RPTD_TIME, Redundant Index C added to project
Making index PSCTL_RPTD_TIME invisible
Record TL_TA_BATCH, Redundant Index _ added to project
Making index PS_TL_TA_BATCH invisible
Making index PS_TL_TA_BATCH1 invisible

Making index PS_TL_TA_BATCH48 invisible

Record WRK_XREF_CALC, Redundant Index A already in project
Creating Extended Statistics on PS_WRK_XREF_CALC for PROCESS_INSTANCE,SEQ_NBR
Creating Extended Statistics on PS_WRK_XREF_CALC1 for PROCESS_INSTANCE,SEQ_NBR

Creating Extended Statistics on PS_WRK_XREF_CALC6 for PROCESS_INSTANCE,SEQ_NBR
Making index PSAWRK_XREF_CALC invisible
Making index PSAWRK_XREF_CALC1 invisible

Making index PSAWRK_XREF_CALC6 invisible

Note:

  • The script commits any changes it makes. There is also a rollback command at the top to prevent it accidentally committing something else.
  • Having run it once, should you immediately run the script again, nothing will be found, because the redundant indexes have already been marked as inactive on Oracle.
  • Should you want to revert the changes, mark all indexes in the REDUNDANT INDEXES project as active on Oracle:
UPDATE psindexdefn
SET platform_ora = 1
, activeflag = 1
WHERE (recname, indexid) IN (
SELECT objectvalue1, objectvalue2
FROM psprojectitem
WHERE objecttype = 1
AND projectname = 'REDUNDANT INDEXES')
AND platform_ora = 0;
COMMIT;

We can see in Application Designer that index PSAWRK_XREF_CALC is a subset of PS_WRK_XREF_CALC because both start with PROCESS_INSTANCE and SEQ_NBR.

Index A was marked inactive on all platforms because the superset index _ is active on all platforms.  

PeopleSoft delivers some indexes for some platforms only. For example, PSETREENODE is only active on Informix. Therefore the script only deactivates the Informix platform flag on PSCTREENODE, and the platform flag changes to 'Some'. The index is still added to the project, but no attempt is made to create extended statistics or to make it invisible, and the index continues to be built on Oracle.

The comment applied to index C indicates that only the Informix flag was changed.

The final step, when you are satisfied that making the indexes invisible has no unacceptable consequences, is to drop the redundant indexes using the script generated by Application Designer.

DROP INDEX PSCPSPROJECTITEM
/
DROP INDEX PSEPSPROJECTITEM
/

DROP INDEX PSAWRK_XREF_CALC
/
DROP INDEX PSAWRK_XREF_CALC1
/

DROP INDEX PSAWRK_XREF_CALC6
/

APEX 5.1 is Here!

It’s here! Without much fanfare APEX 5.1 became available for download on December 21, 2016! With all the holiday stuff going on I missed it until after the first of the year.

Here’s where you can download APEX 5.1 and find out more:

http://www.oracle.com/technetwork/developer-tools/apex/downloads/download-085147.html

APEX 5.1 is a significant release and should be pretty stable considering the long beta program and early adopter programs it went through.

Major new features include:

  1. Interactive Grid
    “Live editing” of reports/pages brought to a whole new level
  2. JET Charts
    Oracle JET brings an organized approach to JavaScript, CSS3, and HTML5 to improve data display
  3. Universal Theme
    Universal Theme has new templates, new styles, over 1100 icons, and “Live Template” options
  4. Page Designer
    Page Designer is even easier adding a component view and the ability to view two panes at once including drag-and-drop between the panes!
  5. Calendars
    Calendars now allow JavaScript customization and include end dates
  6. Wizards Improved
    Several wizards have been improved to reduce steps necessary to get things done
  7. New Packaged Apps
    APEX includes brand-new apps for Competitive Analysis, Quick SQL, and REST Client Assistant; plus the sample and productivity apps have all been refreshed

So, download APEX 5.1 and give it a try soon!

Haversine PL/SQL

I didn’t see a PL/SQL version of the Haversine formula on https://rosettacode.org, so here’s a version for anyone who needs it.



SQL> create or replace
  2  function p2p_distance(
  3              p_latitude1 number,
  4              p_longitude1 number,
  5              p_latitude2 number,
  6              p_longitude2 number) return number deterministic is
  7    earth_radius  number := 6371;
  8    pi_approx     number := 3.1415927/180; 
  9    lat_delta     number := (p_latitude2-p_latitude1)*pi_approx;
 10    lon_delta     number := (p_longitude2-p_longitude1)*pi_approx;
 11    arc           number := sin(lat_delta/2) * sin(lat_delta/2) +
 12                                 sin(lon_delta/2) * sin(lon_delta/2) * cos(p_latitude1*pi_approx) * cos(p_latitude2*pi_approx);
 13  begin
 14    return earth_radius * 2 * atan2(sqrt(arc), sqrt(1-arc));
 15  end;
 16  /

Function created.

SQL>
SQL> select p2p_distance(36.12, -86.67, 33.94, -118.4) from dual;

P2P_DISTANCE(36.12,-86.67,33.94,-118.4)
---------------------------------------
                             2886.40705

The Relational Data Model

Mid-eighties. I was studying Computer Science. Little did I know back then that this thing called "The Relational Data Model" (RDM) would become huge in the IT-industry. The topic was still hot in academia at that time. My luck was that I liked those courses. Predicate Logic, Set Theory, Database Design, SQL. I aced them all. It was no surprise then, that I ended up working with Oracle software