Database Administration & Monitoring

Introduction to databases for {Power.Coders} with MySQL

This week I took some days off to do something related to my job but a bit different: I gave a course on databases and SQL. But not for my usual customers, and not with the database I know best. So it is still in a domain that I know, but out of my comfort zone. And this is something we should do more often, because it brings a little apprehension and a big satisfaction.

Change Data Capture from Oracle with StreamSets Data Collector

With the trend of CQRS architectures, where transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become important components, already at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But also for Open Source projects.
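
The StreamSets Oracle CDC origin is based on LogMiner. As an illustration only, here is a minimal sqlplus sketch of the underlying mechanism, with a hypothetical DEMO schema and :scn starting point; this is the database-side idea, not the StreamSets configuration itself:

-- minimal supplemental logging, so that the redo contains enough information for CDC
alter database add supplemental log data;
-- mine the redo stream from a given SCN, using the online dictionary
exec dbms_logmnr.start_logmnr( -
  startscn => :scn, -
  options  => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.continuous_mine)
-- the captured changes, exposed as logical SQL statements
select operation,seg_owner,seg_name,sql_redo
 from v$logmnr_contents where seg_owner='DEMO';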

Installing MAMP to play with PHP, MySQL and OpenFlights

You may wonder what I’m doing with those technologies that are completely different from what I usually do. I’ll give the details in a future blog post, but the idea is to give a short introduction to databases to students at https://powercoders.org/, a coding academy for refugees in Switzerland. They install MAMP (My Apache – MySQL – PHP) during their curriculum for website development, and then I’ll use this environment to teach data modeling and SQL basics. Thus, I have to look at MAMP, PHP and MySQL for the first time… And I decided to load the OpenFlights open data to play with.
That explains the title.
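
For example, once MAMP’s MySQL is running, loading the OpenFlights airports file can look like the following sketch. The column list follows the OpenFlights description of airports.dat; adapt it to the version of the file you download, and note that local_infile must be enabled:

-- table matching the airports.dat columns
create table airports (
 airport_id int primary key,
 name varchar(100), city varchar(100), country varchar(100),
 iata char(3), icao char(4),
 latitude decimal(9,6), longitude decimal(9,6), altitude int,
 timezone varchar(10), dst char(1), tz varchar(50)
);
-- load the CSV file downloaded from openflights.org
load data local infile 'airports.dat' into table airports
 fields terminated by ',' optionally enclosed by '"';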

18c, Cloud First and Cloud Only features: think differently

Remember the times when the Oracle Software features were the same on all platforms? When Oracle databases could be ported to any relevant platform? When we were able to try any feature, freely, by downloading the latest release software? Now we need to think differently. Because:

VirtualBox 5.2 exports the VM to the Oracle Cloud

The new release of Oracle VM VirtualBox (aka VirtualBox) is there, with a new functionality to export a VM to Oracle Cloud Compute (aka Oracle Cloud Infrastructure). That can be interesting to prepare a VM on my laptop and move it to the Cloud to get it accessible from everywhere. Here’s my first try. In my opinion, it’s a good idea but it probably needs further evolution.

VirtualBox

Here is what is new: in addition to .ova, you can now export a VM to an Oracle Public Cloud image.
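
On the command line, the equivalent should look like the following sketch, where "myVM" is a hypothetical VM name and the --opc10 format option is an assumption to verify with VBoxManage export --help on your release:

VBoxManage export myVM --output myVM.tar.gz --opc10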

Bequeath connect to PDB: set container in logon trigger?

There are a few changes when you move to the multitenant architecture, and one of them is that you must connect with a service name: you cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with a bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by ORACLE_SID. And I want to migrate to a CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.
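
Here is a minimal sketch of that workaround, with hypothetical names (C##USER1 and PDB1), shown only in order to discuss it:

-- a common user, visible in all containers
create user C##USER1 identified by oracle container=all;
grant create session, set container to C##USER1 container=all;
-- a logon trigger that switches this user to the PDB
create or replace trigger SET_PDB_AT_LOGON after logon on database
begin
 if user='C##USER1' then
  execute immediate 'alter session set container=PDB1';
 end if;
end;
/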

Get trace file from server to client

The old way to get a user dump trace file, for sql_trace (10046), Optimizer compilation trace (10053), lock trace (10704), Optimizer execution trace (10507),… is to go to the server trace directory. But if you don’t have access to the server (as in the ☁) the modern (12cR2) way is to select from V$DIAG_TRACE_FILE_CONTENTS. Before everybody is on 12.2, I’m sharing here a sqlplus script that I have used for a long time to get the trace file from the client.
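
The full script does more, but the idea boils down to the following sketch, which reads the current session’s trace through the view (the regexp keeps only the file name, because V$DIAG_TRACE_FILE_CONTENTS does not store the directory path):

set pagesize 0 linesize 4000 trimspool on
-- get the name of our own trace file
column tracefile new_value tracefile
select value tracefile from v$diag_info where name='Default Trace File';
-- trace something
alter session set sql_trace=true;
select sysdate from dual;
alter session set sql_trace=false;
-- spool the trace content to a client-side file
spool mytrace.trc
select payload from v$diag_trace_file_contents
 where trace_filename=regexp_substr('&tracefile','[^/\]+$');
spool off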

What is in a transportable tablespace dumpfile?

On 31st of May in Düsseldorf, at DOAG Datenbank, I’ll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically, as with TTS, but faster when transported physically, as with PDB. I have a lot of demos to show transportable tablespaces with RMAN, and the different cloning features available in 12cR2. If I have time I’ll show what is inside the dumpfile when using Data Pump to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.
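
For example, with a hypothetical self-contained USERS tablespace, set to read-only before the export:

SQL> alter tablespace USERS read only;

$ expdp system transport_tablespaces=USERS directory=DATA_PUMP_DIR dumpfile=TTS_USERS.dmp logfile=TTS_USERS.log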

12cR2 DBCA, Automatic Memory Management, and -databaseType

This post explains the following error encountered when creating a 12.2 database with DBCA:
[DBT-11211] The Automatic Memory Management option is not allowed when the total physical memory is greater than 4GB.
or when creating the database directly with the installer:
[INS-35178] The Automatic Memory Management option is not allowed when the total physical memory is greater than 4GB.
If you used Automatic Memory Management (AMM), you will have to think differently and size the SGA and the PGA separately.
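
A minimal sketch of the change, with sizes to adapt to your system: disable MEMORY_TARGET and set the SGA and PGA targets explicitly instead.

-- switch from AMM to separate SGA and PGA sizing (takes effect at the next restart)
alter system set memory_target=0 scope=spfile;
alter system set sga_target=3G scope=spfile;
alter system set pga_aggregate_target=1G scope=spfile;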

When automatic reoptimization plan is less efficient

11gR2 started to have the optimizer react at execution time when a misestimate is encountered: the next executions are re-optimized with more accurate estimations, derived from the execution statistics. This was called cardinality feedback. Unfortunately, in rare cases we had a fast execution plan despite bad estimations, and better estimations led to a worse execution plan. This is rare, but even when 9999 queries are faster, the one that takes too long gives a bad perception of this optimizer feature.
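
In 12c you can check whether a cursor has been flagged for re-optimization, as in this quick sketch (the &sql_id substitution variable is hypothetical):

-- IS_REOPTIMIZABLE='Y' means the next parse will use the execution feedback
select child_number, is_reoptimizable, plan_hash_value
 from v$sql where sql_id='&sql_id';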