Exadata

What’s new with Oracle database 19.5 versus 19.4

It seems that the most eye-catching difference between Oracle database versions 19.5 and 19.4 is that three underscore spare parameters have been changed to specifically named underscore parameters; two of them have a bug number in their name, and those bugs are not publicly visible.
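
The parameter comparison itself can be done with a query along these lines, run as SYS on both versions and with the two spooled outputs diffed afterwards (a minimal sketch; x$ksppi is only visible to SYS):

-- list all undocumented (underscore) parameter names and descriptions;
-- spool this on both versions and diff the two result files
select ksppinm as parameter, ksppdesc as description
from   x$ksppi
where  ksppinm like '\_%' escape '\'
order  by ksppinm;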

In v$sysstat/v$sesstat, a group of statistics has been renamed from ‘OS %’ to ‘Server %’. All of these statistics are about networking, and one of the changed parameters points directly to networking (tcpinfo). These statistics were added in version 19.
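
The renamed statistics can be listed straight from v$statname; a minimal sketch:

-- the networking statistics added in version 19, now prefixed 'Server'
select statistic#, name, class
from   v$statname
where  name like 'Server %'
order  by statistic#;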

One DBA view was added: DBA_DV_COMMON_OPERATION_STATUS. This view is specific to, and owned by, DVSYS.
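
To check where the new view lives, a query such as this can be used (assuming a database in which Database Vault, and therefore DVSYS, is installed):

-- confirm the owner and type of the newly added view
select owner, object_name, object_type
from   dba_objects
where  object_name = 'DBA_DV_COMMON_OPERATION_STATUS';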

Oracle memory troubleshooting using analysis on heapdumps, part 2

In the article ‘Oracle memory troubleshooting using analysis on heapdumps’ I introduced heap_analyze.awk.

The tool exists because I use it myself, and while using it I ran into additional things I wanted it to do. The additions are significant enough that I decided to write another blogpost to introduce the new features.

1. Percentages
In order to get an idea of the relative size of each summarised item, I added a percentage. For example:
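
The same idea, each summarised size expressed as a percentage of a total, can be illustrated with plain SQL against v$sgastat; this is only a loose analogy and not output of heap_analyze.awk:

-- summarise SGA memory per pool/name and show each size as a percentage
-- of the total (the FETCH FIRST clause is 12c+ syntax)
select pool, name, sum(bytes) as bytes,
       round(100 * ratio_to_report(sum(bytes)) over (), 2) as pct
from   v$sgastat
group  by pool, name
order  by bytes desc
fetch first 10 rows only;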

Oracle memory troubleshooting using analysis on heapdumps

This blogpost is about analysing Oracle heap dumps. It is an extension of earlier work, Tanel Poder’s heap dump analyzer, so hat tip to Tanel: he has done the hard work, I merely increased the presentation options. The heap analyser script that I wrote analyses Oracle heapdumps from the trace file that the dump was written to by the Oracle database. Because the heap dump representation is the same for PGA and SGA memory, it can work on both: memory management for both is done by the same memory manager, and this is commonly called ‘kgh’ (kernel generic heap) managed memory.

Please mind that for PGA analysis, not all memory is managed by the kgh memory manager. For example, memory used for networking (sqlnet) is allocated entirely outside of the kgh memory manager.
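
For reference, a minimal sketch of how such a heapdump can be produced for analysis; the heapdump event levels are version dependent (1 for the top-level PGA heap and 2 for the top-level SGA heap are the commonly cited values), so please verify them for your version:

-- dump the top-level PGA heap of the current session into its trace file
-- (heapdump levels are version dependent; 1 = PGA and 2 = SGA are commonly cited)
alter session set events 'immediate trace name heapdump level 1';

-- find the trace file that the dump was written to
select value from v$diag_info where name = 'Default Trace File';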

What’s new with Oracle database 19.4 versus 19.3

The most notable thing here is that an “official” (non-underscore) parameter has been introduced with 19.4: “ignore_session_set_param_errors”. Its description is ‘Ignore errors during alter session param set’. I did a quick check to see whether I could set it to true or false, which I could not (both attempts resulted in an error).
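
A sketch of the sort of quick check meant here; on 19.4 both of these reportedly fail with an error, and the exact error may differ per environment:

-- attempts to set the new parameter to true and to false
alter session set ignore_session_set_param_errors = true;
alter session set ignore_session_set_param_errors = false;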

With the Oracle database version 19.3 patched to 19.4 on linux, the following things have changed:

What’s new with Oracle database 18.7 versus 18.6

With the Oracle database version 18.6 patched to 18.7 on linux, the following things have changed:

What’s new with Oracle database 12.2.0.1.190416 versus 12.2.0.1.190716

There are a couple of underscore parameters changed from spare to named ones.
It’s interesting to see that in v$sysstat, ‘spare statistic 2’ has changed to ‘cell XT granule IO bytes saved by HDFS tbs extent map scan’. This obviously has to do with big data access via the cell servers. What is odd is that this is the only version in which this has happened.
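
To verify the rename on a patched instance, something along these lines can be used (the statistic name is taken from the text above):

-- look up the renamed statistic and its current instance-wide value
select sn.name, st.value
from   v$statname sn
join   v$sysstat  st
on     st.statistic# = sn.statistic#
where  sn.name = 'cell XT granule IO bytes saved by HDFS tbs extent map scan';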

What’s new with Oracle database 12.1.0.2.190416 versus 12.1.0.2.190716

A couple of undocumented spare parameters have changed to named undocumented parameters, which is quite normal to see.

With the Oracle database version 12.1.0.2.190416 patched to 12.1.0.2.190716 on linux, the following things have changed:

Negative Offload

At the Trivadis Performance Days 2019 I did a presentation on using execution plans to understand what a query was doing. One of the examples I showed was a plan from an Exadata system (using 11.2.0.4) that needed to go faster. The plan was from the SQL Monitor report and all I want to show you is one line that’s reporting a tablescan. To fit the screen comfortably I’ve removed a number of columns from the output.
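
For reference, a SQL Monitor report like the one shown can be pulled with dbms_sqltune.report_sql_monitor; a minimal sketch, with ‘your_sql_id’ as a placeholder:

-- fetch the SQL Monitor report for a (still running or completed) statement;
-- 'your_sql_id' is a placeholder for the real sql_id
select dbms_sqltune.report_sql_monitor(
         sql_id       => 'your_sql_id',
         type         => 'TEXT',
         report_level => 'ALL')
from   dual;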

The report had been generated while the statement was still running (hence the “->” at the left hand edge) and the query had scanned 166 segments (with no partition elimination) of a table with 4,500 data segments (450 range partitions with 10 hash sub-partitions each; note the design error, by the way: the number of hash (sub)partitions in Oracle should always be a power of 2).
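
To illustrate the design note, a hypothetical table definition (names and columns invented for the example) that uses 16 hash sub-partitions per range partition, a power of 2, so the hash function can distribute rows evenly:

-- hypothetical example: a power-of-2 number of hash sub-partitions (16, not 10)
create table t_orders (
  order_date  date,
  customer_id number,
  amount      number
)
partition by range (order_date)
subpartition by hash (customer_id) subpartitions 16
( partition p2019 values less than (date '2020-01-01')
, partition p2020 values less than (date '2021-01-01')
);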

gather_system_stats

What happens when you execute dbms_stats.gather_system_stats() with the ‘Exadata’ option?

Here’s what my system stats look like (12.2.0.1 test results) after doing so. (The code to generate the two different versions is at the end of the note).
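
As a sketch of the kind of test involved (run as a suitably privileged user): gather the stats in ‘EXADATA’ mode and then look at sys.aux_stats$ for the resulting values.

-- gather system statistics in 'EXADATA' mode ...
exec dbms_stats.gather_system_stats('EXADATA')

-- ... and inspect the resulting values
select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN'
order  by pname;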

Bloom Filter Efficiency And Cardinality Estimates

I recently came across an interesting observation that I haven't seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle it looks like the efficiency of Bloom Filter operations depends on the optimizer's cardinality estimates. In particular, cardinality under-estimates can make a dramatic difference to how efficiently a Bloom Filter based on such an estimate works at runtime. Since Bloom Filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of the affected operations.
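
A minimal, hypothetical sketch of the kind of setup in which this can be observed (table and column names are invented, and whether a Bloom filter actually appears depends on version and settings): a small dimension table joined to a larger fact table with a parallel hash join, after which the plan can be checked for JOIN FILTER CREATE / JOIN FILTER USE operations and the runtime behaviour compared under accurate versus misleading cardinality estimates.

-- hypothetical tables for the demonstration
create table dim  as
  select rownum as id, mod(rownum, 100) as attr
  from   dual connect by level <= 10000;
create table fact as
  select mod(rownum, 10000) + 1 as dim_id, rownum as val
  from   dual connect by level <= 1000000;

-- a parallel hash join that should produce a Bloom filter
select /*+ gather_plan_statistics parallel(4) leading(d) use_hash(f) */
       count(f.val)
from   dim d
join   fact f
on     f.dim_id = d.id
where  d.attr = 1;

-- look for JOIN FILTER CREATE / JOIN FILTER USE operations in the plan
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));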