Database Administration & Monitoring

티베로 (Tibero) – The AWR-like “Tibero Performance Repository”

By Franck Pachot

In a previous post I introduced Tibero as the most compatible alternative to Oracle Database. Compatibility is one thing, but one day you will want to compare the performance. I’ll not run any benchmark here, but show how we can look at performance with TPR – the Tibero Performance Repository – the equivalent of AWR, the Oracle Automatic Workload Repository. And, as I needed to run some workload, I attempted to run something that was written with Oracle Database in mind: Kevin Closson’s SLOB – the Silly Little Oracle Benchmark. The challenge is to make it run on Tibero and get a TPR report.

SLOB

I’ve downloaded SLOB from:
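
Once SLOB is downloaded and the schemas are loaded, a typical run on Oracle looks like the following (SLOB 2.x conventions; the tablespace name and the number of sessions are illustrative, and the scripts need some adaptation before they run against Tibero):

./setup.sh IOPS 8
./runit.sh 8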

The myth of NoSQL (vs. RDBMS) agility: adding attributes

By Franck Pachot

There are good reasons for NoSQL and semi-structured databases. And there are also many mistakes and myths. If people move from RDBMS to NoSQL for the wrong reasons, they will have a bad experience, and this ultimately harms NoSQL’s reputation. Those myths were spread by some database newbies who never learned SQL and relational databases. And, rather than learning the basics of data modeling and the capabilities of SQL for processing data sets, they thought they had invented the next generation of persistence… when they had actually come back to what existed before the invention of the RDBMS: a hierarchical, semi-structured data model. And now they encounter the same problems that relational databases solved 40 years ago. This blog post is about one of those myths.

DynamoDB: adding a Local covering index to reduce the cost

By Franck Pachot

This is a continuation of the previous post, DynamoDB: adding a Global covering index to reduce the cost. I have a DynamoDB table partitioned on “MyKeyPart”,”MyKeySort”, and many queries that retrieve a small “MyInfo001” attribute, plus less frequent ones that need the large “MyData001” attribute. I have created a Global Secondary Index (GSI) that covers the same key and this small attribute. Now, because the index is prefixed by the partition key, I can create a Local Secondary Index (LSI) to do the same. But there are many limitations. The first one is that I cannot add a local index afterwards: I need to define it at table creation.

Drop table

Here I am in a lab, so I can drop and re-create the table. In real life you may have to create a new one, copy the items, synchronize it (DynamoDB Streams),…
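
As a minimal sketch with the AWS CLI, assuming a hypothetical table named “MyTable” with numeric keys and an index named “MyLSI” (none of these names come from the post), here is the drop and re-creation with an LSI that reuses the same key schema and projects only the small attribute:

aws dynamodb delete-table --table-name MyTable
aws dynamodb wait table-not-exists --table-name MyTable
aws dynamodb create-table --table-name MyTable \
 --attribute-definitions AttributeName=MyKeyPart,AttributeType=N AttributeName=MyKeySort,AttributeType=N \
 --key-schema AttributeName=MyKeyPart,KeyType=HASH AttributeName=MyKeySort,KeyType=RANGE \
 --provisioned-throughput ReadCapacityUnits=25,WriteCapacityUnits=25 \
 --local-secondary-indexes '[{"IndexName":"MyLSI","KeySchema":[{"AttributeName":"MyKeyPart","KeyType":"HASH"},{"AttributeName":"MyKeySort","KeyType":"RANGE"}],"Projection":{"ProjectionType":"INCLUDE","NonKeyAttributes":["MyInfo001"]}}]'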

DynamoDB: adding a Global covering index to reduce the cost

By Franck Pachot

People often think of indexes as a way to optimize row filtering (“get item” faster and cheaper). But indexes are also about columns (“attribute projection”), like some kind of vertical partitioning. In relational (“SQL”) databases we often add more columns to the indexed key. This is called a “covering” or “including” index, used to avoid reading the whole row. The same is true in NoSQL. I’ll show in this post how, even when an index is not required to filter the items because the primary key partitioning is sufficient, we may still create a secondary index to reduce the cost of partial access to the item. Here is an example with AWS DynamoDB, where the cost depends on I/O throughput.
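
To make the idea concrete, here is a sketch of adding such a covering GSI to an existing table with the AWS CLI (the table and index names, attribute types, and capacity figures are assumptions for illustration). The INCLUDE projection is what makes the index “covering”: a Query on the index is billed for the small projected item rather than the full one.

aws dynamodb update-table --table-name MyTable \
 --attribute-definitions AttributeName=MyKeyPart,AttributeType=N AttributeName=MyKeySort,AttributeType=N \
 --global-secondary-index-updates '[{"Create":{"IndexName":"MyGSI","KeySchema":[{"AttributeName":"MyKeyPart","KeyType":"HASH"},{"AttributeName":"MyKeySort","KeyType":"RANGE"}],"Projection":{"ProjectionType":"INCLUDE","NonKeyAttributes":["MyInfo001"]},"ProvisionedThroughput":{"ReadCapacityUnits":25,"WriteCapacityUnits":25}}}]'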

SQLNET.EXPIRE_TIME and ENABLE=BROKEN

By Franck Pachot

These parameters, SQLNET.EXPIRE_TIME in sqlnet.ora and ENABLE=BROKEN in a connection description, have existed for a long time but may have changed in behavior. They are both related to detecting dead TCP connections with keep-alive probes: the former from the server, the latter from the client.
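
As a reminder of where each setting goes (host, port, and service names are illustrative):

# server side, in $ORACLE_HOME/network/admin/sqlnet.ora: probe idle sessions every 10 minutes
SQLNET.EXPIRE_TIME=10
# client side, in the tnsnames.ora connection description: enable keep-alive on the client socket
MYSVC=(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=MYSVC)))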

How SQL Server MVCC compares to Oracle and PostgreSQL

By Franck Pachot

Microsoft SQL Server implemented MVCC in 2005, which has been proven to be the best approach for transaction isolation (the I in ACID) in OLTP. But are you sure that writers do not block readers with READ_COMMITTED_SNAPSHOT? I’ll show here that some reads are still blocked by locked rows, contrary to MVCC precursors like PostgreSQL and Oracle.

For this demo, I run the SQL Server 2019 RHEL image on Docker, in an Oracle Cloud compute instance running OEL 7.7, as explained in the previous post. If you don’t need the memory limit mentioned there, you can simply run:

docker run -d -e "ACCEPT_EULA=Y" -e 'MSSQL_PID=Express' -p 1433:1433 -e 'SA_PASSWORD=**P455w0rd**' --name mssql mcr.microsoft.com/mssql/rhel/server:2019-latest
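
Note that READ_COMMITTED_SNAPSHOT is a per-database setting, so it must be switched on explicitly for the demo database; something like this, assuming a database named “demo” and the usual sqlcmd location inside the container:

docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '**P455w0rd**' -Q "CREATE DATABASE demo; ALTER DATABASE demo SET READ_COMMITTED_SNAPSHOT ON;"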


NVMe the afterburner for your database

Over 1 million IOPS (at 8 KByte) and more than 26 GByte/s (at 1 MByte): read on to see all the impressive benchmark figures from a server with 14 NVMe drives, and why this is still not the best you could get…


At the end of last year, I got a call from Huawei. They (finally) agreed to provide me with a server with their enterprise NVMe drives for performance testing.
To say that I was delighted is a huge understatement. It felt like an 8-year-old waiting for Christmas to get his present.

Choosing the right hardware for a database server is always important and challenging. Only on a rock-solid, stable, and performant base can you build a reliable database service with predictable performance. That sounds expensive, and most of the time it is, but NVMe drives can be a game-changer in this field.
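
Figures like these are usually measured with fio; as a rough sketch, an 8 KByte random-read test in the same spirit could look like this (device path, queue depth, and job count are my assumptions, not the author’s exact settings):

fio --name=randread-8k --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio --rw=randread --bs=8k --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting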

Should I go for ODA 19.5 or should I wait until 19.6?

Introduction

As you may know, Oracle Database 19c has been available for new (X8-2) and older Oracle Database Appliances for several weeks now. The current version is 19.5. But when you go to the official ODA documentation, it still proposes version 18.7 first, which is not compatible with 19c databases. Here is why.

19c database is the final 12.2

First of all, 19c is an important release because it’s the terminal release of the 12.2 family, as 11.2.0.4 was for 11.2. Please refer to my other blog post to understand the new Oracle versioning. ODA always supports new releases a few months after they become available on Linux, which is why it’s only available now.

Oracle Database on OpenShift

By Franck Pachot

In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purposes. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift, to deploy it from the Image Stream.
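
A minimal sketch of that import with the oc client, assuming the usual Oracle Database image name on the Docker Store and an image stream name of my choosing (both illustrative):

oc import-image oracle-db --from=store/oracle/database-enterprise:12.2.0.1 --confirm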