Recently I’ve seen some not-so-smart optimizer behaviour: one query took a long time to parse and ended with an error after hitting PGA_AGGREGATE_LIMIT within a few minutes; another query was parsed for ages while using a reasonable (under 2G :)) amount of PGA and could still hit PGA_AGGREGATE_LIMIT, but only after much more time, up to an hour.
Both cases were similar and involved queries accessing views whose code is generated by an application using lots of IN lists and other OR conditions. They are really ugly SQL statements with a text length of ~100K. When Oracle tried to parse them it took a lot of time, and the parse attempt either failed with ORA-4036 fairly quickly or hung for a long time and then failed. Strangely, the incident trace file generated for ORA-4036 doesn’t include a breakdown of the PGA heaps, so you have to manually enable a PGA heapdump on error to get an idea of what is taking up the memory. Here’s what I’ve found in there:
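(As an aside, enabling a PGA heapdump when ORA-4036 is raised might look something like the sketch below. The heapdump level is an assumption on my part: level 1 dumps the top-level PGA heap, and other or composite levels give more detail.)

```sql
-- Sketch only (not output from the incident): dump the PGA heap whenever
-- ORA-4036 is raised, then disable the event once the failure is reproduced.
alter system set events '4036 trace name heapdump level 1';

-- ... reproduce the failing parse ...

alter system set events '4036 trace name context off';
```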
Here’s one of those little improvements in 12c (including 12.1) that will probably end up being described as “little known features” in about 3 years’ time. Arguably it’s one of those little things that no-one should care about because it’s not the sort of thing you should do on a production system, but that doesn’t mean it won’t be seen in the wild.
Rather than simply state the feature, I’m going to demonstrate it, starting with a little code to build a couple of tables with referential integrity:
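A minimal sketch of the kind of setup involved, assuming a simple parent/child pair linked by a foreign key (the table and column names here are illustrative, not the original demo code):

```sql
-- Hypothetical parent/child pair with referential integrity (illustration only)
create table parent (
        id       number(10)     not null,
        padding  varchar2(100),
        constraint par_pk primary key (id)
);

create table child (
        id        number(10)    not null,
        parent_id number(10)    not null,
        padding   varchar2(100),
        constraint chi_pk primary key (id),
        constraint chi_fk_par foreign key (parent_id) references parent(id)
);
```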
To me, ‘cloud computing’ is renting a compute resource to perform a task. In order to use that compute resource, you need to instruct it to do something, which is typically done via the network. If the task the compute resource fulfils is being an application server, being a database client, or both (in the case of an application server that uses an Oracle database), then the network latency between the database client and the database server is a critical property.
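As a rough illustration (not from the original post), one crude way to get a feel for that latency is to time trivial single-row fetches from the client machine; once the statement is cached, each execution is dominated by the network round trip:

```sql
-- Crude round-trip check from SQL*Plus on the client (illustration only):
-- with a warm cursor, each SELECT costs little more than one network round
-- trip, so the reported elapsed time approximates client/server latency.
set timing on
select * from dual;
select * from dual;
select * from dual;
set timing off
```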
One of the optimizer enhancements that appeared in 12.2 for SQL is the “band join”, which makes certain types of merge join much more efficient. Consider the following query (I’ll supply the SQL to create the demonstration at the end of the posting), which joins two tables of 10,000 rows each using a “between” predicate on a column which (just to make it easy to understand the size of the result set) happens to be unique with sequential values, though there’s no index or constraint in place:
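A sketch of the shape of such a query, assuming two tables t1 and t2 that each have a unique, sequentially valued id column (the names are illustrative, not the original demo):

```sql
-- Illustrative band join: each t1 row matches the t2 rows whose id falls
-- within a fixed-width band around t1.id.
select
        t1.id, t2.id
from
        t1, t2
where
        t2.id between t1.id - 1
                  and t1.id + 1
;
```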
I ran across a 2013 article from Straight Talk on Agile Development by Alex Kuznetsov, and it reminded me how long we’ve been battling for easier ways of doing agile in RDBMS environments.
For general SLOB information, please visit: https://kevinclosson.net/slob.
This is just a quick blog entry to highlight a few of the publications from IT vendors showcasing SLOB. SLOB allows performance engineers to speak in short sentences. As I’ve pointed out before, SLOB is not used to test how well Oracle handles transactions. If you are worried that Oracle cannot handle transactions then you have bigger problems than what can be tested with SLOB. SLOB is how you test whether, or how well, a platform can satisfy SQL-driven database physical I/O.
There’s a thread running on OTN at present about deleting huge volumes of duplicated data from a table (to reduce it from 1.1 billion to about 22 million rows). The thread isn’t what I’m going to talk about, though, other than quoting some numbers from it to explain what this post is about.
There are more configurations for AWS than there are fish in the sea, but as the rush of folks arrives to test out the incredibly cool AWS Trial for Delphix, I’ll add my rendition of what to look for to know your AWS setup is prepped to deploy successfully.
I’ve been at Delphix for just over six months now. In that time, I’ve been working with a number of great people on initiatives surrounding competitive analysis, the company roadmap, and some new projects.