It’s no secret that some tasks overwhelm existing database architectures, making projects consistently come in over budget and behind schedule. This holiday season, on behalf of Database Administrators everywhere, we’d like to ask Santa for:
An end to the constant struggle for more and more disk space for databases and copies
An army of smart elves to run and test backups
Less dependence on physical/virtual administration and storage teams to create working environments for development and QA
With every release of Oracle, more and more power comes to the optimizer. Much of this arrives as new features (such as adaptive cursor sharing, adaptive optimization, dynamic sampling, etc)…but within the "core" of the optimizer there are also continuing efforts to transform and interrogate SQL statements to try to derive a ‘smarter’ query and hence, hopefully, a smarter plan.
It’s always a balancing act…how much can you re-jig a query without running the risk of actually changing what it does?
Here’s an example of where that balance is slightly wrong in 12c.
In my blogpost When the Oracle wait interface isn’t enough I showed how a simple asynchronous direct path scan of a table was spending more than 99% of its time on CPU, and that perf showed me that 68% of the total elapsed time was spent on a spinlock unlock in the Linux kernel, called by io_submit().
This led to some very helpful comments from Tanel Poder. This blogpost is a materialisation of his comments, with tests to show the difference.
First, take a look at what I gathered from ‘perf’ in the first article:
Production data is clearly critical for the core of businesses, such as
My friend Kyle wrote a blog piece a while back
Now, this is not to criticise Delphix (in fact, the opposite – it’s a very cool product, and you should read some of Kyle’s great blog content on it), but if you don’t have it, or can’t get it, then as long as you have some imagecopy backups and an NFS server hanging around, you can get to the "next best thing" using Oracle’s Direct NFS feature.
So…how close can we get to 1 minute 22 seconds? Let’s try on a laptop :-)
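As a minimal sketch of the "next best thing" approach: Oracle’s CloneDB facility (11.2.0.2 and later, driven by the `clonedb.pl` script shipped under `$ORACLE_HOME/rdbms/install`) can thin-provision a clone on top of read-only image copies served over Direct NFS. All paths and names below are hypothetical placeholders, not from the original post:

```shell
# Hypothetical paths: an NFS mount holding the RMAN image copy backups,
# and a destination for the clone's sparse (copy-on-write) datafiles.
export MASTER_COPY_DIR=/nfs/backups/imagecopy
export CLONE_FILE_CREATE_DEST=/u01/clones/dev1
export CLONEDB_NAME=DEV1

# clonedb.pl reads the source init.ora and generates two scripts:
# one to create the clone instance, one to attach/rename the datafiles
# (with the CLONEDB=TRUE init parameter set in the generated pfile).
perl $ORACLE_HOME/rdbms/install/clonedb.pl initPROD.ora crtdb.sql dbren.sql
```

Because the clone only stores changed blocks locally, the space and time cost of spinning up a dev/QA copy drops dramatically compared with a full restore.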
A very common request I get from my customers is to parameterize the query executed by a Hive action in their Oozie workflow.
For example, the dates used in the query may depend on the result of a previous action. Or they may depend on something completely external to the system – the operator just decides to run the workflow on specific dates.
There are many ways to do this, including using EL expressions or capturing the output of a shell or Java action.
Here’s an example of how to pass the parameters through the command line. This assumes that whoever triggers the workflow (a human or an external system) has the correct value and just needs to pass it to the workflow so it can be used by the query.
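As a sketch of the command-line approach (the property name `wf_date` and the date value are hypothetical, not from the original post), the Oozie CLI’s `-D` option can override or supply a workflow property at submission time:

```shell
# Hypothetical sketch: pass a date into the workflow at submission time.
# job.properties holds the usual settings (oozie.wf.application.path, etc).
oozie job -oozie http://oozie-host:11000/oozie \
          -config job.properties \
          -Dwf_date=2014-12-01 \
          -run

# Inside workflow.xml, the Hive action forwards the property to the query:
#   <param>wf_date=${wf_date}</param>
# and the HQL script references it as ${wf_date}.
```

The same property could equally be set in `job.properties`; `-D` simply lets the caller override it per run.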
Here’s what the query looks like:
There are a number of tools out there to do I/O benchmark testing such as
My choice for best of breed is fio
(thanks to Eric Grancher for suggesting fio).
Please feel free to share your own experiences with these tools or others in the comments!
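As a minimal sketch of what an fio run might look like (the file path, size, and I/O parameters here are illustrative, not from the original post):

```shell
# Illustrative fio run: 8k random reads against a 1 GiB test file,
# using the libaio engine with direct I/O to bypass the page cache.
fio --name=randread-test \
    --filename=/tmp/fio.testfile \
    --rw=randread \
    --bs=8k \
    --size=1g \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=16 \
    --runtime=30 \
    --time_based
```

An 8k block size is a natural choice when the goal is to approximate an Oracle database’s default block-sized random-read workload.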
It’s not uncommon for people to expound the functionality, performance and features of Oracle on the one hand, whilst on the other lamenting the product’s potentially high cost.
I’m not pontificating here – I’m commonly one of these people. So much good stuff in Oracle…yet so much to pay to get that good stuff :-(
So in the interests of fairness, I thought I’d share a little story where an Oracle solution was implemented with total expenditure of: ZERO