Oakies Blog Aggregator

NoSQL and Oracle, Sex and Marriage

At last week’s Dallas Oracle Users Group meeting, an Oracle DBA asked me, “With all the new database alternatives out there today, like all these open source NoSQL databases, would you recommend for us to learn some of those?”

I told him I had a theory about how these got so popular and that I wanted to share that before I answered his question.

My theory is this. Developers perceive Oracle as being too costly, time-consuming, and complex:

  • An Oracle Database costs a lot. If you don’t already have an Oracle license, it’s going to take time and money to get one. On the other hand, you can just install MongoDB today.
  • Even if you have an Oracle site-wide license, the Oracle software is probably centrally controlled. To get an installation done, you’re probably going to have to negotiate, justify, write a proposal, fill out forms, you know, supplicate yourself to—er, I mean negotiate with—your internal IT guys to get an Oracle Database installed. It’s a lot easier to just install MySQL yourself.
  • Oracle is too complicated. Even if you have a site license and someone who’s happy to install it for you, it’s so big and complicated and proprietary... The only way to run an Oracle Database is with SQL (a declarative language that is alien to many developers) executed through a thick, proprietary, possibly even difficult-to-install layer of software like Oracle Enterprise Manager, Oracle SQL Developer, or sqlplus. Isn’t there an open source database out there that you could just manage from your operating system command line?

When a developer is thinking about installing a database today because he needs one to write his next feature, he wants something cheap, quick, and lightweight. None of those constraints really sounds like Oracle, does it?

So your Java developers install this NoSQL thing, because it’s easy, and then they write a bunch of application code on top of it. Maybe so much code that there’s no affordable way to turn back. Eventually, though, someone will accidentally crash a machine in the middle of something, and there’ll be a whole bunch of partway finished jobs that die. Out of all the rows that are supposed to be in the database, some will be there and some won’t, and so now your company will have to figure out how to delete the parts of those jobs that aren’t supposed to be there.

Because now everyone understands that this kind of thing will probably happen again, too, the exercise may well turn into a feature specification for various “eraser” functions for the application, which (I hope, anyway) will eventually lead to the team discovering the technical term transaction. A transaction is a unit of work that must be atomic, consistent, isolated, and durable (that’s where the acronym ACID comes from). If your database doesn’t guarantee that every arbitrarily complex unit of work (every transaction) makes it either 100% into the database or not at all, then your developers have to write that feature themselves. That’s a big, tremendously complex feature. On an Oracle Database, the transaction is a fundamental right given automatically to every user on the system.
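The atomicity guarantee is easy to demonstrate with any ACID-compliant store. Here is a minimal sketch using SQLite (chosen purely because it ships with Python; the point applies to any transactional database): a multi-row unit of work either commits entirely or, when the job dies partway through, leaves nothing behind to erase by hand.

```python
import sqlite3

# An in-memory SQLite database standing in for any ACID-compliant store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, step TEXT)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO jobs VALUES (1, 'extract')")
        conn.execute("INSERT INTO jobs VALUES (2, 'transform')")
        raise RuntimeError("machine crashed mid-job")  # simulate the failure
except RuntimeError:
    pass

# Neither row survived: no half-finished job for anyone to clean up later.
count = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(count)  # -> 0
```

Without that built-in guarantee, every one of those "eraser" functions would have to be designed, written, and tested by the application team.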

Let’s look at just that ‘I’ in ACID for a moment: isolation. How big a deal is transaction isolation? Imagine that your system has a query that runs from 1 pm to 2 pm. Imagine that it prints results to paper as it runs. Now suppose that at 1:30 pm, some user on the system updates two rows in your query’s base table: the table’s first row and its last row. At 1:30, the pre-update version of that first row has already been printed to paper (that happened shortly after 1 pm). The question is, what’s supposed to happen at 2 pm when it comes time to print the information for the final row? You should hope for the old value of that final row—the value as of 1 pm—to print out; otherwise, your report details won’t add up to your report totals. However, if your database doesn’t handle that transaction isolation feature for you automatically, then either you’ll have to lock the table when you run the report (creating a 30-minute-long performance problem for the person wanting to update the table at 1:30), or your query will have to make a snapshot of the table at 1 pm, which is going to require both a lot of extra code and that same lock I just described. On an Oracle Database, high-performance, non-locking read consistency is a standard feature.
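The report problem can be modeled in a few lines of plain Python (no database at all, just a list standing in for the base table) to show why, without isolation, the printed details end up matching no point in time:

```python
# A toy model of the long-running report described above.
table = [100, 200, 300, 400]     # base table as of 1 pm
total_at_1pm = sum(table)        # the total the report header promised: 1000

printed = []
for i, value in enumerate(table):
    printed.append(value)        # "print" each row as the scan reaches it
    if i == 0:                   # at 1:30, after the first row has printed...
        table[0] = 150           # ...a user updates the first row
        table[-1] = 450          # ...and the last row

# The report saw the OLD first row but the NEW last row: a mixed state
# that corresponds to no single moment in time.
print(sum(printed), total_at_1pm, sum(table))  # -> 1050 1000 1100
```

The printed detail lines sum to 1050, which matches neither the 1 pm total (1000) nor the 2 pm total (1100). Non-locking read consistency exists precisely to make the scan see the 1 pm version throughout.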

And what about backups? Backups are intimately related to the read consistency problem, because backups are just really long queries that get persisted to some secondary storage device. Are you going to quiesce your whole database—freeze the whole system—for whatever duration is required to take a cold backup? That’s the simplest sounding approach, but if you’re going to try to run an actual business with this system, then shutting it down every day—taking downtime—to back it up is a real operational problem. Anything fancier (for example, rolling downtime, quiescing parts of your database but not the whole thing) will add cost, time, and complexity. On an Oracle Database, high-performance online “hot” backups are a standard feature.

The thing is, your developers could write code to do transactions (read consistency and all) and incremental (“hot”) backups. Of course they could. Oh, and constraints, and triggers (don’t forget to remind them to handle the mutating table problem), and automatic query optimization, and more, ...but to write those features Really Really Well™, it would take them 30 years and a hundred of their smartest friends to help write it, test it, and fund it. Maybe that’s an exaggeration. Maybe it would take them only a couple years. But Oracle has already done all that for you, and they offer it at a cost that doesn’t seem as high once you understand what all is in there. (And of course, if you buy it on May 31, they’ll cut you a break.)

So I looked at the guy who asked me the question, and I told him, it’s kind of like getting married. When you think about getting married, you’re probably focused mostly on the sex. You’re probably not spending too much time thinking, “Oh, baby, this is the woman I want to be doing family budgets with in fifteen years.” But you need to be. You need to be thinking about the boring stuff like transactions and read consistency and backups and constraints and triggers and automatic query optimization when you select the database you’re going to marry.

Of course, my 15-year-old son was in the room when I said this. I think he probably took it the right way.

So my answer to the original question—“Should I learn some of these other technologies?”—is “Yes, absolutely,” for at least three reasons:

  • Maybe some development group down the hall is thinking of installing MongoDB this week so they can get their next set of features implemented. If you know something about both MongoDB and Oracle, you can help that development group and your managers make better informed decisions about that choice. Maybe MongoDB is all they need. Maybe it’s not. You can help.
  • You’re going to learn a lot more than you expect when you learn another database technology, just like learning another natural language (like English, Spanish, etc.) teaches you things you didn’t expect to learn about your native language.
  • Finally, I encourage you to diversify your knowledge, if for no other reason than your own self-confidence. What if market factors conspire in such a manner that you find yourself competing for an Oracle-unrelated job? A track record of having learned at least two database technologies is proof to yourself that you’re not going to have that much of a problem learning your third.

Re-forming the Martin Cluster in Norway

Later this month, on April 17-20th, I am presenting again at the Norwegian Oracle user group (OUGN) spring conference {modern browsers will allow you to translate any Norwegian if you need to}. I loved it last year, as you can see from my third-day post on it. I’ve been lucky to attend some very good conferences over the last few years but those three days at the OUGN conference last year were, I think, my favourite single event to date. If you are within striking distance of Oslo and can negotiate the time out of the office, I would strongly recommend the event. If you can’t negotiate the time, heck, take a holiday and go anyway :-)

Part of what I really enjoyed about the event was the fact that two of the days are spent on a ferry/cruise ship from Oslo to Kiel and back. And oddly enough, that is what initially put me off going to the conference – I am very susceptible to seasickness. I had no problems though, partly due to the large quantities of travel calm pills I swallowed, partly due to the good weather, but mostly because the talks were excellent and the atmosphere was fantastic. I don’t just mean “hey, it was a bit like a holiday” {though in some ways it was, as it was such fun}, but that because the speakers and the attendees can’t escape, at least not without a long swim, everyone is around all day and all evening. It just adds to the whole event. I spoke to more “new” people during that conference than I ever have before.

At most conferences the presentations at the end of the day tend to be less well attended and there can be a feeling of wind-down, especially on the last day. A fair number of people feel the need to make an early exit to get home before the worst of the traffic or are under pressure to get back to the office and just sort out some issue that is pressing. The people around in the evening tend to be the presenters and the conference die-hards and so are usually the same sad old bunch of geezers and gals :-) . However, on the OUGN Boat this is not the case. All sessions tend to be well attended and in the evening nearly everyone is out having something to eat, a few drinks (those North Europeans sure do like the odd drink, but in a relaxed and affable way) and just being sociable.

Over the last few years the conference has developed a reputation for being technically strong too. This is of course partly due to the excellent atmosphere attracting good presenters, and the good presenters in turn help make the conference better, more popular and well attended – and that in turn attracts more presenters. A nice positive feedback loop. I certainly learnt a lot of stuff last year and I cannot think of a poor presentation that I attended. Hmm, maybe one of mine was a little weak :-| . The organisers do an excellent job of helping the presenters feel relaxed and appreciated too. For example, I was nervous about the boat part of the trip, so they gave me one slot on the mainland the day before we sailed and suggested I could bail out at Kiel if I was suffering. As a presenter, that sort of consideration counts for a lot. I don’t want or expect to be treated like some minor celebrity and I was not, but for the whole conference I just felt like the organisers appreciated my taking time out from work and flying out to come and present.

The final reason I am looking forward to the event (and thus the odd title) is the re-forming of the Martin Oracle Cluster :-) – this is myself, Martin Nash and Martin Bach. We all do several conferences a year, we all used to go along to the London Oracle Beers and we have become good friends. Other Oracle Martins are welcome to join us – at the OUGN last year there was also Martin Büchi, who I had not met before, but this year I think we are the only Martins presenting. We just don’t seem to have managed to re-form the cluster for many months now, partly as Mr Bach returned to Germany.


Martin Nash – Martin Büchi – Martin Bach – Martin Widlake
Thanks to Øyvind Isene for the picture.

I suppose I should mention what I am presenting on? Well, as I mentioned in my last Friday Philosophy, I am concentrating more on introductory presentations. You can see my official agenda here. I am doing:

  • an introductory presentation on Row Level Security, VPD and hiding rows or columns of data {it will be interesting to see how large the audience is for that one!}
  • an introduction to SQL tuning where I cover the absolute basics, but hopefully in a way that allows those new to it (or maybe even not so new) to treat tuning as a logical and sensible process, as opposed to Black Magic of favourite hints and arcane practices
  • my disasters talk. I love giving my disasters talk. I’ve “been in the vicinity” of a lot of disasters and I only ever talk about things I have seen first hand, so no urban myths.

RMOUG and Hotsos

Recently I spoke at the RMOUG Training Days 2013 in Denver (the Mile High City). It was the first time for me speaking at RMOUG and being in Denver. Thanks to the “sequestration” (federal budget cuts), the lines piled up at immigration at Minneapolis (Minneapolis and Saint Paul, the Twin Cities) airport, and because my plane left more than an hour late and my layover time was one hour and fifteen minutes, I tried to rebook my flight from Minneapolis to Denver. But this flight turned out to be delayed too, which meant I was able to get on it after all!

At Denver airport Tim Gorman volunteered to fetch me and bring me to my hotel. Tim fetched multiple people, which meant we gathered at a bar where Debra Lilley, Mogens Norgaard, Claes (the tank commander) and Martin Widlake, amongst others, were already waiting.

The RMOUG Training Days were held in the Denver conference centre, which has a very iconic blue bear at the front:


This is the same conference centre where Collaborate 2013 will be held!

I delivered 3 talks: about multiblock reads (which I will be presenting at Collaborate too, but named “How Oracle secretly changed multiblock reads”), Exadata OLTP (a study on processing single block reads on Exadata and non-Exadata configurations using Kevin Closson’s SLOB (Silly Little Oracle Benchmark)), and a live Oracle function call tracing hacking session. The idea for the last presentation was conceived just prior to the conference, when I learned some people didn’t make it to Denver and there were session slots to be filled. All three of these presentations will be presented at the Oracle Usergroup Norway Vårseminar 2013 too!

After the conference we went skiing for a few days in Breckenridge with a group of friends, organised by Kellyn Pot’vin and Tim Gorman. Thank you Kellyn and Tim for all the work at both RMOUG and with the famous ‘Faux Table’ event; I really appreciate all the effort you put into these events! I also want to mention the many other people who put effort into making things happen at the Faux Table: cooking, washing, driving, etc.

A few weeks further down the line was the annual Hotsos conference in Dallas. I travelled with Toon Koppelaars. Once again we found a long line for immigration, but eventually we were through quickly enough to fetch a beer at a bar near the gate of the connecting flight, where each and every seat was equipped with an iPad which you had to use to order something. Remarkable. At the conference I delivered my ‘About multiblock reads’ presentation. I was very pleased to be invited to speak at a conference which is dedicated to Oracle and performance. Once again this was a terrific meet-up with a huge number of friends. This is an impression from the game night: Karl Arao from Enkitec tuning a pile of wooden blocks (Jenga), overlooked by Gwen Shapira from Pythian and Mark Farnham.
There was a large number of Dutch people at this year’s Hotsos conference: Toon Koppelaars, with whom I travelled, Marco Gralike, Gerwin Hendriksen and Jacco Landlust.

I would like to thank my employer VX Company for their support in making this possible.



I’ll be speaking on Monday, Wednesday and Thursday:

  • Boosting performance on reporting and development databases
    April 8, 2:30-3:30pm (Mile High Ballroom 2B)
  • Database virtualization: accelerating application development
    April 10, 1:00-2:00pm (Mile High Ballroom 2B)
  • NFS tuning for Oracle
    April 11, 8:30-9:30am (Mile High Ballroom 3A)

Attend COLLABORATE online

IOUG Forum for the Best in User-Driven Oracle Education and Networking

Have you made your Oracle training plans for 2013? As an IOUG RUG leader, I’d like to extend a personal invitation to the conference that I continue to attend for truly valuable personal and professional growth – COLLABORATE 13 – IOUG Forum, April 7-11 at the Colorado Convention Center in Denver, CO.

COLLABORATE is interactive: have your most difficult questions answered by other IOUG Oracle experts who’ve been there and come out swinging.

COLLABORATE is diverse: attend education sessions across all of the tracks offered at IOUG Forum, or pick the one that fits you best! Choose from Database, Business Intelligence, Big Data, Exadata, Security and more!

COLLABORATE is personal: tap into the shared resources of hundreds of IOUG members, Oracle ACEs and more! Meet new friends and contacts throughout the week and strengthen ties with existing business partners.

COLLABORATE is an awesome, big conference of over 6,000 attendees with many opportunities to network, learn new technology and see what is happening in the industry.

By registering specifically with IOUG for COLLABORATE 13, you’ll reap the following benefits:

  • Pre-Conference Workshops
    On Sunday, April 7, IOUG offers an extra, comprehensive day of Oracle training complimentary for IOUG Forum attendees.
  • IOUG Curricula
    The IOUG Curricula are designed to give well-rounded training and education on specialized topics, chosen by peers, throughout COLLABORATE.
  • IOUG IT Strategic Leadership Package
    Get the chance to discuss corporate strategy with senior IT management and improve your skills at influencing without authority, presenting business cases and helping develop business solutions for your company.
  • Access to hands-on labs
    Sessions on “Upgrade to the Latest Generation of Oracle Database and Consolidate Your Databases Using Best Practices”, “Hacking, Cracking, Attacking – OH MY!” and RAC Attack.
  • Networking
    Admission to IOUG’s exclusive networking activities, including our Sunday evening reception
  • Conference Proceedings
    Online access to conference proceedings before, during and after the event

You will also be able to attend education sessions offered by the OAUG and Quest user groups with your COLLABORATE 13 registration through the IOUG. Make plans to join me and thousands of your Oracle peers to soak in the solutions at this unique Oracle community event.

If you are an Oracle technology professional, be sure to secure your registration through the IOUG to obtain exclusive access to IOUG offerings.

Please contact IOUG headquarters if you have any questions.

I hope to see you in Denver!

Best Regards,

Kyle Hailey

Delphix Overview

Update: Here’s the link to the recording of the webinar

I’ll be online tomorrow morning (Friday 5th, 9:00 am Pacific time, 5:00 pm UK) in a webinar with Kyle Hailey to talk about my first impressions of Delphix, so I thought I’d write up a few notes beforehand.

I’ve actually installed a complete working environment on my laptop to model a production setup. This means I’ve got three virtual machines running under VMware: my “production” machine (running Oracle on OEL 5, 64-bit), a “development” machine (which has the software installed, again on OEL 5, 64-bit), and a machine which I specified as Open Solaris 10, 64-bit for the Delphix server VM (pre-release bloggers’ version). The two Linux servers are running with 2.5GB of RAM, the Delphix server is running with 8GB RAM, and all three machines are running 2 virtual CPUs. (My laptop has an Intel quad core i7, running two threads per CPU, 16GB RAM, and 2 drives of 500GB each.) The Linux machines were simply clones of another virtual machine I previously prepared, and the purpose of the exercise was simply to see how easy it would be to “wheel in” a Delphix server and stick it in the middle. The answer is: “pretty simple”. (At some stage I’ll be writing up a few notes about some of the experiments I’ve done on that setup.)

To get things working I had to create a couple of UNIX accounts for a “delphix” user on the Linux machines, install some software, give a few O/S privileges to the user (mainly to allow it to read and write a couple of Oracle directories), and a few Oracle privileges. The required Oracle privileges vary slightly with the version of Oracle and your preferred method of operation, but basically the delphix user needs to be able to run rman, execute a couple of Oracle packages, and query some of the dynamic performance views. I didn’t have any difficulty with the setup, and didn’t see any threats in the privileges that I had to give to the delphix user. The last step was simply to configure the Delphix server to give it some information about the Linux machines and accounts that it was going to have access to.

The key features about the Delphix server are that it uses a custom file system (DxFS, which is based on ZFS with a number of extensions and enhancements) and it exposes files to client machines through NFS; and there are two major components to the software that make the whole Delphix package very clever.

Oracle-related mechanisms

At the Oracle level, the Delphix server sends calls to the production database server to take rman backups (initially a full backup, then incremental backups “from SCN”); between backup requests it also pulls the archived redo logs from the production server – or can even be configured to copy the latest entries from the online redo logs a few seconds after they’ve been written (which is one of the reasons for requiring privileges to query some of the dynamic performance views, but the feature does depend on the Oracle version).

If you want to make a copy of the database available, you can use the GUI interface on the Delphix server to pick a target machine, invent a SID and service name, and pick an SCN (or approximate timestamp) that you want the database to start from, and within a few minutes the Delphix server will have combined all the necessary backup pieces, applied any relevant redo, and configured your target machine to start up an instance that can use the (NFS-mounted) database that now exists on the Delphix server. I’ll explain in a little while why this is a lot cleverer than a simple rman “restore and recover”.


Supporting the Oracle-related features, the other key component of the Delphix server is the Delphix file-system (DxFS). I wrote a little note a few days ago to describe how Oracle can handle “partial” updates to LOB values – the LOB exists in chunks with an index on (lob_id, chunk_number) that allows you to pick the right chunks in order. When you update a chunk in the LOB Oracle doesn’t really update the chunk, it creates a new chunk and modifies the index to point at it. If another session has a query running that should see the old chunk, though, Oracle can read the index “as at SCN” (i.e. it creates a read consistent copy of the required index blocks) and the read-consistent index will automatically be pointing at the correct version of the LOB chunk. DxFS does the same sort of thing – when a user “modifies” a file system block DxFS doesn’t overwrite the original copy, it writes a new copy to wherever there’s some free space and maintains some “indexing” metadata that tells it where all the pieces are. But if you never tell the file system to release the old block you can ask to see the file as at a previous point in time at no extra cost!

But DxFS is even cleverer than that because (in a strange imitation of the “many worlds” interpretation of quantum theory) a single file can have many different futures. Different users can be identified as working in different “contexts”, and the context is part of the metadata describing the location of blocks that belong to the file. Imagine we have a file with 10 blocks sitting on DxFS – in your context you modify blocks 1, 2 and 3, but at the same time I modify blocks 1, 2 and 3 in my context. Under DxFS there are now 16 blocks associated with that file – the original 10, your three modified blocks, and my three modified blocks – and, depending on timestamp and context, someone else could ask to see any one of three different versions of that file: the original version, your version, or my version.
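The copy-on-write and context ideas can be sketched in a few lines of code (my own toy model, not Delphix’s actual DxFS implementation): blocks are never overwritten, each write under a context records a new copy, and a read resolves (context, block) with a fallback to the original blocks.

```python
# Toy copy-on-write file with per-context versions, illustrating the
# "one file, many futures" idea from the text above.
class CowFile:
    def __init__(self, blocks):
        self.base = list(blocks)     # original blocks, never modified
        self.versions = {}           # (context, block_no) -> new block data

    def write(self, context, block_no, data):
        # Record a new copy; the old block stays on "disk" untouched.
        self.versions[(context, block_no)] = data

    def read(self, context, block_no):
        # A context sees its own copies, falling back to the originals.
        return self.versions.get((context, block_no), self.base[block_no])

f = CowFile(["b%d" % i for i in range(10)])   # a 10-block file
for n in range(3):
    f.write("yours", n, "Y%d" % n)            # you modify blocks 0-2
    f.write("mine", n, "M%d" % n)             # I modify the same blocks

# Three divergent versions of the file, 16 physical blocks in total.
print(f.read("yours", 0), f.read("mine", 0), f.read("original", 0))  # -> Y0 M0 b0
print(len(f.base) + len(f.versions))  # -> 16
```

A read under an unknown context (here, "original") simply sees the base blocks, which is exactly the point-in-time behaviour described above.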

Now think of that in an Oracle context. If we copy an entire set of database files onto DxFS, then NFS-mount the files on a machine with Oracle installed, we can configure and start up an instance to use those files. At the same time we could NFS-mount the files on another machine, configuring and starting another instance to use the same data files at the same time! Any blocks changed by the first instance would be written to disc as private copies, and any blocks changed by the second instance would be written to disc as private copies – if both instances managed to change 1% of the data in the course of the day then DxFS would end up holding 102% of the starting volume of data: the original datafiles plus the two sets of changed blocks – but each instance would think it was the sole user of its version of the files.

There’s another nice (database-oriented) feature to Delphix, though. The file system has built-in compression that operates at the “block” level. You can specify what you mean by the block size (and for many Oracle sites that would be 8KB) and the file system would transparently apply a data compression algorithm on that block boundary. So when the database writer writes an 8KB block to disc, the actual disc space used might be significantly less than 8KB, perhaps by a factor of 2 to 3. So in my previous example, not only could you get two test databases for the space of 1 and a bit – you might get two test databases for the space of 40% or less of the original database.
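The combined effect of the two features is simple arithmetic (using the figures assumed in the text above: 1% of blocks changed per instance and a compression factor of roughly 2.5, which are illustrative numbers rather than measurements):

```python
# Back-of-envelope storage arithmetic for two virtual databases sharing
# one copy-on-write image, with block-level compression applied.
original = 1.00                   # full database size, normalized to 1
changed_per_instance = 0.01       # each instance changes 1% of the blocks
compression_factor = 2.5          # assumed block-level compression ratio

uncompressed_footprint = original + 2 * changed_per_instance   # "1 and a bit"
compressed_footprint = uncompressed_footprint / compression_factor

print(round(uncompressed_footprint, 2))  # -> 1.02
print(round(compressed_footprint, 3))    # -> 0.408
```

So two test databases fit in roughly 41% of the space of the original, which matches the “40% or less” figure in the text.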

Delphix vs. rman

I suggested earlier on that Delphix can be a lot cleverer than an rman restore and recover. If you take a full backup to Delphix on Sunday, and a daily incremental backup (let’s pretend that’s 1% of the database per day) for the week, then Delphix can superimpose each incremental onto the full backup as it arrives. So on Monday we construct the equivalent of a full Monday backup, on Tuesday we construct the equivalent of a full Tuesday backup, and so on. But since DxFS keeps all the old copies of blocks, this means two things: we can point an instance at a full backup for ANY day of the week simply by passing a suitable “timestamp” to DxFS, and we’ve got 7 full backups for the space of 107% of a single full backup.
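The space arithmetic works out like this (assuming, as above, that each daily incremental touches 1% of the database):

```python
# One full backup plus seven 1% daily incrementals, with old block copies
# retained, gives seven restorable full images for ~107% of one full backup.
full = 1.00                 # Sunday full backup, normalized to 1
incrementals = 7            # one 1% incremental per day for the week
daily_change = 0.01

space = full + incrementals * daily_change   # total space consumed
restorable_fulls = incrementals              # any day's image on demand

print(round(space, 2), restorable_fulls)  # -> 1.07 7
```

With plain rman you would either store seven full backups (700% of the space) or pay the restore-and-recover time whenever you wanted an intermediate day.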

There is lots more to say, but I think it will have to wait for tomorrow’s conversation with Kyle, and for a couple more articles.

Register of Interests / Disclosure

Delphix Corp. paid my consultancy rates and expenses for a visit to the office in Menlo Park to review their product.

And now, ...the video

First, I want to thank everyone who responded to my prior blog post and its accompanying survey, where I asked when video is better than a paper. As I mentioned already in the comment section for that blog post, the results were loud and clear: 53.9% of respondents indicated that they’d prefer reading a paper, and 46.1% indicated that they’d prefer watching a video. Basically a clean 50/50 split.

The comments suggested that people have a lower threshold for “polish” with a video than with a paper, so one way I’ve modified my thinking is to just create decent videos and publish them without expending a lot of effort on editing.

But how?

Well, the idea came to me in the process of agreeing to perform a 1-hour session for my good friends and customers at The Pythian Group. Their education department asked me if I’d mind if they recorded my session, and I told them I would love it if they would. They encouraged me to open it up to the public, which of course I thought (cue light bulb over head) was the best idea ever. So, really, it’s not a video as much as it’s a recorded performance.

I haven’t edited the session at all, so it’ll have rough spots and goofs and imperfect audio... But I’ve been eager ever since the day of the event (2013-02-15) to post it for you.

So here you go. For the “other 50%” who prefer watching a video, I present to you: “The Method R Profiling Ecosystem.” If you would like to follow along in the accompanying paper, you can get it here: “A First Look at Using Method R Workbench Software.”

Not “how”, but “why” should we upgrade to JDeveloper & ADF ?

Followers of the blog know I’m an Oracle database guy, but my current job also has me honing my newbie WebLogic 11g skills, setting up a number of servers to deliver ADF and Forms & Reports 11gR2 applications.

As you’ve no doubt heard, Oracle have just released a new version of JDeveloper and ADF. I tried applying the patch to a WebLogic 11g (10.3.6) installation and it worked without any problems (see here).

The real issue is, we currently have developers working hard to get applications converted from AS10g to ADF running on WebLogic 11g (10.3.6). As much as I would like to “force” them to upgrade, it has to be justified. So why should we upgrade to the new JDeveloper & ADF?

One of the great things about the Oracle ACE program is the level of access you get to experts in a variety of Oracle technologies. This network of people includes both Oracle ACEs and Oracle employees.

So how did I go about answering my question? Simple! I emailed my buddy Chris Muir (Oracle ADF Product Manager at Oracle), who is far better qualified to answer than me. :) In that email I asked the following three questions:

  1. Assuming we don’t need the extra functionality in ADF, what is the advantage of moving to it? Are the bug fixes and maybe browser compatibility changes enough to warrant the upgrade?
  2. Is there a significance as far as support lifecycle is concerned?
  3. Is the upgrade likely to break anything that has already been converted?

I suggested Chris might want to write a blog post based on these questions. He suggested a remote Q&A style post, so this is the “Q” and Chris will supply the “A” here!



Not “how”, but “why” should we upgrade to JDeveloper & ADF ? was first posted on April 3, 2013 at 10:02 pm.
©2012 "The ORACLE-BASE Blog".

Hotsos Revisited 2013 – Presentation Material

Thanks once more to everyone who attended the once again well-filled, informative and convivial evening at “Hotsos Revisited 2013″. We presenters enjoyed the ambiance. And for those who would like to read through it again, here is the presentation material from Toon, Jacco, Gerwin, Frits and me… Presentation material in alphabetical order: …


A High Performance Architecture for Virtual Databases

Delphix completely changes the way companies do business by removing critical bottlenecks in development productivity. Gone are the days when a simple database clone operation from production to development takes hours or even days. With Delphix database virtualization, cloning of Oracle databases becomes a fast and practically free operation. Additionally, infrastructure costs can be drastically reduced due to decreased disk utilization in your cloned environments; in fact, the more environments you deploy, the more cost savings you will realize.

The old adage is that your options are Fast, Good, and Cheap; pick two of three. But as a recent performance study between Delphix and IBM shows, all three options are within reach.

  • Fast database deployments due to Delphix database virtualization technology
  • High performance results through shared block caching in Delphix memory
  • Reduced costs through compression, block mapping, and other Delphix core capabilities

Delphix and IBM partnered to design a high performance virtual database implementation that could be used for reporting, development, quality assurance (QA), and user acceptance testing (UAT) at reasonable costs. In the end, the research shows that the virtual database environments provisioned with Delphix can achieve strong performance levels and scalability suitable for any organization.

The average organization provisions multiple copies of a single production database, which can include copies for development, QA, UAT, stress testing, maintenance QA, and more. Each of these copies takes time to build and provision, and the end result is several identical instances with their own copies of largely identical data blocks across several shared memory regions. Delphix achieves a great reduction in disk usage by saving identical blocks to disk only once. More importantly, Delphix also caches only unique data blocks in memory. This means that read performance across multiple systems provisioned from a single source can show dramatic improvements and scalability.
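The block-sharing idea can be sketched in a few lines. This is a toy illustration, not Delphix code: blocks are keyed by a content hash, so identical blocks referenced by several clones are stored (and cached) only once, while each clone keeps just a lightweight block map.

```python
import hashlib

class SharedBlockStore:
    """Toy content-addressed block store: identical blocks are kept once."""

    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored/cached once)
        self.clones = {}   # clone name -> list of digests (block map)

    def add_clone(self, name, blocks):
        # Each clone records only digests; duplicate data is shared.
        digests = []
        for block in blocks:
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)
            digests.append(d)
        self.clones[name] = digests

    def unique_bytes(self):
        return sum(len(b) for b in self.blocks.values())

# Five clones of the same 4-block "database" store the data only once.
source = [b"block-%d" % i * 1024 for i in range(4)]
store = SharedBlockStore()
for n in range(5):
    store.add_clone("vdb%d" % n, source)
print(len(store.blocks))   # 4 unique blocks, not 20
```

The same principle applies to the cache: a read of any of those blocks from any clone warms a single shared copy.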

Purpose of the Tests

Delphix is typically used to create non-production environments from production sources. With a Delphix powered infrastructure you can:

  • Enable high performance virtualized databases for reporting, development, and QA which improves productivity and decreases bottlenecks in deployment schedules
  • Dramatically reduce the cost of non-production environments
  • Reduce the impact of cloning and/or reporting on the production database system
  • Reduce the occurrence of application errors due to inadequate testing on out of date or partially populated development and QA systems

By design, Delphix carries a small performance overhead because I/O is supplied not over a dedicated Fibre Channel connection to storage but over NFS; however, by properly architecting the Delphix infrastructure, environments can be designed that actually run faster than physical deployments. On top of that, the environments come at a lower cost and, as you will see from the research, performance actually gets better as the number of users increases.

Test Goals

In the tests run by IBM and Delphix, physical and virtual databases were tested under several scenarios to identify the effective performance benefits and how best to architect the Delphix server to maximize virtual database performance. The goal of the test was to both determine optimal configuration settings for the Delphix infrastructure and to show optimal I/O throughput on virtualized databases compared to physical database environments.

Test Environment

In order to properly test physical and Delphix virtual database environments, tests were performed concurrently at IBM Labs in Research Triangle Park, NC and Delphix Labs in Menlo Park, CA. Both environments used the same configuration and server hardware.

Generally, a standard physical Oracle deployment involves a database host that connects directly to a SAN or other storage device, typically over fiber. As such, the physical database test performed by IBM used a direct connection to the SAN. For the Delphix tests, however, the SAN is connected directly to the Delphix host via fiber and the Oracle host is connected to Delphix via NFS. This allows Delphix to act as a layer between the Oracle database server and the storage device.


While this extra I/O layer may seem counterproductive, Delphix actually acts as a caching tier (in addition to its many other virtues). A properly configured Delphix server augments the storage subsystem and improves I/O performance on read requests from the Oracle host. The bigger and ‘warmer’ the cache, the greater the performance gain. And if SSD is used for your dedicated disk area, the single-block sharing built into Delphix lets you get more value from that investment.

Storage Configuration

For the purposes of this test, both the physical and virtual database testing environments used the same SAN. The SAN configuration consisted of a 5-disk stripe of 10K RPM spindles. Two 200GB LUNs were cut from the stripe set.

IBM Hardware

One of the main goals in this joint test with IBM was to find an optimal configuration for Delphix on hardware that could provide flexibility and power at an attainable price. To this end we chose the IBM x3690 X5 with Intel Xeon E7 chips: a system that is reasonably priced, powerful, and supports up to 2TB of memory. Delphix does not necessarily require a large amount of memory, but the more memory is available, the better Delphix and your virtualized databases will scale, with extremely fast response times.

The test x3690 servers were configured with 256GB RAM. VMware ESX 5.1 was installed on the systems to serve as the hypervisor for the virtual environments. The Delphix VM itself (did I mention Delphix can run as a virtual guest?) was configured with 192GB RAM. Additionally, two Linux guests were created and configured with 20GB RAM each to act as source and target systems respectively.

In this configuration, the source database is the initial provider of data to Delphix. Following the initial instantiation, change data is incorporated into Delphix to keep the environment up to date and for point-in-time deployment purposes. The target system connects to Delphix via NFS mounts and can have a virtualized database provisioned to it at any time.

Load Configuration with Swingbench

Swingbench (a free database load generation utility) was used to load the database and measure transactions per minute (TPM) on the database host. A 60GB data set was used to populate the source Oracle database, filling a 180GB datafile inside the DB. Tests were run using standard OLTP think times. User load was varied between 1 and 60 concurrent users for more realistic testing. Each test ran for a 60-second window.

Database Configuration

The Swingbench database served as a test bed for the physical database servers and as the source database for Delphix virtual database provisioning.
Delphix is capable of linking to multiple source databases. In this test, it connected and linked to a Swingbench source database. Once the initial link was made, a virtual database was provisioned (extremely quickly) on the target database host.

The Oracle instances on both the physical and virtual systems were configured with a 100MB buffer cache, a fact that is sure to make some administrators cringe. But the buffer cache was intentionally set small to emphasize the impact of database reads: a large cache would demonstrate only that a cache was in use, not any improvement in read performance. With a small cache on the target database instance, the work the benchmark performs shows up at the read level, so I/O latency becomes the key factor in benchmark performance. Caching at the virtualization layer will produce good performance, but uncached data could result in very poor I/O results.

Network Configuration

In the virtual database environment provisioned by Delphix, the network is just as important as storage due to the way the target Oracle host uses NFS to the Delphix server’s attached storage. The performance of NFS and TCP has a direct impact on the I/O latency of virtualized databases.

To reduce I/O latency and increase I/O bandwidth in the virtual database test, the Oracle Linux guest used for the target was placed on the same ESX server as Delphix. By keeping the Delphix tier and the target system on the same hardware and VM infrastructure, NFS and TCP communication incurs no additional physical network latency, since traffic never leaves the host through a physical NIC. This architecture is known as a Pod Architecture, and it eliminates typical network issues such as suboptimal NICs, routers, or congestion.

In the physical environment the network configuration was not a factor, since all processing and communication were performed locally on the database host and storage on the SAN was accessed via fiber.

High Level Results

All tests were run twice – once with a completely cold cache (no data in cache) and once with a warm cache (the full dataset had been touched and a working set of data in cache had been established).
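The cold/warm methodology can be mimicked with a toy cache model (an illustration of the measurement approach, not the actual benchmark): the first pass over the working set misses and goes to “disk”, warming the cache; a second identical pass is served entirely from cache.

```python
class CachedReader:
    """Toy read path: a miss counts as a disk read and populates the cache."""

    def __init__(self):
        self.cache = set()
        self.disk_reads = 0

    def read(self, block_id):
        if block_id not in self.cache:
            self.disk_reads += 1      # cache miss -> physical I/O
            self.cache.add(block_id)

reader = CachedReader()
working_set = range(1000)

for b in working_set:                 # cold-cache pass
    reader.read(b)
cold_reads = reader.disk_reads

for b in working_set:                 # warm-cache pass over the same blocks
    reader.read(b)
warm_reads = reader.disk_reads - cold_reads

print(cold_reads, warm_reads)         # 1000 0
```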

With a load of 10 users, the throughput measured in TPM was as follows:

  • Physical Database
    • Cold Cache – 1800 TPM
    • Warm Cache – 1800 TPM
  • Virtual Database
    • Cold Cache – 2100 TPM
    • Warm Cache – 4000 TPM
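From the reported numbers, the relative speedups work out as follows (simple arithmetic on the figures above):

```python
# TPM figures from the 10-user test above
physical = {"cold": 1800, "warm": 1800}
virtual = {"cold": 2100, "warm": 4000}

for state in ("cold", "warm"):
    speedup = virtual[state] / physical[state]
    print(f"{state} cache: {speedup:.2f}x")   # cold ~1.17x, warm ~2.22x
```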

The implications of these tests are astounding. Not only was Delphix able to provision a target database host in a matter of minutes, but the virtual database also outperformed its physical counterpart with both a cold and a warm cache. With nothing in cache the virtualized database performed very well, and additional caching increased its performance dramatically, while the physical database’s throughput remained flat.

Detailed Results

OLTP Tests

Performance improving by virtue of an increasingly warm cache is great, but the tests revealed much more. During the course of the tests, it was found that an increased number of concurrent users (generally seen as a detriment to performance) improved performance even further. This level of scalability goes beyond standard environments, where high levels of concurrency can bog down the database; here, higher concurrency improved throughput, yielding better-than-linear scaling.

With more than five users on a cold cache and testing 1-60 users on a warm cache, the virtual databases outperform the physical counterpart.

Cold Cache TPM Results by # of users:

[Figure phys_vs_virt_tpm_cold: physical vs. virtual TPM, cold cache, by number of users]

Warm Cache TPM Results by number of users:

[Figure phys_vs_virt_tpm_warm: physical vs. virtual TPM, warm cache, by number of users]

Increased users (concurrency) brought greater performance benefits to the virtual database environment. Instead of performance degrading as the number of concurrent users rose, the tests conclusively showed dramatic improvements as more users were added to the testing process. As configured, with a warm cache and 60 concurrent users, Delphix showed a TPM six times greater than the physical database.

OLTP Load on Multiple Virtual Databases

In order to test the impact of multiple virtual databases sharing a single cache, two virtual databases were provisioned from the same production source. Tests similar to the previous exercise were run against:

  • A single physical database
  • Two concurrent physical databases using the same storage
  • A single virtual database
  • Two virtual databases using the same storage

In this test, we measured both the throughput in TPM and the average I/O latency.

TPM vs. Single Block I/O Latency by number of users:

[Figure TPM_vs_latency: TPM vs. single-block I/O latency, by number of users]

As you can see from these tests, as the number of (a) database instances and (b) concurrent users increased, latency on the physical environment degraded sharply. As concurrency at the I/O layer rises, latency becomes a huge bottleneck in efficient block fetches, resulting in flat or decreasing TPM.

On the other hand, the Delphix environment flourished under increased load. Thanks to shared block caching, more users and more virtual databases meant dramatically increased TPM, because shared blocks removed the latency penalty. Just as in the previous test, the scalability of Delphix defies traditional Oracle scaling expectations; as systems and users increase, the environment as a whole not only scales, it scales better than linearly.

Performance measured in seconds of a full table scan on the ORDERS table:

It is worth clarifying here that the caching impacts seen with Delphix for a single database can, of course, be attained in any environment by caching more on the database host with a larger buffer cache. However, if there are multiple copies of the database running (remember the need for Dev, QA, Stress, UAT, etc.), there will be no such benefit in a physical environment, because Oracle instances cannot share cached data. In a Delphix environment this problem disappears, thanks to its dual role as a provisioning tool and I/O caching layer: blocks cached for one virtual database target system are shared in the Delphix cache among all virtual databases.

To show the impact of this functionality, throughput tests were run against two different virtual database targets: first against Virtual DB 1 with a cold cache, then Virtual DB 1 with a warm cache, followed by Virtual DB 2 with a cold cache and Virtual DB 2 with a warm cache.

Performance measured in seconds of a full table scan on the customers, orders, and order_items tables:

Three queries were used for this test, all performing full table scans against different tables in each virtual database. Each query was run as previously described (Virtual DB 1 cold, Virtual DB 1 warm, Virtual DB 2 cold, Virtual DB 2 warm). The purpose of this test was to simulate standard Decision Support System (DSS) type queries across multiple virtual databases to show the effects of block caching in Delphix.

We see that by warming the cache on Virtual DB 1 the query time dropped dramatically (which is to be expected). However, it is also clear that running the query on Virtual DB 2 with a cold cache retained the same caching benefit. Because Virtual DB 1 warmed the cache, Virtual DB 2 was able to utilize the fast I/O of cached data without any pre-warming. This behavior is the core of the exceptional results in the previous TPM tests when a higher user load and target database count was introduced.
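This cross-database warming effect can be illustrated with a toy model (again, an illustration, not Delphix internals): two virtual databases read through one shared cache, so after VDB 1 warms it, VDB 2's first scan needs no disk I/O at all.

```python
class SharedCache:
    """One cache tier shared by every VDB provisioned from the same source."""
    def __init__(self):
        self.blocks = set()

class VirtualDB:
    """Reads through the shared cache; counts its own disk I/O."""
    def __init__(self, cache):
        self.cache = cache
        self.disk_reads = 0

    def full_scan(self, block_ids):
        for b in block_ids:
            if b not in self.cache.blocks:
                self.disk_reads += 1   # miss: fetch from disk, warm shared cache
                self.cache.blocks.add(b)

cache = SharedCache()
vdb1, vdb2 = VirtualDB(cache), VirtualDB(cache)
orders_blocks = range(500)

vdb1.full_scan(orders_blocks)   # cold: 500 disk reads warm the shared cache
vdb2.full_scan(orders_blocks)   # "cold" VDB 2: served entirely from cache
print(vdb1.disk_reads, vdb2.disk_reads)   # 500 0
```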

Maximizing Resources and Performance
We have all been required at some point to create multiple databases on a single host, and each time it is difficult to decide exactly how to size each SGA to make the best use of the server's RAM. Make a single instance's SGA too large and you take critical resources away from the other databases on the host. Make all of the instances' SGAs too large and you can seriously hurt the performance of every database mounted on that server.

RAM is an expensive resource to use; not necessarily in terms of cost, but in terms of importance to the system and limited availability. By sharing the I/O cache between virtual database clones of the same source via Delphix, the RAM usage on the server is optimized in a way that’s simply not possible on a physical database configuration.

Delphix removes the RAM wall that exists in a physical database environment. Multiple instances are able to share data blocks for improved read times and increased concurrency capabilities. While it is possible to share caching on a SAN in a physical database configuration, remember that each database copy will use different cached blocks on the SAN. Additionally, SAN memory is far more expensive than memory on standard commodity x86 hardware. For example, the cost of 1GB RAM on an x86 server is around $30. On an EMC VMAX the same 1GB RAM will cost over $900.*  And that SAN caching will not carry with it all the additional provisioning benefits that Delphix brings to the table.

Even though the tests were constructed to show the maximum performance improvement a well-architected Delphix infrastructure can deliver, the impact of nearly any Delphix deployment will be dramatic. Slower hardware, a smaller cache, or other factors can make an architecture less optimal, but the principal benefits remain. The actual minimum requirement for the Delphix host is 16GB RAM, and the average size among customers is 64GB – not small, but dramatically smaller than the 192GB used in this test.

In real use cases of Delphix at our customers’ sites, we have found that on average 60% of all database I/O block requests to Delphix are satisfied by Delphix cache. This means that 60% of customer queries against their development, QA, reporting, and other environments provisioned via Delphix never have to touch the SAN at all. This relieves bottlenecks at the SAN level and improves latency for the I/O that actually does need to occur on disk.

* Price obtained on pages 897-898: a storage engine for the VMAX 40K with 256GB RAM is ~$393,000 and with 48GB RAM is ~$200,000; thus 256GB – 48GB = 208GB, $393,000 – $200,000 = $193,000, and the cost of RAM is $193,000 / 208GB ≈ $927/GB.
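The per-gigabyte figure quoted earlier can be checked directly from the footnote's list prices (prices as given in the post):

```python
# VMAX 40K list prices quoted in the footnote
engine_256gb = 393_000   # storage engine with 256 GB RAM, USD
engine_48gb = 200_000    # storage engine with 48 GB RAM, USD

extra_ram_gb = 256 - 48                     # 208 GB of additional RAM
extra_cost = engine_256gb - engine_48gb     # $193,000 of additional cost
cost_per_gb = extra_cost / extra_ram_gb
print(f"${cost_per_gb:.0f}/GB")             # ~$928/GB, vs ~$30/GB for x86 RAM
```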


In nearly every test, Delphix outperformed the traditional physical database, a shift that flies in the face of every ‘performance scaling’ fact we have held as true until this point. By breaking outside of the normal optimization configuration (physical database with cache connected to SAN with cache), we are introducing a multipurpose layer which provides incredible shared caching benefits beyond any that can be found on a solitary database host.

Additionally, the more we threw at the Delphix targets, the better it got. More databases and more concurrent users all came back with better-than-linear scaling; dramatic gains were common as more work was required of the Delphix server. By implementing Delphix, the IBM x3690 we used for testing was capable of handling far more than it normally could, with the added benefits of cheap RAM as the cache layer and incredibly fast provisioning to boot. The architecture as a whole was significantly cheaper than a robust SAN caching configuration on purpose-built hardware while performing and scaling dramatically better.

Other reading

  • The Delphix documentation is online at
  • For more information on Delphix see
  • A video on database virtualization along with a blog article
  • Oaktable World presentation on Database Virtualization

and a few more blog posts on database virtualization:

I’m Back !! Oracle Index Internals Seminar Scheduled in Europe For June.

It’s been quite a while but I’ve recently been asked to present my Indexing Seminar in Europe for Oracle University and at this stage, all the planets seem to be aligning themselves to make this a distinct possibility. Seminar details are as follows: Location: Bucharest, Romania When: 17-18 June 2013 Cost: 2000 RON (approx. 450 […]