Introducing SLOB – The Silly Little Oracle Benchmark

BLOG UPDATE 2012.0.2.8: I changed the URL to the kit and uploaded a new tar archive with permissions changes.

We’ve all been there. You’re facing the need to assess Oracle random physical I/O capability on a given platform in preparation for OLTP/ERP-style workloads. Perhaps the storage team has assured you of ample bandwidth for both high throughput and high I/O operations per second (IOPS). But you want to be sure and measure for yourself, so off you go looking for the right test kit.

There is no shortage of transactional benchmark kits, such as Hammerora, Dominic Giles’ SwingBench, and cost options such as Benchmark Factory. These are all good kits; I’ve used them all more than once over the years. The problem is that they don’t fit the need posed in the previous paragraph. These kits are transactional, so the question becomes: do you want to prove Oracle scales those applications on your hardware, or do you want to test IOPS capacity? You want to test IOPS. So now what?

What About Orion?

The Orion tool has long been a standard for testing Oracle block-size I/O via the same I/O libraries linked into the Oracle server. Orion is a helpful tool, but it can lead to a false sense of security. Allow me to explain. Orion uses no measurable processor cycles to do its work. It simply shovels I/O requests into the kernel, and the kernel (driver) clobbers the same I/O buffers in memory with the read data again and again. Orion does not care about the contents of its I/O buffers, and therein lies its weakness.

At one end of the spectrum we have fully transactional application-like test kits (e.g., SwingBench); at the other, low-level I/O generators like Orion. What’s really needed is something right in the middle, and I propose that something is SLOB: the Silly Little Oracle Benchmark.

What’s In A Name?

SLOB stands for Silly Little Oracle Benchmark. SLOB, however, is neither a benchmark nor silly. It is rather small and simple though. I need to point out that by force of habit I’ll refer to SLOB with terms like benchmark and workload interchangeably. SLOB aims to fill the gap between Orion and full function transactional benchmarks. SLOB possesses the following characteristics:

  1. SLOB supports testing Oracle logical read (SGA buffer gets) scaling
  2. SLOB supports testing physical random single-block reads (db file sequential read)
  3. SLOB supports testing random single block writes (DBWR flushing capacity)
  4. SLOB supports testing extreme REDO logging I/O
  5. SLOB consists of simple PL/SQL
  6. SLOB is entirely free of all application contention

Yes, SLOB is free of application contention, yet it is an SGA-intensive workload kit. You might ask why this is important: if you want to test your I/O subsystem with genuine Oracle SGA-buffered physical I/O, it is best not to combine that with application contention.

SLOB is also great for logical read scalability testing, which is very important for one simple reason: it is difficult to scale physical I/O if the platform can’t scale logical I/O. Oracle SGA physical I/O is prefaced by a cache miss and, quite honestly, not all platforms can scale cache misses. Additionally, cache misses cross paths with cache hits. So it is helpful to use SLOB to test your platform’s ability to scale Oracle Database logical I/O.

What Is The History Of The Kit?

I wrote parts of it back in the 1990s when I was in the Advanced Oracle Engineering group at Sequent Computer Systems, and some of it when I was the Chief Architect of Oracle Database Solutions at PolyServe (acquired by HP in 2007). PolyServe was one of the pioneers in multi-headed scalable NAS, and I needed workload harnesses that could drive I/O according to my needs. The kit is also functionally equivalent to what I used in my own engineering work while I was the Performance Architect in Oracle’s Exadata engineering group (Systems Technology). In short, the kit has a long heritage considering how simple it is.

What’s In The Kit?

There are no benchmark results included. The kit does, however, include:

  • README files. A lot of README files. I recommend starting with ~/README-FIRST.
  • A simple database creation kit.  SLOB requires very little by way of database resources. I think the best approach to testing SLOB is to use the simple database creation kit under ~/misc/create_database_kit. The directory contains a README to help you on your way. I generally recommend folks use the simple database creation kit to create a small database because it uses Oracle Managed Files: you simply point it at the ASM disk group or file system you want to test. The entire database will need no more than 10 gigabytes.
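Because the creation kit relies on Oracle Managed Files, pointing the database at the storage under test comes down to a single parameter. A minimal illustration follows; the disk group and path names are hypothetical, not taken from the kit:

```
# init.ora fragment -- hypothetical names, shown only to illustrate OMF
db_create_file_dest = '+DATA'         # ASM disk group under test
# db_create_file_dest = '/u01/slob'   # ...or a file system target instead
```

With this set, datafile placement and naming are handled by the database, which is what lets the creation kit stay so small.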
  • An IPC semaphore based trigger kit.  The only thing to note about this simple IPC trigger kit is that it requires permissions to create a semaphore set with a single semaphore. The README-FIRST file details what you need to do to have a functional trigger.
  • The workload scripts. The setup script is aptly named, as is the script you use to run the workload. Both are covered in README-FIRST.
  • Init.ora files. You’ll find test.ora under ~/misc/sample_data. The purpose of this init.ora is to show just how little tweaking Oracle Database requires to scale physical I/O and logical reads. The directory is named sample_data because I originally intended to offer pairs of init.ora files and AWR reports so folks could see which systems I’ve tested, the performance numbers I’ve seen, and the recipe I used (the combination of connected pseudo users and init.ora parameters). The name of the directory remains, but I pulled the content so as to not excite Oracle’s lawyers.

The size of the SGA buffer pool is the single knob you twist to choose a workload profile. For instance, if you wish to have nothing but random single-block reads, simply run with the smallest db_cache_size your system will allow you to configure (see README-FIRST for more on this matter). On the other hand, the opposite is what’s needed for logical I/O testing: set db_cache_size to about 4GB, perform a warm-up run, and from that point on there will be no physical I/O. Drive up the number of connected pseudo users and you’ll observe logical I/O scale up, bounded only by how scalable your platform is. The other models involve writes. If you want to drive a tremendous amount of REDO writes, you will again configure a large db_cache_size and execute with only write sessions. From that point you can reduce db_cache_size while maintaining the write sessions, which will drive DBWR into a frenzy.
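As a concrete sketch of the knob-twisting described above (the values are illustrative only; README-FIRST governs the actual floor your platform allows):

```
# Random single-block read profile: starve the buffer cache so nearly
# every get is a physical read
db_cache_size = 64M    # illustrative; use the smallest your system allows

# Logical read (cache-hit) profile: after a warm-up run there is no
# physical I/O at all
# db_cache_size = 4G

# Write profiles: start with the large cache and write-only sessions for
# REDO testing, then shrink db_cache_size to force DBWR flushing
```

The point is that one parameter, combined with the mix of reader and writer pseudo users, selects among all the SLOB workload profiles.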

Who Has Used The Kit?

Several folks from the OakTable Network and other friends. Perhaps they’ll chime in on their findings and what they have learned about their platforms as a result of testing with SLOB.

What You Should Expect From SLOB

I/O, lots of it! If you happen to be an Exadata user, you’ll see about 190,000 physical read IOPS (from Exadata Smart Flash Cache) generated by each RAC instance in your configuration. Oracle does not misrepresent the truth in their datasheets regarding Exadata random cache reads. If you have a full-rack Exadata, you too can now study the system characteristics under an approximated 1.5 million read IOPS workload. Testing Exadata with the write-intensive SLOB models will reveal capacities for DBWR and LGWR flushing. If you have conventional storage, you’ll drive the maximum it will sustain.

Where Is The Kit?

I’ve uploaded it to the website at the following URL. Simply extract the gzipped tar archive into a working directory and see README-FIRST.

Filed under: oracle