I just checked and found that there have been 3,000 downloads of SLOB – The Silly Little Benchmark. People seem to be putting it to good use. That’s good.
Before I get very far in this post I’d like to take us back in time–back before the smashing popularity of the Orion I/O testing tool.
When Orion first appeared on the scene there was a general reluctance to adopt it. I suspect some of the reluctance stemmed from the fact that folks had built up their reliance on other tools like bonnie, LMbench, vxbench and other such generic I/O generators. Back in the 2006 (or so) time frame I routinely pointed out that no tool other than Orion used the VOS layer Oracle I/O routines and libraries. It’s important to test as much of the real thing as possible.
Who wants to rely on an unrealistic platform performance measurement tool after all?
Over time I built a list of reasons I could no longer accept Orion as sufficient for platform I/O testing. Please note, I just wrote “platform I/O testing” not “I/O subsystem testing.” I think the rest of this post will make the distinction between these two quoted phrases quite clear. The following is a short version of the list:
My present-tense views on Orion sometimes surface on twitter where I am occasionally met with vigorous disagreement–most notably from my friend Alex Gorbachev. Alex is a friend, co-member of the Oaktable Network, CTO of Pythian (I love those Pythian folks), and someone who generally disagrees with most everything I say.
I respect Alex because he has vast knowledge and valuable skills. His arguments make me think, and that’s a good thing. I’m not sure, however, that our respective spheres of expertise overlap.
So how do these disagreements regarding SLOB get started? Recently I tweeted:
The difference between SLOB and Orion is akin to Elliptical trainer versus skiing on the side of a mountain.
I could just as well argue that SLOB is useless because that’s not real workload anyway and you should test with your app
This quick exchange of ideas set into motion some Pythian testing by Yury. As it turns out, I think the goal of that test was to prove parity between SLOB and Orion for random reads–and perhaps not much more. If only I had published “My List” above before then.
Yury’s tests were good, albeit exceedingly small in scope. His blog post suggests more testing is on the way. That is good. If you read the comment thread on his blog entry you’ll see where I thank Yury for a good tweak to the SLOB kit that eliminates the db file parallel reads associated with the index range scans incurred by SLOB reader processes. Come to think of it, though, Matt from Violin Memory pointed that one out to me some time back. Hmm, oh well, I digress. The modifications Yury detailed (init.ora parameters) will be included in the next drop of the SLOB kit. Again, thanks Yury for the testing and the init.ora parameter change recommendations!
Feel free to see Yury’s findings. They are simple: SLOB and Orion do the same thing. Really, SLOB and Orion do the same thing? Well, that may be the case so long as a) you compare SLOB to Orion only for simple random read testing and/or b) your testing is limited to a little, itsy-bitsy, teeny, tiny, teensy, minute, minuscule, meager, puny, Lilliputian-grade undersized I/O subsystem incapable of producing reasonable, modern-scale IOPS. Yury’s experiment topped out at roughly 4,500 random read IOPS. I’ll try to convince you that there is more to it than that (hint: modern servers are fit for IOPS in the 20,000-per-core range). But first, I have two quotable quotes to offer at this point:
When assessing the validity of an I/O testing tool, do so on a system that isn’t badly bottlenecked on storage.
If your application (e.g., Oracle Database) is “mindless” use a “mindless” I/O generator–if not, don’t.
So what do I mean when I say “mindless I/O”? The answer is simple. If the code performs an I/O into a memory buffer, without any application concurrency overhead, and no process so much as peeks at a single byte of that buffer populated through DMA from the I/O adapter device driver–it’s mindless. That is exactly how Orion does what it does, and it’s what every other synthetic I/O generator I know of does as well.
So what does mindless I/O look like and why does it show up on my personal radar as a problem? Let’s take a look–but first let me just say one thing–I analyze I/O characteristics on extremely I/O capable platforms. Extremely capable.
The following screen shot shows a dd(1) command performing mindless I/O by copying an Oracle OMF datafile from an XFS file system to /dev/null using direct I/O. After that, another dd(1) was used to show the difference between “mindless” and meaningful I/O. The second dd(1) was meaningful because after each 1 MB read the buffer was scanned for lower-case ASCII characters to convert to their upper-case counterparts. That is, the second dd(1) did data processing–not just a mindless tickling of the I/O subsystem.
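The screen shots themselves can’t be reproduced in text, but the two dd(1) runs can be sketched as below. This is a minimal, self-contained approximation: a temporary file stands in for the Oracle OMF datafile (the real test read an actual datafile with iflag=direct), and tr(1) stands in for the modified dd(1) that upcased each buffer.

```shell
#!/bin/sh
set -e
# Stand-in for the Oracle OMF datafile (hypothetical; the real test read
# an actual datafile on XFS with iflag=direct to bypass the page cache).
F=$(mktemp)
dd if=/dev/urandom of="$F" bs=1M count=4 2>/dev/null

# Mindless I/O: data lands in the buffer via DMA and is discarded;
# no CPU cycles are spent looking at a single byte of it.
dd if="$F" of=/dev/null bs=1M 2>/dev/null

# Meaningful I/O: every byte is examined after each 1 MB read. Here
# tr(1) upcases the stream, approximating the author's modified dd(1);
# the pipeline burns CPU in proportion to the data read.
BYTES=$(dd if="$F" bs=1M 2>/dev/null | tr '[:lower:]' '[:upper:]' | wc -c)
echo "processed $BYTES bytes"
rm -f "$F"
```

On a capable I/O subsystem the second form runs markedly slower than the first, which is the point of the demonstration: once the CPU has to touch the data, the CPU, not the storage, throttles throughput.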
The mindless I/O was 2.5 GB/s but the meaningful case fell to about 1/6th that at 399 MB/s. See, CPU matters. It matters in I/O testing. CPU throttles I/O–unless you are interested in mindless I/O. What does this have to do with Orion and SLOB? A moment ago I mentioned that I test very formidable I/O subsystems commensurate with modern platforms–so hold on to your hat while I tie these trains of thought together.
Building on my dd(1) example of mindless I/O, I’ll offer the following screen shot, which shows Orion accessing the same OMF SLOB datafile (also via direct I/O, validated with strace). Notice how I force all the threads of Orion (it’s threaded with libpthreads) onto OS CPU 0 using numactl(8) on this 2s12c24t Xeon 5600 server? What you are about to see is the single-core capacity of Orion to perform “mindless I/O”:
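The pinning step is easy to reproduce. The sketch below does not run Orion itself (that invocation is omitted); it binds a trivial process to OS CPU 0 the same way and asks the kernel which CPUs the process is allowed on. numactl(8) is what the screen shot used; taskset(1) is assumed as a fallback when numactl isn’t installed.

```shell
#!/bin/sh
# Bind a process to OS CPU 0 and confirm the binding took effect by
# reading Cpus_allowed_list from /proc (Linux-specific).
if command -v numactl >/dev/null 2>&1; then
  PIN="numactl --physcpubind=0"   # as used in the Orion screen shot
else
  PIN="taskset -c 0"              # util-linux fallback (assumption)
fi
# In the real test, the orion command line would replace the grep here;
# all of its libpthreads threads would then be confined to CPU 0.
$PIN sh -c 'grep Cpus_allowed_list /proc/self/status'
```

Because CPU affinity is inherited across fork/exec, every thread Orion spawns under that wrapper is confined to the single core, which is what isolates the per-core "mindless I/O" capacity being measured.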
Unrealistic Platform Performance Measurement Tools
This is only Part I in this series. I’ll be going through a lot of proof points to solidify backing for my Orion-related assertions in the list above, but please humor me for a moment. I’d like to know just how realistic platform performance measurements can be when they come from an I/O tool that demonstrates capacity for 144,339 physical 8K random IOPS while pinned to a single core of a Xeon 5600 processor.
We are interested in database platform IOPS capacity, right?
Through this blog series I aim to help you conclude that any tool demonstrating such an unrealistic platform performance measurement is, well, an unrealistic platform performance measurement tool.
Do you feel comfortable relying on an unrealistic platform performance measurement tool? Before I crafted SLOB, I too accepted test results from unrealistic platform performance measurement tools. But I learned that I needed to include the rest of the platform (e.g., CPU, bus, etc.) when studying platform performance, so I left unrealistic platform performance measurement tools behind.
Until recently I didn’t spend any time discussing measurements taken from unrealistic platform performance measurement tools. However, since friends and others in social media are pitting unrealistic platform performance measurement tools against SLOB (not an unrealistic platform performance measurement tool) such comparisons are blog-worthy. Hence, I’ll trudge forward blogging about how unrealistic certain unrealistic platform performance measurement tools are. And, if you stay with me on the series, you might discover some things you don’t know because, perhaps, you’ve been relying on unrealistic platform performance measurement tools.
As this series evolves, I’ll be sharing several similar unrealistic platform performance measurement tool results as I go through the list above. That is, of course, what motivated me to leave behind unrealistic platform performance measurement tools.
Final Words For This Installment
In Yury’s post he quoted me as having said:
It’s VERY easy to get huge Orion nums
His assessment of that quote was, “kind of FALSE on this occasion.” Having now shown what I mean by “VERY easy” (e.g., even a single core can drive massive Orion IOPS) and “huge Orion” numbers (e.g., 144K IOPS), I wonder whether Yury will be convinced of my assertions regarding unrealistic platform performance measurement tools. If not yet, perhaps he, and other readers, will be eventually. After all, this is only Part I. Either way, Yury, I still want to say, “thanks for testing with SLOB and please keep the feedback coming.”
Alex and I may always disagree
Oh, by the way folks, if all you have is Orion, use it. It is better than wild guesses–at least a little better.
Filed under: oracle