In my previous post I described how the IO Resource Manager (IORM) in the Exadata storage servers can be used both to guarantee a minimum amount of IO a database gets (which is what is covered in most of the available material) and to set a maximum amount of IO. This is what Oracle calls an inter-database resource manager plan. It is set and configured at the cell level using CellCLI with ‘alter iormplan dbplan’.
The Exadata IO Resource Manager (IORM) has been a fundamental piece of the Exadata database machine since the beginning. Its function is to guarantee that a database and/or category (set via the database resource manager) gets its share of IO, as defined in the resource plan, at each storage server. “Share” here means a minimum amount of IO. This is widely known and documented in the Exadata documentation. Guaranteeing a minimum amount of IO means you can meet a service level agreement (SLA).
But what if you want to define the maximum amount of IO a database may get? The point of capping a database’s IO is that you can run a database on Exadata and see little or no change in its performance as more databases are consolidated onto the same machine. So instead of guaranteeing the minimum amount of IO a database must get (which is what the Exadata documentation covers), why not limit the maximum amount it can get?
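As a sketch of what such a cap looks like, an inter-database IORM plan with a `limit` directive can be set on each cell via CellCLI. The database name `SALES` and the percentages below are made up for illustration:

```
CellCLI> alter iormplan dbplan=((name=SALES, level=1, allocation=60, limit=40), -
                                (name=other, level=1, allocation=40))
CellCLI> alter iormplan active
CellCLI> list iormplan detail
```

Here `allocation` is the guaranteed minimum share within the level, while `limit` caps the database at 40% of the cell’s disk IO even when the cell is otherwise idle; the plan has to be set on every storage server to be effective.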
I will get back to the stats stuff at some point, but I'm quite busy at the moment working on something that I can't talk too much about, which is nevertheless throwing up enough generic issues to discuss. This is one that I meant to blog about ages ago when I first noticed it, but when it caused us some problems last week, that was a useful reminder.
In summary you need to be careful when you upgrade to 11g because Resource Manager is enabled by default!
I don't want to blog about the ins and outs of Resource Manager and whether it's a good thing or not, but I do think this is a pretty extreme change to implement without a lot of surrounding publicity. It's a bit like the automatic stats-gathering job that appeared in 10g and caused so many problems for Oracle users. It seems like it might be a good idea, but would you really want to introduce it onto a stable system that you're upgrading to 11g?
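As a quick sanity check (a sketch, not taken from the original post), you can see whether a resource plan is currently active on an upgraded instance, and optionally stop the 11g maintenance windows from switching one in:

```sql
-- Which resource plan is currently set (empty if none)?
SHOW PARAMETER resource_manager_plan

-- The plan actually loaded in the instance:
SELECT name, is_top_plan FROM v$rsrc_plan;

-- In 11g the maintenance windows activate DEFAULT_MAINTENANCE_PLAN.
-- One documented way to prevent the windows from changing the plan
-- is the FORCE: prefix; with an empty plan name it pins "no plan":
ALTER SYSTEM SET resource_manager_plan = 'FORCE:' SCOPE=BOTH;
```

Whether you actually want to disable it is a separate question, but at least you should know it is running.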
But rather than just talk about the change, I wanted to highlight how I first realised it was going on ...
Instance caging is another small but useful feature of Oracle Database 11g Release 2. With it, the database resource manager can, for the first time, limit the number of CPUs that a given instance may use. (By the way, note that this limit has no “impact” on the number of [...]
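Setting up instance caging takes only two steps, shown below as a sketch; the CPU count is arbitrary and `DEFAULT_PLAN` is one of the built-in plans (any active resource plan will do):

```sql
-- Step 1: cap the number of CPUs this instance is allowed to use
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;

-- Step 2: the cap is only enforced while a resource plan is active
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
```

Without an active resource plan the `cpu_count` setting still influences various internal defaults, but the CPU cage itself is not enforced.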