As I suspected, I’m the only person on the course who doesn’t know what a network is. If I hadn’t been tinkering with reverse proxies over the last year I would have been pretty much lost.
The course itself is well structured and the teacher is good. The fact I’ve not flounced out in a huff is testament to that. The pattern will be quite familiar to anyone who has been on a hands-on course before. Discuss a topic with slides, then do a hands-on lab that works through that stuff.
It takes a while to get into the swing of things and I’ve proved to myself I am incapable of reading other people’s instructions, so it’s a good job I usually write my own. Now that I’m getting used to the interface and command line, I’m hoping today will be a bit easier.
From a brief discussion, what I need from the load balancers seems *drastically* different to most of the other people in the room. I think this is going to be quite a long and arduous road when I start having to apply some of this to real situations. I sense a lot of external consultancy…
It is interesting coming from a different background to the others in the room and seeing how we approach things from different angles, and with different emphasis. I’ll write a blog post about this when I’ve finished the course, because it’s been something that has been brewing in my mind for a while…
As you will probably already know, I followed the day with a visit to see Guardians of the Galaxy.
I’ve been swimming this morning and now I have to log on to work to fix some stuff before starting day 2 of the course.
Depending on your setup, you might have automatically updated anyway. If not, go on to your dashboard and give it a nudge.
I’ve just got back from seeing Guardians of the Galaxy.
Take note Sci-Fi movie makers! This is your competition! This is what you need to aim to outdo!
It’s very cool. It looks great. You quickly start to give a crap about the characters. It doesn’t take itself too seriously.
Favourite Character: Groot. I challenge anyone to come out of the film and not want to say, “I am Groot”, in response to any situation.
I didn’t really fancy it when I saw the trailers. A couple of people were talking about how good the reviews were, so I thought I would give it a go. I’m glad I did. It’s excellent!
The Flash Cache Mode still defaults to Write-Through on Exadata X4 because Write-Through suits most customers better – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so flash capacity is saved that way. So when you see this
CellCLI> list cell attributes flashcachemode WriteThrough
it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer goes first to the spinning drives, where it is mirrored according to the redundancy of the diskgroup containing the file being written to. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring already done on the spinning drives is sufficient, as the picture shows:
That changes when the Flash Cache Mode is Write-Back: now writes go primarily to the flash cards, and popular objects may never get aged out to the spinning drives at all. At the very least, that age-out may happen significantly later, so the writes on flash must now be mirrored. The redundancy of the diskgroup where the object in question was placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.
Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom of such issues is a high number of free buffer waits wait events. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason to turn on the Write-Back Flash Cache Mode only on demand. Incidentally, I explained this in a very similar way during my current Oracle University Exadata class in Frankfurt :-)
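If you do conclude that Write-Back is warranted, the mode is changed with CellCLI. The following is only a sketch of the non-rolling procedure as I understand it – check the Exadata documentation for your software version (and the rolling variant) before running anything like this on each cell:

```
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode = WriteBack
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all
```

Switching back to Write-Through requires flushing the dirty blocks from flash first, which is one more reason not to flip the attribute casually.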
This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE because, unless Oracle Parallel Query is enabled, there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1 there is concurrent data loading.
Occasionally I hear of users having trouble combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading – in the absence of tools like Database Resource Manager, I suppose. To that end, this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.
In this first example I’ll show loading with LOAD_PARALLEL_DEGREE set to 8. The scale is 524,288 SLOB rows, which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:
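A slob.conf matching this example would contain lines along these lines (a reconstruction from the description above, not a copy of the screenshot; all other slob.conf parameters are irrelevant to loading):

```
# Only these two slob.conf parameters affect data loading
SCALE=524288               # rows, and therefore 8K blocks, per schema
LOAD_PARALLEL_DEGREE=8     # concurrent loader threads
```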
The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query, therefore this example is a concurrent + parallel data load.
The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.
As reported by setup.sh the data loading completed in 1,539 seconds or a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I’ll cover some tuning tips to improve data loading. The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115w Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (i.e., 2 active 8GFC Fibre Channel paths to any enterprise class array) can demonstrate this sort of SLOB data loading throughput.
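The arithmetic behind those figures is easy to check. A quick sketch, using only the block size, scale, schema count and elapsed time quoted above:

```python
# Sanity-check the data volume and load rate quoted in the text.
block_size = 8192        # bytes per SLOB data block
scale      = 524288      # blocks (one row per block) per schema
schemas    = 64
seconds    = 1539        # load time reported by setup.sh

total_bytes = block_size * scale * schemas
total_gib   = total_bytes / 2**30
rate_gib_h  = total_gib / seconds * 3600

print(total_gib)          # 256.0 -- the "roughly 256GB" loaded
print(round(rate_gib_h))  # 599   -- i.e. the roughly 600GB/h load rate
```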
After setup.sh completes it is good to count how many loader threads successfully loaded the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE in cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the result of such a word count (wc -l) should be one greater than the number of schemas setup.sh was directed to load.
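The check itself is just a grep against cr_tab_and_load.out. Here is a self-contained mock-up of the idea – the real file is produced by setup.sh, and the message text below is invented purely so the example runs on its own (this mock-up uses a SCALE of 10000 and 16 schemas):

```shell
# Fabricate a stand-in for cr_tab_and_load.out: 16 schemas plus the
# zeroth user, each reporting the full SCALE row count.
outfile=$(mktemp)
for i in $(seq 0 16); do
  echo "user$i: loaded 10000 rows" >> "$outfile"
done

# The actual verification: one match per successful loader thread,
# expected to be one greater than the number of schemas requested.
count=$(grep -c "10000" "$outfile")
echo "$count"
rm -f "$outfile"
```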
The next screen shot shows the required execution of the procedure.sql script. This procedure must be executed after any execution of setup.sh.
Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows the IOPS tablespace ended up with a little over 270GB which can be accounted for by the size of the tables based on slob.conf, the number of schemas and a little overhead for indexes.
This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.
Filed under: oracle
I’m on an F5 Load Balancer training course for the next 3 days.
I have no idea what to expect and to be honest, I really don’t think I should be here. With the exception of a bit of fiddling with Apache reverse proxies, I don’t really know anything about this stuff, so I’m not sure if this will go over my head or be intensely slow and boring…
If anything comes out of it worth blogging about I certainly will.
Chertsey is like a seaside town. It’s full of cafes, restaurants and odd little shops. When I was searching for a place to swim Google came up with loads of pool installation and maintenance companies, so I think it’s a pretty rich area. I found a local swimming pool, but I’ve had to remortgage my house to afford to swim there. I went this morning at 06:30 and it wasn’t too crowded. It’s unusual to find a private gym with a 25M pool. Most of them in the UK have tiny little things that you can’t swim in. It was a bit on the warm side, but then I guess you have to expect that when it’s not a training pool. Hopefully I won’t be too much of a slob by the time I get home.
I’m thinking I might do a cinema visit every night to play catch-up.
I didn’t intend to write another blog post yesterday evening at all, but found something that was worth sharing and got me excited… And when I started writing I intended it to be a short post, too.
If you have been digging around Oracle session performance counters a little, you have undoubtedly noticed how their number has increased with every release, and even with every patch set. Unfortunately I don’t have an 11.1 system (or earlier) at my disposal to test, but here is a comparison of how Oracle has instrumented the database. I have already ditched my 12.1.0.1 system as well, so no comparison there either :( This is Oracle on Linux.
In the following examples I am going to use a simple query to list the session statistics by their class. The decode statement is based on the official documentation set, where you find the definition of v$statname plus an explanation of the meaning of the class column. Here is the script:
with stats as (
  select name,
         decode(class, 1, 'USER',
                       2, 'REDO',
                       4, 'ENQUEUE',
                       8, 'CACHE',
                      16, 'OS',
                      32, 'RAC',
                      64, 'SQL',
                     128, 'DEBUG',
                         'NA') as decoded_class
    from v$statname
)
select count(decoded_class), decoded_class
  from stats
 group by rollup(decoded_class)
 order by 1
/
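Note that v$statname.class is additive: a statistic can belong to several classes at once (for example 40 = 8 + 32, i.e. CACHE plus RAC), which is why the simple decode above lumps every combined value into ‘NA’. A small sketch of how such a combined class value breaks down:

```python
# Decode an additive v$statname.class value into its component classes.
CLASSES = {1: "USER", 2: "REDO", 4: "ENQUEUE", 8: "CACHE",
           16: "OS", 32: "RAC", 64: "SQL", 128: "DEBUG"}

def decode_class(value):
    """Return the class names whose bits are set in value."""
    return [name for bit, name in sorted(CLASSES.items()) if value & bit]

print(decode_class(40))   # ['CACHE', 'RAC'] -- one of the 'NA' cases above
print(decode_class(128))  # ['DEBUG']
```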
11.2.0.3 is probably the most common 11g Release 2 version currently out there in the field. Or at least that’s my observation. According to MOS Doc ID 742060.1, 11.2.0.3 was released on 23 September 2011 (is that really that long ago?) and is already out of error correction support, by the way.
Executing the above-mentioned script gives me the following result:
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  32 REDO
                  47 NA
                  93 SQL
                 107 USER
                 121 CACHE
                 188 DEBUG
                 638
So there are 638 of these counters. Let’s move on to 11.2.0.4.
Oracle 11.2.0.4 is interesting as it was released after 12.1.0.1. It is the terminal release for Oracle 11.2, and you should consider migrating to it as it is still in error correction support. The patch set came out on 28 August 2013. What about the session statistics?
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  34 REDO
                  48 NA
                  96 SQL
                 117 USER
                 127 CACHE
                 207 DEBUG
                 679
A few more, all within what can be expected.
Oracle 12.1.0.2 is fresh off the press, released just a few weeks ago. Unsurprisingly, the number of session statistics has increased again. What did surprise me was the number of statistics now available for every session! Have a look at this:
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  35 RAC
                  68 REDO
                  74 NA
                 130 SQL
                 130 USER
                 151 CACHE
                 565 DEBUG
                1178
That’s nearly double what you found for 11.2.0.3. Incredible, and hence this post. Comparing 11.2.0.4 with 12.1.0.2, you will notice where most of the growth happened.
The debug class (128) shows lots of statistics (including spare ones) for the in-memory option (IM):
SQL> select count(1), class from v$statname where name like 'IM%' group by class;

  COUNT(1)      CLASS
---------- ----------
       211        128
Happy troubleshooting! Reminds me to look into the IM-option in more detail.
This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on a 12.1.0.2 Oracle Database, but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases as well.
If you already have a tablespace to load SLOB schemas into please see the next step in the sequence.
Provided database connectivity works with ‘/ as sysdba’ this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect, since this actually controls the concurrent degree of SLOB schema creation/loading. Underneath the concurrency there may be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as not to flood the system until you’ve practiced with setup.sh for a while.
After taking a quick look at cr_tab_and_load.out, as per setup.sh instruction, feel free to count the number of schemas. Remember, there is a “zero” user so setup.sh with 16 will have 17 SLOB schema users.
After setup.sh and counting user schemas please create the SLOB procedure in the USER1 schema.
This is an example of what happens if one misses the step of creating the semaphore wait kit as per the documentation. Not to worry – simply do what the output of runit.sh directs you to do.
The following is an example of a healthy runit.sh test.
Strictly speaking this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured on a Windows, AIX, or Solaris box, this is how one tests SLOB. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop or a VM running on the system you intend to test!) and SQL*Net.
We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.
The following is an example of a successful runit.sh test over SQL*Net.
Please note, loading SLOB over SQL*Net has the same configuration requirements as what I’ve shown for data loading (i.e., running setup.sh). Consider the following screenshot which shows an example of loading SLOB via SQL*Net.
Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.
This short post shows the simple steps needed to deploy SLOB in both the simple Linux host-only scenario as well as via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net there are no barriers to testing SLOB databases running on any platform to include Windows, AIX and Solaris.
In the past few posts, I’ve covered setting up PDBaaS, using the Self Service portal with PDBaaS, setting up Schema as a Service, and using the Self Service Portal with Schema as a Service, all of these using Enterprise Manager Cloud Control 12c release 12.1.0.4. Now I want to move on to an area where you start to get more back from all of this work – metering and chargeback.
Metering is something that Enterprise Manager has done since its very first release. It’s a measurement of some form of resource – obviously in the case of Enterprise Manager it’s a measurement of how much computing resource such as CPU, I/O, memory, storage etc. has been used by an object. If I think way back to the very first release of Enterprise Manager I ever saw – the 0.76 release, whenever that was! – the thing that comes to mind most is that it had this remarkably pretty tablespace map, which showed you diagrammatically just where every block in an object was in a particular tablespace. Remarkably pretty, as I said – but virtually useless, because all you could do was look at the pretty colours!
Clearly, metering has come a long, long way since that time, and if you have had Enterprise Manager up and running for some time you now have at your fingertips metrics on so many different things that you may be lost trying to work out what you can do with it all. Well, that’s where Chargeback comes into play. In simple terms, chargeback is (as the name implies) an accounting tool. In Enterprise Manager terms, it has 3 main functions:
Let me expand on that last point a little further. Within the Chargeback application, the cloud administrator can set specific charges for specific resources. As an example, you might decide to charge $1 a month per gigabyte of memory used for a database. Those charges can be transferred to some form of billing application such as Oracle’s “Self-Service E-Billing” application and end up being charged as a real cost to the end user. However, my experience so far has been that few people are actually using it to charge a cost to the end user. There are two reasons for that:
The end result of this is that I have most often seen customers choose to implement SHOWback rather than CHARGEback. Showback is in many ways very similar to chargeback. It’s the ability to provide reports to end users that show how much computing resource they have used, AND to show them how much it would have cost if the IT department had indeed decided to charge for it. In some ways this is just as beneficial to the IT department, as it allows them to have a much better grasp on what they need to know for budgeting purposes, and it avoids the endless arguments about whether end users are being charged too much.
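As a toy illustration of the showback idea – the rates and usage figures below are entirely invented, and real Chargeback rates are configured in the application, not in code:

```python
# Toy showback report: what each cost centre *would* have been charged.
# All rates ($ per unit per month) and usage numbers are made up.
rates = {"memory_gb": 1.00, "storage_gb": 0.50, "cpu": 10.00}

usage = {
    "Sales":       {"memory_gb": 16, "storage_gb": 200, "cpu": 4},
    "Development": {"memory_gb": 64, "storage_gb": 500, "cpu": 8},
}

def showback(usage, rates):
    """Return the notional monthly charge per cost centre."""
    return {cc: sum(qty * rates[resource] for resource, qty in used.items())
            for cc, used in usage.items()}

for cc, total in showback(usage, rates).items():
    print(f"{cc}: ${total:.2f}/month")   # Sales: $156.00, Development: $394.00
```

The point is simply that the same metering data drives both models; showback just stops short of sending the bill.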
OK, let’s talk about some of the new terminology you need to understand before we implement chargeback (from now on, I will use the term “chargeback” to cover both “chargeback” and “showback” for simplicity’s sake, and because the application is actually called “Chargeback” in the Enterprise Manager Cloud Control product).
The first concept you need to understand is that of a chargeback entity. In Enterprise Manager terms, a target typically uses some form of resource, and the Chargeback application calculates the cost of that resource usage. In releases prior to Enterprise Manager 12.1.0.4, the Chargeback application collected configuration information and metrics for a subset of Enterprise Manager targets. From the 12.1.0.4 release onwards, you can add Chargeback support for Enterprise Manager target types for which there is no out-of-the-box Chargeback support, via the use of EMCLI verbs. These chargeback targets, both out of the box and custom types, are collectively known as “entities”.
A charge plan is what Enterprise Manager uses to associate the resources being charged for and the rates at which they are charged. There are two types of charge plans available:
An out of the box extended plan is provided that you can use as a basis for creating your own extended plans. This plan defines charges based on machine sizes for the Oracle VM Guest entity.
Obviously, when charges for resource usage are implemented, these charges must be assigned to something. In the Chargeback application, the costs are assigned to a cost centre. Cost centres are typically organized in a hierarchy and may correspond to different parts of an organization — for example, Sales, Development, HR, and so forth – or they may correspond to different customers – for example, where you are a hosting company and host multiple customer environments. In either case, cost centres are defined as a hierarchy within the Chargeback application. You can also import cost centres that have been implemented in your LDAP server, if you want to use those.
The main benefit you get from using Chargeback is the vast amount of information it puts at your fingertips. This information can be reported on by administrators in a variety of formats available via BI Publisher, including pie charts and bar graphs, as well as drilling down to charges based on a specific cost centre, entity type, or resource. You can also make use of trending reports over time and can use this to aid you in your IT budget planning. Outside the Chargeback application itself, Self-Service users can view chargeback information related to the resources they have used within the Self Service Portal.
So now you have an understanding of the capabilities of the Chargeback application in the Enterprise Manager product suite. The next step, of course, is to set it up. I’ll cover that in another blog post, so stay tuned for that!
When I wrote about the remote cloning of PDBs, I said I would probably be changing some existing articles. Here’s a change I’ve done already.
There are also new articles.
I’m sure there will be some more little pieces coming out in the next few days…
As I mentioned before, the multitenant option is rounding out nicely in this release.