Oakies Blog Aggregator

F5 Load Balancer Training Course : Day 1

As I suspected, I’m the only person on the course that doesn’t know what a network is. :) If I had not been tinkering with the reverse proxies over the last year I would have been pretty much lost.

The course itself is well structured and the teacher is good. The fact I’ve not flounced out in a huff is testament to that. :) The pattern will be quite familiar to anyone who has been on a hands-on course before. Discuss a topic with slides, then do a hands-on lab that works through that stuff.

It takes a while to get into the swing of things and I’ve proved to myself I am incapable of reading other people’s instructions, so it’s a good job I usually write my own. Now that I’m starting to get used to the interface and command line, I’m hoping today will be a bit easier.

From a brief discussion, what I need from the load balancers seems *drastically* different to most of the other people in the room. I think this is going to be quite a long and arduous road when I start having to apply some of this to real situations. I sense a lot of external consultancy… :)

It is interesting coming from a different background to the others in the room and seeing how we approach things from different angles, and with different emphasis. I’ll write a blog post about this when I’ve finished the course, because it’s been something that has been brewing in my mind for a while…

As you will probably already know, I followed the day with a visit to see Guardians of the Galaxy.

I’ve been swimming this morning and now I have to log on to work to fix some stuff before starting day 2 of the course.

Cheers

Tim…


F5 Load Balancer Training Course : Day 1 was first posted on August 7, 2014 at 8:07 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

WordPress 3.9.2

WordPress 3.9.2 has been released. It’s a security release with a bunch of important fixes for some nasties. The changelog is here.

Depending on your setup, you might have automatically updated anyway. If not, go on to your dashboard and give it a nudge. :)

Cheers

Tim…


WordPress 3.9.2 was first posted on August 6, 2014 at 9:49 pm.

Guardians of the Galaxy

I’ve just got back from seeing Guardians of the Galaxy.

Take note Sci-Fi movie makers! This is your competition! This is what you need to aim to outdo!

It’s very cool. It looks great. You quickly start to give a crap about the characters. It doesn’t take itself too seriously.

Favourite Character: Groot. I challenge anyone to come out of the film and not want to say, “I am Groot”, in response to any situation. :)

I didn’t really fancy it when I saw the trailers. A couple of people were talking about how good the reviews were, so I thought I would give it a go. I’m glad I did. It’s excellent!

Cheers

Tim…


Guardians of the Galaxy was first posted on August 6, 2014 at 9:41 pm.

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because it suits most customers better – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so staying with Write-Through simply saves flash capacity. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer first goes to the spinning drives, where it is mirrored according to the redundancy of the diskgroup containing the file being written. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required on flash. In case of hardware failure, the mirroring on the spinning drives is already sufficient, as the picture shows:

Flash Cache Mode WRITE-THROUGH

That changes with the Flash Cache Mode being Write-Back: now writes go primarily to the flash cards, and popular objects may never get aged out to the spinning drives at all. At the very least, that age-out may happen significantly later, so the writes on flash must now be mirrored. The redundancy of the diskgroup where the object in question is placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.

Flash Cache Mode WRITE-BACK
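To put a number on that, here is a small sketch of my own (the arithmetic is mine, not from the post) of usable flash capacity in each mode, assuming normal redundancy (two copies):

```python
def usable_flash_gb(raw_flash_gb, mode, mirror_copies=2):
    """Rough usable Flash Cache capacity for a given cache mode.

    Write-Through: cached blocks are not mirrored on flash, so the
    full raw capacity is usable. Write-Back: every cached write is
    mirrored, so usable capacity drops by the mirroring factor.
    """
    if mode == "WriteThrough":
        return raw_flash_gb
    if mode == "WriteBack":
        return raw_flash_gb / mirror_copies
    raise ValueError("unknown flash cache mode: " + mode)

# Hypothetical cell with 3,200 GB of raw flash:
print(usable_flash_gb(3200, "WriteThrough"))  # 3200
print(usable_flash_gb(3200, "WriteBack"))     # 1600.0
```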

Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom is a high number of free buffer waits wait events. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason for turning on the Write-Back Flash Cache Mode only on demand. I explained much the same thing during my current Oracle University Exadata class in Frankfurt, by the way :-)
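For completeness: if Write-Back genuinely is warranted, the mode is changed per cell with CellCLI. The sequence below is a sketch of the commonly documented procedure (cellsrv must be down to change the mode); verify it against the Exadata documentation for your release before running it:

```
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode = WriteBack
CellCLI> alter cell startup services cellsrv

CellCLI> list cell attributes flashcachemode
         WriteBack
```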

Tagged: exadata

SLOB Data Loading Case Studies – Part I. A Simple Concurrent + Parallel Example.

Introduction

This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE, because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1, there is concurrent data loading.

Occasionally I hear of users having trouble combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading – in the absence of tools like Database Resource Manager, I suppose. To that end, this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.

In this first example I’ll load with LOAD_PARALLEL_DEGREE set to 8. The scale is 524,288 SLOB rows, which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:

SLOB-data-load-3
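In case the screenshot is hard to read, the two load-relevant settings for this example would look like this in slob.conf (comments are mine):

```shell
# Only these two slob.conf parameters affect setup.sh data loading
SCALE=524288            # rows per schema; one row per 8K block
LOAD_PARALLEL_DEGREE=8  # concurrent loader threads
```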

The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query, therefore this example is a concurrent + parallel data load.

SLOB-data-load-6

The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.

SLOB-data-load-1

As reported by setup.sh, the data loading completed in 1,539 seconds, a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader; in future posts in this series I’ll cover some tuning tips to improve data loading. The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115W Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (e.g., 2 active 8GFC Fibre Channel paths to an enterprise class array) can demonstrate this sort of SLOB data loading throughput.

SLOB-data-load-2
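A quick sanity check of the size and rate figures quoted above (block size × rows per schema × schemas, then scaled to an hourly rate):

```python
BLOCK_SIZE = 8192        # bytes per SLOB row (one row per 8K block)
SCALE = 524288           # rows (blocks) per schema
SCHEMAS = 64
LOAD_SECONDS = 1539      # as reported by setup.sh

total_bytes = BLOCK_SIZE * SCALE * SCHEMAS
total_gib = total_bytes / 2**30
rate_gib_per_hour = total_gib / LOAD_SECONDS * 3600

print(round(total_gib))          # 256
print(round(rate_gib_per_hour))  # 599 -- i.e., "roughly 600GB/h"
```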


After setup.sh completes, it is good to count how many loader threads successfully loaded the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE in cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the result of such a word count (wc -l) should be one greater than the number of schemas setup.sh was directed to load.

SLOB-data-load-4
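As a sketch of that check (the line format below is simulated; only the grep pattern and the expected schemas + 1 count come from the post):

```shell
SCALE=524288
SCHEMAS=64

# Simulate a cr_tab_and_load.out with one "rows loaded" line per
# schema, plus the zeroth schema that setup.sh always loads.
for i in $(seq 0 "$SCHEMAS"); do
  echo "USER${i}: ${SCALE} rows loaded"
done > cr_tab_and_load.out

count=$(grep -c "$SCALE" cr_tab_and_load.out)
echo "$count"   # expect SCHEMAS + 1 = 65
```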

The next screen shot shows the required execution of the procedure.sql script. This procedure must be executed after any execution of setup.sh.

SLOB-data-load-7

Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows, the IOPS tablespace ended up at a little over 270GB, which can be accounted for by the size of the tables based on slob.conf, the number of schemas, and a little overhead for indexes.


SLOB-data-load-5

Summary

This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.


Filed under: oracle

F5 Load Balancer Training Course

I’m on an F5 Load Balancer training course for the next 3 days.

I have no idea what to expect and to be honest, I really don’t think I should be here. :) With the exception of a bit of fiddling with Apache reverse proxies, I don’t really know anything about this stuff, so I’m not sure if this will go over my head or be intensely slow and boring…

If anything comes out of it worth blogging about I certainly will.

Chertsey is like a seaside town. It’s full of cafes, restaurants and odd little shops. When I was searching for a place to swim Google came up with loads of pool installation and maintenance companies, so I think it’s a pretty rich area. I found a local swimming pool, but I’ve had to remortgage my house to afford to swim there. :) I went this morning at 06:30 and it wasn’t too crowded. It’s unusual to find a private gym with a 25M pool. Most of them in the UK have tiny little things that you can’t swim in. It was a bit on the warm side, but then I guess you have to expect that when it’s not a training pool. Hopefully I won’t be too much of a slob by the time I get home.

I’m thinking I might do a cinema visit every night to play catch-up.

Cheers

Tim…


F5 Load Balancer Training Course was first posted on August 6, 2014 at 8:09 am.

A brief history of time^H^H Oracle session statistics

I didn’t intend to write another blog post yesterday evening at all, but found something that was worth sharing and got me excited… And when I started writing I intended it to be a short post, too.

If you have been digging around Oracle session performance counters a little you undoubtedly noticed how their number has increased with every release, and even with every patch set. Unfortunately I don’t have a 11.1 system (or earlier) at my disposal to test, but here is a comparison of how Oracle has instrumented the database. I have already ditched my 12.1.0.1 system as well, so no comparison there either :( This is Oracle on Linux.

The script

In the following examples I am going to use a simple query to list the session statistics by their class. The decode statement is based on the official documentation set. There you find the definition of v$statname plus an explanation of the meaning of the class-column. Here is the script:

with stats as (
        select name, decode(class,
                1, 'USER',
                2, 'REDO',
                4, 'ENQUEUE',
                8, 'CACHE',
                16, 'OS',
                32, 'RAC',
                64, 'SQL',
                128, 'DEBUG',
                'NA'
        ) as decoded_class from v$statname
)
select count(decoded_class), decoded_class
 from stats
 group by rollup(decoded_class)
 order by 1
/

Oracle 11.2.0.3

11.2.0.3 is probably the most common 11g Release 2 version currently out there in the field, or at least that’s my observation. According to MOS Doc ID 742060.1, 11.2.0.3 was released on 23 September 2011 (is that really that long ago?) and is already out of error correction support, by the way.

Executing the above-mentioned script gives me the following result:

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  32 REDO
                  47 NA
                  93 SQL
                 107 USER
                 121 CACHE
                 188 DEBUG
                 638

So there are 638 of these counters. Let’s move on to 11.2.0.4.

Oracle 11.2.0.4

Oracle 11.2.0.4 is interesting as it has been released after 12.1.0.1. It is the terminal release for Oracle 11.2, and you should consider migrating to it as it is in error correction support. The patch set came out on 28 August 2013. What about the session statistics?

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  34 REDO
                  48 NA
                  96 SQL
                 117 USER
                 127 CACHE
                 207 DEBUG
                 679

A few more, all within what can be expected.

Oracle 12.1.0.2

Oracle 12.1.0.2 is fresh off the press, released just a few weeks ago. Unsurprisingly the number of session statistics has been increased again. What did surprise me was the number of statistics now available for every session! Have a look at this:

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  35 RAC
                  68 REDO
                  74 NA
                 130 SQL
                 130 USER
                 151 CACHE
                 565 DEBUG
                1178

That’s nearly double what you found for 11.2.0.3. Incredible, and hence this post. Comparing 11.2.0.4 with 12.1.0.2, you will notice the following:

  • same number of enqueue stats
  • same number of OS stats
  • 10 additional RAC stats
  • twice the number of REDO related statistics
  • quite a few more not classified (26)
  • 34 more sql related
  • 13 more in the user-class
  • 24 additional stats in the cache-class
  • and a whopping 358 (!) in the debug class
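Those deltas can be read straight off the two result sets; here is a quick script to double-check them (the numbers are copied from the listings above):

```python
# Session statistic counts by class, from the query output above.
stats_11204 = {"ENQUEUE": 9, "OS": 16, "RAC": 25, "REDO": 34, "NA": 48,
               "SQL": 96, "USER": 117, "CACHE": 127, "DEBUG": 207}
stats_12102 = {"ENQUEUE": 9, "OS": 16, "RAC": 35, "REDO": 68, "NA": 74,
               "SQL": 130, "USER": 130, "CACHE": 151, "DEBUG": 565}

deltas = {k: stats_12102[k] - stats_11204[k] for k in stats_11204}
for cls, d in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{cls:8s} +{d}")

print(sum(stats_11204.values()), "->", sum(stats_12102.values()))  # 679 -> 1178
```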

The debug class (128) shows lots of statistics (including spare ones) for the in-memory option (IM):

SQL> select count(1), class from v$statname where name like 'IM%' group by class;

  COUNT(1)      CLASS
---------- ----------
       211        128

Happy troubleshooting! This reminds me to look into the IM option in more detail.

SLOB Deployment – A Picture Tutorial.

SLOB can be obtained at this link: Click here.

This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on a 12.1.0.2 Oracle Database but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases as well.

Synopsis

  1. Create a tablespace for SLOB.
  2. Run setup.sh
  3. Verify user schemas
  4. Create The SLOB procedure In The USER1 Schema
  5. Execute runit.sh. An Example Of Wait Kit Failure and Remedy
  6. Execute runit.sh Successfully
  7. Using SLOB With SQL*Net
    1. Test SQL*Net Configuration
    2. Execute runit.sh With SQL*Net
  8. More About Testing Non-Linux Platforms


Create a Tablespace for SLOB

If you already have a tablespace to load SLOB schemas into please see the next step in the sequence.

SLOB-deploy-1
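If you still need to create one, something along these lines does the job (the tablespace name IOPS matches the later examples; the datafile path and sizes are placeholders to adjust for your system):

```sql
-- Placeholder path and sizes; adjust for your storage layout.
CREATE BIGFILE TABLESPACE iops
  DATAFILE '/u01/oradata/ORCL/iops01.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G;
```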

Run setup.sh

Provided database connectivity works with ‘/ as sysdba’, this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect, since this actually controls the concurrent degree of SLOB schema creation/loading. Underneath the concurrency there may be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as to not flood the system until you’ve practiced with setup.sh for a while.


SLOB-deploy-2

Verify Users’ Schemas

After taking a quick look at cr_tab_and_load.out, as per the setup.sh instruction, feel free to count the number of schemas. Remember, there is a “zero” user, so setup.sh with 16 will produce 17 SLOB schema users.

SLOB-deploy-3

Create The SLOB Procedure In The USER1 Schema

After setup.sh and counting user schemas please create the SLOB procedure in the USER1 schema.

SLOB-deploy-4

Execute runit.sh. An Example Of Wait Kit Failure and Remedy

This is an example of what happens if one misses the detail to create the semaphore wait kit as per the documentation. Not to worry, simply do what the output of runit.sh directs you to do.

SLOB-deploy-5

Execute runit.sh Successfully

The following is an example of a healthy runit.sh test.

SLOB-deploy-6

Using SLOB with SQL Net

Strictly speaking, this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured on a Windows, AIX, or Solaris box, this is how one tests SLOB. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop, or a VM running on the system you intend to test!) and SQL*Net.

Test SQL*Net Configuration

We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.

SLOB-deploy-7
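A minimal client-side configuration for such a test might look like the following (the alias, host, port and service name are all placeholders):

```
# tnsnames.ora entry on the Linux client (all values are placeholders)
SLOB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = slobdb))
  )

# then, from the shell:
$ tnsping SLOB
```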

Execute runit.sh With SQL*Net

The following is an example of a successful runit.sh test over SQL*Net.

SLOB-deploy-8

More About Testing Non-Linux Platforms

Please note, loading SLOB over SQL*Net has the same configuration requirements as what I’ve shown for data loading (i.e., running setup.sh). Consider the following screenshot which shows an example of loading SLOB via SQL*Net.

SLOB-deploy-9

Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.

SLOB-deploy-10


Summary

This short post shows the simple steps needed to deploy SLOB in both the simple Linux host-only scenario as well as via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net there are no barriers to testing SLOB databases running on any platform to include Windows, AIX and Solaris.


Filed under: oracle

Metering and Chargeback

In the past few posts, I’ve covered setting up PDBaaS, using the Self Service portal with PDBaaS, setting up Schema as a Service, and using the Self Service Portal with Schema as a Service, all of these using Enterprise Manager Cloud Control 12c release 12.1.0.4. Now I want to move on to an area where you start to get more back from all of this work – metering and chargeback.

Metering is something that Enterprise Manager has done since its very first release. It’s a measurement of some form of resource – obviously in the case of Enterprise Manager it’s a measurement of how much computing resource such as CPU, I/O, memory, storage etc. has been used by an object. If I think way back when to the very first release of Enterprise Manager I ever saw – the 0.76 release whenever that was! – the thing that comes to mind most was it had this remarkably pretty tablespace map, that showed you diagrammatically just where every block in an object was in a particular tablespace. Remarkably pretty, as I said – but virtually useless, because all you could do was look at the pretty colours!

Clearly, metering has come a long, long way since that time, and if you have had Enterprise Manager up and running for some time you now have at your fingertips metrics on so many different things that you may be lost trying to work out what can you do with it all. Well, that’s where Chargeback comes into play. In simple terms, chargeback is (as the name implies) an accounting tool. In Enterprise Manager terms, it has 3 main functions:

  1. It provides a way of aggregating the enormous amount of metrics data that Enterprise Manager has been collecting.
  2. It provides reports to the consumers of those metrics of how much they have used of those particular metrics.
  3. If you have set it up to do so, it provides a way for the IT department to charge those consumers for the resources they have used.

Let me expand on that last point a little further. Within the Chargeback application, the cloud administrator can set specific charges for specific resources. As an example, you might decide to charge $1 a month per gigabyte of memory used for a database. Those charges can be transferred to some form of billing application such as Oracle’s “Self-Service E-Billing” application and end up being charged as a real cost to the end user. However, my experience so far has been that few people are actually using it to charge a cost to the end user. There are two reasons for that:

  • Firstly, most people are still not in the mindset of paying for computing power in the same way as other utilities i.e. paying for the amount of computing power that is actually consumed, as we do with our gas, electricity and phone bills.
  • Secondly, and as a direct extension (I believe anyway) of the first reason, most people are still not capable of deciding just how much to charge for a “unit” (whatever that might be) of computing power. In fact, I have seen arguments over just what to charge for a “unit” of computing power last much longer than any meetings held to decide to actually implement chargeback!

The end result of this is that I have most often seen customers choose to implement SHOWback rather than CHARGEback. Showback is in many ways very similar to chargeback. It’s the ability to provide reports that show end users how much computing resource they have used, AND how much it would have cost them if the IT department had indeed decided to charge for it. In some ways this is just as beneficial to the IT department, as it allows them to have a much better grasp on what they need to know for budgeting purposes, and it avoids the endless arguments about whether end users are being charged too much. :)
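As a toy illustration of the showback arithmetic (the rates and usage figures here are invented for the example, not Enterprise Manager defaults):

```python
# Hypothetical charge plan rates, per month.
RATES = {"memory_gb": 1.00, "storage_gb": 0.10, "cpu": 25.00}

def showback(usage):
    """Return line items and total for one cost centre's monthly usage."""
    lines = {res: qty * RATES[res] for res, qty in usage.items()}
    return lines, sum(lines.values())

# Invented usage for a "Sales" cost centre.
sales_usage = {"memory_gb": 16, "storage_gb": 500, "cpu": 4}
lines, total = showback(sales_usage)
for res, cost in lines.items():
    print(f"{res:10s} ${cost:8.2f}")
print(f"{'total':10s} ${total:8.2f}")   # $166.00
```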

Terminology

OK, let’s talk about some of the new terminology you need to understand before we implement chargeback (from now on, I will use the term “chargeback” to cover both “chargeback” and “showback” for simplicity’s sake, and because the application is actually called “Chargeback” in the Enterprise Manager Cloud Control product).

Chargeback Entities

The first concept you need to understand is that of a chargeback entity. In Enterprise Manager terms, a target typically uses some form of resource, and the Chargeback application calculates the cost of that resource usage. In releases prior to Enterprise Manager 12.1.0.4, the Chargeback application collected configuration information and metrics for a subset of Enterprise Manager targets. In the 12.1.0.4 release, you can add Chargeback support for Enterprise Manager target types for which there is no current out of the box Chargeback support via the use of EMCLI verbs. These chargeback targets, both out of the box and custom types, are collectively known as “entities”.

Charge Plans

A charge plan is what Enterprise Manager uses to associate the resources being charged for and the rates at which they are charged. There are two types of charge plans available:

  • Universal Charge Plan – The universal charge plan contains the rates for CPU, storage and memory. Despite the name, it is not truly universal, because it does not apply to all entity types. For example, it is not applicable to J2EE applications.
  • Extended Charge Plans – The Universal Charge Plan is an obvious starting point, but there are many situations where entity-specific charges are required. Let’s say you have a lot of people who understand Linux, but there is a new environment being added to your data centre that requires Windows knowledge. If you had to pay a contractor to come in to look after that environment because it was outside your knowledge zone, it would be fair to charge usage of the Windows environment at a higher rate. As another example, let’s say your standard environments did not use Real Application Clusters (RAC), and a new environment has come in that requires the high availability you can get from a RAC environment. RAC is, of course, a database option that you need to pay an additional license fee for, so that should be charged at a higher rate. An extended charge plan can be used to meet these sorts of requirements, as it provides greater flexibility to Chargeback administrators. Extended charge plans allow you to:
    • Setup specific charges for specific entities
    • Define rates based on configuration and usage
    • Assign a flat rate independent of configuration or usage
    • Override the rates set for the universal plan

    An out of the box extended plan is provided that you can use as a basis for creating your own extended plans. This plan defines charges based on machine sizes for the Oracle VM Guest entity.

Cost Centres

Obviously, when charges for resource usage are implemented, these charges must be assigned to something. In the Chargeback application, the costs are assigned to a cost centre. Cost centres are typically organized in a hierarchy and may correspond to different parts of an organization — for example, Sales, Development, HR, and so forth – or they may correspond to different customers – for example, where you are a hosting company and host multiple customer environments. In either case, cost centres are defined as a hierarchy within the Chargeback application. You can also import cost centres that have been implemented in your LDAP server, if you want to use those.

Reports

The main benefit you get from using Chargeback is the vast amount of information it puts at your fingertips. This information can be reported on by administrators in a variety of formats available via BI Publisher, including pie charts and bar graphs, as well as drilling down to charges based on a specific cost centre, entity type, or resource. You can also make use of trending reports over time and can use this to aid you in your IT budget planning. Outside the Chargeback application itself, Self-Service users can view chargeback information related to the resources they have used within the Self Service Portal.

What’s Next?

So now you have an understanding of the capabilities of the Chargeback application in the Enterprise Manager product suite. The next step, of course, is to set it up. I’ll cover that in another blog post, so stay tuned for that!

More Oracle Multitenant Changes (12.1.0.2)

When I wrote about the remote cloning of PDBs, I said I would probably be changing some existing articles. Here’s a change I’ve done already.

There are also new articles.

I’m sure there will be some more little pieces coming out in the next few days…

As I mentioned before, the multitenant option is rounding out nicely in this release.

Cheers

Tim…



More Oracle Multitenant Changes (12.1.0.2) was first posted on August 4, 2014 at 7:34 pm.