Oakies Blog Aggregator

Consistent Reads

Here’s a quick demo to make a point about consistent reads (prompted by a question on the Oracle-L mailing list):

SQL> drop table t1;

Table dropped.

SQL> create table t1 (n1 number);

Table created.

SQL> insert into t1  values(0);

1 row created.

SQL> begin
  2  for i in 1..1000 loop
  3  update t1 set n1 = i;
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

Note that I haven’t issued a commit in this session, and all I’ve got is a single row in the table (and because it’s my usual demo setup of locally managed tablespaces with uniform extents of 1MB using freelist management I know that that one row is in the first available block of the table).

How much work is a second session going to do to scan that table?

SQL> alter system flush buffer_cache;
SQL> execute snap_my_stats.start_snap
SQL> select * from t1;
SQL> set serveroutput on size 1000000 format wrapped
SQL> execute snap_my_stats.end_snap
---------------------------------
Session stats - 18-Apr 11:33:01
Interval:-  2 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
session logical reads                                                      967
consistent gets                                                            967
consistent gets from cache                                                 967
consistent gets - examination                                              964
consistent changes                                                       1,001
CR blocks created                                                            1
data blocks consistent reads - undo records applied                      1,001
IMU CR rollbacks                                                            41

The snap_my_stats package is similar in concept to Tom Kyte’s “runstats” or Tanel Poder’s “snapper” program to capture changes in values in the dynamic performance views over short time periods. In this case I’ve deleted all but a few of the larger changes, and a couple of small changes.
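If you don't have such a package installed, a rough approximation (a sketch only - you take a snapshot of the values before and after the test and do the subtraction yourself) is simply to query v$mystat for the statistics of interest:

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in (
           'session logical reads',
           'consistent gets - examination',
           'CR blocks created',
           'data blocks consistent reads - undo records applied'
       );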

The figure that stands out (probably) is the “session logical reads” – we’ve done 967 logical I/Os to scan a table of just one block. The reason for this is that we’ve created a read-consistent copy of that one block (“CR blocks created” = 1), and it has taken a lot of work to create that copy. We’ve had to apply 1,001 undo records (“data blocks consistent reads – undo records applied” = 1001).

Most of those undo records come from individual accesses to undo blocks (accesses which are of the cheaper “consistent gets – examination” type that need only a single get on the “cache buffers chains” latch), following the “UBA (undo block address)” pointer in the relevant ITL entry of the table block; but since this is a 10g database the last few undo records come out of the “In-memory Undo” of the other session. Basically the cloning operation is something like this:

  1. Get current block
  2. Notice uncommitted transaction
  3. Clone current block – then ignore current block
  4. Get Undo block indicated by uncommitted ITL and apply undo change vector
  5. Repeat step four – 1,000 times
  6. Block is now clear of all uncommitted transactions
  7. There are no committed transactions with a commit SCN later than the start of query
  8. Display contents of block

It is an interesting point that as the first session created undo records it would pin and fill undo blocks – so it would only do a few current gets (one for each block) on the undo blocks it was using. As another process reverses out the changes in a CR clone it has to get and release each undo block every time it wants a single undo record … applying undo records introduces far more latch and buffer activity than the original generation of the undo.

Footnote

It’s worth knowing that there are three statistics relating to applying undo records:

transaction tables consistent reads - undo records applied        Estimating "old" commit SCNs during delayed block cleanout
data blocks consistent reads - undo records applied               Creating CR clones
rollback changes - undo records applied                           The result of a real "rollback;"

See this posting on comp.databases.oracle.server for a rough description of transaction table consistent reads; and this elderly posting highlighting a benefit of knowing about rollback changes.
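If you want to check these three counters at the instance level (a quick sketch, not part of the original demo), you can pick them out of v$sysstat by name:

select name, value
from   v$sysstat
where  name in (
           'transaction tables consistent reads - undo records applied',
           'data blocks consistent reads - undo records applied',
           'rollback changes - undo records applied'
       );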

Footnote 2

The second step in the list of actions is: “Notice uncommitted transaction”. It’s probably worth pointing out that another part of the ITL entry holds the transaction id (“xid”), which implicitly identifies the undo segment and the transaction table slot in that segment that has been used to hold the transaction state. The current contents of that slot allow Oracle to determine whether or not (and when, if necessary) the transaction was committed.
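For what it’s worth, you can see the same three components of the transaction id (undo segment number, transaction table slot, and sequence number) for live transactions by querying v$transaction – a sketch, assuming there is a session with an open transaction:

select s.sid,
       t.xidusn   undo_segment,
       t.xidslot  transaction_slot,
       t.xidsqn   sequence#,
       t.status
from   v$transaction t, v$session s
where  s.taddr = t.addr;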

Chrysopylae, Part 2

Part 2 of my review of the Oracle GoldenGate 11g Implementer’s Guide begins with Chapter 5, Configuration Options.

The Configuration options chapter deals with more advanced options like batching, compression, encryption, triggering events, loop and conflict detection and DDL replication.

Batching, how SQL statements are cached to support it, error handling, and fallback processing are all thoroughly explained.

Compression is also covered in some detail, with information about how GoldenGate cannot replicate data from Oracle compressed tables (including the EHCC compression from Exadata database machines).

In-flight (message) encryption and at-rest (trail) encryption are covered as well.

Event triggering is covered at a basic level, but gives a good insight as to what is possible – including the ability to have GoldenGate fire off a shell script in response to a particular set of values being detected in the capture process.

The discussion of bi-directional replication begins with a thorough list of items to be considered, including loops, conflict detection and resolution, sequences and triggers.

Conflict resolution options are slightly limited, and aren’t clearly defined – for example, applying a net difference instead of the after image is only useful for a subset of mathematical operations on numerical columns.  And there is no mention of prioritization by site (by which some sites’ updates always take precedence).  In truth, conflict resolution procedures can get pretty complicated, and I’m surprised there isn’t more information about them in this section or a referral to a later section (for example, Chapter 7 on Advanced Configuration).

The section on sequences is equally lacking in options, starting with a rather unclear statement about not supporting the replication of sequence values – what is really meant is that sequences themselves are not synchronized across multiple databases.  And the recommendation to use odd / even strategies is also rather simplistic – missing out on multi-master scenarios.  One can always reserve lower digits to enable more than 2 sites, and technically one can set up sequences to allow for an infinite number of sites as well…
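For illustration only (this is not from the book), a staggered-increment scheme is easy to sketch: give every site the same increment and a different starting offset, so the ranges never collide even with more than two masters. Here ORDER_SEQ is a hypothetical sequence name and up to ten sites are assumed:

-- Site 1
create sequence order_seq start with 1 increment by 10;
-- Site 2
create sequence order_seq start with 2 increment by 10;
-- ... and so on, up to
-- Site 10
create sequence order_seq start with 10 increment by 10;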

Trigger handling advice is also rather simplistic, leading to more questions than answers: it talks about disabling triggers during the application of replicated data, but there is no mention of how that will affect an active/active system where local transactions are occurring.

There is a good discussion on DDL replication, with the information that the RECYCLEBIN must be disabled.

Chapter 6 – Configuring GoldenGate for HA

This chapter talks about GoldenGate in RAC environments, including the need for shared filesystems, and configuring GoldenGate with VIPs and with clusterware.  Sample scripts and commands are included – overall this chapter stays on point.

Chapter 7 – Advanced Configuration

In reality I’d call this chapter Configuration Details, but it does a very nice job of going through details around how to map objects in a replication configuration, as well as exploring the ability for GoldenGate to detect errors and execute SQL and/or stored procedures in response to those conditions.

Basic transformation is also covered.

Chapter 8 – Managing Oracle GoldenGate

This chapter covers basic command level security and spends a lot of time on the command interpreter GGSCI.  Also nice is a set of scripts and instructions to take performance output and format it for graphing in Excel.

Chapter 9 – Performance Tuning

The performance tuning chapter focuses on how to parallelize replication traffic and thoroughly exploit all system resources to increase throughput.  It also makes mention of the 11.1 release of GoldenGate.  Details about tuning DBFS are also included.

Like a lot of Performance Tuning advice, this section is more about throughput than performance optimization – in that vein it succeeds in covering ways to push more data more quickly.

Chapter 10 – Troubleshooting GoldenGate

The troubleshooting chapter begins with a good section on tracking down why replication may not be working – getting statistics on every process to see whether it thinks it is capturing or sending data.  There is also good information on the CHECKPARAMS command, which can be used to validate configuration files, and the author also covers potential issues with the command.

The author covers looking at checkpoints and networks as well.

There is a good section on creating exception handlers to capture and diagnose duplicate and missing record errors, including capture of before and after images.

Finally the chapter goes into detail on the LOGDUMP utility which can be used to examine trail files for error conditions.

Summary

Overall I found the book to be a good companion to the GoldenGate manuals and training materials.  It’s obvious that the author has a lot of configuration and operational experience with GoldenGate.  I found the book weak on design and planning for replication environments, so if you’re new to replication I’d suggest adding another book to your library above and beyond this one.

Can I Fit a 80MB Database Completely in a 80MB Buffer Cache?

This is in the Series "100 Things You Probably Didn't Know About Oracle". If you haven't already, I urge you to read the other parts -

  • Part1 (Commit does not force writing of buffers into the disk), 
  • Part2 (How Oracle Locking Works), 
  • Part3 (More on Interested Transaction Lists).

During the recently concluded Collaborate 11 (http://collaborate11.ioug.org) I was asked a question which led me to the idea for this entry - the fourth in the series. If the database size is 100GB and you have a 100GB buffer cache, all the blocks will be in the memory and there will never be an I/O to the datafiles other than the regular DBWR lazy writes, right?

This is a very important question and you must consider the implications carefully. Many folks assume that a large buffer cache eliminates or reduces buffer-related I/O – a very wrong assumption. A large buffer cache helps, but the relationship between buffers and blocks is not one to one: a block may have more than one buffer in the buffer cache. How so? Let's see how that happens. We will examine this in a non-RAC database to keep it simple.

Setup

First, let's create a table and insert some rows into it.

SQL> create table bhtest (col1 number, col2 char(2000));

Table created.

SQL> insert into bhtest values (&n,'&c');
Enter value for n: 1
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (1,'x')

1 row created.

SQL> /
Enter value for n: 2
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (2,'x')

1 row created.

... insert 6 rows ...
commit;

Note how I used char(2000) instead of varchar2(2000). The reason is simple: the char datatype is blank-padded to its full declared length – all 2,000 bytes – regardless of the actual value placed there. So, even though I put only 'x' there, each row will be quite a long one.
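You can confirm the padding with a quick check (a sketch; vsize() reports the stored length of the column value and should show 2000 for every row even though we inserted only 'x'):

select col1, vsize(col2) stored_bytes
from   bhtest;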

After the insertion, check which blocks these rows went into. We can do that by calling the dbms_rowid.rowid_block_number() function.

select col1, dbms_rowid.rowid_block_number(rowid)
from bhtest;

      COL1 DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
---------- ------------------------------------
         4                                 4419
         5                                 4419
         1                                 4423
         2                                 4423
         3                                 4423

5 rows selected.

From the output we can see that rows 1 through 3 went to block 4423, and rows 4 and 5 went to block 4419. We also need the object ID of the table:

SQL> select object_id from dba_objects
  2  where object_name = 'BHTEST'
  3  /

 OBJECT_ID
----------
     99360

This completes the setup. In summary, we know that these rows are in blocks 4419 and 4423 and the object ID is 99360.

Experiment

If possible, clear out the cache by shutting down and restarting the database. This brings up an empty cache. It's not absolutely necessary though. Now select from the table:

SQL> select * from bhtest;  

This will bring all the blocks of the table into the buffer cache. To check, you can query the view V$BH (the buffer headers). The column OBJD is the object_id (actually it's the DATA_OBJECT_ID; in this case the two are the same, but they may not be in all cases). Here are the columns of interest to us:

  • FILE# - the file_id 
  • BLOCK# - the block number 
  • CLASS# - the type of the block, e.g. data block, segment header, etc. Shown as a code 
  • STATUS - the status of the buffer, Exclusive Current, Current, etc. 
  • LOCK_ELEMENT_ADDR - if there is a buffer lock on the buffer, then the address of the lock 

To make it simpler to understand, we will use a decode() on the class# field to show the type of the block. With that, here is our query:

select file#, block#,
    decode(class#,1,'data block',2,'sort block',3,'save undo block', 4,
'segment header',5,'save undo header',6,'free list',7,'extent map',
8,'1st level bmb',9,'2nd level bmb',10,'3rd level bmb', 11,'bitmap block',
12,'bitmap index block',13,'file header block',14,'unused',
15,'system undo header',16,'system undo block', 17,'undo header',
18,'undo block') class_type, status, lock_element_addr
from v$bh
where objd = 99360
order by 1,2,3
/

Save this query because we will be using it a lot in this experiment. Here is the output.

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         xcur       00

7 rows selected.

There are 7 buffers. In this example I did not restart the database, so there are two buffers for the segment header. There is one buffer for each data block – from 4419 to 4423. The status is "xcur", which stands for Exclusive Current. I will explain that in detail in a later blog, but in summary it means that the buffer was acquired (or filled by a block) with the intention of being modified. If the intention is merely to select, the status would have shown CR (Consistent Read). In this case, since the rows were inserted, modifying the buffer, the blocks were gotten in xcur mode.

Now, from a different session, update a single row. For easier identification I have used Sess2> as the prompt:

Sess2> update bhtest set col2 = 'Y' where col1 = 1;

1 row updated.

From the original session, check the buffers:

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

8 rows selected.

There are 8 buffers now, up one from the previous seven. Note there are two buffers for block ID 4423. One CR and one xcur. Why two?

It's because the update statement had to modify the block. Instead of modifying the existing buffer, Oracle creates a "copy" of the buffer and modifies that. This copy now has XCUR status because it was acquired for the purpose of being modified, and the previous buffer of this block, which used to be xcur, is converted to "CR". There can't be more than one XCUR buffer for a specific block – that's why it is exclusive. If a session wants to find the most recently updated buffer, it just has to look for the copy with the XCUR status; all others are marked CR.
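If you just want a count of how many copies of each block exist and in what state, a shorter variant of the earlier query (a sketch using the same object id) does the job:

select block#, status, count(*) buffers
from   v$bh
where  objd = 99360
group  by block#, status
order  by block#, status;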

Let's continue with this experiment. From a third session, update a different row in the same block.

Sess3> update bhtest set col2 = 'Y' where col1 = 2;

1 row updated.

From the original session, find out the buffers.

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00

There are 4 buffers for block 4423 now - up from 2. What happened? Since the buffer was required to be modified once more, Oracle created yet another "copy", marked it "xcur" and relegated the older one to "cr". What about the extra CR copy? That was done because Oracle had to perform something called CR processing to create a CR copy from another CR copy or an XCUR copy.

Notice how the number of buffers proliferates. Let's change the experiment a little bit. From a 4th session, select from the table instead of updating a row:

Sess4>  select * from bhtest ;

From the original session, check for the buffers.

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

12 rows selected.

Whoa! There are 12 buffers now. Block 4423 alone now has 6 buffers – up from 4 earlier. This was merely a select statement which, by definition, does not change data. Why did Oracle create more buffers for it?

Again, the answer is CR processing. CR processing creates copies of the buffer and rolls them back (by applying undo) to produce a CR copy as of the correct SCN. This created 2 additional CR copies. From one block you now have 6 buffers, and some of those buffers were created as a result of a select statement. This should answer the question of whether a buffer cache the same size as the database will be able to hold all the buffers.

Free Buffers

While being on the subject, let's ponder over another question - what happens when you flush the buffer cache? Let's see.

SQL> alter system flush buffer_cache;

System altered.

Checking the buffers using the script shown earlier:

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

The buffers are still there and are still marked as belonging to the object; however the status is "free", i.e. the buffers can be reused if some session wants them for some other block. If a session wants a free buffer and can't find one, it waits on the wait event "free buffer waits". At that point Oracle makes room in the buffer cache for the blocks requested by the session by forcing older buffers out of the cache: the CR copies are discarded (since they were not updated) and the XCUR copies are written to disk (if not written already).

Who does the writing? It's the database writer process, DBWR, whose instances are generally named DBW0, DBW1, and so on. DBWR is a very lazy process: it sleeps most of the time unless it is woken up by someone. In this case the session (actually the server process) kicks DBWn to write the dirty buffers to disk and change their status to non-dirty. This is why a SELECT statement may sometimes cause writes to data files. Until buffers are freed up, the session waits patiently and displays to all that it's waiting on "free buffer waits". You can check this by selecting the EVENT column from V$SESSION.
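A quick way to spot sessions stuck in this state (a sketch only) is to look for the wait event by name in V$SESSION:

select sid, event, state, seconds_in_wait
from   v$session
where  event = 'free buffer waits';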

Impact on Segment Header

So far we have talked about data blocks. When does the segment header see some action? The segment header does not contain any data, so it does not need to be updated every time a row is updated; but when the segment itself is modified, the segment header is updated. Let's see an example by adding a column:

SQL> alter table bhtest add (col3 number);

Table altered.

Check the buffers:

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

16 rows selected.

There are 6 copies of the segment header. The table alteration changed the segment header block – block# 4418. Just like any other block, the buffer was copied and modified, creating multiple copies of the block. When you issue another DDL operation – truncate – the result is the same:

SQL> truncate table bhtest; 

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4416 1st level bmb      free       00
         7       4417 2nd level bmb      free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

20 rows selected.

There are now additional segment header buffers, since truncate is a DDL command and the segment header is modified. Even though there was no block of the table in the buffer cache (remember, we flushed it), the segment header still needs to be updated. Truncate also marks all blocks as free, resets the high water mark, and updates the bitmap blocks. A bitmap block (BMB) is used in tablespaces with automatic segment space management (ASSM) to record whether a block is free or not, similar to the functionality of freelists. The truncate caused the BMBs (there are two here – 1st level and 2nd level) to be modified as well, so they also come into the buffer cache.
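If you want to see what the ASSM bitmaps currently say about the segment's blocks, dbms_space.space_usage reports the block counts per "freeness" bucket. This is just a sketch (run as the owner of BHTEST; the variable names are mine):

set serveroutput on
declare
  l_unf  number;  l_unf_b  number;
  l_fs1  number;  l_fs1_b  number;
  l_fs2  number;  l_fs2_b  number;
  l_fs3  number;  l_fs3_b  number;
  l_fs4  number;  l_fs4_b  number;
  l_full number;  l_full_b number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'BHTEST',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks         => l_fs1,  fs1_bytes         => l_fs1_b,
    fs2_blocks         => l_fs2,  fs2_bytes         => l_fs2_b,
    fs3_blocks         => l_fs3,  fs3_bytes         => l_fs3_b,
    fs4_blocks         => l_fs4,  fs4_bytes         => l_fs4_b,
    full_blocks        => l_full, full_bytes        => l_full_b);
  -- full blocks vs. blocks reported 75-100% free by the bitmaps
  dbms_output.put_line('full blocks: ' || l_full ||
                       ', mostly empty (fs4) blocks: ' || l_fs4);
end;
/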

Takeaways

From the above discussion you saw how a table with just two populated blocks ended up occupying 20 buffers in the buffer cache. Imagine a normal database with, say, 10,000 filled blocks (8KB x 10,000 = 80MB). It might easily fill 200,000 buffers; with an 8KB block size that amounts to 8KB x 200,000 = 1,600MB, or about 1.6GB of buffer cache. The number of buffers taken up depends on several factors (a query to see the current buffer consumption per object is sketched after this list):

(1) fewer buffers are consumed by selects
(2) fewer buffers are consumed if commits occur more frequently, since the demand for CR processing is lower
(3) more buffers are consumed if the activity updates buffers
(4) the more the logical I/O, the greater the need for buffers
(5) DDL activities increase the demand for buffers, even if the blocks are empty
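As a rough way of watching this in your own database (a sketch; it maps OBJD to DATA_OBJECT_ID, as discussed earlier), you can count buffers per object and status:

select o.object_name, bh.status, count(*) buffers
from   v$bh bh, dba_objects o
where  o.data_object_id = bh.objd
group  by o.object_name, bh.status
order  by count(*) desc;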

Food for Thought

Now that you know how the buffers are consumed, what do you think the situation is in a RAC environment  where buffers are transferred from one instance to the other - better, worse, or about the same? The answer will come in a future blog. Stay tuned.

Fourth Day at Collaborate 11

Thank you to those who attended my fourth and last session Designing Applications for RAC at Collaborate 11. Considering it was at 8:30 AM on Thursday, right after the big party night, you must all be really committed to your work. Much appreciated.

You can get the slides here.

I look forward to seeing you all at Collab next year.

Second Day at Collaborate 11

Many thanks to those who attended my session RAC for Beginners despite the 8 AM timeslot. You must be really motivated. I hope you found the session useful and learned something you can start using.

You can download the slides here.

Thanks to those who came to my second session of the day - Secure Your Database in a Single Day. Hopefully that met your expectations. The slides are found here.

UKOUG

I’ve just bumped this to the top of the stack because the Call for Papers for the UKOUG annual conference (RDBMS and EBS themes, 5th – 7th Dec) is getting close to the closing date of 5th June. If you’re interested in presenting, you need to go to this link pretty soon.

Importing Multiple Databases AWR repositories

Gavin Soorma has a nice blog on exporting and importing AWR repositories http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/

I thought I would use this technique to load the AWR repositories of two databases I wanted to compare. I loaded the two repositories into the database on my laptop and, lo and behold, I could only find one new DBID in the data. It turns out the two databases, on different machines, are both copies of the same database, so they have the same DBID.
I didn’t realize this until I had imported the two different AWR repositories into the database on my laptop and found only one new DBID. Then I realized, “oh, they have the same DBID!”
So, OK, I’ll tell awrload to load the data into different schemas.
Well, awrload drops the staging user, so the first step was to stop awrload from doing that: I went into @?/rdbms/admin/awrload.sql and commented out the drop user section.
I then ran awrload and tried to give it a unique staging user name:

begin
*
ERROR at line 1:
ORA-20102: user name 'toto' is invalid
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 412
ORA-01918: user '' does not exist
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1713
ORA-06512: at line 3

It only seemed to accept the default user AWR_STAGE.
OK, so I loaded with AWR_STAGE and tried to rename the user using

update user$ set name='T_NEWNAME' where name='T';

see http://arjudba.blogspot.com/2008/05/how-to-rename-user-through-oracle.html

No go – awrload gives me an error that AWR_STAGE already exists, even after the rename.
OK, so I export the data from the first AWR_STAGE user:

 expdp schemas=awr_stage

then create a new user:

 create user usrtst
 identified by usrtst
 default tablespace sysaux
 temporary tablespace temp;
 grant connect to usrtst;
 alter user usrtst quota unlimited on sysaux;

then import the data:
impdp REMAP_SCHEMA=awr_stage:usrtst directory=AWR_DATA DUMPFILE=usrtst_awr.DMP FULL=y
I use the same technique for the second database, manually dropping the AWR_STAGE user between awrload runs.

 create user chprod
 identified by chprod
 default tablespace sysaux
 temporary tablespace  temp;
 grant connect to chprod;
 alter user chprod quota unlimited on sysaux;
 impdp REMAP_SCHEMA=awr_stage:chprod directory=AWR_DATA DUMPFILE=chprod_awr.DMP FULL=y

Now I have two schemas with the AWR data from the two different databases.
Unfortunately I no longer have the benefit of the views over the WRH$ tables, so I have to create them myself.
First I have to give the new users access to some views:

 grant create view to chprod;
 grant create view to usrtst;
 grant select on sys.v_$event_name to usrtst;
 grant select on sys.v_$event_name to chprod;

Now I connect as each user and create an ASH view

 create or replace view v$ash as select
 en.name  event,
 decode(wait_time,0,'WAITING','ON CPU') session_state,
 ash.*
 from
 WRH$_ACTIVE_SESSION_HISTORY ash,
 v$event_name en
 where en.event_id = ash.event_id
 /

The “event_id” column comes in handy, so I don’t have to worry about event# mapping to different names.

Now I can at least run some of my favorite queries on the ASH data. More work to be done, for sure, to figure out the best method of consolidating multiple AWR repositories with the same DBID.
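For example, here is the sort of query I mean (a sketch against the v$ash view created above – adjust the time window to taste):

select event, session_state, count(*) samples
from   v$ash
where  sample_time > sysdate - 1/24
group  by event, session_state
order  by count(*) desc;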

New scripts, tools and broken links

I have some broken links in my old blog entries right now, so if you’re looking for something, then download the whole zip file from here:

I have uploaded a .zip file (for Windows) and a .tar.gz file (for Unix/Mac). The scripts are all the same, apart from differences in the CR/LF line endings and in init.sql and i.sql, which contain some OS-specific commands.

I also uploaded the latest PerfSheet there, in which I fixed an annoying bug that complained about missing reference files when the file was opened.

I plan to fix the broken links some time between now and my retirement.

 

Brain Teaser: 10046 Extended SQL Trace Shows “EXEC #435118472:c=15600,e=510”, How is that Possible?

April 14, 2011 I find interesting details from time to time about Oracle Database. I am in the process of organizing material for a presentation, and I encountered something interesting when testing Oracle Database 11.2.0.2 with patch 3 applied. I traced the execution of an application and then processed the 660MB trace file using a trace file [...]

SATA cables alone cost as much as ¥6,000 per machine

In the build from “Lots of SSDs, linked with InfiniBand. We built it.”, each assembled PC held six SSDs and two HDDs, so each machine needed eight SATA cables. You can buy a SATA cable in Akihabara for ¥180, but that is not something you can use for business.

Here is a transcription of the material I received from ValueWave Co., Ltd.:
Foaming the polyethylene into a sponge-like state lowers its dielectric constant, which reduces capacitance and attenuation. This supports higher-capacity digital signaling by raising the speed on the signal lines, and it also reduces the speed differences between signals, suppressing skew in signal synchronization.

Bathtub curve (a curve showing how the error rate changes)
The steeper the drop in the curve, the fewer errors occur and the more stable the performance.

Eye diagram (differential-signal waveforms overlaid continuously)
It is called an eye diagram because the opening in the waveform trace looks like an eye.
The wider the eye opening, the greater the margin against noise and jitter, and the better the receive characteristics.
Also, the more cleanly the waveforms overlap, the more stable the signal that can be transmitted.
Compared with previous products, the “SATA6G series” has a wider eye opening and more cleanly overlapping waveforms, so it can be said to be a higher-quality, more stable product.

And when you build a 16-node cluster storage system, this is how many you need.

¥100,000 worth of SATA cables