A while ago I went on a crusade within my organization to review and clean up our init.ora files. Many of them had been around since versions 7.3 and 8.1 of Oracle and were simply added to over time. I still like the text-based init.ora files that I can check into source code control and liberally comment. I’m liking the fact that you can comment on parameters in spfiles too — they even have the comment fields displayable in DB Console and Grid Control.
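For example (just a sketch; the parameter, value and comment text below are placeholders, not a recommendation), you can attach the comment right in the ALTER SYSTEM call:

ALTER SYSTEM SET db_file_multiblock_read_count = 16
  COMMENT = 'Lowered from default; full scans were swamping the I/O subsystem'
  SCOPE = SPFILE;

The comment is stored alongside the parameter and can be seen later in V$SPPARAMETER (the UPDATE_COMMENT column).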
I’m constantly amazed at the places I go where I still see the following text in their init.ora files:
# Use the following table to approximate the SGA size needed for the
# three scenarios provided in this file:
#
#                    -------Installation/Database Size-------
#                      SMALL           MEDIUM          LARGE
#  Block        2K     4500K           6800K           17000K
#  Size         4K     5500K           8800K           21000K
I’m guessing the init.ora file isn’t being reviewed at those places.
Anyway, I started doing this when I realized that many of the default values for particular parameters were higher or better than the ones we had “set”, and that we didn’t have any documented reasons for setting them. I ended up with two goals: stop setting any parameter where the default was as good or better, and add a comment explaining why we set each one that remained.
The end result was a lot more clarity around our settings and why we needed them. We were also able to build what is basically one init.ora template for ALL our databases, since we made such heavy use of defaults.
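If you want to do a similar review, a quick starting point (a sketch; adjust to taste) is to list what your instance has explicitly set rather than defaulted:

-- Parameters explicitly set at startup or via ALTER SYSTEM
SELECT name, value, ismodified
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;

-- If you use an spfile: what it actually contains, including any comments
SELECT name, value, update_comment
FROM   v$spparameter
WHERE  isspecified = 'TRUE'
ORDER  BY name;

Anything in that list that you can’t explain is a candidate for removal or, at minimum, a comment.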
I’d like to direct your attention to Chen Shapira’s latest blog entry, in which she talks about Oracle Streams. Having been a replication aficionado for years, I’ve always been interested in Streams, but slightly awed by its complexity and flexibility. I’m looking forward to the follow-up entries, as I’ve recently begun working with Streams myself; perhaps we can all add to the collective knowledge about it. I can say this: you’ll be learning a lot about things you may not have played with before, including Advanced Queuing (especially propagation), LogMiner, and (coolest of all, in my opinion) network-mode Data Pump (in 10g and up). Just try to stay focused on what you’re trying to do, and break Streams down into Capture processing, Propagation processing, and Apply processing. Even though it covers the older Advanced Replication, you may also want to read my old paper.
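To make the Capture / Propagation / Apply split a little more concrete, here is a rough sketch of the kind of DBMS_STREAMS_ADM calls involved in replicating a single table. The names (STRMADMIN, STREAMS_QUEUE, SCOTT.EMP, the TARGETDB database link) are placeholders, and a real setup also needs supplemental logging, instantiation SCNs, privileges and so on:

BEGIN
  -- Capture side: create the staging queue and the capture rules for one table
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');

  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'scott.emp',
    streams_type => 'capture',
    streams_name => 'capture_emp',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);

  -- Propagation: push captured changes to the queue on the target database
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'scott.emp',
    streams_name           => 'prop_emp',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@targetdb',
    include_dml            => TRUE,
    include_ddl            => FALSE);
END;
/

On the target database, a matching ADD_TABLE_RULES call with streams_type => 'apply' against the destination queue sets up the apply side.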
After my last post about the SQL*Net message to client wait event, I got a follow-up question about the difference between the SQL*Net message to client and SQL*Net more data to client wait events. I’ll post the answer here:
The first session data unit (SDU) bufferful of return data is written to the TCP socket buffer under the SQL*Net message to client wait event. If Oracle needs to return more result data for a call than fits into the first SDU buffer, then the further writes for that call are done under the SQL*Net more data to client event.
So, whether and how much of the SQL*Net more data to client vs. SQL*Net message to client waits you see depends on two things:
1. How much return data the call has to send back to the client.
2. The SDU size in use for the connection.
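If you want to see how the two events stack up for a given session, a quick look at V$SESSION_EVENT (a sketch; substitute the SID you care about) is enough:

SELECT event, total_waits, time_waited_micro
FROM   v$session_event
WHERE  sid = &sid
AND    event IN ('SQL*Net message to client',
                 'SQL*Net more data to client')
ORDER  BY event;

A call that returns more than one SDU’s worth of data will show a mix of both events; a larger SDU setting in the connect descriptor shifts the balance from “more data” toward “message”.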
In a recent Oracle Forums thread, the question came up of how to use the SQL*Net message to client wait event for measuring network latency between the server and the client. The answer is that you can’t use it for network latency measurements at all, due to how the TCP stack works and how Oracle uses it.
I’ll paste my answer here too, for people who don’t follow Oracle Forums:
As I wrote in that reply, “SQL*Net message to client” does NOT measure network latency! The wait covers only the write of the return data into the local TCP socket buffer, which normally completes as soon as the data is buffered, regardless of how far away the client actually is.
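One quick way to convince yourself of this (a sketch; run it on any system with remote clients) is to look at the system-wide average for the event, which typically sits in the microsecond range even when the clients are many milliseconds away:

SELECT event, total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0)) AS avg_wait_us
FROM   v$system_event
WHERE  event = 'SQL*Net message to client';

If what you actually want is a feel for the network round-trip time to the database, measure it from the client side with something like tnsping or plain ping rather than trying to read it out of this wait event.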