I recently worked on a PeopleTools 8.50 system in production for the first time and was able to make use of the new Oracle-specific instrumentation in PeopleTools.
PeopleTools now uses the DBMS_APPLICATION_INFO package to set module and action session attributes. This data is then copied into the Active Session History (ASH).
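As a sketch of what that instrumentation looks like at the database level (the component and page names below are illustrative, not taken from a real trace), PeopleTools effectively issues calls like this, and the attributes then show up in V$SESSION and in ASH:

-- What PeopleTools effectively does under the covers
-- (module/action values here are hypothetical):
begin
  dbms_application_info.set_module(
    module_name => 'PSAPPSRV.EX_EXPENSES',   -- hypothetical component
    action_name => 'EX_SHEET_ENTRY');        -- hypothetical page
end;
/

-- The attributes can then be read back from the session and from ASH:
select module, action
from   v$session
where  sid = sys_context('userenv', 'sid');

select sample_time, session_id, module, action, sql_id
from   v$active_session_history
where  module like 'PSAPPSRV%';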
If a table column in a row is updated with the same value it already holds, does the Oracle RDBMS engine modify the data? Or does the engine have an optimization that skips the update, since the value of that column is not changing? This was the essence of a question asked on the Oracle-l list, and I think it is a good topic for further discussion. Jared Still came up with a fine method for understanding this issue by measuring redo/undo size. We will explore the same questions with the redo log dump method in this blog entry.
The following few lines show a test case creating a table and an index, and then populating the table with a single row.
create table updtest (v1 varchar2(30));
create index updtest_i1 on updtest(v1);
insert into updtest values ('Riyaj');
commit;
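As a sketch of the redo log dump method itself: perform the same-value update, then dump the redo log that holds the change. (The file name below is a placeholder; use whatever member V$LOGFILE reports on your own system.)

-- Update the column to the value it already holds:
update updtest set v1 = 'Riyaj';
commit;

-- Identify the current redo log members:
select l.status, f.member
from   v$log l, v$logfile f
where  l.group# = f.group#;

-- Replace the file name with a member reported above:
alter system dump logfile '/u01/oradata/ORCL/redo01.log';

-- The dump is written to a trace file in the diagnostic trace
-- directory; search it for the change vectors against the
-- UPDTEST table's data object id.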
REDO records and change vectors
I had a few questions from readers about whether there was going to be a Kindle version of Pro Oracle Database 11g RAC on Linux.
The good news for those waiting is: yes! But it might take a couple of weeks for it to be released.
I checked with Jonathan Gennick, who expertly oversaw the whole project, and he confirmed that Amazon have been contacted to provide a Kindle version.
As soon as I hear more, I’ll post it here.
Fedora 14 is here and so are the obligatory articles:
My attitude to Fedora and Ubuntu has changed today, with most of that shift due to VirtualBox.
Before I switched to VirtualBox I was always reliant on my OS being able to run VMware Server. Over the years I had repeatedly encountered problems running VMware Server on Ubuntu and Fedora. Not all of them were show-stoppers, but enough to put me off using them as my main desktop OS. Why did I stick with VMware Server? Simply because it supported shared virtual disks, which allowed me to easily create virtual RAC installations. Version 3.2.8 of VirtualBox included support for shared disks for the first time, so I ditched VMware Server and launched full scale into using VirtualBox.
While I was playing around with Fedora 14 I was thinking how cool it would be to have a newer OS on my desktop that could run Google Chrome, then it dawned on me that now I can. I’ve been free of VMware Server for a while now and I hadn’t realized the knock-on effect of that.
My years of using RHEL mean I feel a little more comfortable with Fedora than Ubuntu, but to be honest all I do on a desktop is fire up VirtualBox, use a browser (preferably Chrome) and use a terminal for SSH. Virtually everything else is done in VMs.
Now, do I waste a few days assessing the various options for my desktop, or do I just stick with CentOS and deal with the fact I can’t use Chrome on it?
November 4, 2010 Almost two months ago I was interviewed for an article that was to appear in an upcoming NoCOUG Journal newsletter. It was a great opportunity to participate in the interview, and Chen (Gwen) Shapira provided me with a great set of thought-provoking questions. The interview went a bit long, so some of [...]
When I was a Windows user, one tool I felt I couldn’t live without was UltraEdit. It’s awesome.
A few months ago I checked the UltraEdit website and saw a Linux version of the editor was available. Unfortunately, it only had a subset of the functionality found in the Windows version. I checked again yesterday, and the Linux version is still lagging behind, but it’s a bit better than it was. I wrote to the company (IDM Computer Solutions) to ask when, or if, some of the functionality I require would be coming, and it looks like the next release (start of next year) will include everything I need for my day-to-day use. What’s more, towards the end of this year there should be a Mac version available. Joy!
There are of course alternatives out there, but I really like UltraEdit and I’m happy to pay for a lifetime-updates license on each platform (I already have a Windows one) if I have to. I’m keeping my fingers crossed for a nice Christmas present from IDM Computer Solutions.
The intent of this blog piece is just to bring together the whole discussion of block dumps and how we can use block dumps to demonstrate Oracle behaviour. First, let’s start with a fresh little demo, creating an index on a NAME column with 500 entries (note this specific demo uses an 11g database running [...]
Being busy at work is both a blessing and a curse to blogging activity. On the one hand, the more that's going on, the more technical issues there are likely to be to blog about. On the other, when the pace is frantic and problems are coming thick and fast, taking good notes isn't a priority (although it's probably even more important). So this blog post has been stuck in the same place for weeks because, when the problem occurred, I didn't take enough notes and I have struggled to recreate it in my own environment. It was sucking up so much time that I almost decided not to post it, but I'll post what few details I remember in case it helps some desperate Google-scavenger one day.
We were migrating one of our main databases to 11g and restructuring a large partitioned table at the same time. However, when we started trying to gather optimiser statistics on the partitions of the table, performance was atrocious. Our approach was to gather the statistics at the partition level, one partition at a time, so that when all partitions had stats they would be aggregated up to the table level as approximated global stats. At this point you might wonder why we didn't just use the 11g new features and gather stats at the default granularity and sample size. We did try that several times, but gave up after several hours of running. The intention of the project was simply to port the existing system to 11g with the minimum number of changes and look at exploiting new features later. As usual, that was largely driven by our deadlines.
Although I don't have the exact timings to hand, I recall it was taking about 3 minutes to gather stats on each partition, where we'd normally expect it to take a few seconds. We spent days trying to work out what was going on, and the two main symptoms were:
Many executions of the following statement.
SELECT /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
       sample_size, minimum, maximum, distcnt, lowval, hival, density,
       col#, spare1, spare2, avgcln
FROM   hist_head$
WHERE  obj# = :1 AND intcol# = :2
and DML row locks on WRI$_OPTSTAT_HISTHEAD_HISTORY.
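If you were chasing this kind of contention yourself, one way to confirm what sessions are waiting on (a sketch, and one that assumes you're licensed for the Diagnostics Pack) would be to query ASH for the lock waits, something like:

-- Which sessions were stuck, what were they waiting on, and which
-- SQL were they running? (Diagnostics Pack licence required.)
select sample_time, session_id, blocking_session, event, sql_id
from   v$active_session_history
where  event like 'enq: TX%'
order  by sample_time;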
As we became more desperate to solve the problem and the deadline approached, I fell back on an old, basic but often successful technique: comparing what was different on the new database to the old one. I noticed that the new database was missing Global Stats on the table. That made sense, though, because our intention was to use Aggregated Global Stats and, as we hadn't gathered stats on all of the partitions, Global Stats shouldn't exist at that stage. But it still bugged me because that was the only difference I could see (apart from one database being on 10.2.0.3 and the other on 11g, of course).
Rooting around Metalink, I also came upon bug number 282598.1, which wasn't directly applicable because it was for a completely different version, had been fixed, and the problem description was different; but it looked like it was in the same area of functionality, so it was perhaps another little pointer to what might be going on.
To cut a long story short (one that included a brief email exchange with Greg Rahn), eventually I suggested we gather Global Stats just to reassure myself that wasn't the problem. As soon as we did, the partition stats gathering performance went back to what we'd expect. Obviously we needed to regather Global Stats once all the Partition stats were in place and then we'd have to implement something to keep the Global Stats up to date.
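For anyone in a similar position, a minimal sketch of the two-step approach we ended up with might look like this (the table and partition names are hypothetical):

-- Gather global (table-level) stats first:
begin
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'BIG_PART_TAB',       -- hypothetical table name
    granularity => 'GLOBAL');
end;
/

-- Then gather each partition individually:
begin
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'BIG_PART_TAB',
    partname    => 'P_2010_11',          -- hypothetical partition
    granularity => 'PARTITION');
end;
/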
As I suggested at the start, that's not as much detail as I'd expect to include normally, but hopefully it's another example of why you avoid gathering Global Stats, and stray from default Oracle functionality, at your peril.
Despite the title and content, this article is more about thinking and testing than it is about list partitions as such. It’s just based on a little thought experiment with list partitioning.
I have a list-partitioned table, with no indexes, defined as follows:
create table area_sales (
    area     varchar2(10) not null,
    dated    date         not null,
    quantity number(6),
    value    number(8,2),
    padding  char(100),
    constraint as_area_ck check (area in ('England','Ireland','Scotland','Wales'))
)
partition by list (area) (
    partition england  values ('England'),
    partition ireland  values ('Ireland'),
    partition scotland values ('Scotland'),
    partition wales    values ('Wales')
)
;
You’ll notice that there is no default partition, the partitioning column is declared as not null and is limited by a check constraint to a very specific set of values, and every partition is defined to hold exactly one of the legal partition key values. Given those restrictions, you might like to think about which of the following queries will be able to do perfect partition elimination:
select count(*) from area_sales where area = 'England';

select count(*) from area_sales where area in ('England', 'Ireland', 'Wales');

select count(*) from area_sales where area != 'England';

select count(*) from area_sales where area not in ('England', 'Ireland', 'Wales');

select count(*) from area_sales where area between 'England' and 'Ireland';
Once you’ve thought about what’s likely to happen, you might want to create and populate the table with a few thousand rows and run some tests to see if your thoughts were correct. Here’s a simple statement to spread some data evenly through the table:
insert into area_sales
select
    decode(mod(rownum,4),
        0, 'England',
        1, 'Ireland',
        2, 'Scotland',
        3, 'Wales'
    ),
    sysdate + 0.01 * rownum,
    rownum,
    rownum,
    rownum
from all_objects
where rownum <= 30000
;
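If you want to check your predictions rather than just eyeball timings, a simple sketch (standard dbms_xplan usage, nothing specific to this example) is to look at the Pstart/Pstop columns of the execution plan:

-- Perfect elimination shows a single matching partition number in
-- the Pstart/Pstop columns, rather than the full range 1 - 4:
explain plan for
select count(*) from area_sales where area = 'England';

select * from table(dbms_xplan.display);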
Once you’ve done the tests, and satisfied yourself that you understand what’s going on, you might start thinking about a few variations that might make the behaviour change, such as:
I’m not going to tell you the answers to these questions. The point I want to make is that this is the type of model, and these are the types of question you should ask yourself when you start to investigate a feature to see whether it will behave as well as you hope, and what special benefits you might get from it.
In fact, these are the questions I’m asking myself right now as I look at a system which is using list-partitioned tables to hold (lots of) sales data. I think it might be a good idea to restructure the table to disallow nulls, include a check constraint, and hold just one key value per partition – and then modify some of the code to get rid of “not in” and “!=” predicates.
The trouble is it’s a massive table with lots of indexes, the number of partitions would jump from eight to 40, and the number of times that rows would migrate due to partition key updates would increase; so the cost of restructuring would be large and there would be some fairly undesirable impact on DML that would offset the benefits of more efficient queries.
November 2, 2010 (Updated November 3, 2010) An interesting ODBC problem was brought to my attention yesterday regarding the Microsoft Access 2010 program and the Oracle Database ODBC client. What is the problem? The person reported that they were trying to update a table in a 10.2.0.4 Oracle Database using values from a database [...]