No tech or management this week – this Friday Philosophy is about something in my home life.
This week we are in Singapore, our first ever visit. The main reason that we have come here is to look at some pictures painted by Sue’s Uncle Stan, known as the Changi Murals. Stanley Warren painted these murals while he was gravely ill in Changi during World War 2. He was a POW, captured when Singapore fell to the Japanese. Conditions were extremely poor in the POW camps, and across Singapore as a whole. During the occupation thousands died from disease and malnutrition.
Stanley had been a graphic artist before the war, and whilst he was in the camp he did some paintings of what he saw. He was a deeply religious man, and once his fellow POWs knew he could draw, they asked him to paint murals on the walls of a chapel they’d built at Bukit Batok. Not long after, he was so ill with amoebic dysentery that he was moved to the Roberts Barracks hospital in Changi, block 151. I don’t think he was expected to live. Whilst he was there, he heard a choir singing in the hospital’s chapel, and his talking to the padre afterwards led to a request for him to paint some murals on the walls there.
Stanley had to paint the first mural bit by bit, as he was too unwell to work for more than a few minutes at a time at the start. They also had to use materials stolen or scrounged however they could. In the first mural there are some areas of blue – that came from a few cubes of billiard cue chalk. He had so little that it ran out after the second mural. The first mural was completed just in time for Christmas, and he was carried back up to the wards and could only hear the service from there, no one knowing if the latest bout of dysentery would kill him or not. But it didn’t. Over the next few months Stanley painted four more murals as his health waxed and waned. The amazing thing is that, despite the condition he was in, under a brutal regime with very little hope of survival, his message was all about reconciliation. The figures in the murals are from all races, and the message of reconciliation is constant throughout.
You can read more about Stanley and the murals at the Wikipedia link at the top of this blog, at the RAF Changi association page here, or in an excellent book about them by Peter W Stubbs, ISBN 981-3065-84-2.
Stanley survived his time as a POW in Singapore and came home at the end of the war. Stanley is actually Sue’s great uncle – his older sister was Sue’s paternal grandmother. After the war he became an art teacher and had a family. He also worked in the same school as Sue’s father, so she saw a lot of him and knew “Uncle Stan” very well. And, of course, she knew all about the murals.
The story of the murals does not stop with the war. During its later stages, when block 151 stopped being a chapel, the murals were painted over with distemper; after the war they were re-discovered. They became quite well known and there was a search for the original artist. When Stanley was found, he was asked to go back and restore them. He was not keen! He’d spent years trying to forget what he had endured as a POW. But eventually he was persuaded and, over 20 or so years, made three trips back to restore them. He still did not talk about the war much, but the murals are part of the family history. Stanley died in 1992, having lived a pretty long and happy life given where he was during the 1940s.
Sue has long wanted to see the Changi Murals and, with the loss of her mother two years back, this desire to link back to another part of the family has grown stronger. So we organised this trip out to Asia with the key part being to visit Singapore and the Changi Murals.
There is an excellent museum, the Changi Museum, about the history of Singapore during WWII, especially the area of Changi and the locations used to hold POWs and enemy civilians. It includes the murals. Only, it does not. This is a new museum, built a few years back, and it has a reproduction of the original block 151 chapel, with all the murals. The reproductions are, we are told, very accurate, and there is a lot of information in the museum – but they are not the originals as painted by Uncle Stan.
We only really realised this a couple of weeks before we were heading out to Thailand (our first stop), but we felt it was not a problem, as almost every web site that mentioned the murals said you could organise to see the originals. Only, you can’t really. Someone at some point said you could, and maybe then it was easier, but none of the current articles tell you how to request to see the originals. They don’t even give a clue who to ask. They just repeat this urban myth that you can organise to see them. The only exception is the Changi Museum web site, which lists an email address to send a request to – but the email address is no longer valid! (prb@starnet…).
We managed to contact the museum and Dr Francis Li tried to help us, but he could not find out the proper route to make the request at first and then hit the problem we later hit – not much response.
After hours and hours on the net failing to find out who to ask, I contacted a couple of people who had something to do with the murals. One of them was Peter Stubbs, who wrote the book on the Changi Murals that I mentioned earlier. Peter was wonderful: he got in touch with people he knew, they looked into it, and after a couple of days he had found the correct group to approach – MINDEF_Feedback_Unit@defence.gov.sg. You email them and get an automated response saying they will answer your question in 3 days. Or 7–14 days. It’s the latter. We waited the 3 days (if you have dealt with government bureaucracy you will know you can’t sidestep it unless you know HOW to sidestep it), but time was now running out, so I sent follow-up emails to MINDEF and Mr Li.
MINDEF did not respond. But Mr Li did – to let us know he had also had no response from MINDEF and had gone as far as to ring up – and no one seemed to know how one arranges to see the original murals.
So we were not going to get to see the originals, which was a real shame, but on our first full day we did go up to the Changi Museum. It was a very good little museum, and entry is free. We took the audio tours, which cost a few dollars, but to be honest all the information is also on the displays. There was a lot of information about the invasion by Japan and what happened, and the reproductions of the murals were impressive. They also had duplicates of some of the press stories about the murals, from local papers as well as UK ones. There are a lot more press stories than the museum shows – we know this as there is a collection of them somewhere in Sue’s Mum’s stuff that we have not found yet.
It was quite emotional for Sue of course, and something well worth us doing. It gave us an inkling of what he and the other POWs had gone through, and yet Stanley made these murals of reconciliation and belief. Of course we don’t really know what it was like – nothing like that has happened to either of us – we just got a peep into that horror.
The rules of the museum said “No photographs” – but we ignored this. These murals were the work of Sue’s Uncle Stan! (We noticed several other visitors were also ignoring the rule anyway.) Most of our pictures are poor, nowhere near as good as others you can find on the net (most taken of the originals), but they are important to us. I only include a couple in this blog.
If you wonder what the small picture of a man in a hat is, below the mural, it is one of only two paintings we have by Uncle Stan. He painted it on a school holiday in Spain with Sue’s dad. We have no idea who the picture is of!
It is a great shame we did not get to see the original murals in the room where her great uncle Stanley Warren painted them, as part of the chapel that was so important to people in such awful circumstances. After we got back from the museum we finally received a response from MINDEF. It was a simple refusal to consider granting us permission to see the murals, as they only allow it for surviving Singapore POWs (there will be very few of them now) and direct family (whatever that limit is). I can’t help but feel that was a little inflexible of them, even a little heartless – applying a blind rule without consideration of the specifics of the situation.
When Sue next goes to Singapore, with me or not, I’ll see if I can get them to relent and grant her access to the originals.
Irrespective, we got to see something of Uncle Stan’s murals, and that was worth all the effort.
Brian Hitchcock’s secret is simple. For twelve years, he has been reading books on Oracle Database. And taking extensive notes. And publishing them in the NoCOUG Journal. My secret is simple. For twelve years, I have been reading the extensive book notes that Brian has been publishing in the NoCOUG Journal. I’m the editor of the NoCOUG Journal but that’s not the point. It’s that simple.
Straight after the Oracle University Expert Summit in Berlin – which was a big success, by the way – the circus moved on to another amazing place: Istanbul!
The Turkish Oracle User Group (TROUG) held its annual conference in the rooms of the Istanbul Technical University, with local and international speakers and quite an attractive agenda.
Do you recognize anyone here?
I delivered my presentation “Best of RMAN” again, as I did at the DOAG annual conference last year:
Many thanks to the organizers for making this event possible and for inviting us speakers to dinner.
The conference was well received and, in my view, it should be possible to attract even more attendees in the coming years by continuing to invite high-profile international speakers.
My special thanks to Joze, Yves and Osama for giving me your good company during the conference – even if that company was sometimes very tight during the car rides.
I’m going to be at the OUG Scotland conference on 22nd June, and one of my sessions is a panel session on Optimisation where I’ll be joined by Joze Senegacnik and Carl Dudley.
The panel is NOT restricted to questions about how the cost based optimizer works (or not), we’re prepared to tackle any questions about making Oracle work faster (or more efficiently – which is not always the same thing). This might be configuration, indexing, other infrastructure etc.; and if we haven’t got a clue we can always ask the audience.
To set the ball rolling on the day it would be nice to have a few questions in advance, preferably from the audience but any real-world problems will be welcome and (probably) relevant to the audience. If you have a question that you think suitable please email it to me or add it as a comment below. Ideally a question will be fairly short and be relevant to many people; if you have to spend a long time setting the scene and supplying lots of specific detail then it’s probably a question that an audience (and the panel) would not be able to follow closely enough to give relevant help.
I’ve already had a couple of questions in the comments and a couple by email – but keep them coming.
This is a follow on from my server problems post from yesterday…
Regarding the general issue, misiaq came up with a great suggestion, which was to use watchdog. It’s not going to “fix” anything, but if I get a reboot when the general issue happens, that would be much better than having the server sit idle for 5 hours until I wake up.
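For anyone who hasn’t used it, a minimal /etc/watchdog.conf along these lines is the sort of thing involved – the device path and thresholds here are illustrative assumptions, not my actual settings, so check your own hardware and the watchdog.conf man page:

```
# /etc/watchdog.conf – minimal sketch; values are example assumptions
watchdog-device = /dev/watchdog   # kernel watchdog device (softdog module or hardware)
interval        = 10              # seconds between keep-alive writes to the device
max-load-1      = 24              # reboot if the 1-minute load average exceeds this
```

If the watchdog daemon stops petting the device (because the box has hung, or a health test fails), the kernel or hardware timer forces a reboot – which is exactly the “don’t sit idle for 5 hours” behaviour wanted here.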
Someone pinged me earlier today and said, “Do I even really need to know about logs in Enterprise Manager? I mean, it’s a GUI, (graphical user interface) so the logs should be unnecessary to the administrator.”
My response: you just explained why we receive so many emails from database experts stuck on issues with EM, thinking it’s “just a GUI”.
Yes, there are a lot of logs involved with Enterprise Manager. With the introduction of the agent back in EM10g there were more, and with EM11g’s WebLogic tier we added more still. EM12c added functionality never dreamed of before and, with it, MORE logs. But don’t despair, because we’ve also tried to streamline those logs and, where we weren’t able to streamline, we at least came up with a directory path naming convention that saves you from having to search so hard for information.
The most important EM logs live in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/log directory.
Now, in many threads on Oracle Support and in blogs you’ll hear about the emctl.log, but today I’m going to spend some time on the emoms properties, trace and log files. The EMOMS naming convention is just what you would think: the Enterprise Manager Oracle Management Service, aka EMOMS.
After all that talk about logs, we’re going to jump into the configuration files first. The emoms.properties and emomslogging.properties files live over in the $OMS_HOME/gc_inst/em/EMGC_OMS1/sysman/config directory.
In EM12c, the emoms.properties file, along with the emomslogging.properties file, was very important to the configuration of the OMS and its logging – without it we wouldn’t have any trace or log files, or at least the OMS wouldn’t know what to do with the output data it collected! If you look in the emoms.properties/emomslogging.properties files for EM13c, you’ll see the following header:
Yes, the file is simply a placeholder, and you now use EMCTL commands to configure the OMS and logging properties.
There are, actually, very helpful commands listed in the property file telling you HOW to update your EM OMS properties! If you can’t remember an emctl property command, this is a good place to look for the command and its usage.
Trace files will be recognized by any DBA: they trace a process, and the emoms*.trc files are the trace files for EM OMS processes, including the one for the Oracle Management Service itself. Know that a “warning” isn’t always something to be concerned about – sometimes it’s just letting you know what’s going on in the system (yeah, I know – shouldn’t they just classify that as INFO then?):
2016-04-09 01:00:07,523 [RJob Step 62480] WARN jobCommand.JvmdHealthReportJob logp.251 - JVMD Health report job has started
These files do contain more information than the standard log file, but it may be more than what a standard EM administrator is going to search through. They’re most helpful when working with MOS and I recommend uploading the corresponding trace files if there is a log that support has narrowed in on.
Most of the time you’re going to be in this directory looking at the emctl.log, but remember that the emoms.log is there for research as well. If you perform any task that involves the OMS and an error occurs, it should be written to the emoms.log, so looking at this log can provide insight into the issue you’re investigating.
The format of the logs is important to understand, and I know I’ve blogged about this in the past, but we’ll do a quick, high-level review. Taking the following entry:
2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo logp.251 - Failed to get all oms info
We can see that the log entry starts with the timestamp, then the module (thread), the status (ERROR, WARN, INFO), the originating class and location, and finally the message. This simplifies reading these logs, and shows how one would parse them into a log analysis program.
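As a rough illustration (my own sketch, not an official EM tool), a few lines of Python can split an entry like the ones above into its fields, using the field order shown in the samples:

```python
import re

# One regex per line: timestamp, [thread], LEVEL, source class, location, message.
# Field order is taken from the sample emoms entries above; adjust if your logs differ.
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<thread>.*)\] "               # greedy: thread names may contain brackets
    r"(?P<level>ERROR|WARN|INFO)\s+"
    r"(?P<source>\S+)\s+(?P<location>\S+) - "
    r"(?P<message>.*)$"
)

def parse_emoms_line(line):
    """Return a dict of fields for a well-formed emoms log line, else None."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

entry = parse_emoms_line(
    "2016-01-12 14:54:56,702 [[STANDBY] ExecuteThread: '1' for queue: "
    "'weblogic.kernel.Default (self-tuning)'] ERROR deploymentservice.OMSInfo "
    "logp.251 - Failed to get all oms info"
)
print(entry["level"], "-", entry["message"])   # → ERROR - Failed to get all oms info
```

Filtering a whole file down to ERROR entries is then just a loop over the lines, keeping the dicts whose level field matches.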
There are other emoms log files, specializing in loader processing and startup. Each of these commonly contains more detailed information about the data it’s in charge of tracing.
If you want to learn more, I’d recommend reading up on EM logging from Oracle.
From time to time we see a complaint on OTN about the stats history tables being the largest objects in the SYSAUX tablespace and growing very quickly, with requests about how to work around the (perceived) threat. The quick answer: if you need to save space, stop holding on to the history for so long, then clean up the mess left by the history you have already captured. On top of that, you could stop gathering so many histograms – you probably don’t need them, they often introduce instability to your execution plans, and they are often the largest single component of the history (unless you are using incremental stats on partitioned objects***).
For many databases it’s the histogram history – using the default Oracle automatic stats collection job – that takes the most space. Here’s a sample query that the sys user can run to get some idea of how significant this history can be:
SQL> select table_name, blocks from user_tables
     where table_name like 'WRI$_OPTSTAT%HISTORY' order by blocks;

TABLE_NAME                           BLOCKS
-------------------------------- ----------
WRI$_OPTSTAT_AUX_HISTORY                 80
WRI$_OPTSTAT_TAB_HISTORY                244
WRI$_OPTSTAT_IND_HISTORY                622
WRI$_OPTSTAT_HISTHEAD_HISTORY          1378
WRI$_OPTSTAT_HISTGRM_HISTORY           2764

5 rows selected.
As you can see the “histhead” and “histgrm” tables (histogram header and histogram detail) are the largest stats history tables in this (admittedly very small) database.
Oracle gives us a couple of calls in the dbms_stats package to check and change the history setting, demonstrated as follows:
SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

1 row selected.

SQL> execute dbms_stats.alter_stats_history_retention(7)

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                          7

1 row selected.
Changing the retention period doesn’t reclaim any space, of course – it simply tells Oracle how much of the existing history to eliminate in the next “clean-up” cycle. This clean-up is controlled by a “savtime” column in each table:
SQL> select table_name from user_tab_columns
     where column_name = 'SAVTIME'
     and   table_name like 'WRI$_OPTSTAT%HISTORY';

TABLE_NAME
--------------------------------
WRI$_OPTSTAT_AUX_HISTORY
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_OPTSTAT_HISTHEAD_HISTORY
WRI$_OPTSTAT_IND_HISTORY
WRI$_OPTSTAT_TAB_HISTORY

5 rows selected.
If all you wanted to do was stop the tables from growing further you’ve probably done all you need to do. From this point onwards the automatic Oracle job will start deleting the oldest saved stats and re-using space in the existing table. But you may want to be a little more aggressive about tidying things up, and Oracle gives you a procedure to do this – and it might be sensible to use this procedure anyway at a time of your own choosing:
SQL> execute dbms_stats.purge_stats(sysdate - 7);
Basically this issues a series of delete statements (including a delete on the “stats operation log (wri$_optstat_opr)” table that I haven’t previously mentioned) – here’s an extract from an 11g trace file of a call to this procedure (output from a simple grep command):
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_tab_history      where savtime    < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_ind_history h    where savtime    < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_aux_history      where savtime    < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_opr              where start_time < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history where savtime    < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histgrm_history  where savtime    < :1 and rownum <= NVL(:2, rownum)
Two points to consider here: although the appearance of the rownum clause suggests that there’s a damage limitation strategy built into the code I only saw one commit after the entire delete cycle, and I never saw a limiting bind value being supplied. If you’ve got a large database with very large history tables you might want to delete one day (or even just a few hours) at a time. The potential for a very long, slow, delete is also why you might want to do a manual purge at a time of your choosing rather than letting Oracle do the whole thing on auto-pilot during some overnight operation.
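One low-tech way to do that day-at-a-time purge is to script the calls, working from the oldest day towards your retention limit so each delete cycle (and its undo) stays small. Here’s a little sketch that just generates the SQL*Plus commands – the day figures are examples, not recommendations, and you’d spool this into a script to run:

```python
# Generate one dbms_stats.purge_stats call per day, oldest first, so each
# delete cycle stays small. oldest_days/retention_days are example values.
def purge_script(oldest_days: int, retention_days: int) -> list:
    return [
        "execute dbms_stats.purge_stats(sysdate - {})".format(n)
        for n in range(oldest_days, retention_days - 1, -1)
    ]

for stmt in purge_script(oldest_days=31, retention_days=7):
    print(stmt)
```

The first command purges everything older than 31 days, the next everything older than 30, and so on down to the 7-day retention – 25 small deletes instead of one enormous one.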
Secondly, even though you may have deleted a lot of data from these tables, you still haven’t reclaimed the space – so if you’re trying to find space in the SYSAUX tablespace you’re going to have to rebuild the tables and their indexes. Unfortunately a quick check of v$sysaux_occupants tells us that there is no official “move” procedure:
SQL> execute print_table('select occupant_desc, move_procedure, move_procedure_desc from v$sysaux_occupants where occupant_name = ''SM/OPTSTAT''')

OCCUPANT_DESC       : Server Manageability - Optimizer Statistics History
MOVE_PROCEDURE      :
MOVE_PROCEDURE_DESC : *** MOVE PROCEDURE NOT APPLICABLE ***
So we have to run a series of explicit calls to alter table move and alter index rebuild. (Preferably not when anyone is trying to gather stats on an object). Coding that up is left as an exercise to the reader, but it may be best to move the tables in the order of smallest table first, rebuilding indexes as you go.
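As a starting point for that exercise, something along these lines could emit the statements, smallest table first (the ordering comes from the blocks query shown earlier; the index names vary by version, so the sketch deliberately leaves them as a lookup rather than hard-coding names that might be wrong):

```python
# Emit "alter table ... move" for each stats history table, smallest first
# (smallest-first order taken from the earlier user_tables blocks query).
TABLES = [
    "WRI$_OPTSTAT_AUX_HISTORY",
    "WRI$_OPTSTAT_TAB_HISTORY",
    "WRI$_OPTSTAT_IND_HISTORY",
    "WRI$_OPTSTAT_HISTHEAD_HISTORY",
    "WRI$_OPTSTAT_HISTGRM_HISTORY",
]

def move_script(tables):
    stmts = []
    for t in tables:
        stmts.append("alter table sys.{} move tablespace SYSAUX;".format(t))
        # A move leaves the table's indexes UNUSABLE; their names differ by
        # version, so query dba_indexes and rebuild each one before moving on.
        stmts.append(
            "-- rebuild every index listed in dba_indexes for table {}".format(t)
        )
    return stmts

print("\n".join(move_script(TABLES)))
```

Rebuilding each table’s indexes immediately after its move (rather than all at the end) keeps the window in which the automatic stats job could hit an unusable index as short as possible.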
*** Incremental stats on partitioned objects: I tend to assume that sites which use partitioning are creating very large databases and have probably paid a lot more attention to the details of how to use statistics effectively and successfully; that’s why this note is aimed at sites which don’t use partitioning and therefore think that the space taken up by the stats history is significant.
I’m pretty sure last night’s problem was caused by a disk failure in the RAID array. The system is working now, but it might go down sometime today to get the disk replaced. Hopefully they won’t do what they did last time and wipe the bloody lot!
It’s extremely nice to have a big audience, and it’s very flattering that people care enough about what I say to be bothered to read it. The problem with having a large audience is that people can get very demanding at times.