A lot is happening here at Oracle Open World, more than my brain can process right now: Exalogic, ZFS storage, Unbreakable Enterprise Linux, Fusion Apps (finally), Oracle VM for Solaris, etc… Some of these topics aren’t that important for me and/or my customers right now, simply because they are out of reach; Exalogic, for example. I read that a full box setup will go for just over 1,000,000 US$, so that is a setup I won’t be managing for a while. On the other hand, from an Oracle perspective this is the logical next step to take: it provides a solid, complete, boxed solution from the apps layer down to the hardware layer for Oracle’s most demanding customers regarding performance, best of breed and availability.
There is very good technology in there, which I hope will be opened up to us “general” database consumers in the near future. One of those Exadata features I would really like to get my hands on is the in-memory index functionality, not least because it fits perfectly with the way I handle XML data, that is, “content” based. Another feature would be, for instance, the storage cell optimizations. Anyway, until now, when trying to enable them, the database only comes back with an “Exadata Only” warning, so I will have to wait a little (I hope).
So I am currently following Wim Coekaerts‘ OEL / Unbreakable Enterprise Linux kernel session called “Oracle’s Linux Roadmap”, not least because I am interested in the statement of direction about ZFS, OEL and/or Oracle Linux. Oracle Linux will be a fork, as far as I have read, or rather an option to choose while installing Oracle Linux: from Oracle Linux 5.5 onwards you can choose between a strict Red Hat compatible kernel install or an Oracle-optimized Unbreakable Enterprise Kernel install. You can, although I haven’t tried it yet, also update an existing system to the Oracle-optimized kernel afterwards.
So what is this Oracle Unbreakable Linux kernel part all about…?
Until now you could download the freely available binaries or source code of Oracle Enterprise Linux, and applications would run unchanged if they were compatible with and supported on Red Hat. No code change was needed and Oracle would fully support it. Until now, as mentioned during the presentation, not a single bug has been reported of Oracle Enterprise Linux not being compatible with Red Hat. Oracle Enterprise Linux is used by an enormous number of big customers, is among other things implemented in the Exadata setups, and has cheaper support offerings than, for example, Red Hat support. So far so good; nothing new.
But as was mentioned, the limits of being strictly Red Hat compatible had some major disadvantages for Oracle, like getting bug fixes from a new Red Hat distribution to work properly in an Oracle software environment. Red Hat does not validate Oracle software, plus Red Hat adopts community efforts very slowly. These were some of the reasons why Oracle has now created the Unbreakable Enterprise Kernel. Be aware that this is not the old thing called the Unbreakable Linux program, but actually a new Linux kernel.
So this new Oracle kernel is fast, modern, reliable and optimized for Oracle software. It is, among others, used in the Exadata and Exalogic machines to deliver extreme performance. From now on it also allows Oracle to innovate without sacrificing compatibility. In short, regarding the software: the distribution now includes both the Unbreakable Enterprise Kernel and the existing strict Red Hat compatible kernel. At boot time you can choose either the strict Red Hat kernel or the Oracle-optimized Unbreakable Enterprise Kernel.
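Since both kernels live side by side, the quickest sanity check is to look at the running kernel release. A small sketch, assuming the Red Hat compatible kernel identifies itself as a 2.6.18 build and the UEK as a 2.6.32 build (these version strings are my assumption, based on what was current at the time):

```shell
# Classify a kernel release string: on OL 5.5 the Red Hat compatible kernel
# was a 2.6.18-* build, the new Unbreakable Enterprise Kernel a 2.6.32-* build.
classify_kernel() {
  case "$1" in
    2.6.32-*) echo "uek"     ;;  # Unbreakable Enterprise Kernel
    2.6.18-*) echo "rhck"    ;;  # Red Hat compatible kernel
    *)        echo "unknown" ;;
  esac
}

# Report which kernel this box actually booted.
echo "Booted kernel: $(uname -r) ($(classify_kernel "$(uname -r)"))"
```

Handy right after a reboot, to confirm the boot-time choice really took effect.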
As Oracle states: “Oracle now recommends only the Unbreakable Enterprise Linux Kernel for all Oracle software on Linux”. You could wonder what this means regarding certified solutions. My guess would be that Oracle will push the Oracle Unbreakable Enterprise Kernel (OUEK?) but will certify on both kernels, not least because a lot of Oracle’s On Demand services are still based on Red Hat environments.
The UEK has already shown huge improvements compared to Red Hat, for example a 400% gain on 8 KB flash cache reads (IOPS)… Oracle now also has the ability to support bigger servers: up to 4096 CPUs and 2 TB of memory, up to 4 PB clustered volumes with OCFS2, and advanced NUMA support. CPUs can now stay in a low power state when the system is idle; this power management also supports ACPI 4.0 and fine-grained CPU and memory resource control.
All these additions will flow back into the community, which, of course, is what Linux is all about. Some other things Oracle pushed back into the mainline kernel source are better data integrity (stopping corrupted data in memory from actually being written, by detecting in-flight memory corruption) and, on the hardware fault management level, error detection and logging before errors affect the OS or applications, plus, for example, automatic isolation of defective CPUs and memory, which avoids crashes and improves application uptime. Diagnostic tooling has been made less resource intensive, in the hope that people won’t switch it off, so that when a performance or corruption issue happens there will be trace info. During the presentation the “latencytop” diagnostic tool was mentioned as an example.
The Oracle Unbreakable Enterprise Kernel can be downloaded either via public-yum.oracle.com, from OEL 5.5 onwards, or via the ULN network. Follow the instructions to download via the public yum server and/or look up the instructions in public-yum-el5.repo and apply it to your current 5.5 system; a different way to achieve the same is to run “up2date -u oracle-linux” via ULN if you have already bought/installed/registered with Oracle Unbreakable Linux support.
Wim described it in more detail on his blog:
(1) On ULN we created a new channel “Oracle Linux Latest (ol5_x86_64_latest)” which you can subscribe to. It’s really very simple. Take a standard RHEL5.5 or OEL5.5 installation, register with ULN with the O(E)L5.5 channel and add the Oracle Linux 5 latest on top as well. Then just run up2date -u oracle-linux
the oracle-linux rpm will pull in all the required packages
(2) On our public yum repo we updated the repo file public-yum-el5.repo . So just configure yum with the above repo file and enable the Oracle Linux channel. [ol5_u5_base] enabled=1 and run yum install oracle-linux
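The easy-to-miss part of option (2) is that the channel in the repo file ships disabled. A small sketch of that step, run here against a local copy of the stanza (the baseurl is illustrative, not verified; on a real system the file lives in /etc/yum.repos.d/ and the final yum command runs as root):

```shell
# Sketch: enable the [ol5_u5_base] channel in a local copy of public-yum-el5.repo.
# On a real OL 5.5 box the file would be downloaded from public-yum.oracle.com
# into /etc/yum.repos.d/; the baseurl below is illustrative, not verified.
cat > public-yum-el5.repo <<'EOF'
[ol5_u5_base]
name=Oracle Linux 5 Update 5 base
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/5/base/$basearch/
gpgcheck=1
enabled=0
EOF

# The channel ships with enabled=0; flip the flag to enable it.
sed -i 's/^enabled=0/enabled=1/' public-yum-el5.repo

grep '^enabled=' public-yum-el5.repo   # prints: enabled=1
# Next step, as root on the real file: yum install oracle-linux
```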
The UEK also brings some miscellaneous improvements, like NFS IPv6 support and RAID 5 to RAID 6 migration support. There will be a new volume-manager-like GUI that supports LUN creation, deletion, expansion and snapshots of NAS storage. All the work done is submitted to the mainline kernel, and enhancements will trickle down and be tested in the Enterprise Linux distribution (be aware: Oracle Enterprise Linux is not the same as the Oracle Unbreakable Enterprise Kernel).
Also see Wim’s post of today on blogs.oracle.com/wim for more info. I guess the same goes for Sergio Leunissen’s blog; he is currently hammering his laptop behind me in the audience while I am writing this. Keep watching those blogs for more info.
By the way, while speaking with Sergio about the naming of this software, I learned it was apparently driven by the need to have a clear distinction between “Oracle Solaris” and the more general hardware based “Oracle Linux” architectures.
Currently sitting in on the Oracle Open World 2010 presentation of Sam Idicula, Consulting Member of Technical Staff, and Mark Drake, Sr. Product Manager for Oracle XML DB. Before getting into the more in-depth topics, Sam explained XML Schema usage for validation, via XML Schema validators like, for example, XML Spy or JDeveloper. This is really needed nowadays, because the commonly used XML Schemas, like the really big ones out there such as HL7, have grown so big that a good XML Schema validator is a must. An XML Schema for binary XML is stored in a post-parse binary format. This has the advantage that Oracle knows about the format when storing the XML document. Extra information can be shared with the database by registering, in the database, the XML Schema that validates the binary XML content.
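Registering such a schema is done with the standard DBMS_XMLSCHEMA package. A minimal, hypothetical sketch (the URL, the XSD_DIR directory object and the file name are made-up examples, not from the presentation; REGISTER_BINARYXML is the 11g option for binary XML registration):

```sql
-- Hypothetical sketch: register an XML Schema for binary XML storage.
-- The URL, directory object and file name are illustrative only.
BEGIN
  DBMS_XMLSCHEMA.registerSchema(
    SCHEMAURL => 'http://www.example.com/po.xsd',
    SCHEMADOC => XMLType(BFILENAME('XSD_DIR', 'po.xsd'),
                         NLS_CHARSET_ID('AL32UTF8')),
    LOCAL     => TRUE,
    GENTYPES  => FALSE,  -- no object types needed for binary XML
    GENTABLES => FALSE,  -- create the default table yourself if wanted
    OPTIONS   => DBMS_XMLSCHEMA.REGISTER_BINARYXML);
END;
/
```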
There can be a lot of recursive dependencies, via the import or include references in an XML Schema, which makes it even more difficult to make optimal use of this information. For example, the HL7 (Health Level 7) setup includes over 100 included XML Schemas. Oracle 11gR2 has greatly improved the performance and handling of the huge amount of metadata stored in such XML Schemas. Streaming schema validation, and adding hints via xdb: annotations, provide the database with even more information on how to optimally handle these structures, so performance can be improved even more. Some of those hints can be used to avoid the creation of objects, in this case while using XMLType Object Relational storage, via, for instance, xdb:defaultTable="" (providing an empty string), or to store parts of the XML document information out of line. By the way, for this last example you should not use JDeveloper, because it will annotate the XML Schema incorrectly (I reported the bug). Cycle detection has also been improved a lot in the latest release, so recursive schemas are handled even better there.
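To give an idea of what such hints look like: a made-up schema fragment using the xdb:defaultTable="" annotation mentioned above, plus xdb:SQLInline="false" for out-of-line storage (element names and types are illustrative only):

```xml
<!-- Made-up fragment showing two xdb annotations:
     xdb:defaultTable="" suppresses default table creation for an element,
     xdb:SQLInline="false" stores that part of the document out of line. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb">
  <xs:element name="LineItem" type="xs:string" xdb:defaultTable=""/>
  <xs:element name="Attachment" type="xs:string" xdb:SQLInline="false"/>
</xs:schema>
```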
On the XML DB home page on the Oracle OTN website a package of tools is provided (“Oracle XML DB Ease of Use Tools for Structured Storage“) which can make your life easier regarding those xdb: annotations, especially for those enormously big XML Schemas. This tool set enables you to automate a lot of the annotation hints you would like to make in an XML Schema. Via XQuery or other XML DB update statements you are also able to override the database-generated naming or storage options. This can be done very easily via some simple anonymous PL/SQL blocks calling, for example, the DBMS_ANNOTATE-x packages contained in this tool set, which, as said, is freely available on the Oracle XML DB OTN page.
The tool set also comes with a white paper that demonstrates some of the best XML handling ideas and the experience the Oracle XML DB development team gathered through years of handling customer use cases. For example, if you know it is not applicable to your XML documents, you are able to switch off or alter DOM fidelity handling while storing or handling your XML documents in the database. You can also override ordering checks, if that is applicable for your XML Schema; this avoids Oracle checking them, which improves handling. But be very aware: this can be dangerous if ordering was put in by the person who created the XML Schema without caring about the real-life implementation and/or whether it is an actual mandatory requirement in practice.
I have experienced multiple times that, even with official XML Schemas, the restrictions didn’t match real-life use. So although automation really helps you to manage your registered XML Schemas more easily, you must be aware of those exceptions. XML Schemas can be modeled very loosely on real-life implementations, which can get you into a lot of problems after a storage model based on such an XML Schema is used in your database design: those rules will be enforced via the XML Schema in the database.
As always, proper design with future needs in mind takes time. This is also the case when creating a good XML Schema.
In Oracle 11g you now have the possibility, via this tool set, to use DBMS_XML_MANAGE (for XMLType Object Relational storage) to rewrite the table-to-column mapping, which makes it easier to identify and create supporting indexes on ComplexTypes. This has the advantage that you can create indexes with more meaningful names like, for example, “line_items_uniq_idx_01”, or whatever the naming convention within your company might be.
In the latest XDB tool set there is now also an XDB_ANALYZE_XMLSCHEMA package which sorts out all of the scripting and possible options when you feed it the actual XML Schemas. As was demonstrated by Mark Drake, all the FpML schemas, which have a lot of dependencies on each other, were analyzed, annotated and registered, and this created over 100 tables and more than 2500 objects in minutes. Try doing that by hand…
This package will also sort out the XML Schema dependencies and the correct order in which all those XML Schemas have to be registered (based on the includes, imports and refs used by Simple- and ComplexTypes). Sometimes you have to break up CREATE TABLE statements, because the maximum number of columns Oracle allows in one single table is “only” 1000. This package will help you figure out how many of those Object Relational storage items will have to be moved out of line, and/or where to break up the hierarchy of the XML tree, not only to avoid this 1000-column limitation but also to provide the design info needed to get maximum performance.
This tool set for XMLType Object Relational storage is only useful if your XML design is highly relational. If not, your XMLType storage model should be based on binary XML. The advantage of XMLType Object Relational storage is that you make full use of Oracle’s relational technology and optimizations, which have been available for a very long time, so, for example, the Cost Based Optimizer will kick in with full effect. On the other hand, be aware that if your XML design is really relational, maybe you should have created it by relational means; there should be a proper use case to work with the XML format in the first place. My adage always is: if it is not XML, don’t use Oracle XML DB. If it is, go for Oracle XML DB, not least because it is a no-cost option within your Oracle database and it has been designed, ever since its introduction, to handle XML in your database optimally.
For further information about choosing the proper XMLType storage model and how to optimally query these structures, have a look at:
Chips. I had an accidental delete of this mini blog post while clicking and editing it via the Android WordPress app on my phone, so after searching some blog aggregators I have rebuilt it here. That leaves me to say that the game was won, if I am correctly informed, 42 to 37 by Michigan… I really, REALLY liked it!
Got all those tweets from the OOW ACED people in San Francisco, while we, the OakTable MOTS speakers here in Michigan, got up at 6 AM for a small breakfast and were out the door at 8 AM to join the 120,000 Michigan football fans in “the Big House”.
So stop whining about “early”; I am sitting here in the rain, early, but with great people and warm coffee… Let the game begin…
Some attempts at “Live broadcasting” the event, via my mobile, here b.t.w.: http://bambuser.com/channel/MGralike
A long presentation title: “Learning from failures, design errors, problematic recoveries and downtime. Experience from CERN”, by Eric Grancher, about, indeed, lessons learned while trying to handle the 19 petabytes of data produced at the CERN institute (?) in Switzerland. As Carol Dacko said just before I entered this session, this conference has so many high-level, great-quality presentations that it is just a joy to be here and learn from those people, even though I had to skip more than half of them while preparing my own presentations.
It’s a shame that this one is already the last one this week…
OK, the next one that looked interesting, and apparently I was not the only one, was Kyle Hailey’s “Visual SQL Tuning”. It’s funny to hear him tell how important visual presentation is for quickly seeing correlations; it reminds me of a discussion I had with a smart colleague who wanted to go for more numbers/text-driven monthly reporting instead of using OEM’s graphical customer overview statistics on the progression of architecture behavior… Failure and success values are important to correctly interpret, or to see even more quickly, what the problem is when searching for answers. Another great session. Thank god I don’t have a presentation to do anymore…
Currently “following” (yeah, I know Alex, I am also writing this mini post) Alex Gorbachev’s “Oracle ASM 11g – The Evolution” presentation, which gives an overview of ASM over the years, the methods involved, and the ins and outs of what you should or shouldn’t use Oracle ASM 11g for. Not sure if he will also do this presentation during Oracle Open World, but if you need more info about it, or are not yet (really?) sure if or why you should implement it, this is one to go to (as said, if Alex is presenting it during OOW as well). Currently watching very nice slides about ASM striping and rebalancing of data.
I am probably forgetting people, but Gwen Shapira isn’t in this picture, nor is Graham Wood, hence my mention of a “partial” lineup. As you can see, although there is still half a day to go, this was an impressive gathering of experienced speakers… If you weren’t there: yep, you missed out.
So now I am actually enjoying some presentations myself. Riyaj Shamsudeen did well with his “Advanced RAC Troubleshooting” presentation, and now I am enjoying Jeremy Wilton’s “Database Deathmatch: Oracle vs SQL Server”.
A presentation about breaking things, mainly, as Jeremy explained, because he was asked more than once to please go back to presentations that demo stuff (and/or break it). Jeremy (biased towards Oracle) wanted to do some more SQL Server stuff, just so he would know more about the SQL Server database, and because he hates “not knowing” things. So this presentation was born. He was demoing stuff (Oracle and SQL Server) straight from the Amazon cloud, trying to put both environments under the same amount of strain (or at least hoping to, to make an honest comparison).
It was indeed a funny (and insightful) presentation, where Jeremy explained the internal workings of SQL Server and Oracle regarding transaction logging: explicitly corrupting blocks/logs (for fun, of course), switching those corrupted logs/blocks from Oracle to SQL Server and vice versa to see what happens, showing how those databases cope with such corrupted blocks, and demonstrating both databases under strain. I really liked it. It’s a fun way to discover things and learn about them. But then again, it’s always fun to just break things, right?
The “disadvantage” of presenting is that you can’t relax until your last presentation is done. As someone once said to me, you will have to do your best, because the people attending have paid top dollar to be there, where maybe they should have been working on customer issues, getting things along. I try to keep doing my best in this respect, so (Doug!) no party for me last night, because my last presentation (that is, today) was already at 09:00 AM, so…
I picked up my much needed coffee infusion at 08:00 AM, and to my surprise I was the first one in the hallway in front of the conference rooms, where a nice table was also put up with drinks, coffee and breakfast.
Anyway, I got three enthusiastic, brave guys attending my presentation, even though it was so early. Chapeau (“hats off”/“I salute you”)!
I ran overtime (again, damn me); as said before, nowadays I have more to tell than fits in a 1-hour presentation. I really have to work on this. I think, and hope, I did my best and those (early up) people enjoyed it. Now I can relax and enjoy some presentations until next Wednesday, when I have my 3rd presentation within a week, at Oracle Open World, San Francisco. If you read this, are interested in seeing some demos and are attending OOW, then maybe you want to attend my presentation. There is still some room.
It’s time to listen in on Riyaj Shamsudeen‘s “Advanced RAC Troubleshooting” presentation.
So after this fun “Moans The Magnificent” special guest appearance, it was straight off to my hotel room to prepare my demo session about interfacing via XML DB functionality. I wasn’t happy with those “old” demos, so I started to fiddle around, dropped half of them and created two more complete ones, which made a bit more sense as a coherent use case instead of the old “just a SQL statement” demos. Maybe not the smartest thing to do only 2 hours before my session. I got really a bit stressed when I pressed “shutdown” instead of “hibernate” while finishing up, 30 minutes before my presentation started. Of course, at that moment those brilliant Windows wizards kicked in and showed a desktop with the remark “Installing 1 of 7 updates”. It felt like those updates took ages… With only 10 minutes to go I rushed down, but luckily for me Joze Senegacnik was busy with a great presentation and hadn’t finished in time, plus there were a lot of questions from his audience.
Despite being up against Riyaj Shamsudeen and Cary Millsap, with his great presentation “Thinking Clearly About Performance” (part 1 of 2), I had, about 5 minutes into my presentation slot, some 10 to 15 people in the audience, which, honestly, I wouldn’t have expected, at least not in those numbers. I had already settled in my mind for an “explain me your problem and I will demonstrate” kind of alternative (also cool) presentation/demo. Having around 15 people in the room, I instead followed the lines I had set out for this presentation. I will give this presentation during OOW on Wednesday as well, and although I had cut the demos in half, I again ran out of time. I guess I have too much to tell nowadays, plus I got some good questions from the audience, on which I elaborated with some extra info.