I was presenting at the UKOUG event in Manchester on Thursday last week (21st April 2016), and one of the sessions I attended was Carl Dudley’s presentation of some New Features in 12c. The one that caught my eye in particular was “DDL Logging” because it’s a feature that has come up fairly frequently in the past on OTN and other Oracle forums.
So today I decided to write a brief note about DDL Logging, and did a quick search of my blog to see if I had mentioned it before. I found this note that I wrote in January last year but never got around to publishing: DDL Logging is convenient, but doesn’t do the one thing that I really want it to do.
One of the little new features that should be most welcome in 12c is the ability to capture all DDL executed against the database. All it takes is a simple command (if you haven’t set the relevant parameter in the parameter file):
alter system set enable_ddl_logging = true;
All subsequent DDL will be logged in two different places (in two formats).
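For reference, both destinations sit under the ADR home – a plain-text DDL log and an XML version. A sketch, assuming a default ADR layout and an instance called orcl (your paths may vary with your ADR configuration):

```sql
-- Enable DDL logging (requires the ALTER SYSTEM privilege)
ALTER SYSTEM SET enable_ddl_logging = TRUE;

-- Subsequent DDL then appears in two files under the ADR home, e.g.:
--   plain text : $ORACLE_BASE/diag/rdbms/orcl/orcl/log/ddl_orcl.log
--   XML        : $ORACLE_BASE/diag/rdbms/orcl/orcl/log/ddl/log.xml
```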
Unfortunately the one thing I really wanted to see doesn’t appear – probably because it doesn’t really count as DDL – it’s the implicit DDL due to inserting into not-yet-existing partitions of an interval partitioned table.
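A minimal sketch of the case I mean (the table and data are made up for illustration): inserting a row that falls outside the existing partitions makes Oracle create a new partition on the fly, and that implicit “DDL” does not show up in the log.

```sql
-- Interval partitioned table: one monthly partition is defined up front,
-- the rest are created automatically as data arrives.
CREATE TABLE sales (
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    PARTITION p_start VALUES LESS THAN (DATE '2015-01-01')
);

-- This insert silently creates a brand-new partition for June 2015,
-- but no corresponding statement appears in the DDL log.
INSERT INTO sales VALUES (DATE '2015-06-15', 100);
```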
Note: If you’re using a container database with pluggable databases then the DDL for all the pluggable databases goes into the same log file.
The following text in the Oracle 12c Database Licensing document has just been brought to my attention:
The init.ora parameter ENABLE_DDL_LOGGING is licensed as part of the Database Lifecycle Management Pack when set to TRUE. When set to TRUE, the database reports schema changes in real time into the database alert log under the message group schema_ddl. The default setting is FALSE.
The licensing document is also linked to from the 12c online html page for the parameter.
The 11g parameter definition makes no mention of licensing, and the 11g “New Features” manual doesn’t mention the feature at all, but the parameter does get a special mention in the 11g licensing document, where it is described as being part of the Change Management Pack.
The use of the following init.ora parameter is licensed under Oracle Change Management Pack:
■ ENABLE_DDL_LOGGING: when set to TRUE (default: FALSE)
Today’s video gives a quick run through of flashback version query.
If you prefer to read articles, rather than watch videos, you might be interested in these.
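If it helps, a basic flashback version query looks something like this (the table name, column and predicate are made up for illustration):

```sql
-- Show every committed version of a row over the available undo window,
-- using the pseudocolumns that flashback version query exposes.
SELECT versions_startscn,
       versions_endscn,
       versions_operation,   -- I(nsert), U(pdate) or D(elete)
       t.*
FROM   my_table VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE t
WHERE  t.id = 42
ORDER  BY versions_startscn;
```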
The star of today’s video is Tanel Poder. I was filming some other people; he saw something was going on, came across and struck a pose. I figured he knew what I was doing, but it’s pretty obvious from the outtake at the end of the video that he was blissfully unaware, yet wanted in on the action whatever it was! A true star!
I’d never tried Delphix replication, so I sat down to give it a go and was amazed at how easy it was.
One of the coolest things about Delphix replication is that it makes it super easy to migrate to the cloud, and also to fall back in-house if need be. For cloud migration, I just set up one Delphix engine in-house and one in a cloud, for example Amazon EC2. Then I give the in-house engine the credentials to replicate to the engine in the cloud. The replication can be compressed and encrypted, and it is active/active, so I can use either or both engines. (Stay tuned for a Delphix Express .ami file that we plan to release. Currently Delphix Enterprise is supplied as an AMI for AWS/EC2 but Delphix Express is not, though you could use the .ova to set up Delphix Express in AWS/EC2.)
I created two Delphix Express installations.
On one engine, the source engine (http://172.16.103.16/), I linked to an Oracle database on Solaris SPARC called “yesky”.
On that same engine I went to the “System” menu and chose “Replication”. That brought me to the configuration page, where I filled out the replication details. Then I clicked “Create Profile” in the bottom right.
And within a few minutes the replicated version was available on my replication target engine (172.16.100.92). On the target I chose “DelphixExpress” from the Databases pulldown menu, and there was my “yesky” source, replicated from my source Delphix Express engine.
Now I have two Delphix engines, where engine 1 is replicating to engine 2. Both engines are active/active, so I can use the second engine for other work and/or actually clone the Oracle data source replicated from engine 1 (“yesky”).
Try it out yourself with our free version of Delphix called Delphix Express.
My friends from childhood will know my dad. He was likely their high school principal (he was mine too) in a very small town (of about 2500 people on a good day). Those who knew our school may have seen the inside of his office; some were there because they stopped in for a nice visit, others were directed there by upset teachers. In either case, seeing the wall in his office was somewhat overwhelming. At peak, he had 70+ 8×10 photos framed and hanging on his wall. The pictures were of various sports teams and graduating classes from his tenure as principal.
I found those pictures in some old boxes recently. Almost 100% of them were taken by one of our high school math teachers, Jim Mikeworth, who was also a local photographer. Mr. Mike said he was fine with me posting the pictures, so I scanned all of them in and posted them online. If you have a Facebook account, you may have already seen them, but if not, they are still accessible without a Facebook account. You can find the pictures at https://www.facebook.com/franknorriswall. I hope you enjoy them!
My dad died almost 20 years ago and arguably was one of the most loved men in the history of Villa Grove. He would love for everyone to enjoy this shrine to his office wall of pictures–he was very proud of all the kids that passed through VGHS during his time there (1978-1993, I think).
I don’t like the ‘C’ word, it’s offensive to some people and gets used way too much. I mean “cloud” of course. Across all of I.T. it’s the current big trend that every PR department seems to feel the need to trumpet, and it’s what all the Marketing people are trying to sell us. I’m not just talking about Oracle here either; read any computing, technical or scientific magazine and there are the usual ads by big I.T. companies like IBM, and they are all pushing clouds (and the best way to push a cloud is with hot air). And we’ve been here before so many times. It’s not so much the current technical trend that is the problem, it is the obsession with the one architecture as the solution to fit all requirements that is damaging.
When a company tries to insist that X is the answer to all technical and business issues and promotes it almost to the exclusion of anything else, it leads to a lot of customers being disappointed when it turns out that the new golden bullet is no such thing for their needs. Especially when the promotion of the solution translates to a huge push in sales of it, irrespective of fit. Technicians get a load of grief from the angry clients and have to work very hard to make the poor solution actually do what is needed or quietly change the solution for one that is suitable. The sales people are long gone of course, with their bonuses in the bank.
But often the customer confidence in the provider of the solution is also long gone.
Probably all of us technicians have seen it, some of us time after time, and a few of us rant about it (occasionally quite a lot). But I must be missing something, as how can an organisation like Oracle or IBM not realise they are damaging their reputation? They do it in a cyclical pattern every few years, so whatever they gain by mis-selling these solutions is somehow worth the abuse of the customer – as that is what it is. I suppose the answer could be that all large tech companies are so guilty of this that customers end up feeling it’s a choice between a list of equally dodgy second-hand car salesmen.
Looking at the Oracle sphere, when Exadata came along it was touted by Oracle Sales and PR as the best solution – for almost everything. Wrongly. Utterly and stupidly wrongly. Those of us who got involved in Exadata with the early versions, especially I think V2 and V3, saw it being implemented for OLTP-type systems where it was a very, very expensive way of buying a small amount of SSD. The great shame was that the technical solution of Exadata was fantastic for a sub-set of technical issues. All the clever stuff in the storage cell software and maximizing hardware usage for a small number of queries (small sometimes being as small as 1) was fantastic for some DW work with huge full-segment-scan queries – and of no use at all for the small, single-account-type queries that OLTP systems run. But Oracle just pushed and pushed and pushed Exadata. Sales staff got huge bonuses for selling them and the marketing teams seemed incapable of referring to the core RDBMS without at least a few mentions of Exadata.
Like many Oracle performance types, I ran into this mess a couple of times. I remember one client in particular who had been told Exadata V2 would fix all their problems. I suspect based solely on the fact it was going to be a multi-TB data store. But they had an OLTP workload on the data set and any volume of work was slaying the hardware. At one point I suggested that moving a portion of the workload onto a dirt cheap server with a lot of spindles (where we could turn off archive redo – it was a somewhat unusual system) would sort them out. But my telling them a hardware solution 1/20th the cost would fix things was politically unacceptable.
Another example of the wonder solution is Agile. Agile is fantastic: rapid, focused development that gets a solution to a constrained user requirement in timescales that can be months, weeks, even days. It is also one of the most abused terms in I.T. Implementing Agile is hard work: you need excellent designers, programmers who can adapt rapidly, and a lot, and I mean a LOT, of control of the development and testing flow. It is also a methodology that blows up very quickly when you try to include fix-on-fail or production support workloads. It also goes horribly wrong when you have poor management, which makes the irony that it is often implemented when management is already failing even more tragic. I’ve seen 5 Agile disasters for each success, and on every project there are the shiny-eyed Agile zealots who seem to think just implementing the methodology, no matter what the aims of the project or the culture they are in, is guaranteed success. It is not. For many IT departments, Agile is a bad idea. For some it is the best answer.
Coming back to “cloud”, I think I have something of a reputation for not liking it – which is not a true representation of my thoughts on it, but is partly my fault as I quickly tired of the over-sell and hype. I think some aspects of cloud solutions are great. The idea that service providers can use virtualisation and container technology to spin up a virtual server, a database, an application, an application sitting in a database on a server, all in an automated manner in minutes, is great. The fact that the service provider can do this using a restricted number of parts that they have tested integrate well means they have a far more limited support matrix and thus better reliability. With the Oracle cloud, they are using their engineered systems (which is just a fancy term really for a set of servers, switches, network & storage configured in a specific way, with their software configured in a standard manner) so they can test thoroughly and not have the confusion of an unusual type of network switch being used, or a flavor of Linux that is not very common. I think these two items are what really make cloud systems interesting – fast, automated provisioning and a small support matrix. Being available over the internet is not such a great benefit in my book, as that introduces reasons why it is not necessarily a great solution.
But right now Oracle (amongst others) is insisting that cloud is the golden solution to everything. If you want to talk at Oracle Open World 2016, I strongly suspect that not including the magic word in the title will seriously reduce your chances. I’ve got some friends who are now so sick of the term that they will deride cloud, just because it is cloud. I’ve done it myself. It’s a potentially great solution for some systems, i.e. running a known application that is not performance critical and is accessed in a web-type manner already. It is probably not a good solution for systems that are resource heavy, have regulations on where the data is stored (some clinical and financial data cannot go outside the source country no matter what), alter rapidly or are business critical.
I hope that everyone who uses cloud also insists that the recovery of their system from backups is proven beyond doubt on a regular basis. Your system is running on someone else’s hardware, probably managed by staff you have no say over and quite possibly with no actual visibility of what the DR is. No amount of promises or automated mails saying backups occurred is a guarantee of recovery reliability. I’m willing to bet that within the next 12 months there is going to be some huge fiasco where a cloud services company loses data or system access in a way that seriously compromises a “top 500” company. After all, how often are we told by companies that security is their top priority? About as often as they mess it up and try to embark on a face-saving PR exercise. So that would be a couple a month.
I just wish Tech companies would learn to be a little less single solution focused. In my book, it makes them look like a bunch of excitable children. Give a child a hammer and everything needs a pounding.
I was surprised on April 20th when I awoke to find a 1.3G OS update on my Samsung Galaxy 6 Edge+. I’d never experienced any issues with an update before, so I quickly connected my phone to the WiFi and let it download and upgrade, eager to see what new Android features awaited me.
I proceeded through my day, but was concerned: battery usage was higher than usual, I suffered email failures from Gmail, and a few tweets didn’t go through. I consider myself quite familiar with mobile phone troubleshooting and promptly performed the standard steps to address the issues, but the next morning I was faced with the same problems.
I researched and found that I wasn’t the only one, as numerous Note, Galaxy and even new Galaxy 7 users were reporting similar issues with texts, emails and network connectivity.
I happened to be running work errands and stopped at my neighborhood T-Mobile store to see if they’d heard anything. The tech was surprised by everything I’d already tried, and even more impressed with my phone – I have my Samsung set up at the most optimal settings.
I was having an issue clearing the cache partition on the phone and was looking into how it should be done with the 6.01 release. There had been a change in the button combination (the volume down, home button and power button combo now brings up a menu instead of clearing the partition) and he was able to help me out with this, clearing the partition.
There was a second discovery that came with clearing the partition:
The actual 6.01 upgrade system update HADN’T finished! Upon clearing the cache partition, the update completed and many of the issues I was experiencing stopped.
Then the second part of the problem showed itself. To conserve battery, on many Samsung 6 and 6 Edge devices, it was recommended to run in “Power Saving Mode“. In 6.01, there is a change to the features provided as part of this mode.
It now LIMITS the amount of data allowed to be SENT or RECEIVED.
The mystery of tweets with pictures and emails with attachments failing: SOLVED. Take the phone out of “Power Saving Mode” and the emails and tweets stuck in “limbo” should immediately be sent!
So, to summarize – if you are having issues with emails, network connectivity and social media after the upgrade, do two things: clear the cache partition so the update can fully complete, and take the phone out of “Power Saving Mode”.
I’m happy I figured this out, as the Samsung Galaxy Edge Plus has been my favorite phone ever, so having it at top functionality after the upgrade was important! I hope these tips to fix issues after the upgrade to 6.01 help you, too!
No, you didn’t get a review of my Samsung Tab S2… I didn’t have it long enough to review it! Actually, I had it just over six months, and yes, it was beautiful, and yes, I loved it, and no, it didn’t survive a fall onto tile at Denver International Airport, even in its protective cover.
Needless to say, I love my tablets and I’ve had a number of them – everything from Samsung’s first 10.1 to the iPad Air. My daily work machine is a Microsoft Surface Pro 4, a beautiful, lovely piece of hardware that I use for work, but I just want something light to do social media, read articles, books, magazines and such on. I did note that my Samsung Tab S2 was a little heavy to be held with one hand, but I loved the flawless screen and the impressive performance from the octa-core processor.
After last week’s “incident”, I was left with the question, “Do I replace it with another Samsung, (I’ve never broken anything other than keyboards in the past, so this was a new issue for me…) or is it time to reconsider the purpose of my having a tablet?”
At one time, I considered if it was possible to replace my work machine, outside of performing demos or housing virtuals on a tablet. At this point, I’ve answered that with my Surface. I still need a reading device that I can use and yes, going to admit this, when I’m lounging in the tub, on the couch or on a plane. I want it to be light, but did I still need a large screen?
I tested this out on the way back from Las Vegas. Tim loves his Kindle, so I picked up the standard 7 inch Kindle Fire at the Best Buy vending machine. I was impressed with the simplicity of the setup, quickly installed the Play Store (the Kindle Fire runs Fire OS, Amazon’s version of Android) and soon had everything I’d had on my Samsung tablet.
After the hour of setup and the 1 1/2 hour flight home, I had my answer to the screen size question. My eyes were thoroughly strained and I decided that I would need to upgrade. I boxed up the unit and went to my neighborhood Best Buy. I stand out a bit in a crowd, as one of the Geek Squad guys asked me how my Surface Pro 4 was handling my VMs and was glad to exchange the Kindle for me.
I checked out all the tablets, but I couldn’t justify the cost of another high-end Samsung or an iPad for what I was planning to use the tablet for. I checked out the Acer, Asus and other Android tablets, but was really unimpressed with the screen quality. If you’re staring at a screen for a long time, the resolution quality can be a real deal-killer.
I also found that size was a consideration, and I wanted a screen of at least 9 inches. I did like a lot about the Kindle, so I came to the 10 inch Kindle Fire HD. The screen resolution stuck out first as being much higher. I also found the price point (less than half what I’d paid for my Samsung) to be more in line with what I would consider this time around.
If you are accustomed to Android devices and go the Kindle route, there are a few things I’d recommend.
1. Install the Play Store.
As this is a Kindle OS install, it will come with Amazon’s Appstore. I’m not real thrilled with the quality of the software offered there, and I found a lot of bogus apps on the site; the Play Store is just more reliable. To install it, simply install something like Texture or another app that requires the Play Store, then choose to install that. It will then ask you which device you wish to install it to. You will see the “official” name of your Kindle Fire (it matches what is set in the device settings) and can choose to install to it.
2. Install the GSam Battery Monitor
The battery monitor is pretty much non-existent in the settings for the Kindle, and you’ll be surprised WHAT is using battery on the Kindle vs. other tablets or phones. With this handy little app I was able to tune the screen brightness (not an issue in newer phones these days…) and address apps I’d otherwise have known nothing about (a darn iTracing app was eating up battery like crazy!).
3. Install the Cisco Mobile VPN Client
Yes, I can get on the work VPN without issue and was able to verify a browser issue for an OTN question for EM on an Android tablet from my Kindle. Pretty slick.
This is an actual picture of my EM13c running from my Silk Browser on my Kindle Fire. It runs just fine on it and it responds well!
It’s not important that you know the answer. It’s important you know how to get the answer!
I’m pretty sure I’ve written this before, but I am constantly surprised by some of the questions that come my way. Not surprised that people don’t know the answer, but surprised they don’t know how to get the answer. The vast majority of the time someone asks me a question that I can’t answer off the top of my head, this is what I do in this order.
Most of a PeopleSoft application is itself stored in the database in PeopleTools tables. Therefore there is a lot of information in the database about the configuration and operation of a PeopleSoft system. There are also performance metrics, particularly about batch processes.
PS360 is a new tool on which I am working. It just uses SQL scripts to extract that data to HTML files, and packages them up in a zip file so that they can be sent for further analysis. The style and method are closely modelled on Enkitec’s EDB360 by Carlos Sierra, another free tool used for health checks and performance analysis of any Oracle database system. PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360. It also runs in Oracle’s SQL*Plus tool, and so is only available for use with an Oracle database.
Every section of PS360 is just the output of a SQL query; sometimes pre-processing is done in an anonymous PL/SQL block. It does not install anything into the database, and does not update any table (other than the PLAN_TABLE, which is used for temporary working storage). Each report is in tabular and/or graphical format. All the charts are produced with the Google Charts API.
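For example, a section on batch performance might be built from a query against the process scheduler request table, along these lines (a simplified sketch for illustration, not taken from PS360 itself):

```sql
-- Process scheduler requests from the last week, with status and timings
SELECT prcsinstance,
       prcsname,
       runstatus,
       begindttm,
       enddttm
FROM   psprcsrqst
WHERE  begindttm > SYSDATE - 7
ORDER  BY begindttm;
```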
The tool can be run by anyone with access to the PeopleSoft Owner database user (usually SYSADM). That user will already have privilege to read the Oracle catalogue.
Download the tool and unzip it into a directory. Navigate to the ps360 (master) directory, open SQL*Plus and connect as SYSADM. Execute the script ps360.sql. The output will be written to a zip file in the same directory. Unpack that zip file on your own PC and open the file ps360_[database name]_0_index.html with a browser.
I am looking for feedback about the tool, and suggestions for further enhancements.
Please either leave comments here or e-mail me at firstname.lastname@example.org.