Tired of boring slide-shows? Join me for free to see Oracle core technology live in action!
Live demonstrations have always been a key part of my classes, because I consider them one of the best ways to teach.
This is your opportunity to have a glimpse into my classroom and watch a demo just as I have delivered it there.
Apparently, not many speakers are keen to do things live, so the term Demonar (Demonstration + Seminar) was just waiting for me to invent it :-)
As a side effect, this should also give you a positive impression of LVCs and of Oracle University Streams with its live webinars, since the setting and platform there are very similar.
SQL> drop tablespace MY_TSPACE including contents;
drop tablespace MY_TSPACE including contents
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-24005: Inappropriate utilities used to perform DDL on AQ table
MY_SCHEMA.MY_QUEUE_TABLE
You would think that if you ask to drop a tablespace, then you’re pretty confident that you want all the stuff inside it to disappear :-(
You can work around the issue by running
on each queue table before dropping the tablespace.
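The exact command is not reproduced above. For Advanced Queuing tables, the documented way to drop a queue table together with its dependent queues is DBMS_AQADM.DROP_QUEUE_TABLE with force => TRUE, which is most likely the workaround meant here (schema and table names below are placeholders from the error message, not a prescription):

```sql
-- Hypothetical names taken from the ORA-24005 message above;
-- substitute your own schema and queue table.
begin
  dbms_aqadm.drop_queue_table(
    queue_table => 'MY_SCHEMA.MY_QUEUE_TABLE',
    force       => true  -- also stops and drops any queues defined on the table
  );
end;
/
```

Once every AQ queue table in the tablespace has been dropped this way, the DROP TABLESPACE statement should succeed.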
Be aware (!), while following the steps below, that there might be a change…
After creating the initial structures to scrape some JSON content of Wikipedia, as mentioned in…
When learning new things, I always try to apply them to a hopefully useful and…
This is just a quick blog post to direct readers to the best Oracle-related paper detailing the value EMC XtremIO brings to Oracle Database use cases. I’ve been looking forward to the availability of this paper for quite some time, as I supported (minimally, really) the EMC Global Solutions Engineering group in this effort. They really did a great job with this testing! I highly recommend this paper for readers who are interested in:
Click the following link to access the white paper: click here. Abstract:
This white paper describes the deployment of the XtremIO® all-flash array with Oracle RAC 11g and 12c databases in both physical and virtual environments. It describes optimal performance while scaling up in a physical environment, the effect of adding multiple virtualized database environments, and the impact of using XtremIO Compression with Oracle Advanced Compression. The white paper also demonstrates the physical space efficiency and low performance impact of XtremIO snapshots.
This whole thing about “not exists” subqueries can run and run. In the previous episode I walked through some ideas of how the following query might perform depending on the data, the indexes, and the transformation that the optimizer might apply:
select count(*)
from   t1 w1
where  not exists (
         select 1
         from   t1 w2
         where  w2.x = w1.x
         and    w2.y <> w1.y
       );
As another participant in the original OTN thread had suggested, however, it might be possible to find a completely different way of writing the query, avoiding the subquery approach completely. In particular there are (probably) several ways that we could write an equivalent query where the table only appears once. In other words, if we restate the requirement we might be able to find a different SQL translation for that requirement.
Looking at the current SQL, it looks like the requirement is: “Count the number of rows in t1 that have values of X that only have one associated value of Y”.
Based on this requirement, the following SQL statements (supplied by two different people) look promising:
WITH counts AS (
        SELECT x, y, COUNT(*) xy_count
        FROM   t1
        GROUP BY x, y
)
SELECT SUM(x_count)
FROM   (
        SELECT x, SUM(xy_count) x_count
        FROM   counts
        GROUP BY x
        HAVING COUNT(*) = 1
       );

SELECT SUM(COUNT(*))
FROM   t1
GROUP BY x
HAVING COUNT(DISTINCT y) <= 1
;
Logically they do seem to address the description of the problem – but there’s a critical difference between these statements and the original. The clue to the difference is the absence of any comparisons between columns in the new forms of the query – no w1.x = w2.x, no w1.y != w2.y – and this might give us an idea about how to test the code. Here’s some test data:
drop table t1 purge;

create table t1 (
        x number(2,0),
        y varchar2(10)
);

create index t1_i1 on t1(x,y);

-- Pick one of the three following pairs of rows
insert into t1(x,y) values(1,'a');
insert into t1(x,y) values(1,null);

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'b');

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'a');

commit;

-- A pair to be skipped
insert into t1(x,y) values(2,'c');
insert into t1(x,y) values(2,'c');

-- A pair to be reported
insert into t1(x,y) values(3,'d');
insert into t1(x,y) values(3,'e');

commit;

execute dbms_stats.gather_table_stats(user,'t1')
Notice the NULLs – comparisons with NULL lead to rows disappearing, so might the new forms of the query get different results from the old?
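As a minimal illustration of that three-valued logic (my addition, using DUAL), note that a comparison with NULL evaluates to UNKNOWN rather than TRUE, so the WHERE clause filters the row out:

```sql
select count(*) from dual where null <> 'a';   -- counts 0 rows
select count(*) from dual where 'b'  <> 'a';   -- counts 1 row
```

This is exactly why rows with NULL in x or y can slip through (or out of) the NOT EXISTS correlation in the original query.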
The original query returns a count of 4 rows whichever pair we select from the top 6 inserts.
With the NULL in the Y column the new forms report 2 and 4 rows respectively – so only the second query looks viable.
With the NULLs in the X columns and differing Y columns the new forms report 2 and 2 rows respectively – so even the second query is broken.
However, if we add “or X is null” to the second query it reports 4 rows for both tests.
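My reconstruction of that amended second query – the original describes the change in words only, so the exact layout is an assumption:

```sql
SELECT SUM(COUNT(*))
FROM   t1
GROUP BY x
HAVING COUNT(DISTINCT y) <= 1
    OR x IS NULL
;
```

Since COUNT(DISTINCT y) ignores NULLs and GROUP BY puts all the NULL values of x into a single group, the extra predicate is what brings those NULL-keyed rows back into the result.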
Finally, having added the “or x is null” predicate, we check that it returns the correct 4 rows for the final test pair – and it does.
It looks as if there is at least one solution to the problem that need only access the table once, though it then does two aggregates (hash group by in 11g). Depending on the data, it’s quite likely that this single scan and double hash aggregation will be more efficient than any of the plans that do a scan and filter subquery or a scan and hash anti-join. On the other hand, the difference in performance might be small, and the rewritten query is just a little harder to comprehend.
I can’t help thinking that the “real” requirement is probably as given in the textual restatement of the problem, and that the first rewrite of the query is probably the one that’s producing the “right” answers while the original query is probably producing the “wrong” answer.
The Oracle Certified Master Exam is among the highest rated exams in the IT industry for a good reason: It is extremely hard to pass!
Unlike most other IT exams, which are done as multiple-choice tests, the OCM exam means two days of hands-on practical tasks. There is no chance to pass it by just reading books or brain dumps and memorizing without deep understanding. Without years of practical experience with Oracle database administration – don’t even think about it. Even as a seasoned DBA, you won’t find it easy to pass the OCM exam. Why is that so?
To help you prepare for the exam, we offer a quite useful class: Oracle Database 11g: OCM Exam Preparation Workshop Ed 2
I delivered it many times and it is probably the best preparation you can get – but also expensive, I admit.
I usually give the following guidance to the attendees of the workshop – and the last two paragraphs may help you even if you don’t attend it:
It is of course possible to prepare without the workshop as well. See here for an impressive description of doing just that. Good luck on your journey to becoming an OCM – I hope you find this little article helpful :-)