Assuming my current WordPress prefix was “wp_” and I wanted to change it to “banana_”, then I would do the following…
Take a backup of your database and file system. Remember, if you screw up and trash your blog you will thank yourself for the extra time you spent doing this!
Amend the “$table_prefix” setting in the “wp-config.php” file.
$table_prefix = 'banana_'; // Only numbers, letters, and underscores please!
Rename all the tables, swapping the “wp_” prefix with “banana_”. For example, the following statement would rename the “wp_comments” table to “banana_comments”.
RENAME TABLE wp_comments TO banana_comments
Perform the following updates.
UPDATE banana_options SET option_name = 'banana_user_roles' WHERE option_name = 'wp_user_roles';
UPDATE banana_usermeta SET meta_key = 'banana_capabilities' WHERE meta_key = 'wp_capabilities';
UPDATE banana_usermeta SET meta_key = 'banana_user_level' WHERE meta_key = 'wp_user_level';
UPDATE banana_usermeta SET meta_key = 'banana_autosave_draft_ids' WHERE meta_key = 'wp_autosave_draft_ids';
If you don’t do these updates, the blog will work, but when you try to access the admin site you will be greeted with a message saying,
You do not have sufficient permissions to access this page.
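If you have more than a handful of tables, generating the statements is less error-prone than typing them by hand. Here's a minimal sketch; the table list is illustrative (in practice you'd pull it from SHOW TABLES), and the prefix-embedded option/meta values are the four listed above:

```python
# Generate the RENAME TABLE and UPDATE statements for a prefix change.
# The table list here is illustrative; in practice, pull it from SHOW TABLES.
old, new = "wp_", "banana_"

tables = ["wp_comments", "wp_options", "wp_posts", "wp_usermeta", "wp_users"]
renames = [
    f"RENAME TABLE {t} TO {new}{t[len(old):]};"
    for t in tables if t.startswith(old)
]

# Option/meta values that embed the prefix and must be updated too.
fixups = [
    (f"{new}options", "option_name", "user_roles"),
    (f"{new}usermeta", "meta_key", "capabilities"),
    (f"{new}usermeta", "meta_key", "user_level"),
    (f"{new}usermeta", "meta_key", "autosave_draft_ids"),
]
updates = [
    f"UPDATE {tbl} SET {col} = '{new}{suffix}' WHERE {col} = '{old}{suffix}';"
    for tbl, col, suffix in fixups
]

print("\n".join(renames + updates))
```

Paste the output into your MySQL client after you've taken that backup.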
Here’s a deadlock graph that might cause a little confusion:
Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-001a0015-00014787        34      90     X             32       3           S
TX-00190008-0000601b        32       3     X             34      90           S

session 90: DID 0001-0022-00000327
session 3:  DID 0001-0020-000009E9
session 3:  DID 0001-0020-000009E9
session 90: DID 0001-0022-00000327

Rows waited on:
  Session 90: obj - rowid = 00030CE5 - AAAwzlAAGAABDiuAAA
  (dictionary objn - 199909, file - 6, block - 276654, slot - 0)
  Session 3: obj - rowid = 00030CE5 - AAAwzlAAGAABp8gAAA
  (dictionary objn - 199909, file - 6, block - 433952, slot - 0)
Both sessions are holding X and waiting S on a TX lock. There are several well-known reasons why you might see a share (mode 4) wait on a transaction table slot: collisions on bitmap indexes, unique indexes, index organized tables, or referential integrity checks are the commonest “external” examples, and problems with interested transaction lists (ITLs) or freelists are the common “internal” ones – so the presence of waits in share mode shouldn’t, of itself, be a source of confusion.
The confusion is in the reported rowids. If you try to interpret them as real rowids you may be unlucky and discover that they seem to be related to the SQL reported for the deadlocked sessions when really the information they hold is garbage. (Just to avoid confusion, let me make it clear that there are cases where the rowids reported definitely WILL be garbage; on the other hand, there may be some scenarios where the rowids are relevant – although I haven’t done any exhaustive check to see if there really are such scenarios.)
When I see a deadlock graph on transaction locks and the waits are for S mode I tend to assume that the information about the rows waited on is probably misleading; when the slot number for the rowid is zero this increases my confidence that the rowid is rubbish. (Zero is a legal value for a rowid slot, of course, so a zero doesn’t prove that the rowid is rubbish, it’s just a coincidence that allows me to continue following a hypothesis.)
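One useful sanity check: the 18-character extended rowid is just base-64 text (alphabet A–Z, a–z, 0–9, +, /) packed as data object number (6 characters), relative file number (3), block number (6) and row slot (3), so it's easy to unpack by hand. A quick sketch, decoding the first reported rowid from the trace above – bearing in mind that the rowid encodes the data object number, which happens to match the reported dictionary objn here but need not in general:

```python
# Decode an Oracle extended rowid (18 chars: OOOOOOFFFBBBBBBRRR)
# into (data object#, relative file#, block#, row slot).
ALPHABET = (
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789+/"
)

def decode_rowid(rowid):
    def val(chars):
        n = 0
        for c in chars:
            n = n * 64 + ALPHABET.index(c)
        return n
    return (val(rowid[0:6]), val(rowid[6:9]), val(rowid[9:15]), val(rowid[15:18]))

# First "Rows waited on" entry from the trace above:
print(decode_rowid("AAAwzlAAGAABDiuAAA"))  # (199909, 6, 276654, 0)
```

If the decoded object, file, block and slot don't make sense for the SQL the session was running, that's another hint the row_wait columns are stale.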
The problem is that Oracle doesn’t waste resources tidying up after itself, and in the case of deadlock graphs this laziness shows up in the rowids reported. The trace is simply reporting whatever happens to be in the row_wait_obj#, row_wait_file#, row_wait_block# and row_wait_row# columns of v$session; and if the waiting process hasn’t updated these columns with current row information you could be looking at the details of the last row (or block) that the session waited for. Here’s the description of a test to demonstrate the behaviour:
create table t1 (n1 number, n2 number);
insert into t1 values(1,1);
create unique index t1_i1 on t1(n1);
create unique index t1_i2 on t1(n2);

session 1: insert into t1 values(2,11);
session 2: insert into t1 values(3,21);
session 1: insert into t1 values(4,21);
session 2: insert into t1 values(5,11);
With this table, and sequence of events following it, session 1 raised a deadlock error, and dumped the following trace:
Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-000a0021-0000c4aa        16      93     X             13      90           S
TX-00020020-0000fdb9        13      90     X             16      93           S

session 93: DID 0001-0010-00000057
session 90: DID 0001-000D-000000F2
session 90: DID 0001-000D-000000F2
session 93: DID 0001-0010-00000057

Rows waited on:
  Session 90: obj - rowid = 00000009 - AAAAAJAABAAAQJcAAA
  (dictionary objn - 9, file - 1, block - 66140, slot - 0)
  Session 93: obj - rowid = 0002E7DC - AAAufaAAFAAAAAJAAA
  (dictionary objn - 190428, file - 5, block - 9, slot - 0)
Object 9 is the I_FILE#_BLOCK# index in the data dictionary – and session 90 is definitely not doing anything with that object in this transaction.
Object 190428 is another table in the test schema, but session 93 didn’t access it in this transaction, and the block referenced is the segment header block, not a block that could hold a row.
In fact, just before I started the sequence of inserts I ran this query from a third session (connected as sys) with the following results:
select sid, row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
from   v$session
where  sid in (90,93)
;

       SID ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
---------- ------------- -------------- --------------- -------------
        90             9              1           66140             0
        93        190428              5               9             0
The “rows” reported for the deadlock simply echoed the values that were already in the row_wait columns before the test started. (The sessions were waiting on “SQL*Net message from client” at the time.)
Session 90 created the tables and indexes, which is probably why it happened to have an outstanding reference to the i_file#_block# index.
Session 93 had just run a script to drop all the objects in the schema, which may explain why it happened to have an outstanding reference to a segment header block.
If the row_wait_obj# had been set to -1 for either session then the deadlock graph would have reported “No row” for that session.
By now the Denver Convention Center is probably cleaned up from IOUG Collaborate. The signs directing thousands of attendees to top-notch technical presentations have been removed, the twenty rental laptops which composed the classroom for Pythian’s RAC Attack class have been returned and the vendor exhibition floor has been completely cleared out. Flight delays notwithstanding (thanks to some midwest weather), attendees are generally home by now – even those coming from places as far away as Germany and Australia.
Now that the buzz is dying down, I’ve finally found a few minutes to post my personal highlights.
First off, my favorite part of Collaborate is the opportunity to meet so many old friends and make new acquaintances who are all using Oracle technology. It’s both fun and informative to hear about the ways others are using Oracle software.
One person deserves very special mention: this was my first time hanging out in person with my Australia-based colleague Yury Velikanov. You could hear Yury’s deep-chested, enthusiastic “Pythian” chants even if you were on the opposite side of the exhibition hall from our booth. His sense of fun and relentless demands for silly faces during photos were so irresistible that anybody within about thirty feet of him was affected. In the photo below, notice how the guy standing behind me has also made a silly face for the picture.
I wasn’t able to attend many technical sessions simply because I was so busy with the RAC Attack classes but the two I did attend didn’t disappoint.
Craig Shallahamer‘s session Practical Performance Forecasting for the Oracle DBA was excellent. He compared benchmarking, simulating and modelling for performance evaluation and then took a deep dive into best practices for creating and using models. He discussed a variety of potential input sources in depth and walked through practice exercises on a theoretical consolidation case. Finally we wrapped up with a brief discussion of queueing theory. Excellent material!
If I had to choose one take-away which I think was the single most important point, it would be this: R = S + Q. Every DBA should know what this is referring to on a basic level. If you don’t know what I’m referring to, then I’d recommend Craig’s class – or any of Cary Millsap‘s talks for that matter (they both cover this very well).
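For anyone curious, R = S + Q simply says that response time is service time plus queue time. A toy sketch of why this matters – the M/M/1 queue-time formula here is my own illustration, not Craig's material, but it shows the key behaviour: response time explodes as utilization approaches 100%:

```python
# R = S + Q: response time = service time + queue time.
# Under simple M/M/1 assumptions, Q = S * U / (1 - U),
# where U is utilization -- so R blows up as U approaches 1.
def response_time(service_time, utilization):
    queue_time = service_time * utilization / (1 - utilization)
    return service_time + queue_time

# 10 ms of service time per call at increasing utilization:
for u in (0.5, 0.8, 0.9, 0.95):
    print(f"U={u:.2f}  R={response_time(0.010, u) * 1000:.0f} ms")
```

The same 10 ms of work costs the user 20 ms at 50% busy and 200 ms at 95% busy, which is exactly why throughput alone never tells the whole story.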
Frits Hoogland‘s session How Oracle Secretly Changed Multiblock Reads (at most conferences this same session is simply titled About Multiblock Reads) is a must-see presentation for DBAs with any interest in database internals. With the aid of operating system traces and debuggers, Frits has dissected the Oracle database kernel’s I/O process and compiled an incredible summary report. Showing surprising differences between recent releases of Oracle RDBMS, Frits dispels any myth that Oracle isn’t making major changes to the database engine.
However, despite unmatched technical depth and quality, my favorite part of this session wasn’t technical. At one point during the presentation I remember Frits saying that he ran a test and was completely baffled by the output. While trying to problem-solve himself, he also fired off a few emails looking for feedback from others. An email from a colleague pointed him to the “total number of slots” statistic, which led to an “ah-ha” moment and explained the data he was seeing. This was my favorite part because it made me feel much better about asking for help myself, since someone as smart as Frits asks for help sometimes too!
The bulk of my energy this week was dedicated to the RAC Attack class.
First off, I am excited to announce a major development for RAC Attack: we have re-introduced printed textbooks. Two years ago we stopped printing textbooks, when the curriculum was moved to wikibooks in order to better support broad collaboration in creating the best beginner RAC curriculum available. Last week we completed and released the process for creating a printed textbook from the wikibooks content. The process is completely automated: you start up an EC2 instance and run a little code which lives at github, and shortly afterwards you get a PDF which is ready to go to the printers. The 250-page textbook has over 200 screenshots. In the USA, I’ve had them printed in the past for US$30-40 each. This should go a long way toward supporting instructors who want to teach RAC Attack classes. Information about the textbooks is at http://racattack.org/book.
In addition to this, there have been a number of improvements to the organizer’s guide over the past few months. Anyone interested in organizing RAC Attack events can find a good deal of helpful material on the Events wiki page. We’ve learned a lot through the many events we’ve run in the past!
We had 15-20 people gather in the foyer area for the initial kick-off. Everyone listened to the description of the RAC Attack project with interest and some stayed to try starting out with their personal laptops, while many decided to attend one of the two later classroom sessions. We occupied most of the available tables and we were a very noticeable crowd as everyone passed by from the keynote to find technical sessions to attend!
We filled the classroom sessions. There were no major technical problems with lab machines as people worked through the labs. There were many good conversations; I remember in-depth discussions about listener architecture in 11.2 RAC (with SCAN), about init parameters related to client connections, about services, and about applications which require all connections to be made to a single instance.
There was significant positive feedback about RAC Attack. One guy stopped me as I was entering the exhibition hall Monday night just to tell me that he had learned more in the RAC Attack class than in any other session thus far at Collaborate – he was visibly pleased with the experience!
Collaborate was a memorable conference this year. I’m very thankful for the hard work by the IOUG leadership to put this event together! If you attended Collaborate then make sure to pass along your thanks and any other feedback to IOUG! And whether or not you attended – support your local user group; they are a crucial resource and tremendous asset to all of us.
For those of you who attended Collaborate, what were your favorite technical sessions or keynotes? What other parts of the conference did you find valuable? Leave a comment here to tell us what you thought about Collaborate!
A recent thread on the OakTable mailing list prompted me to create a poll asking how DBAs use system statistics in real systems. If you struggle to understand what system statistics are and what options are available, here is some suggested reading:
Documentation – System Statistics
Best Practices for Gathering Optimizer Statistics, Oracle whitepaper
System Statistics – Troubleshooting Oracle Performance
Understanding the different modes of System Statistics, blog series
I recently did the technical review of some of the chapters in a new Packt book called Managing Multimedia and Unstructured Data in the Oracle Database by Marcelle Kratochvil. I’ve known Marcelle for years and although we don’t always see eye-to-eye on DBA matters, she is definitely the first person I speak to about matters concerning multimedia and Oracle databases. A number of people “talk the talk”, but Marcelle is one of the few people that can actually “walk the walk” on this subject!
If you are interested in getting a free e-copy of this book, Packt Publishing are organizing a giveaway. All you need to do is leave a comment on this post telling me why you think you deserve a copy and what you hope to achieve after reading it. In 2 weeks (approximately 26-April-2013) I’ll read and judge the responses and make sure an e-copy of the book gets to the 4 lucky winners. I’ll be contacting the winners by email, so you will have to use your real email address when you comment!
I’m not going to reveal my judging criteria, and I’ll probably ask Marcelle to help me decide, so try and be a little creative in your answers. Just asking for a copy is not going to make you a winner.
Let the games begin…
Note. Comments on Twitter, Facebook and Google+ will not be judged. Your entry must be a comment on this blog post.
On March 26, 2013, Oracle announced a server refresh based on the new SPARC T5 processor. The press release proclaims SPARC T5 is the “World’s Fastest Microprocessor”—an assertion backed up with a list of several recent benchmark results, including a published TPC-C result.
This article focuses on the recent SPARC T5 TPC-C result–a single-system world record that demonstrated extreme throughput. The SPARC T5 result bested the prior non-clustered Oracle Database result by 69%! To be fair, that was 69% better than a server based on an Intel Xeon E7 processor slated to be obsolete this year (with the release of Ivy Bridge-EX). Nonetheless, throughput is throughput and throughput is all that matters, isn’t it?
What It Costs Is What Matters
There are several ways to license Oracle Database. Putting aside low-end user-count license models and database editions other than Enterprise Edition leaves the most common license model, which is based on per-processor licensing.
To the layman and seasoned veteran alike, mastering Oracle licensing is a difficult task. In fact, Oracle goes so far as to publish a Software Investment Guide that spells out the necessity for licensees to identify personnel within their organization responsible for coping with license compliance. Nonetheless, there are some simple licensing principles that play a significant role in understanding the relevance of any microprocessor being anointed the “fastest in the world.”
One would naturally presume “fastest” connotes cost savings when dealing with microprocessors. Deploying faster processors should mean fewer are needed, yielding savings in datacenter space and power as well as reduced per-processor licensing. Should, that is.
What is a Processor?
Oracle’s Software Investment Guide covers the various licensing models available to customers. Under the heading “Processor Metric” Oracle offers several situations where licensing by the processor is beneficial. The guide goes on to state:
The number of required licenses shall be determined by multiplying the total number of cores of the processor by a core processor licensing factor specified on the Oracle Processor Core Factor Table
As this quoted information suggests, the matter isn’t as simple as counting the number of processor “sockets” in a server. Oracle understands that more powerful processors allow their customers to achieve more throughput per core. So, Oracle could stand to lose a lot of revenue if per-core software licensing did not factor in the different performance characteristics of modern processors. In short, Oracle is compelled to charge more for faster processors.
As the Software Investment Guide states, one must consult the Oracle Processor Core Factor Table in order to determine list price for a specific processor. The Oracle Processor Core Factor Table has two columns—one for the processor make and model and the other for the Licensing Factor. Multiplying the Licensing Factor by the number of processor cores yields the number of required licenses, and from that the list price for Oracle software.
The Oracle Processor Core Factor Table is occasionally updated to reflect new processors that come into the marketplace. For example, the table was updated on October 2, 2010, September 6, 2011 and again on March 26, 2013 to correspond with the availability of Oracle’s T3, T4 and T5 processors, respectively. As per the table, the T3 processor was assigned a Licensing Factor of .25 whereas the T4 and T5 are recognized as being more powerful and thus assigned a .5 factor. This means, of course, that any customer who migrated from T3 to T4 had to ante up for higher-cost software—unless, of course, the T4 allowed the customer to reduce the number of cores in the deployment by 50%.
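The arithmetic behind that last point is worth spelling out. A quick sketch, using the .25 and .5 factors from the table; the 32-core server size is a hypothetical, chosen only to make the doubling obvious:

```python
# Required licenses = cores * core factor (rounded up to whole licenses).
import math

def licenses(cores, core_factor):
    return math.ceil(cores * core_factor)

# Hypothetical like-for-like migration: 32 T3 cores (factor .25)
# versus 32 T4 cores (factor .5).
print(licenses(32, 0.25))  # 8 licenses on T3
print(licenses(32, 0.5))   # 16 licenses on T4 -- double the license cost
print(licenses(16, 0.5))   # breaking even requires halving the core count
```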
The World’s Fastest Microprocessor
According to dictionary definition, something that is deemed fast is a) characterized by quick motion, b) moving rapidly and/or c) taking a comparatively short time. None of these definitions imply throughput as we know it in the computer science world. In information processing, fast is all about latency whether service times for transactions or underlying processing associated with transactions such as memory latency.
The TPC-C specification stipulates that transaction response times are to be audited along with throughput. The most important transaction is, of course, New Order. That said, the response time of transactions on a multi-processing computer have little bearing on transaction throughput. This fact is clearly evident in published TPC-C results as will be revealed later in this article.
Figure 1 shows the New Order 90th-percentile response times for the three most recently published Oracle Database 11g TPC-C results. Included in the chart is a depiction of Oracle’s SPARC T5 demonstrating an admirable 13% improvement in New Order response times compared to current Intel two-socket Xeon server technology. That is somewhat fast. On the contrary, however, one year—to the day—before Oracle published the SPARC T5 result, Intel’s Xeon E7 processors exhibited 46% faster New Order response times than the SPARC T5. Now that, is fast.
Cost Is Still All That Matters
According to the Oracle Technology Global Price List dated March 15, 2013, Oracle Database Enterprise Edition with Real Application Clusters and Partitioning has a list price of USD $82,000 “per processor.” As explained above in this article, one must apply the processor core factor to get to the real list price for a given platform. It so happens that all three of the processors spoken of in Figure 1 have been assessed a core factor of .5 by Oracle. While all three of these processors are on par in the core factor category, they have vastly different numbers of cores per socket. Moreover, the servers used in these three benchmarks had socket-counts ranging from 2 to 8. To that end, the SPARC T5 server had 128 cores, the Intel Xeon E7-8870 server had 80 cores and the Intel Xeon E5-2690 server had 16 cores.
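Putting those numbers together – and assuming, purely for illustration, the $82,000 list price with the .5 core factor and no discounting:

```python
# List price for EE + RAC + Partitioning, per the March 2013 price list.
PER_LICENSE = 82_000   # USD, list price "per processor"
CORE_FACTOR = 0.5      # same factor for all three processors discussed

servers = {
    "SPARC T5 (8-socket)":           128,
    "Intel Xeon E7-8870 (8-socket)":  80,
    "Intel Xeon E5-2690 (2-socket)":  16,
}

for name, cores in servers.items():
    lic = int(cores * CORE_FACTOR)
    print(f"{name}: {lic} licenses, ${lic * PER_LICENSE:,} list")
```

That works out to 64, 40 and 8 licenses respectively, which is why throughput per license, not raw throughput, is the figure that matters.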
Performance Per Oracle License
Given the core counts, license factor and throughput achieved for the three TPC-C benchmarks discussed in the previous section of this article, one can easily calculate the all-important performance-per-license attributes of each of the servers. Figure 2 presents TPC-C throughput per core and per Oracle license in a side-by-side visualization for the three recent TPC-C results.
The Importance of Response Times
In order to appreciate the rightful importance of response time in characterizing platform performance, consider the information presented in Figure 3. Figure 3 divides response time into TPC-C performance per core. Since the core factor is the same for each of these processors this is essentially weighing response time against license cost.
To add some historical perspective, Figure 3 also includes an Oracle Database 11g published TPC-C result from June 2008 using Intel’s Xeon 5400 family of processors which produced 20,271 TpmC/core and .2 seconds New Order response times. It is important to point out that the core factor has always been .5 for Xeon processors. As Figure 3 shows, SPARC T5 outperforms the 2008-era result by about 35%. On the other hand, the Intel two-socket Xeon E5 result delivers 31% better results in this type of performance assessment. Finally, the Intel 8-socket Xeon E7 result outperformed SPARC T5 by 76%. If customers care about both response time and cost these are important data points.
I accept the fact that there are many reasons for Oracle customers to remain on SPARC/Solaris—the most significant being postponing the effort of migrating to Intel-based servers. I would argue, however, that such a decision amounts to postponing the inevitable. That is my opinion, true, but countless Oracle shops made that move during the decade-long decline of Sun Microsystems market share. In fact, Oracle strongly marketed Intel servers running Real Application Clusters attached to conventional storage (mostly sourced from EMC) as a viable alternative to Oracle on Solaris.
I don’t speak lightly of the difficulty in moving off of SPARC/Solaris. In fact, I am very sympathetic to the difficulty such a task entails. What I can’t detail, in this blog entry, is a comparison between re-platforming from dilapidated SPARC servers and storage to something 21st-century—such as a converged infrastructure platform like VCE. It all seems like a pay-now or pay-later situation to me. Maybe readers with a 5-year vision for their datacenter can detail for us why one would want to gamble on the SPARC roadmap.
As of the production date of this article, 2013 is the release target for the Ivy Bridge-EP 22nm die shrink, the next-generation improvement in Intel’s Xeon E5 family.
Filed under: oracle
I have not had a lot of time to post recently due to various reasons and a switch in jobs at Oracle. I am currently 100% dedicated to working in the Oracle Solution Center, helping customers test the performance of Oracle’s engineered systems. So, why am I sending this link?
Regardless of the performance tests that are performed, I often spend a fair amount of time showing the Availability aspects as well. Hopefully this video will help to demystify the availability aspects of Exadata.
I’m writing this (unusual) post as I am a long-time Gmail user and recently I’ve seen plenty of people and articles complaining about Gmail’s new compose window (the one that shows up as a small hovering window in the bottom right of your screen):
The top Google hits so far only return tips to disable the new editor completely, but I want to use the new one, just in a bigger window! There is a very easy workaround for that – and there’s no need to switch back to the old compose mode at all!
If you are using your mouse, then just:
Examples 1 & 2 below:
If you use Gmail keyboard shortcuts for productivity (they’re awesome!), then you can just:
Note that you can view the Gmail shortcuts reference, if this feature is enabled, just by pressing the ? (question mark) key:
So, there’s no need to switch back to the old compose mode completely, just remember SHIFT, CTRL and the “D” keys :)