Oakies Blog Aggregator

Using wallets with dbca in Oracle 19c

One of the features I haven’t seen blogged about is the option to provide SYS and SYSTEM passwords (among other parameters) to dbca via a wallet. This is documented in chapter 2 of the Database Administration Guide 19c.

[oracle@server1 ~]$ dbca -silent -createDatabase -help
...
        [-useWalletForDBCredentials  Specify true to load database credentials from wallet]
            -dbCredentialsWalletLocation 
...

I was curious how to use this feature as it might provide slightly better security when deploying new databases via dbca. It turned out not to be too hard in the end, and I decided to briefly document my findings in this short article.

Prepare the wallet

Before you can use a wallet together with dbca, it has to be available. So the first step is obvious: create the wallet. I’m using /home/oracle/wallet as the wallet location. The mkstore utility lives in $ORACLE_HOME/bin; this is a 19.7.0 installation, by the way.

[oracle@server1 ~]$ which mkstore
/u01/app/oracle/product/19.0.0/dbhome_1/bin/mkstore

[oracle@server1 ~]$ mkstore -wrl ~/wallet -create
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Enter password:   
Enter password again:  
[oracle@server1 ~]$

Once the wallet is created, a few keys have to be added. These are documented in section 2.14.3 of the chapter I linked to earlier. Since I’m deliberately keeping things simple by not creating a Container Database or registering the database with directory services, all I need to populate are:

  • oracle.dbsecurity.sysPassword: SYS user password
  • oracle.dbsecurity.systemPassword: SYSTEM user password

This works by invoking mkstore with the -createEntry flag, as shown in this example:

[oracle@server1 ~]$ mkstore -wrl ~/wallet -createEntry oracle.dbsecurity.sysPassword
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:   
Re-enter your secret/Password:   
Enter wallet password:   
[oracle@server1 ~]$ 

[oracle@server1 ~]$ mkstore -wrl ~/wallet -createEntry oracle.dbsecurity.systemPassword
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:   
Re-enter your secret/Password:   
Enter wallet password:   
[oracle@server1 ~]$  
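Running mkstore once per key gets tedious if you deploy often. As a dry-run sketch (my own wrapper, not part of any Oracle tooling), the two calls above can be generated in a loop; it only prints the commands, so nothing touches the wallet until you remove the echo, and mkstore will still prompt for each secret and the wallet password:

```shell
#!/bin/bash
# Dry-run sketch: print the mkstore calls needed to populate the wallet.
# Hypothetical wrapper - remove the "echo" to execute for real.
wallet_loc="$HOME/wallet"

for key in oracle.dbsecurity.sysPassword oracle.dbsecurity.systemPassword; do
    echo mkstore -wrl "$wallet_loc" -createEntry "$key"
done
```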

I tend to check the wallet’s contents before invoking dbca just to make sure all the necessary keys are present:

[oracle@server1 ~]$ mkstore -wrl ~/wallet -list
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:   
Oracle Secret Store entries: 
oracle.dbsecurity.sysPassword
oracle.dbsecurity.systemPassword
[oracle@server1 ~]$  
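That check can be scripted too. Below is a small helper of my own devising (not an Oracle utility): it takes the listing as a string so the logic is self-contained here, but in practice you would capture the output of mkstore -wrl ~/wallet -list:

```shell
#!/bin/bash
# Hypothetical helper: verify a wallet listing contains every key dbca needs.
required_keys="oracle.dbsecurity.sysPassword oracle.dbsecurity.systemPassword"

check_wallet_keys() {
    local listing="$1" missing=0 key
    for key in $required_keys; do
        # Each key must appear as its own line in the mkstore -list output.
        if ! grep -q "^${key}\$" <<< "$listing"; then
            echo "missing: $key"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all required keys present"
    fi
    return 0
}

# Sample listing, as produced by "mkstore -wrl ~/wallet -list" above.
sample_listing="oracle.dbsecurity.sysPassword
oracle.dbsecurity.systemPassword"

check_wallet_keys "$sample_listing"
```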

That should be it! Remember to use strong passwords and follow any other security guidelines and industry best practices.

Create the database

The next and final step is to create the database. Instead of passing -sysPassword and -systemPassword on the command line, you provide the wallet. Here is the example from my lab (I use custom templates, and martins_db is one of them; don’t let that put you off):

[oracle@server1 ~]$ dbca -silent -createDatabase -gdbName WALLET -templateName martins_db.dbc \
> -useWalletForDBCredentials true -dbCredentialsWalletLocation ~/wallet \
> -datafileDestination /u02/oradata -useOMF true \
> -memoryMgmtType AUTO_SGA -createAsContainerDatabase false \
> -recoveryAreaDestination /u03/fast_recovery_area  \
> -totalMemory 4096 
Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete

...


70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/WALLET.
Database Information:
Global Database Name:WALLET
System Identifier(SID):WALLET
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/WALLET/WALLET.log" for further details.
[oracle@server1 ~]$  

Voila! I have a new working database, WALLET, and I didn’t specify a single password on the command line. To me that’s a step in the right direction. And it doesn’t stop there: you can use -useWalletForDBCredentials with many other dbca sub-commands!
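For repeat deployments the whole invocation can be parameterised in a small script. Here is a sketch using the paths, template and sizes from my lab above (adjust for your environment); it prints the command as a dry run rather than executing it:

```shell
#!/bin/bash
# Sketch: assemble the dbca arguments with wallet-based credentials instead
# of -sysPassword/-systemPassword. Paths and template name are from the lab
# set-up in this post - adjust for your environment.
wallet_loc="$HOME/wallet"

dbca_args=(
    -silent -createDatabase
    -gdbName WALLET
    -templateName martins_db.dbc
    -useWalletForDBCredentials true
    -dbCredentialsWalletLocation "$wallet_loc"
    -datafileDestination /u02/oradata -useOMF true
    -memoryMgmtType AUTO_SGA -createAsContainerDatabase false
    -recoveryAreaDestination /u03/fast_recovery_area
    -totalMemory 4096
)

echo dbca "${dbca_args[@]}"    # replace echo with the real call when ready
```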

Happy automating

A Serverless Standby Database called Oracle Autonomous Data Guard

By Franck Pachot

Announced by Larry Ellison last week, here it is: the Autonomous Data Guard. You can try it, unfortunately not on the Free Tier.
First you create an Autonomous Database (ATP or ADW) and then you enable Autonomous Data Guard.

You know that “Autonomous” is the marketing brand for the services that automate a lot of things, sometimes based on features that have been in the Oracle Database for a long time. So let’s see what is behind it.


Is it a logical copy?

The slide for the announcement mentions that this service maintains a remote copy. That’s right. But the message that it “maintains copy by applying logical changes, not physical changes” is not correct and is misleading. What we call “logical change apply” is logical replication, where the changes are transformed into SQL statements and can then be applied to another database that can be a different version, a different design,… Like GoldenGate. But Autonomous Data Guard is replicating physical changes. It applies the redo to an exact physical copy of the datafile blocks. Like… a Data Guard physical standby.
But why did Larry Ellison mention “logical” then? Because the apply is at software level. And this is a big difference from storage level synchronisation. We use the term “logical corruption” when a software bug corrupts some data. And we use “physical corruption” when the software write() is ok but the storage write to disk is wrong. And this is why “logical changes” is mentioned there: this software level replication protects from physical corruptions. Data Guard can even detect lost writes between the replicas.
And this is an important message for Oracle, because on AWS RDS the standby database for HA in multi-AZ is synchronized at storage level: AWS RDS doesn’t use Data Guard for multi-AZ Oracle. Note that it is different with other databases, like Aurora, where the changes are written to 6 copies from software redo, or RDS SQL Server, where multi-AZ relies on Always On.

So, it is not a logical copy but a physical standby database. The point is that it is synchronized by the database software, which is more reliable (it protects from storage corruption) and more efficient (not all changes need to be replicated, and only a few of them must be applied in sync, waiting for the acknowledgement).

Is it Autonomous?

Yes, all is automated. The only things you do are enable it and switch over. Those things are not new: Data Guard was automated in previous versions of the Oracle Database, with the Data Guard Broker, with DBCA creating a standby, with recover from service, and even with automatic failover (FSFO and the observer). More than that, “autonomous” means transparent: it happens without service interruption. And that again can be based on many existing features: Application Continuity, Connection Manager Traffic Director,…

So yes, it is autonomous, and up to a level where the competitors are lagging behind. Currently, AWS application failover is mostly based on DNS changes, with all the problems that come from caches and timeouts. However, AWS has recently reduced the gap with AWS RDS Proxy, which is quite new.

This time I totally agree with the term “autonomous”. And I even think it could have been labeled as “serverless” because you don’t see the standby server: you don’t choose the shape, you don’t connect to it. I don’t even see the price.

Unroll Your Twitter Threads Into WordPress

Have you ever written a Twitter thread, and then wished you could turn it into a blog post? You can now do it in seconds.

Writing Twitter threads, also known as tweetstorms, can be a great way to clarify your thoughts: keeping each paragraph under 280 characters forces you to focus on your message.

Threads are great for engagement and sudden bursts of inspiration. But when the thread is done, wouldn’t it be nice to have a quick way to capture the full conversation in one place — one page to read, one link to share, all your thoughts captured on a website you own?

Now, when you embed a tweet that’s part of a larger thread, you’ll see a new “Unroll” button on the block toolbar. Click or tap it to import the entire thread into your post.

Publish the whole thread as one easy-to-read post — and since WordPress has no character limit, add any extra thoughts that have popped up in the meantime!

Originally tweeted by WordPress.com (@wordpressdotcom) on July 17, 2020.

We hope you enjoy this new way to make your WordPress site the canonical place for everything you publish — leave a comment if you have any questions, suggestions for how we can make it easier for you to share content between your site and other platforms, or if you’d like to share a link to a post that started as a Twitter thread!

Expert Advice: Learn How to Podcast on WordPress.com

Podcasting isn’t just for professional broadcasters or celebrities. If you have a passion for a topic — no matter how niche — and want to explore your options beyond blogging and tweeting, consider launching a podcast! All you need to get started is a decent microphone and headset, an internet connection — and our next free webinar to learn the basics.

Date: Thursday, July 23, 2020
Cost: FREE
Time: 8:00 am PDT | 9:00 am MDT | 10:00 am CDT | 11:00 am EDT | 15:00 UTC
Registration link: https://zoom.us/webinar/register/5115944218471/WN_DEIBungPRlSs4hIKhN6ezA
Who’s invited: Bloggers, business owners, and anyone else interested in starting a podcast.

Your hosts, expert podcasters and Happiness Engineers Richard and Damianne, have years of experience in podcasting, radio journalism, and of course, helping our users get the most out of their WordPress.com sites. They’ll walk you through the basics of hosting your podcast on WordPress.com and adding it to the most popular podcast directories. They’ll also share some tips and best practices on crafting a successful podcast.

Please note that to host audio files on a WordPress.com site, your site must be on one of our paid plans.

The one-hour webinar will include a 45-minute presentation and 15 minutes of live Q&A. Dustin, one of our veteran Happiness Engineers and another longtime podcaster, will also be on hand to answer questions over Zoom chat during the webinar.

Seats are limited, so register now to save your seat. We look forward to seeing you then!

VirtualBox 6.1.12

The VirtualBox 6.1.12 maintenance release has appeared with all the Oracle quarterly patches.

The downloads and changelog are in the usual places.

I’ve installed it on Windows 10, macOS Catalina and Oracle Linux 7 hosts with no dramas. My Vagrant builds seem happy too…

Cheers

Tim…


VirtualBox 6.1.12 was first posted on July 15, 2020 at 3:25 pm.

Fetch First vs. Rownum

I’ve pointed out fairly frequently that if you’re running Standard Edition but would like to take advantage of a few features of the Partitioning option then you might be able to do something appropriate with Partition Views (but I’ve just discovered while searching my blog for a suitable item to link to that I haven’t published any of my PV notes on the blog).

I’ve also pointed out that while 12c allows you to use “fetch first N rows” instead of “where rownum <= N” there’s a hidden threat to using the feature because “fetch first N” turns into a hidden row_number() over() analytic function.
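As a reminder of what that hidden rewrite looks like, here’s a hand-written sketch of the transformation (simplified for illustration: the table and column names are placeholders, and Oracle’s generated query uses internal names like “from$_subquery$_002” rather than my rn alias):

```sql
-- Rough equivalent of "select ... order by id fetch first 2 rows only":
-- the row-limiting clause becomes a row_number() analytic wrapper.
select  id, padding
from    (
        select  s.*,
                row_number() over (order by id) rn
        from    some_table s
        where   id > 9500
        )
where   rn <= 2
order by id;
```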

Today’s note is a combination of these two topics, prompted by a request to solve a particular coding issue that has appeared a few times on the Oracle Developer Forum and is likely to be recognisable to a number of people.

I’ll start with a very simple model demonstrating the simplest use of “fetch first N”:


rem
rem     Script:         fetch_first_union.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t_odd
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        'O'                             flag,
        chr(65 + mod(rownum,26))        class,
        2 * rownum - 1                  id,
        lpad(2 * rownum,10,'0')         v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4   -- > comment to avoid WordPress format issue
;

alter table t_odd modify(flag not null, class not null, id not null);
alter table t_odd add constraint to_chk_odd check (flag = 'O');

create index to_i1 on t_odd(class, id);

With this data set I want to write a query that selects rows for class A where id > 9500, ordered by id – but I only want the first two rows. Here’s a very simple query that gets the result I want, followed by the execution plan from 12.2.0.1 (the A-Rows and E-Rows from 19.3 are slightly different):

set serveroutput off
set linesize 180

alter session set statistics_level = all;

select  /*+ index(t_odd (class, id)) */
        *
from
        t_odd
where
        class = 'A'
and     id > 9500
order by
        class, id
fetch 
        first 2 rows only
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |    95 (100)|      2 |00:00:00.01 |       6 |
|*  1 |  VIEW                         |       |      1 |      2 |    95   (0)|      2 |00:00:00.01 |       6 |
|*  2 |   WINDOW NOSORT STOPKEY       |       |      1 |    202 |    95   (0)|      2 |00:00:00.01 |       6 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T_ODD |      1 |    202 |    95   (0)|      3 |00:00:00.01 |       6 |
|*  4 |     INDEX RANGE SCAN          | TO_I1 |      1 |    202 |     2   (0)|      3 |00:00:00.01 |       3 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=2)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "T_ODD"."CLASS","T_ODD"."ID")<=2)
   4 - access("CLASS"='A' AND "ID">9500)

The plan – with its stats – shows us that we’ve done an index range scan of an index which will return the data in exactly the order we want, and the “fetch first 2 rows” has been translated into the row_number() over() that we expected; but to our great joy the “window nosort stopkey” makes the processing stop very early because Oracle recognises that the base data is arriving in the right order, so it isn’t necessary to fetch all of it and sort it. The A-Rows column confirms this interpretation of what has happened.

You might notice, by the way, that the optimizer has costed the query as if it were fetching all the rows even though it “knows” that it’s going to fetch only the first two rows. That’s why I had to include the index hint to make the optimizer use the obvious index – a popular alternative is to use the /*+ first_rows(N) */ hint where N matches, or is similar to, the number of rows you want to fetch. If I had omitted the hint the optimizer would have done a full tablescan and then applied a “window sort pushed rank” operation to sort and limit the result to 2 rows.

So now we come to the real problem: the user has a “current” table and an identical “history” table, and would like to replace the table reference with a reference to a union all view for their clients on Standard Edition, or to a partitioned table for clients running Enterprise Edition – and they don’t really want to do any other code changes. So let’s see what happens when we model the union all. I started with a table called t_odd that held only odd values for id, so I’m going to add a table called t_even that holds only even values for id.

create table t_even
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        'E'                             flag,
        chr(65 + mod(rownum,26))        class,
        2 * rownum                      id,
        lpad(2 * rownum,10,'0')         v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4   -- > comment to avoid WordPress format issue
;

alter table t_even modify(flag not null, class not null, id not null);
alter table t_even add constraint te_chk_even check (flag = 'E');

create index te_i1 on t_even(class, id);

create or replace view v_bare 
as
select * from t_odd
union all
select * from t_even
/

select
        /*+ 
                index(vw.t_odd  (class, id)) 
                index(vw.t_even (class, id)) 
        */
        *
from
        v_bare vw
where
        class = 'A'
and     id > 9500
order by
        class, id
fetch 
        first 2 rows only
;

As you can see t_even is an identically structured table with similar data, and I’ve created a union all view on top of the two tables, changing the query to reference the view rather than referencing a table. Thanks to the costing anomaly (combined with the small size of the tables) I’ve had to supply a couple of “global” hints to tell the optimizer to use the indexes to access the two tables. So how well does the optimizer do its job when we have a union all view?


----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |        |      1 |        |   192 (100)|      2 |00:00:00.01 |     190 |       |       |          |
|*  1 |  VIEW                                   |        |      1 |      2 |   192   (2)|      2 |00:00:00.01 |     190 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK               |        |      1 |    404 |   192   (2)|      2 |00:00:00.01 |     190 |  2048 |  2048 | 2048  (0)|
|   3 |    VIEW                                 | V_BARE |      1 |    404 |   191   (1)|    404 |00:00:00.01 |     190 |       |       |          |
|   4 |     UNION-ALL                           |        |      1 |        |            |    404 |00:00:00.01 |     190 |       |       |          |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T_ODD  |      1 |    202 |    95   (0)|    202 |00:00:00.01 |      95 |       |       |          |
|*  6 |       INDEX RANGE SCAN                  | TO_I1  |      1 |    202 |     2   (0)|    202 |00:00:00.01 |       2 |       |       |          |
|   7 |      TABLE ACCESS BY INDEX ROWID BATCHED| T_EVEN |      1 |    202 |    95   (0)|    202 |00:00:00.01 |      95 |       |       |          |
|*  8 |       INDEX RANGE SCAN                  | TE_I1  |      1 |    202 |     2   (0)|    202 |00:00:00.01 |       2 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=2)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "VW"."CLASS","VW"."ID")<=2)
   6 - access("CLASS"='A' AND "ID">9500)
   8 - access("CLASS"='A' AND "ID">9500)

Answer: bad luck, the optimizer isn’t smart enough to find a cheap way through this query. It’s fetched all the relevant data from the two tables before applying the window sort (which it does with some efficiency – the pushed rank) to produce the right answer. As you can see from the A-Rows column, though, it’s had to acquire a couple of hundred rows from each table before getting down to the 2 rows we wanted.

Partitioned Tables

So let’s try to solve the problem by buying into the partitioning option and creating a list-partitioned table with two partitions, one flagged for current data and one flagged for history data – or ‘O’dd and ‘E’ven data as I’ve created them in my model.



create table t_pt (
        flag,
        class,
        id,
        v1,
        padding
)
partition by list (flag) (
        partition pO values('O'),
        partition pE values('E')
)
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        'O'                             flag,
        chr(65 + mod(rownum,26))        class,
        2 * rownum - 1                  id,
        lpad(2 * rownum,10,'0')         v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4   -- > comment to avoid WordPress format issue
;

insert into t_pt
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        'E'                             flag,
        chr(65 + mod(rownum,26))        class,
        2 * rownum                      id,
        lpad(2 * rownum,10,'0')         v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4   -- > comment to avoid WordPress format issue
;


create index tp_i1 on t_pt(class, id) local;
alter table t_pt modify (flag not null, class not null, id not null);

execute dbms_stats.gather_table_stats(user,'t_pt',method_opt=>'for all columns size 1', cascade=>true, granularity=>'ALL')

Note particularly that I have created a local index on this partitioned table – so there’s a very close correspondence between the two tables in the previous example and the two partitions in this example. Here’s the plan when I query the partitioned table for the first two rows:


select  /*+ index(t_pt (class, id)) */
        *
from
        t_pt
where
        class = 'A'
and     id > 9500
order by
        class, id
fetch 
        first 2 rows only
;


--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |       |      1 |        |   190 (100)|      2 |00:00:00.01 |     189 |       |       |          |
|*  1 |  VIEW                                        |       |      1 |      2 |   190   (2)|      2 |00:00:00.01 |     189 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK                    |       |      1 |    404 |   190   (2)|      2 |00:00:00.01 |     189 |  2048 |  2048 | 2048  (0)|
|   3 |    PARTITION LIST ALL                        |       |      1 |    404 |   189   (1)|    404 |00:00:00.01 |     189 |       |       |          |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PT  |      2 |    404 |   189   (1)|    404 |00:00:00.01 |     189 |       |       |          |
|*  5 |      INDEX RANGE SCAN                        | TP_I1 |      2 |    404 |     4   (0)|    404 |00:00:00.01 |       4 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=2)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "T_PT"."CLASS","T_PT"."ID")<=2)
   5 - access("CLASS"='A' AND "ID">9500)

The optimizer has let us down again. The plan shows us that we have to acquire all the relevant data from the two partitions before applying the row_number() analytic function and discarding all but the two rows we want. (Again we can check the A-Rows column to see that we have started by fetching a total of 404 rows from the table.)

But what happens if we fall back to the good old-fashioned (non-standard) rownum method:


select
        *
from    (
        select  /*+ index(t_pt (class, id)) */
                *
        from
                t_pt
        where
                class = 'A'
        and     id > 9500
        order by
                class, id
        )
where
        rownum <= 2
;

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |       |      1 |        |   190 (100)|      2 |00:00:00.01 |       6 |       |       |          |
|*  1 |  COUNT STOPKEY                                 |       |      1 |        |            |      2 |00:00:00.01 |       6 |       |       |          |
|   2 |   VIEW                                         |       |      1 |    404 |   190   (2)|      2 |00:00:00.01 |       6 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY                       |       |      1 |    404 |   190   (2)|      2 |00:00:00.01 |       6 |  2048 |  2048 | 2048  (0)|
|   4 |     PARTITION LIST ALL                         |       |      1 |    404 |   189   (1)|      4 |00:00:00.01 |       6 |       |       |          |
|*  5 |      COUNT STOPKEY                             |       |      2 |        |            |      4 |00:00:00.01 |       6 |       |       |          |
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PT  |      2 |    404 |   189   (1)|      4 |00:00:00.01 |       6 |       |       |          |
|*  7 |        INDEX RANGE SCAN                        | TP_I1 |      2 |    404 |     4   (0)|      4 |00:00:00.01 |       4 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=2)
   3 - filter(ROWNUM<=2)
   5 - filter(ROWNUM<=2)
   7 - access("CLASS"='A' AND "ID">9500)

Oracle really knows how to use rownum well – notice how there is a count stopkey operation as a child to the partition list all operation, and that’s where our rownum <= 2 predicate is first applied. For each partition Oracle finds “the first two rows” and after it has collected two rows from every partition it sorts them (again with a stopkey) to find the top two in that subset. Check the A-Rows column – we selected a total of 4 rows from the table (2 per partition) and reduced that to 2 rows at operation 3.

Conclusion

There are some patterns of processing where partitioned tables can be a lot friendlier to coders than partition views; if you do have to stick with Standard Edition you can usually get what you want but the coding investment may be significantly higher. Even with partitioned tables, though, there are some “old-fashioned” Oracle methods that do a much nicer job than some of the new-fangled “ANSI” mechanisms.

Footnote

Part of the problem presented here revolves around the desire to keep a pattern of SQL generation that already exists, doing nothing more than replacing a table name with a view (or partitioned table) name.

As we’ve seen, if you start with a simple heap table and try to replace it with a partitioned table you have to use the rownum mechanism rather than the fetch first N rows mechanism.

If you’re running with Standard Edition you can’t do anything simple to replace a table name with the name of a union all view; you’d have to change your code generator to apply all the predicates twice (once for each table) and then apply the rownum predicate or fetch first directive again outside the union all. In other words you have to emulate exactly what Oracle EE manages to do with partitioned tables and rownum.

select
        flag, class, id, v1
from
        (
        select
                /*+ index(t_odd (class, id)) */
                flag, class, id, v1
        from
                t_odd
        where
                class = 'A'
        and     id > 9500
        order by
                class, id
        fetch
                first 2 rows only
        )
        union all
        (
        select
                /*+ index(t_even (class, id)) */
                flag, class, id, v1
        from
                t_even
        where
                class = 'A'
        and     id > 9500
        order by
                class, id
        fetch
                first 2 rows only
        )
order by
        class, id
fetch 
        first 2 rows only
;

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |        |      1 |        |   192 (100)|      2 |00:00:00.01 |       8 |       |       |          |
|*  1 |  VIEW                              |        |      1 |      2 |   192   (2)|      2 |00:00:00.01 |       8 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK          |        |      1 |      4 |   192   (2)|      2 |00:00:00.01 |       8 |  2048 |  2048 | 2048  (0)|
|   3 |    VIEW                            |        |      1 |      4 |   191   (1)|      4 |00:00:00.01 |       8 |       |       |          |
|   4 |     UNION-ALL                      |        |      1 |        |            |      4 |00:00:00.01 |       8 |       |       |          |
|   5 |      VIEW                          |        |      1 |      2 |    95   (0)|      2 |00:00:00.01 |       4 |       |       |          |
|*  6 |       VIEW                         |        |      1 |      2 |    95   (0)|      2 |00:00:00.01 |       4 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY       |        |      1 |    202 |    95   (0)|      2 |00:00:00.01 |       4 | 73728 | 73728 |          |
|   8 |         TABLE ACCESS BY INDEX ROWID| T_ODD  |      1 |    202 |    95   (0)|      3 |00:00:00.01 |       4 |       |       |          |
|*  9 |          INDEX RANGE SCAN          | TO_I1  |      1 |    202 |     2   (0)|      3 |00:00:00.01 |       2 |       |       |          |
|* 10 |      VIEW                          |        |      1 |      2 |    95   (0)|      2 |00:00:00.01 |       4 |       |       |          |
|* 11 |       WINDOW NOSORT STOPKEY        |        |      1 |    202 |    95   (0)|      2 |00:00:00.01 |       4 | 73728 | 73728 |          |
|  12 |        TABLE ACCESS BY INDEX ROWID | T_EVEN |      1 |    202 |    95   (0)|      3 |00:00:00.01 |       4 |       |       |          |
|* 13 |         INDEX RANGE SCAN           | TE_I1  |      1 |    202 |     2   (0)|      3 |00:00:00.01 |       2 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_007"."rowlimit_$$_rownumber"<=2)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "from$_subquery$_006"."CLASS","from$_subquery$_006"."ID")<=2)
   6 - filter("from$_subquery$_003"."rowlimit_$$_rownumber"<=2)
   7 - filter(ROW_NUMBER() OVER ( ORDER BY "CLASS","ID")<=2)
   9 - access("CLASS"='A' AND "ID">9500)
  10 - filter("from$_subquery$_005"."rowlimit_$$_rownumber"<=2)
  11 - filter(ROW_NUMBER() OVER ( ORDER BY "CLASS","ID")<=2)
  13 - access("CLASS"='A' AND "ID">9500)


As you can see, the E-Rows column still predicts a lot of work, but the A-Rows column tells us the work was kept to the minimum we want.

 

Upgrading…It’s Time!

Gough Whitlam was an Australian politician who rose to power in the 1970s with the campaign slogan “It’s Time!”. Politics aside, the campaign loosely ran on the premise that failing to make the occasional dramatic change ultimately leads to stagnation in social and economic progress.

[Image: Gough Whitlam, 1973]

The same analogy applies to the Oracle database. Of course, it can be hard to justify an upgrade because the database engine has matured to a point where typically all of the core functions businesses require of it are rock solid.

We have all heard the phrase “If it ain’t broke, don’t fix it”, and I’m sure there are many customers out there happily running their 10g and 11g systems, probably accessing them with Windows XP, with client-server apps built with Forms 6i.

But the tech world continues to evolve, and even systems such as those are typically accessed from more and more sources within a business, and potentially from across the internet. That introduces more compelling reasons to upgrade. I think the best analogy for this is to take a look at an old car. Sure, it runs fine…but it is starting to show signs of needing an upgrade, and it doesn’t have the security and safety features of a more modern release of the vehicle.

Maybe your database is too? If you’re in APAC, then make sure to reach out to your local reps about upgrade workshops coming your way, or click here.

 

(picture credit: https://commons.wikimedia.org/wiki/File:Gough_Whitlam_1973.jpg)

DBPod – le podcast Bases de Données

By Franck Pachot

.
I’m trying something new. I publish a lot in English (blog posts, articles, presentations), but this time it’s something 100% francophone. Coming out of lockdown, we are back on transport (train, car,…) and it’s an opportunity to relax with some music, but also to stay informed with podcasts. I have the feeling this is a format with a future: less demanding than watching a video or reading an article or a newsletter. So I’m trying a 100% free platform: Anchor (it’s a bit like the ‘Medium’ of podcasting).

What seems interesting to me about Anchor is that there is a smartphone app which also lets you leave voice messages. So don’t hesitate to send remarks or questions. Your experiences with databases may be interesting for others. Cloud, Open Source migrations,… it’s in the air, and it’s always useful to share. With the health emergency we see fewer meetups, so why not try other media.

For the moment I have published 3 episodes on Oracle, covering the important topics for this year: Versions (19c or 20c), Release Updates, and Multitenant. Plus a more general episode on serverless databases. Future topics will come from current news and from my day-to-day experience in consulting or research. But also from your questions.

Anchor seems practical to me, but the podcast is also available on Spotify (“Follow” lets you get notified of new episodes):

And several other platforms, such as iTunes:
https://podcasts.apple.com/ch/podcast/dbpod-le-podcast-bases-de-donn%C3%A9es/id1520397763?l=fr

Or RadioPublic:

Breaker:
https://www.breaker.audio/db-pod-le-podcast-bases-de-donnees
Google Podcasts:
https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8yMDdjNmIyMC9wb2RjYXN0L3Jzcw==
Overcast:
https://overcast.fm/itunes1520397763/db-pod-le-podcast-bases-de-donn-es
Pocket Casts:
https://pca.st/qtv36wbn

For these first episodes, I’m still learning… The recording is much too fast: I was a bit heavy-handed with Audacity, which makes it easy to remove imperfections. And when you listen to yourself… you see imperfections everywhere! Don’t hesitate to give me feedback, both on substance (the topics you would like me to cover) and on form (it can only get better…). And share it with the people around you, colleagues or managers. The topics will be less technical and more general-audience than my blog posts.

The post DBPod – le podcast Bases de Données appeared first on the dbi services blog.

19c: scalable Top-N queries without further hints to the query planner

By Franck Pachot

.
The FETCH FIRST … ROWS ONLY syntax arrived in Oracle 12c and is much more convenient than using a subquery with ORDER BY wrapped in a “WHERE ROWNUM < …” around it. But as I mentioned in a previous post, it required the FIRST_ROWS() hint to get correct estimations. In SQL you don’t want to overload your code for performance, right? The RDBMS optimizer does the job for you. This was a bug with the new FETCH FIRST syntax, which is fixed in 19c (see Nigel Bayliss’ post about this):


SQL> select bugno,value,optimizer_feature_enable,description from  V$SYSTEM_FIX_CONTROL where bugno=22174392;

      BUGNO    VALUE    OPTIMIZER_FEATURE_ENABLE                                                      DESCRIPTION
___________ ________ ___________________________ ________________________________________________________________
   22174392        1 19.1.0                      first k row optimization for window function rownum predicate

You can use this query on database metadata to check that you have the fix enabled (VALUE=1 means that the fix is ON). Yes, Oracle Database is not open source, but a lot of information is disclosed: you can query, with SQL, all optimizer enhancements and bug fixes, and the versions in which they appeared. And you can even enable or disable them at query level:


select /*+ opt_param('_fix_control' '22174392:OFF') */
continentexp,countriesandterritories,cases from covid where cases>0
order by daterep desc, cases desc fetch first 5 rows only

This simulates previous versions where the fix was not there.
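The same fix control can also be toggled for the whole session instead of per statement, which is convenient while comparing plans. A sketch only: _fix_control is an underscore (hidden) parameter, so treat this as test-environment usage, not production advice:

```sql
-- disable the 19c fix for this session only, to compare plans
alter session set "_fix_control" = '22174392:OFF';

-- ... run the Top-N query and check the execution plan ...

-- put it back to the default (fix enabled)
alter session set "_fix_control" = '22174392:ON';
```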

Here is an example on the COVID table I’ve created in a previous post: https://blog.dbi-services.com/oracle-select-from-file/
I’m running this on the Oracle Cloud 20c preview but you can run the same on any recent version.

FETCH FIRST n ROWS

I’m looking at the Top-5 countries with the highest covid cases in the latest date I have in my database. This means ORDER BY the date and number of cases (both in descending order) and fetching only the first 5 rows.


SQL> select continentexp,countriesandterritories,cases
  2  from covid where cases>0
  3  order by daterep desc, cases desc fetch first 5 rows only
  4  /


   CONTINENTEXP     COUNTRIESANDTERRITORIES    CASES
_______________ ___________________________ ________
America         United_States_of_America       57258
Asia            India                          28701
America         Brazil                         24831
Africa          South_Africa                   12058
Europe          Russia                          6615


SQL> select * from dbms_xplan.display_cursor(format=>'allstats last')
  2  /


                                                                                                         PLAN_TABLE_OUTPUT
__________________________________________________________________________________________________________________________
SQL_ID  753q1ymf0sv0w, child number 1
-------------------------------------
select continentexp,countriesandterritories,cases from covid where
cases>0 order by daterep desc, cases desc fetch first 5 rows only

Plan hash value: 1833981741

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |       |      1 |        |      5 |00:00:00.01 |     239 |       |       |          |
|*  1 |  VIEW                    |       |      1 |      5 |      5 |00:00:00.01 |     239 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|       |      1 |  20267 |      5 |00:00:00.01 |     239 |   124K|   124K|  110K (0)|
|*  3 |    TABLE ACCESS FULL     | COVID |      1 |  20267 |  18150 |00:00:00.01 |     239 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("DATEREP") DESC ,INTERNAL_FUNCTION("CASES") DESC )<=5)
   3 - filter("CASES">0)

I have queried the execution plan because the RDBMS optimization is not a black box: you can ask for an explanation. Here, it reads the whole table (TABLE ACCESS FULL), SORT it, and FILTER the rows up to number 5.

This is not efficient at all. In a relational database, rather than streaming all changes to another data store, we add purpose-built indexes to scale with a new query use-case:


SQL> create index covid_date_cases on covid(daterep,cases)
  2  /
Index COVID_DATE_CASES created.

That’s all. This index will be automatically maintained with strong consistency, and transparently for any modifications to the table.

NOSORT STOPKEY

I’m running exactly the same query:


SQL> select continentexp,countriesandterritories,cases
  2  from covid where cases>0
  3  order by daterep desc, cases desc fetch first 5 rows only
  4  /

   CONTINENTEXP     COUNTRIESANDTERRITORIES    CASES
_______________ ___________________________ ________
America         United_States_of_America       57258
Asia            India                          28701
America         Brazil                         24831
Africa          South_Africa                   12058
Europe          Russia                          6615

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last')
  2  /

                                                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________________________________________
SQL_ID  753q1ymf0sv0w, child number 2
-------------------------------------
select continentexp,countriesandterritories,cases from covid where
cases>0 order by daterep desc, cases desc fetch first 5 rows only

Plan hash value: 1929215041

------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                  |      1 |        |      5 |00:00:00.01 |       7 |
|*  1 |  VIEW                         |                  |      1 |      5 |      5 |00:00:00.01 |       7 |
|*  2 |   WINDOW NOSORT STOPKEY       |                  |      1 |      6 |      5 |00:00:00.01 |       7 |
|   3 |    TABLE ACCESS BY INDEX ROWID| COVID            |      1 |  20267 |      5 |00:00:00.01 |       7 |
|*  4 |     INDEX FULL SCAN DESCENDING| COVID_DATE_CASES |      1 |      6 |      5 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("DATEREP") DESC,INTERNAL_FUNCTION("CASES") DESC )<=5)
   4 - filter("CASES">0)

There is a big difference here. I don’t need to read the table anymore. The index provides the entries already sorted, and a FULL SCAN DESCENDING will just read the first ones. Look at the ‘Buffers’ column here, which is the read units. In the previous test, without the index, it was 239 blocks. What was bad there is that the FULL TABLE SCAN has O(N) time complexity. Now with the index, I read only 7 blocks (3 to go down the index B*Tree and 4 to get the remaining column values). And this has constant time complexity: even with billions of rows in the table, this query will read less than 10 blocks, most of them probably from memory. This will take less than a millisecond whatever the size of the table.
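As an aside (my addition, not something from the original test): the 4 extra buffers come from the TABLE ACCESS BY INDEX ROWID step that fetches CONTINENTEXP and COUNTRIESANDTERRITORIES. In principle, a covering index that appends those columns would let the plan skip the table access entirely. A sketch, with a hypothetical index name:

```sql
-- hypothetical covering index: the projected columns are appended so the
-- query could be answered from the index alone (no table access step)
create index covid_date_cases_cov
  on covid(daterep, cases, continentexp, countriesandterritories);
```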

3 small conclusions here:

  • If you think your RDBMS doesn’t scale, be sure that you run the latest version of it. Mainstream databases see constant improvements and bug fixes.
  • You don’t need another database each time you have another use case. Creating indexes can solve many problems and you have full agility when the RDBMS can create them online, and when there’s zero code to change.
  • You don’t need to test on billions of rows. The execution plan operations tell you what scales and what doesn’t. You don’t need to tell the optimizer how to proceed: it tells you how it plans to execute and how it actually did.

The post 19c: scalable Top-N queries without further hints to the query planner appeared first on the dbi services blog.

Vagrant and Docker Builds : ORDS 20.2 and SQLcl 20.2 Updates


The recent Oracle REST Data Services (ORDS) 20.2 release prompted my usual reaction. I’ve gone through my Vagrant and Docker builds, and updated them to use ORDS 20.2 and SQLcl 20.2.

The Vagrant database builds, which include ORDS, can be found here.

The Docker ORDS builds can be found here.

There were also some small Tomcat mods.

  • Tomcat upgraded to 9.0.37.
  • HTTP/2 enabled.
  • Compression enabled.
  • Cache-Control enabled for images, CSS and Javascript.

All that went pretty well, so as soon as I got to work yesterday I rolled ORDS 20.2 out to all non-production environments, and a few “not yet production” environments. If you follow the blog you will know we use Docker for ORDS (similar to my GitHub builds). It makes rolling out a new version really simple: just throw away all the containers and replace them with the spangly new ones.

If it’s looking OK after a few days we’ll push it out to the remaining production installations.

Cheers

Tim…


Vagrant and Docker Builds : ORDS 20.2 and SQLcl 20.2 Updates was first posted on July 14, 2020 at 8:13 am.