Oakies Blog Aggregator

18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only

When you clone a PDB on the primary CDB, you want the same to be done automatically on the standby ones. Unfortunately, for this operation, the standby CDB must be opened read-only. In 12.1 even the PDB needed to be opened read-only (Ludovico Caldara reported this in his blog). As far as I know, this was fixed in 12.2, where MRP reads the files without the need to have the PDB opened. But another problem comes with online cloning, as reported by Alain Fuhrer in his blog, where the CDB needs to be opened read-only with real-time apply. This again requires the Active Data Guard option, which then becomes mandatory to use the full power of the multitenant option. This brings the cost of multitenant from x1.36 the price for this option alone to x1.61 for both options, on the primary and the standby servers. Fortunately, this has been addressed in 18c, where you don’t need Active Data Guard to have the CDB opened read-only with real-time apply, as long as you ensure that the user PDBs are always closed while apply is on.

This test was done on the Oracle Cloud DBaaS because it is very easy to provision a Data Guard configuration there. It is 18.1 because this is what gets created, and applying the 18.2 patch is not so easy. But the behaviour is the same in 18.2.

CDB in MOUNT does not require ADG

The standby is not opened:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL MOUNTED PHYSICAL STANDBY
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED MOUNTED
4 PDB1 MOUNTED

In the primary, we see that the standby destination is in real time apply, but not opened:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY MANAGED REAL TIME APPLY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

This is the information that is logged into DBA_FEATURE_USAGE_STATISTICS. Here is the query that counts the usage of real-time query, which is one of the features that require the ADG option:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
0

Here we have no ADG usage recorded because the database is not opened read-only. But if you try some PDB cloning on the primary, you will quickly encounter the problem: the PDB on the standby has no datafiles.
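
You can also cross-check what has already been recorded in DBA_FEATURE_USAGE_STATISTICS on the primary. This is only a minimal sketch, not part of the original test: the exact feature name can vary between releases (hence the LIKE pattern) and the view is refreshed only by the periodic MMON sampling:

SQL> select name,detected_usages,currently_used from dba_feature_usage_statistics where name like 'Active Data Guard%';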

CDB in READ ONLY and PDB closed does not require ADG in 18c

I’ll now open the standby read-only:

SQL> alter database open;
Database altered.
 
SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ ONLY NO

Unfortunately, this has also opened the PDB because the saved state was OPEN on the primary database, and we cannot manage that properly on the standby (except with Grid Infrastructure, Oracle Restart, custom scripts or database triggers). I close it quickly before the ADG usage is recorded:

SQL> alter pluggable database pdb1 close;
Pluggable database altered.
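
To see why the PDB auto-opened, you can also check which saved states are recorded. This is a minimal sketch only; whether the standby shows the state saved on the primary depends on your configuration:

SQL> select con_name,instance_name,state from dba_pdb_saved_states;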

So, my standby is opened read-only with real-time apply:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL READ ONLY WITH APPLY PHYSICAL STANDBY

but none of the user PDBs are opened:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

Here is what we see from the primary about this archive destination:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

In 12.2, this would have recorded real-time query usage because the standby is opened for queries and we are in managed recovery mode. Here was the 12.2 query to detect it (you can see it in catfusrg.sql):

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
1

But the pattern has been changed in 18c to include ‘QUERY’:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
0

and this ‘QUERY’ word is added only when some user pluggable databases are not in MOUNT but are opened for queries.

I check quickly that a PDB clone on the primary:

SQL> create pluggable database PDB2 from PDB1 keystore identified by "Ach1z0#d";
Pluggable database created.
 
SQL> alter pluggable database PDB2 open;
Pluggable database altered.

has the clone created on the standby, in closed mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
5 PDB2 MOUNTED

So, finally, we have a good way to use multitenant features without the need for Active Data Guard. This is good for DBaaS where we want easy cloning and Data Guard protection for availability, but have no need to query the PDBs on the standby.

CDB in READ ONLY and PDB opened requires ADG in 18c

Now, if I open the PDB:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL READ ONLY WITH APPLY PHYSICAL STANDBY
 
SQL> alter pluggable database pdb1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ ONLY NO

From the primary, the recovery mode for this standby mentions ‘WITH QUERY’ for the managed real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY WITH QUERY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

And only in this case the ADG usage is recorded:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
1

The proper way to open without activating ADG

In the previous test, I had the PDB opened for a short time when I opened the CDB because of the saved state. If you want to avoid this, the proper way is to ensure that the apply is off when you open it so that you do not have, at the same time, a PDB opened and real time apply.

I’m back to the state where the standby is in mount and apply on:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY MANAGED REAL TIME APPLY

I stop the apply in the standby:

DGMGRL> edit database "ORCL_02" set state=apply-off;
Succeeded.

I check that the primary sees that apply is off:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY IDLE

I open the standby CDB read-only and ensure that all PDBs are closed:

SQL> alter database open;
Database altered.
 
SQL> alter pluggable database all close;
Pluggable database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The primary sees the open state:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY IDLE

I can now restart the apply:

DGMGRL> edit database "ORCL_02" set state=apply-on;
Succeeded.

Then the primary detects the managed recovery:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED

Then once the gap is resolved, we are in real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
  DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY

But there is no mention of ‘WITH QUERY’, thanks to the closed PDBs, which means that no ADG usage is recorded. In summary, 18c has added two possible values for RECOVERY_MODE: ‘MANAGED WITH QUERY’ and ‘MANAGED REAL TIME APPLY WITH QUERY’, and those, in multitenant, indicate a standby CDB opened read-only with at least one user PDB also opened read-only.

For DBaaS, you probably run with Grid Infrastructure (because Oracle Restart does not allow databases on ACFS). For single-tenant, this new feature can solve the problem of activating the ADG option by mistake, at least if we have good control of PDB states depending on the role.

 

The article 18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only first appeared on Blog dbi services.

Building the Priority 600

Following are some photos and thoughts from my unboxing and building of the Priority 600 belt-drive, gearbox-driven bicycle from Priority Bicycles. If you're building the 600, be sure to watch Dave Weiner's Priority 600 Assembly video on YouTube. It's twenty minutes well spent. 

Today was fantastic for working outside. I rose early, and set up in the backyard.

Job #1 was to fish out the seatpost and use it to hang the bike from the workstand. And Dave is spot-on in his video about needing to tighten the hex bolt on the seat clamp.

Side-cutters or scissors. Carefully (!) snip away the zip-ties. Be sure to catch the front wheel when you cut it loose. Keep metal and especially sharp edges away from the paint.

It's much easier to work with all the packing material out of the way.

I took extra care to gently (!) unwrap the tape from around the electrical connector. Then I pulled and discarded the spacers from the brake caliper and fork dropouts.

This handy tarp kept the small parts organized and ready to hand.

It's a nice touch to keep the top and bottom gaps in the kickstand clamp -- sideways in this image -- more or less the same when you tighten the bolts. 

Take note of the directional arrows indicating the direction of rotation on the disk rotor. I tightened the rotor bolts in a star pattern, and torqued each bolt to 7 newton-meters.

Don't spin the wheel yet! Because, spokes! (I almost made that mistake).

The power connector will rotate until you clamp the quick-release tight, so keep an eye on the connector and hold it where you want it -- behind the fork leg is good! -- until you clamp the wheel into place. 

My bike was one of those with the stem pointing backward, so I loosened the two clamping bolts on either side of the stem and spun the stem around to point forward. Then I attached the handlebar assembly.

Dave's video instructions got me through installing the light and front fender. I did redo a zip-tie, choosing to tie all four cables together so as to keep them from hanging in front of the light.

There is a sweet spot with fender-stay bolts where they will clamp just tight enough to hold position while being loose enough for you to slide them by hand. Finding that sweet spot helps fender alignment go faster.

Pretty certain I spent more time obsessing over fenders than on any other part of the build. Work slowly. Be patient. Don't be afraid to bend the stays a bit if you need them to pull one direction or the other.

See the zip-tie end just above and to the right of the fork crown? Near the fender? (It's a little hard to see). I chose to zip-tie the taillight's power cable to the front brake hose.

The power cable coming from the hub I wrapped around the light, and zip-tied on this side. This arrangement is not how Dave shows it in his video, but it's working ok for me so far.

I did a quick caliper alignment to eliminate the barest of "ka-ching" sounds. I almost didn't bother.

The pedals were the last part that I attached. They went on just like Dave shows in the video, and I even had a bit of grease laying around to dab on the threads like he suggests. Dave says in his video to "put as much leverage as you can into the pedal", but I feel it's safer to pay attention to how much force he's actually applying. There's a point I look for when the threads bottom out hard, and resistance spikes way up.

Is it straight? It is now! I'm so picky...Happy with the alignment, I evenly tightened the stem bolts, taking care to alternate between them as I brought them to an even clamping pressure.

The hash marks help with getting the bar centered in the stem. Center the bar, rotate it up or down to  preference, tighten all four faceplate bolts evenly, and keep the gap more or less the same top and bottom.

I like my brake levers inboard so that only my index finger touches. I made that and a few other adjustments -- like saddle tilt -- for comfort.

Eh, voila! The Priority 600. Here you see it on the old rail grade near the Anna Marsh during my 20-mile shakedown ride. It is just one of the places the bike and the tires can take you.

Once again, I heartily recommend watching Dave Weiner's Priority 600 Assembly video on the Priority Bicycles YouTube channel if you are building your own Priority 600. The bike is too expensive to not take 20 minutes for watching the build instructions.

Cursor_sharing force

Prompted by a recent ODC (OTN) question I’ve just written up an example of one case where setting the cursor_sharing parameter to force doesn’t work as you might expect. It’s a specific example of what I believe is a theme that can appear in several different circumstances: if your SQL mixes “genuine” bind variables with literals then the literals may not be substituted.

Here’s a simple data set to start with:


rem
rem     Script:         cursor_sharing_limit.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem
rem     Last tested
rem             18.1.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1
as
select
        rownum            n1,
        rownum            n2,
        lpad(rownum,10)   small_vc,
        rpad('x',100,'x') padding
from dual
connect by
        level <= 1e4 -- > comment to avoid WordPress format issue
;

I haven’t bothered to gather stats – it’s not needed in trivial cases of CTAS.

Here’s one way to execute SQL that doesn’t get literal substitution when (apparently) it should:


alter system flush shared_pool;
alter session set cursor_sharing=force;

declare
        m_ct number;
        m_n1 number := 20;
begin
        execute immediate
                'select /*+ trace this */ count(*) from t1 where n2 = 15 and n1 = :b1'
                into m_ct using m_n1;
        dbms_output.put_line(m_ct);

        execute immediate
                'select /*+ trace this too */ count(*) from t1 where n1 = 15 and n2 = 15'
                into m_ct;
        dbms_output.put_line(m_ct);
end;
/

alter session set cursor_sharing=exact;

I’ve included a “hint” that allows me to find the SQL statements in v$sql very easily – and here they are, preceded by the query I used to find them:


select  sql_id, parse_calls, executions, rows_processed, sql_text 
from    v$sql
where   sql_text like 'select%trace this%'
and     sql_text not like '%v$sql%'
;


SQL_ID        PARSE_CALLS EXECUTIONS ROWS_PROCESSED
------------- ----------- ---------- --------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
cbu4s78h5pfj5           1          1              1
select /*+ trace this too */ count(*) from t1 where n1 = :"SYS_B_0" and n2 = :"SYS_B_1"

cru67sufptx8x           1          1              1
select /*+ trace this */ count(*) from t1 where n2 = 15 and n1 = :b1

Notice how one statement – which used only literals – underwent bind variable transformation but the other – with its mix of literals and bind variables – didn’t. On the other hand, if I execute the mixed format statement from a basic SQL*Plus session then pull the plan from memory, this is what I get:


SQL> select /*+ SQL*Plus session */ count(*) from t1 where n2 = 15 and n1 = :b1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select  sql_id, parse_calls, executions, rows_processed, sql_text
  2  from    v$sql
  3  where   sql_text like 'select%Plus session%'
  4  and     sql_text not like '%v$sql%'
  5  ;

SQL_ID        PARSE_CALLS EXECUTIONS ROWS_PROCESSED
------------- ----------- ---------- --------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
gq2qy2a9yuta7           1          1              1
select /*+ SQL*Plus session */ count(*) from t1 where n2 = :"SYS_B_0" and n1 = :b1

When trying to take advantage of cursor_sharing = force, you have to think about how the SQL gets to the database engine – is it delivered directly, or is it sent through some layer of code that means the raw SQL statement is protected in some way from the substitution code.

Footnote:

I happen to have used an example that puts the bind variable into the where clause; you will see the same effect even if the bind variables are in the select list – for example if you’ve selected something like to_char(date_col, :formatting_variable).
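
Here is a minimal sketch of that variation, reusing the t1 table from above with cursor_sharing still set to force for the session (the comment text is only a marker to make the statement easy to find in v$sql, and the literal 15 is the thing to watch):

SQL> variable fmt varchar2(20)
SQL> exec :fmt := 'fm99990'
SQL> select /*+ select list bind */ to_char(n1, :fmt) from t1 where n2 = 15;
SQL> select sql_id, sql_text from v$sql where sql_text like 'select%select list bind%' and sql_text not like '%v$sql%';

If the reported SQL_TEXT still shows the literal 15 rather than a :"SYS_B_n" bind, the mixed statement has escaped substitution just as in the where-clause example.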

Introduction to databases for {Power.Coders} with MySQL

    This week I took some days off to do something related to my job but a bit different. I’ve given a course on Databases and SQL. But not for my usual customers. And not with the database I know the best. So, it is still in a domain that I know, but out of my comfort zone. And this is something that we should do more often because it gives a little apprehension and a big satisfaction.

    The little apprehension is because there were a lot of unknown parameters for me. I taught students from Power.Coders, a coding academy for refugees: 18 young people with very different backgrounds. Some already knew how to code. Some did front-end stuff and website design in the past but had no idea what a server is. Others are doing all of that for the first time and have to learn what a program is. But there’s one thing that is common to everybody here: all are motivated to learn, understand, and acquire the knowledge and experience to start a career in IT. This is the good energy that makes everything possible.

    The big satisfaction is because, with everybody doing their best, things work and everyone gains confidence. It is out of the comfort zone that you get your best, and that goes for the students as well as for the teacher and the coaches. Because I wanted the course to stay consistent with what they learned in the curriculum, I did the database examples and exercises on MySQL and from PHP code. I had never done that, so I had to study as well. The big advantage I have, from experience, is that I know where to search on the internet. One of the students told me “don’t tell me to google for it, that’s like swimming in the ocean!”. When you are not used to it, ‘googling’ for solutions is not easy. Experienced people do not always consider that, and they answer in forums with a rude “RTFM” or “LMGTFY”. I’ve never felt obliged to answer on forums, but when I do, it is not to argue with the OP about his question and whether he did his own research before. If I choose to answer, then my goal is to explain as clearly as possible. And I do not fear repeating myself, because the more I explain, the better I understand what I explain. I remember my first answers on the dba-village forum. Some questions were recurrent. And each time I tried to answer with a shorter and clearer explanation.

    Google Slides

    I’ve prepared slides and exercises for 3 days and I share the content here (however, there were a lot of whiteboard explanations, so the slides may not be sufficient). I did the pages with Google Sites and the presentation with Google Slides. Both of them were, again, new things for me, out of my .ppt comfort zone. It went very well. For a correct presenter experience, I installed the “Google Slides Auto Resize Speaker Notes” Google Chrome extension. One thing annoys me with Google Slides: readers cannot copy/paste from the slides. Please comment here if you have a solution. My workaround was to copy the code to the presenter’s notes and tell the students to open them (with the ‘s’ key) and copy from there. But I don’t like to duplicate the code.

    – Day 1 on Data Structures:
    Data Modeling, YAML, XML, JSON, CSV and an introduction to relational tables.

    Exercise: load some OpenFlights data into MySQL (a loading sketch follows this list)

    – Day 2 on Introduction to Databases:
    RDBMS, create and load tables, normalization

    Exercise: a PHP page to query a simple table

    – Day 3 on Introduction to SQL:
    SELECT, INSERT, DELETE, UPDATE, ACID and transactions

    Exercise: a PHP page to list flights from a multi-criteria form
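
    Here is the loading sketch mentioned for the Day 1 exercise. The table and column list are simplified assumptions for illustration, not the exact structure used in the course, and the CSV file name is hypothetical:

    -- a reduced airports table (assumed columns, for illustration only)
    CREATE TABLE airports (
      airport_id INT PRIMARY KEY,
      name       VARCHAR(100),
      city       VARCHAR(100),
      country    VARCHAR(100)
    );

    -- load a CSV extract of the OpenFlights airports data (local_infile must be enabled)
    LOAD DATA LOCAL INFILE 'airports.csv'
    INTO TABLE airports
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (airport_id, name, city, country);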

    In addition to the course, I also did some coaching for their PHP exercises. I discovered this language (which I do not like – meaningless error messages, improbable implicit conversions,…). But at least we were able to make some concepts clearer: what a web server is, sessions, cookies, access to the database… And the method is important. How to approach code that doesn’t work, where nothing is displayed: change the connection parameters to wrong ones to see if we even reach that part of the code, add a deliberate syntax error to the SQL statement to see if errors are correctly trapped, echo some variables to see if they are set. Before learning magic IDEs, we must put in place the basics that will help everywhere. The main message is: you are never stuck with an error. There is always a possibility to trace more. And when you have all the details, you can focus your google search better.

    Thanks

    Big thanks to SQLFiddle, where it is easy to do some SQL without installing anything. However, being 20 people on a small Wi-Fi, using local resources is preferable. And we installed MAMP (see here how I discovered it and had to fix a bug at the same time). Big thanks to Chris Saxon’s ‘Database for Developers’ videos, which will help the students to review all the concepts in an entertaining way. Thanks to w3schools for the easy learning content.

    Oh, and thanks to facebook’s sponsored, privacy-intrusive algorithms! Because this is how I heard about PowerCoders. For the first time in my life, I clicked on a sponsored link on social media. This was for the WeMakeIt crowdfunding project for this powercoders curriculum in Lausanne. I read about the project, I watched the video, and that’s how I came to participate in this project. You should watch this Christian Hirsig TED talk as well. At a time when everybody is talking about autonomous self-driving cars, his accomplishment was to move from being completely powerless to being back in the driver’s seat…

    And of course thanks to the Powercoders organizers, students, teachers, coaches, mentors and the companies who propose internships to complete the curriculum (I was happy, and proud of my employer, when dbi services was on board immediately).
    Teaching motivated people who want to learn as much as possible is a great experience, and not all days are like this in professional life. And explaining topics that are outside of my comfort zone is a lot of work, but also a rewarding experience. In this world where technology goes faster and faster, showing the approach and the method to adapt to new topics gives a lot of self-confidence.

     

    The article Introduction to databases for {Power.Coders} with MySQL first appeared on Blog dbi services.

OSWatcher, Tracefile Analyzer, and Oracle RAC 12.2

When I started the series about Tracefile Analyzer (TFA) I promised three parts. One for single instance, another one for Oracle Restart and this one is going to be about Real Application Clusters. The previous two parts are published already, this is the final piece.

The environment

I am using a virtualised 2-node cluster named rac122pri with nodes rac122pri{1,2}, based on Oracle Linux 7.4. RAC is patched to 12.2.0.1.180116. I installed a Grid Home and an RDBMS home (Enterprise Edition).

Real Application Clusters

Before starting this discussion it’s worth pointing out that TFA integration in RAC 12.2 works really well. TFA is installed as part of the initial setup of the binaries and documented in the Autonomous Health Framework.

As soon as you execute root.sh as part of your RAC 12.2 Grid Infrastructure installation, TFA is set up as the first step out of 19. Unfortunately I don’t have the screen output available, but all the action is logged in rootcrs_$(hostname)_timestamp.log anyway. You can find that logfile in $ORACLE_BASE/crsdata/$(hostname)/crsconfig. If memory serves me right this is a 12.2 change; I think the file was in $GRID_HOME/cfgtoollogs in 12.1, but I’m not sure (and don’t have a system to check against).

Here is the relevant output from the log file:

2018-05-11 12:16:02: Command output:
>  CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'. 
>End Command output
2018-05-11 12:16:02: CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2018-05-11 12:16:02: Executing cmd: /u01/app/12.2.0.1/grid/bin/clsecho -p has -f clsrsc -m 4001
2018-05-11 12:16:02: Command output:
>  CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector. 
>End Command output
2018-05-11 12:16:02: CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018-05-11 12:16:02: Executing cmd: /u01/app/12.2.0.1/grid/crs/install/tfa_setup -silent -crshome /u01/app/12.2.0.1/grid
2018-05-11 12:16:24: Command output:
>  TFA Installation Log will be written to File : /tmp/tfa_install_8879_2018_05_11-12_16_02.log
>  
>  Starting TFA installation
>  
>  Using JAVA_HOME : /u01/app/12.2.0.1/grid/jdk/jre
>  
>  Running Auto Setup for TFA as user root...
>  
>  Installing TFA now...
>  
>  
>  TFA Will be Installed on rac122pri1...
>  
>  TFA will scan the following Directories
>  ++++++++++++++++++++++++++++++++++++++++++++
>  
>  .---------------------------------------------------------.
>  |                        rac122pri1                       |
>  +----------------------------------------------+----------+
>  | Trace Directory                              | Resource |
>  +----------------------------------------------+----------+
>  | /u01/app/12.2.0.1/grid/OPatch/crs/log        | CRS      |

... [a lot more directories]

>  | /u01/app/oracle/crsdata/rac122pri1/trace     | CRS      |
>  '----------------------------------------------+----------'
>  
>  
>  Installing TFA on rac122pri1:
>  HOST: rac122pri1     TFA_HOME: /u01/app/12.2.0.1/grid/tfa/rac122pri1/tfa_home
>  
>  .------------------------------------------------------------------------------.
>  | Host       | Status of TFA | PID  | Port | Version    | Build ID             |
>  +------------+---------------+------+------+------------+----------------------+
>  | rac122pri1 | RUNNING       | 9165 | 5000 | 12.2.1.0.0 | 12210020161122170355 |
>  '------------+---------------+------+------+------------+----------------------'
>  
>  Running Inventory in All Nodes...
>  
>  Enabling Access for Non-root Users on rac122pri1...
>  
>  Adding default users to TFA Access list...
>  
>  Summary of TFA Installation:
>  .----------------------------------------------------------------------.
>  |                              rac122pri1                              |
>  +---------------------+------------------------------------------------+
>  | Parameter           | Value                                          |
>  +---------------------+------------------------------------------------+
>  | Install location    | /u01/app/12.2.0.1/grid/tfa/rac122pri1/tfa_home |
>  | Repository location | /u01/app/oracle/tfa/repository                 |
>  | Repository usage    | 0 MB out of 10240 MB                           |
>  '---------------------+------------------------------------------------'
>  
>  
>  Installing oratop extension..
>  
>  
>  TFA is successfully installed...

...

>End Command output
2018-05-11 12:16:24: Executing cmd: /u01/app/12.2.0.1/grid/bin/clsecho -p has -f clsrsc -m 4002
2018-05-11 12:16:24: Command output:
>  CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector. 
>End Command output

As with the other database deployment types, TFA integrates into systemd on Oracle Linux 7:

[root@rac122pri1 ~]# systemctl status oracle-tfa
● oracle-tfa.service - Oracle Trace File Analyzer
   Loaded: loaded (/etc/systemd/system/oracle-tfa.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-06-20 23:03:11 BST; 15h ago
 Main PID: 3092 (init.tfa)
   CGroup: /system.slice/oracle-tfa.service
           ├─ 3092 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 

The service is enabled and starts together with the operating system.

RAC-TFA lives in the Grid Home

If you read the previous parts of this series you might have noticed in the previous output that unlike single instance and Oracle Restart environments, TFA is resident in the Grid Home as opposed to ORACLE_BASE. This could have implications if you are using a system implementing separation of duties. For everyone else it’s a question of making sure the environment is set correctly.

[oracle@rac122pri1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@rac122pri1 ~]$ which tfactl
/u01/app/12.2.0.1/grid/bin/tfactl

Another difference between single node and RAC installations is the status message. TFA reports its status on all nodes in the cluster, as shown here:

[oracle@rac122pri1 ~]$ tfactl print status

.-------------------------------------------------------------------------------------------------.
| Host       | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+------------+---------------+------+------+------------+----------------------+------------------+
| rac122pri1 | RUNNING       | 3698 | 5000 | 18.2.1.0.0 | 18210020180529111033 | COMPLETE         |
| rac122pri2 | RUNNING       | 3697 | 5000 | 18.2.1.0.0 | 18210020180529111033 | COMPLETE         |
'------------+---------------+------+------+------------+----------------------+------------------'
[oracle@rac122pri1 ~]$

This isn’t the stock version by the way: I have recently upgraded my deployment to 18.2.1.0. More about that later. When I recently installed RAC 12.2 and invoked TFA, I was greeted with the familiar “WARNING – your software is older than 180 days. Please consider upgrading TFA to the latest version”.

You may want to consider upgrading the installation

As with the other deployments, the version you get with the base installation is most likely outdated. My base release showed this:

.-------------------------------------------------------------------------------------------------.
| Host       | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+------------+---------------+------+------+------------+----------------------+------------------+
| rac122pri1 | RUNNING       | 3698 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
| rac122pri2 | RUNNING       | 3697 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'------------+---------------+------+------+------------+----------------------+------------------'

But not only that: if you check for toolstatus you will notice that most of the support tools are missing just like with the other deployments. I love to have them and as such upgraded to the current version at the time of writing (18.2.1.0.0). My Oracle Support DOC ID 1513912.1 is the reference, as always.

After the upgrade completed, I could see the following tools:

[oracle@rac122pri1 ~]$ tfactl toolstatus

.------------------------------------------------------------------.
|                 TOOLS STATUS - HOST : rac122pri1                 |
+----------------------+--------------+--------------+-------------+
| Tool Type            | Tool         | Version      | Status      |
+----------------------+--------------+--------------+-------------+
| Development Tools    | orachk       |   12.2.0.1.3 | DEPLOYED    |
|                      | oratop       |       14.1.2 | DEPLOYED    |
+----------------------+--------------+--------------+-------------+
| Support Tools Bundle | darda        | 2.10.0.R6036 | DEPLOYED    |
|                      | oswbb        |        8.0.1 | RUNNING     |
|                      | prw          | 12.1.13.11.4 | NOT RUNNING |
+----------------------+--------------+--------------+-------------+
| TFA Utilities        | alertsummary |   12.2.1.1.0 | DEPLOYED    |
|                      | calog        |   12.2.0.1.0 | DEPLOYED    |
|                      | changes      |   12.2.1.1.0 | DEPLOYED    |
|                      | dbglevel     |   12.2.1.1.0 | DEPLOYED    |
|                      | events       |   12.2.1.1.0 | DEPLOYED    |
|                      | grep         |   12.2.1.1.0 | DEPLOYED    |
|                      | history      |   12.2.1.1.0 | DEPLOYED    |
|                      | ls           |   12.2.1.1.0 | DEPLOYED    |
|                      | managelogs   |   12.2.1.1.0 | DEPLOYED    |
|                      | menu         |   12.2.1.1.0 | DEPLOYED    |
|                      | param        |   12.2.1.1.0 | DEPLOYED    |
|                      | ps           |   12.2.1.1.0 | DEPLOYED    |
|                      | pstack       |   12.2.1.1.0 | DEPLOYED    |
|                      | search       |   18.2.0.0.0 | DEPLOYED    |
|                      | summary      |   12.2.1.1.0 | DEPLOYED    |
|                      | tail         |   12.2.1.1.0 | DEPLOYED    |
|                      | triage       |   12.2.1.1.0 | DEPLOYED    |
|                      | vi           |   12.2.1.1.0 | DEPLOYED    |
'----------------------+--------------+--------------+-------------'

Note :-
  DEPLOYED    : Installed and Available - To be configured or run interactively.
  NOT RUNNING : Configured and Available - Currently turned off interactively.
  RUNNING     : Configured and Available.

Summary

Whilst TFA is a great tool for single instance and Oracle Restart environments, it is indispensable for RAC. The ability to gather a lot of diagnostic data that would otherwise be difficult to get is the prime use case. Tools such as OSWatcher are very useful when it comes to diagnosing system status at the time of an instance eviction, for example.
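
As an illustration of that last point, a cluster-wide collection for a recent incident window can be requested in a single command. This is only a sketch; the exact options available depend on the TFA version you are running:

[oracle@rac122pri1 ~]$ tfactl diagcollect -since 1h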

Remote syslog from Linux and Solaris

Auditing operations with Oracle Database is very easy. The default configuration, where SYSDBA operations go to ‘audit_file_dest’ (the ‘adump’ directory) and other operations go to the database, may be sufficient to log what is done, but it is definitely not a correct security audit method as both destinations can have their audit trail deleted by the DBA. If you want to secure your environment by auditing the most privileged accounts, you need to send the audit trail to another server.

This is easy as well and here is a short demo involving Linux and Solaris as the audited environments. I’ve created these 3 compute services in the Oracle Cloud:

So, I have an Ubuntu service where I’ll run the Oracle Database (XE 11g) and the hostname is ‘ubuntu’

root@ubuntu:~# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 16.04.4 LTS"

I have a Solaris service which will also run Oracle, and the hostname is ‘d17872’

root@d17872:~# cat /etc/release
Oracle Solaris 11.3 X86
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
Assembled 03 August 2016

I have an Oracle Enterprise Linux service which will be my audit server, collecting syslog messages from remote hosts. The hostname is ‘b5e501’ and the IP address in the PaaS network is 10.29.235.150

[root@b5e501 ~]# grep PRETTY /etc/os-release
PRETTY_NAME="Oracle Linux Server 7.5"

Testing local syslog

I start by making sure that syslog works correctly on my audit server:

[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
[root@b5e501 ~]#
[root@b5e501 ~]# logger -p local1.info "hello from $HOSTNAME"
[root@b5e501 ~]# Jun 20 08:28:35 b5e501 bitnami: hello from b5e501

Remote setting

On the audit server, I uncomment the lines about receiving syslog from TCP and UDP on port 514:

[root@b5e501 ~]# grep -iE "TCP|UDP" /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Remote Logging (we use TCP for reliable delivery)

I restart the syslog service:

[root@b5e501 ~]# systemctl restart rsyslog
Jun 20 08:36:47 b5e501 systemd: Stopping System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2769" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 08:36:47 b5e501 systemd: Starting System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2786" x-info="http://www.rsyslog.com"] start
Jun 20 08:36:47 b5e501 systemd: Started System Logging Service.

I tail the /var/log/messages (which is my default destination for “*.info;mail.none;authpriv.none;cron.none”)

[root@b5e501 ~]# tail -f /var/log/messages &
[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &

I test with local1.info and check that the message is tailed even when logger sends it through the network:

[root@b5e501 ~]# logger -n localhost -P 514 -p local1.info "hello from $HOSTNAME"
Jun 20 09:18:07 localhost bitnami: hello from b5e501

That’s perfect.

Now I can test the same from my Ubuntu host to ensure that the firewall settings allow TCP and UDP on port 514:


root@ubuntu:/tmp/Disk1# logger --udp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in UDP"
root@ubuntu:/tmp/Disk1# logger --tcp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in TCP"

Here are the correct messages received:

Jun 20 09:24:46 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:24:54 ubuntu bitnami hello from ubuntu in TCP

Destination setting for the audit

As I don’t want to have all messages in /var/log/messages, I’m now setting up, on the audit server, a dedicated file for the “local1” facility and “info” level, which I’ll use for my Oracle Database audit destination:

[root@b5e501 ~]# touch "/var/log/audit.log"
[root@b5e501 ~]# echo "local1.info /var/log/audit.log" >> /etc/rsyslog.conf
[root@b5e501 ~]# systemctl restart rsyslog

After testing the same two ‘logger’ commands from the remote host I check the entries in my new file:

[root@b5e501 ~]# cat /var/log/audit.log
Jun 20 09:55:09 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:55:16 ubuntu bitnami hello from ubuntu in TCP

Remote logging

Now that I have validated that remote syslog is working, I set up automatic forwarding of syslog messages on my Ubuntu box to send everything for ‘local1.info’ to the audit server:

root@ubuntu:/tmp/Disk1# echo "local1.info @10.29.235.150:514" >> /etc/rsyslog.conf
root@ubuntu:/tmp/Disk1# systemctl restart rsyslog

This, with a single ‘@’, forwards in UDP. You can double the ‘@’ to forward using TCP:
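
For example, the TCP equivalent of the line added above differs only by the doubled ‘@’:

root@ubuntu:/tmp/Disk1# echo "local1.info @@10.29.235.150:514" >> /etc/rsyslog.conf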

Here I check with logger locally (no mention of the syslog host here):

root@ubuntu:/tmp/Disk1# logger -p local1.info "hello from $HOSTNAME with forwarding"

and I verify that the message is logged on the audit server in /var/log/audit.log:

[root@b5e501 ~]# tail -1 /var/log/audit.log
Jun 20 12:00:25 ubuntu bitnami: hello from ubuntu with forwarding

Repeated messages

Note that when testing, you may add “$(date)” to your message in order to see it immediately, because syslog holds back repeated messages to avoid flooding. This:

root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Then another one"

is logged as this:

Jun 20 12:43:12 ubuntu bitnami: message repeated 5 times: [ Always the same message]
Jun 20 12:43:29 ubuntu bitnami: Then another one

I hope that one day this idea will be implemented by Oracle when flooding messages to the alert.log ;)

Oracle Instance

The last step is to get my Oracle instance sending audit messages to the local syslog, with facility.level local1.info, so that they will be automatically forwarded to my audit server. I have to set audit_syslog_level to ‘local1.info’ and audit_trail to ‘OS’:

oracle@ubuntu:~$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 20 11:48:00 2018
 
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> alter system set audit_syslog_level='local1.info' scope=spfile;
 
System altered.
 
SQL> alter system set audit_trail='OS' scope=spfile;
 
System altered.
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size 2233344 bytes
Variable Size 616565760 bytes
Database Buffers 444596224 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.

It is very easy to check that it works, as the SYSDBA connections and the STARTUP are automatically audited. Here is what I can see on my audit server in /var/log/audit.log:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 11:55:47 ubuntu Oracle Audit[27066]: LENGTH : '155' ACTION :[7] 'STARTUP' DATABASE USER:[1] '/' PRIVILEGE :[4] 'NONE' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[13] 'Not Available' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:47 ubuntu Oracle Audit[27239]: LENGTH : '148' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:51 ubuntu Oracle Audit[27419]: LENGTH : '159' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[10] '2860420539'

On the database server, there are no new files in the adump directory since this startup:

oracle@ubuntu:~/admin/XE/adump$ /bin/ls -alrt
total 84
drwxr-x--- 6 oracle dba 4096 Jun 20 11:42 ..
-rw-r----- 1 oracle dba 699 Jun 20 11:44 xe_ora_26487_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26515_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26519_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26523_1.aud
drwxr-x--- 2 oracle dba 4096 Jun 20 11:48 .
-rw-r----- 1 oracle dba 896 Jun 20 11:48 xe_ora_26574_1.aud

Solaris

I have also started a Solaris service:

opc@d17872:~$ pfexec su -
Password: solaris_opc
su: Password for user 'root' has expired
New Password: Cl0udP01nts
Re-enter new Password: Cl0udP01nts
su: password successfully changed for root
Oracle Corporation SunOS 5.11 11.3 June 2017
You have new mail.
root@d17872:~#

Here, I add the forwarding to /etc/syslog.conf (tab is a required separator which cannot be replaced with spaces) and restart the syslog service:

root@d17872:~# echo "local1.info\t@10.29.235.150" >> /etc/syslog.conf
root@d17872:~# svcadm restart system-log

Then I log a message locally:

root@d17872:~# logger -p local1.info "hello from $HOSTNAME with forwarding"

Here is the message received on the audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 05:27:51 d17872.compute-a511644.oraclecloud.internal opc: [ID 702911 local1.info] hello from d17872 with forwarding

Here on Solaris I have the old ‘syslog’ with no syntax to change the UDP port. The default port is defined in /etc/services, and it is the one I’ve configured my audit server to listen on:

root@d17872:~# grep 514 /etc/services
shell 514/tcp cmd # no passwords used
syslog 514/udp

If you want more features, you can install syslog-ng or rsyslog on Solaris.

 

The article Remote syslog from Linux and Solaris first appeared on Blog dbi services.

The death of UTL_FILE

In a previous post I covered a technique to improve the performance of UTL_FILE, but concluded the post with a teaser: “you probably don’t need to use UTL_FILE ever again”.

Time for me to back that statement up with some concrete evidence.

UTL_FILE can read and write files. This blog post will cover the writing functionality of UTL_FILE and why I think you probably don’t need UTL_FILE for this. I’ll come back to UTL_FILE to read files in a future post.

There are two possibilities when it comes to writing a file:

  • The file is being requested by a client program and the results should be stored on a client machine. In that case, you do not use UTL_FILE anyway – you use the tools available on the client, for example, SQL Developer, SQLcl, SQL*Plus or a myriad of other tools, OR
  • the file is to be written by the database server to a location on the database server itself, or to a location that is accessible to the database server. This is where UTL_FILE has been used, the main motivations being that it can be part of existing database-resident PL/SQL code, and does not require clients to have direct SQL or OS level access to the database server.

So here is my assertion – most of the time, UTL_FILE is used to write out a file because we can’t make use of an easier tool like those mentioned above directly on the database server.  After all, who wants to write code like:


SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    s timestamp;
  5  begin
  6    f := utl_file.fopen('TMP','demo.dat','w');
  7    s := systimestamp;
  8    for i in ( select * from t )
  9    loop
 10      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 11      utl_file.put_line(f,line);
 12    end loop;
 13    dbms_output.put_line(systimestamp-s);
 14    utl_file.fclose_all;
 15  end;
 16  /

PL/SQL procedure successfully completed.

when most of the time if I was doing this in a tool it would be simply:


SQL> set markup csv on
SQL> spool c:\tmp\demo.dat
SQL> select * from t;
SQL> spool off

But in recent versions of the database, you can do exactly this! The database scheduler has been enhanced to be able to run SQL*Plus style scripts directly out of the database, without needing to give OS access to the database server or to the SQL*Plus executable. In the previous post I unloaded 40 million rows to a file in CSV format, and the responsibility for formatting the data into CSV fell to me – I had to do all of the “heavy lifting”. Using the scheduler and the SQL_SCRIPT job type, it is as simple as writing a SQL*Plus script and submitting it as a job.


SQL> declare
  2    l_script   VARCHAR2(32767) :=
  3  'conn /@db
  4  set markup csv on
  5  set arraysize 500
  6  set pages 0
  7  set lines 200
  8  set trimspool on
  9  spool c:\tmp\demo_sched.dat
 10  select * from t;
 11  spool off';
 12  begin
 13    dbms_scheduler.create_job(
 14      job_name        => 'UNLOAD_DATA',
 15      job_type        => 'SQL_SCRIPT',
 16      job_action      => l_script,
 17      credential_name => 'MY_ACCOUNT',
 18      enabled         => true
 19    );
 20  end;
 21  /

PL/SQL procedure successfully completed.

You will want to ensure that you have some controls over the usage of this feature, and what credentials the scripts will run under.  Also in my example, I’ve got a connection wallet setup so that I do not have to code any passwords into the connection string for my scheduler job. But suddenly it has become easy to get access to the scripting tools we are used to on our own client machines, and utilize them on the database server.
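
For completeness, here is a minimal sketch of how a credential such as the MY_ACCOUNT one referenced above could be created with DBMS_CREDENTIAL. The user name and password are placeholders, not values from the original example:

begin
  dbms_credential.create_credential(
    credential_name => 'MY_ACCOUNT',
    username        => 'some_os_user',   -- hypothetical OS account allowed to run the script
    password        => 'some_password'); -- placeholder only
end;
/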

“Slow down there cowboy…” I hear you exclaim. “…That might be fine for simple SELECT * scripts, but my UTL_FILE procedures have a lot of complex logic to construct the file data”.

Even if you are using UTL_FILE because you are performing some complicated algorithms to generate the data that will be written to file, you can still utilize this scheduler facility.  After your complicated logic is completed, ultimately you typically will have a line of data you need to write to a file.  And if you have a line of data, then it is trivial to port that procedure to become a pipelined function.  And once you have a pipelined function, then we have a simple query mechanism that can be used to spool the output.  For example, if my original procedure is:


SQL> create or replace
  2  procedure my_procedure is
  3  begin
  4    for each_row in ( [some set] )
  5    loop
  6    ...
  7    ... my complex logic
  8    ...
  9
 10    ...
 11    ... writing each line to file with UTL_FILE
 12    ...
 13    end loop;
 14  end;
 15  /

then we change none of the logic – we only need to replace all of that messy UTL_FILE code with a simple pipe command to allow querying that function as if it were a source of rows, and then spool it to a file in the usual way.


SQL> create or replace
  2  function my_function return sys.odcivarchar2list pipelined is
  3  begin
  4    for each_row in ( [some set] )
  5    loop
  6    ...
  7    ... my complex logic
  8    ...
  9
 10    ...
 11    ... pipe row ( the_line );
 12    ...
 13    end loop;
 14    return;
 15  end;
 16  /

SQL> spool c:\tmp\demo.dat
SQL> select * from my_function();
SQL> spool off

So next time you’re cranking out some cumbersome UTL_FILE code to write a file, take a moment to see if the scheduler can look after some of the work for you. I’ll talk about (not) using UTL_FILE to read files in a future post.

When WHEN went faster

Yeah…try saying that blog post title 10 times in a row as fast as you can :)

But since we’re talking about doing things fast, this is just a quick post about a conversation I had on twitter yesterday about the WHEN clause in a trigger.

 

That is an easy benchmark to whip up – I just need a couple of tables, each with a simple a trigger differing only by their usage of the WHEN clause.  Here is my setup:


SQL> create table t1 ( x int );

Table created.

SQL> create table t2 ( x int );

Table created.

SQL> create or replace
  2  trigger TRG1
  3  before insert on t1
  4  for each row
  5  begin
  6    if sys_context('USERENV','SID') = 0 then
  7       :new.x := 0;
  8    end if;
  9  end;
 10  /

Trigger created.

SQL> create or replace
  2  trigger TRG2
  3  before insert on t2
  4  for each row
  5  when ( sys_context('USERENV','SID') = 0 )
  6  begin
  7     :new.x := 0;
  8  end;
  9  /

Trigger created.

I’m using an always-false condition so the assignment inside each trigger will never actually run – all that is ever evaluated is the WHEN clause on one trigger and the IF test in the other.  Now I’m ready to slam a bunch of inserts into each table and measure the performance.


SQL> set timing on
SQL> insert into t1
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:02.52

SQL> insert into t2
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:00.41

There is an immediate urge here to go screaming from the rooftops that WHEN is the all-conquering hero of this benchmark. And yes, it is indeed faster, but make sure you keep the results in perspective.  We gained back about 2 seconds for 1 million inserts. For me, once I’m inserting millions of rows into a table, I’m starting to think not about trigger performance but about whether this is the kind of table I want to have triggers on at all.
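
If you want to make that perspective concrete, it is easy to extend the benchmark with a baseline table that has no trigger at all. Here is a sketch of that idea (T3 is just a hypothetical table name, and naturally your timings will differ from mine):

-- baseline: the same million-row insert into a table with no trigger at all
create table t3 ( x int );

set timing on
insert into t3
select 1 from dual
connect by level <= 1000000;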

Power BI and the Speed(ier) Desktop

I can be an extremely impatient person about anything I think should be faster.

I’m diving in deep with Power BI, and for most of the early lessons, the data sources used are Excel and…ahem…Microsoft Access.  I don’t know a DBA alive who enjoys working with Access.  It’s great for the common user to have a database application, but we can barely use the words “Access” and “database” in the same sentence.  In my heart, I will always be a performance person, and working with Power BI Desktop and Microsoft Access is enough to make you want to throw your PC out the window, especially when talking data sets of 10 million rows or more.  I felt there had to be, at least, some way to help speed up Power BI performance when using this combination.

Now in a real-life scenario, the first recommendation would be to filter the data set down so that it wouldn’t put so much pressure on the desktop and result in poor performance.  I was offered some great links that covered that, along with other best practices, and I’ll link them here, as the Power BI community offered up some AWESOME responses to my quest for answers:

Melissa Coates has a great Check List for Finalizing a Power BI Data Model post

Meagan Longoria sent me the Power BI Performance Best Practices link from Microsoft, which is filled with great information that everyone who works with Power BI should know.

Ginger Grant tweeted out the Marco Russo webinar,  My Power BI Report is Slow.

As for me, I had the added challenge of working with the edX lessons, where not much of the above is an option, as successful completion of the labs relies on entering the correct counts for the results after doing the lab work with the required data sets.  If you filtered the data or optimized the data model, the counts would be off and you WOULD FAIL.

What’s a girl to do to get through this without pulling her hair out and feeling that the quality of the experience was impacted?  I can’t be the only one who felt this way, and I know how users react when these types of situations happen.  I’m a DBA, and in the database world, no matter who the performance culprit is, the database is guilty until proven innocent.  In this new world, it is Power BI’s credibility that takes the hit with new users who are just starting to learn about this powerful tool, when the goal is to empower the user.

I searched Google for some best practices, but most of them centered on how to model the data more effectively rather than on working with the products themselves.  It demonstrates why performance specialists from all areas are essential to creating solutions, and why NOTHING should be off the table.

OK, so if I’m working from scratch, this is the time to test out my own ideas, and if I fall flat on my face, so be it.

Juicing up UTL_FILE

Think about your rubbish bin for a second. Because, clearly, this is going to be an oh so obvious metaphor leading into UTL_FILE, right?  OK, maybe a little explanation is needed. I have a basket next to my desk into which I throw any waste paper. It is where I throw my stupid ideas and broken dreams.


Melancholy aside, once it is full I’ll take it out and empty it into the main recycling bin that is collected once a fortnight from my street front.


Metaphor make sense now? Still No? OK, let me bring it around to the database. When you make changes to data in the database, most of the time these changes are only made to data blocks in memory. Obviously we have to keep a permanent record of the changes in the redo log to ensure that database recovery is possible, but the changed data blocks themselves are only dumped out to disk on an intermittent basis. That way, the process of changing data in the database is nice and efficient.

That is similar to what I do with the waste paper basket. I don’t take each piece of paper out to the street-front recycling bin; I wait until the waste paper basket is full and then just make the one trip. It is more efficient. (This is hopefully your light bulb moment on my metaphor skills.)

So back to UTL_FILE. You can take the same approach when you need to unload some data using UTL_FILE. There is an inherent overhead with every call you make to write out a line of output to a flat file using UTL_FILE. Hence, if you can minimize the number of calls you make, you’ll see some benefits.

Here’s an example of that in action. I’m building a simple CSV file based on a table called T which contains approximately 40 million rows.  Here is my first cut at the solution which I’d wager most people have written in their IT lives. We simply loop through each record and write it out to the file:


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    s timestamp;
  5  begin
  6    f := utl_file.fopen('TMP','demo.dat','w');
  7    s := systimestamp;
  8    for i in ( select * from t )
  9    loop
 10      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 11      utl_file.put_line(f,line);
 12    end loop;
 13    dbms_output.put_line(systimestamp-s);
 14    utl_file.fclose_all;
 15  end;
 16  /
+000000000 00:04:07.368000000

PL/SQL procedure successfully completed.

I’d contend that four minutes for 40 million records is pretty good, but maybe we want to go faster.  Based on my metaphor, it might not be immediately apparent how I could make fewer UTL_FILE calls – surely I need to call it once per line of data written? But don’t forget that a “line” in a file is merely the data terminated with an end-of-line pattern specific to your platform. It could be ASCII 10, or an ASCII 13,10 pair, but either way, it is really just an extension of the data you need to write to the file. A “line” is merely our interpretation of that data, where that pattern means move to the next line.

So rather than call UTL_FILE for each line, I’ll build up multiple lines and then write them out with a single UTL_FILE.PUT_LINE call. I’m choosing a cap of around 300 lines to ensure that my concatenation does not exceed the maximum size of a VARCHAR2, but as a result I should cut down on the number of UTL_FILE calls by a factor of 300.


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    big_line varchar2(32767);
  5    s timestamp;
  6    cnt pls_integer := 0;
  7  begin
  8    f := utl_file.fopen('TMP','demo2.dat','w',32767);
  9    s := systimestamp;
 10    for i in ( select * from t )
 11    loop
 12      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 13      cnt := cnt + 1;
 14      if cnt < 300 then
 15         big_line := big_line || line || chr(10);
 16      else
 17         utl_file.put_line(f,big_line||line);
 18         cnt := 0;
 19         big_line := null;
 20      end if;
 21    end loop;
 22    utl_file.put_line(f,big_line);
 23    dbms_output.put_line(systimestamp-s);
 24    utl_file.fclose_all;
 25  end;
 26  /
+000000000 00:02:23.297000000

PL/SQL procedure successfully completed.

That’s pretty cool. We reclaimed around 100 seconds of elapsed time just by reducing the number of UTL_FILE.PUT_LINE calls. In this particular instance that’s about 40%, but obviously your mileage may vary based on a myriad of factors – so make sure you do your own benchmarking on your own systems.
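
One small caution on the approach above: because the flush is driven purely by a line count, a run of unusually long lines could in theory push the concatenation past the VARCHAR2 limit. A simple variation is to flush based on the accumulated length instead. The sketch below is my own tweak, not part of the original benchmark (demo3.dat is an illustrative file name):

declare
  f        utl_file.file_type;
  line     varchar2(255);
  big_line varchar2(32767);
begin
  f := utl_file.fopen('TMP','demo3.dat','w',32767);
  for i in ( select * from t )
  loop
    line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
    -- flush when adding this line would take the buffer near the 32767 byte cap
    if nvl(lengthb(big_line),0) + lengthb(line) + 1 > 32000 then
      utl_file.put_line(f, rtrim(big_line, chr(10)));
      big_line := line || chr(10);
    else
      big_line := big_line || line || chr(10);
    end if;
  end loop;
  if big_line is not null then
    utl_file.put_line(f, rtrim(big_line, chr(10)));
  end if;
  utl_file.fclose_all;
end;
/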

As you can see, with just a couple of lines of extra code, we can really make UTL_FILE sing from a performance perspective. Having said that, in an upcoming blog post I’ll make a bold assertion – that you probably don’t need to use UTL_FILE ever again!  Stay tuned for that one.