Oakies Blog Aggregator

Scripts from Today’s Family Coding and Hands on Lab Fix

RMOUG’s Quarterly Education Workshop was a great event today, and I appreciate everyone who came out and spent time with their family and their user community doing fun, geeky stuff.


Raspberry Pi Family Coding Morning

There were a few things I promised folks, including access to my family coding slides so anyone can do this at home. Even though I’ve included the GPIO-with-breadboard configuration and code in the slides, please, PLEASE consider getting the Pibrella board and coding with it instead. The chance of wiring the board wrong is high for those just starting out (even I ran into this after extensive time with the product), so for the $15-20, it’s worth the investment… :)

The slides for this hands-on lab can be found here. The art project pieces to do this most effectively with the motors:

From Michaels or another hobby store:

  • Sponge foam “marshmallows”. You can’t miss them. They look just like marshmallows, but are perfect for attaching your creations to the motors.
  • Motors. You can buy these individually at Hobby Lobby or other toy and hobby stores. I bought a large parcel of them online.
  • Sponge stickers, 3-D stickers, construction paper, plastic flyers (the kind you spin between your hands and then let take off) and anything else light that you can add to your art projects.

One thing we found useful for this event: I had two images already built, with all the libraries and downloads performed and installs configured. This saved a few folks; we just pushed my SD card (for the first Pi) or micro SD card (for the latest version) into the slot, and the attendee could go through the full setup of their own Raspberry Pi once they got home.

I also lent out my Pibrella board, monitor, mice, keyboard and jump drives. In the future, I’m thinking HDMI cables and maybe HDMI-to-VGA adapters might be a good bonus, along with extra Raspberry Pi power cables. Many of these online stores sold the unit without the power cable, and one sent an attendee home with the wrong size memory card!


After you write out all your code, one of the challenges is hooking up the motor to the Pibrella board. I had planned on using my own setup as the physical example, but since I’d lent out all my hardware, I didn’t have that example anymore. I’ve added the following picture to help give a visual representation of the project.

[Photo: motor leads wired to the Pibrella board]

Ensure that you note the positive and negative on the Pibrella board, plugging the positive (red) and the negative (black) wires into the correct holes in the board. Note that your board may be active on the bottom left (D on the Pibrella board, next to the red button) or the upper right (E). The active ports will show when you run your code: that line of ports will light up with a small, white light. As stated in the slides, MAKE SURE NOT to touch the wire tips to the Pibrella board circuits or the Raspberry Pi metal areas while the unit is powered up! This will cause a short, and you’ll know, as the unit will restart.
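If you want a feel for the kind of code involved before you open the slides, here’s a minimal sketch (my illustrative version, not the exact lab script), using the Pibrella Python library and assuming the motor is wired to output E:

import time
import pibrella  # Pimoroni's Pibrella library (pip install pibrella)

# Spin the motor wired to output E for five seconds, then stop.
pibrella.output.e.on()
time.sleep(5)
pibrella.output.e.off()

Change the sleep value to control how long your art project spins.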

AWR Warehouse Hands on Lab

Now for the AWR Warehouse hands-on lab. A couple of people had no issues at all, but a number of folks found that the copies of the OVA file made to “lesser quality” jump drives had problems.

The issue turned out to be a corrupted listener configuration that gave us quite the challenge. To fix it:

1.  Go to $ORACLE_HOME/network/admin and open up the LISTENER.ora file.

2.  You’ll see that there are either one or two databases registered in the file.

  1. Change the AWRW1 entry (the one we noticed was introduced in the “corrupt” copy) to just AWRW for both entries in the file, as in the sketch below.
  2. Save and exit.  Restart the LISTENER.
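For reference, the shape of the entry being changed is something like this (a sketch only; the ORACLE_HOME path here is illustrative and the exact layout in your file may differ):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = AWRW)   # was AWRW1 in the corrupt copy
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = AWRW)        # was AWRW1 in the corrupt copy
    )
  )

Then bounce the listener:

$ lsnrctl stop
$ lsnrctl start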

You should now be able to log in using the following. (Again, it’s still acting a bit odd, but it’s a 40G image and we’re not going to ask people to download that across the network! :))

>sqlplus sys@AWRW as sysdba

Enter the password and you can now log in and start the database.

Go ahead and switch to the HOL user:

SQL> connect awrw_hol@AWRW

And you can now work through the lab without an issue. It’s not a perfect fix, but this will get those who couldn’t run through the lab today back in business!

The scripts for the lab can be found here and you can print a new copy of the Hands on Lab if you’d like, too.

Thanks for coming out again to the event and I hope everyone enjoyed the afternoon at Elitches after a full morning of technical content!




OTN Tour of Latin America 2015 : GUOB, Brazil – Day -1

As I mentioned in the previous post, I got to bed at the hotel in São Paulo at about 03:00. I woke up at about 07:00 and did some work for a couple of hours before hitting breakfast with the wife. At breakfast we were joined by Francisco and his son. :)

After breakfast, Debra and I went on a sightseeing trip. I was in São Paulo about 2 years ago, but I saw nothing of the city as it was such a short visit. I did have a few photos from the event in 2013, as well as a couple that people sent to me (here).

Trying to do São Paulo in 4 hours pretty much means sitting in the car a lot. :) Even during the day the traffic is heavy. Added to that, the temperature was in the low 30s. With the lack of sleep and the heat combined, we were struggling. On top of that, our driver started to feel ill. I did get some photos, which you can see here. Once again, the wide angle is on, so don’t assume everything in São Paulo is bowed. :)

Despite our struggles, it was really nice to finally see something of the city!

When we got back from sightseeing, we popped across the road to get some food and bumped into Francisco, his son and Alex. :)

Then it was back to the hotel, with a plan to get some sleep. Instead, I started to catch up on blog posts, as well as spending £54 (or $84 USD) on washing. :( It’s a sad day when you have to pay someone to do your washing! At least this way I should have enough clothes to complete the tour. :)

I’ll pretty soon be off to bed, ready for the GUOB conference tomorrow. Happy days!

Cheers

Tim…

PS. I’m going to use the English spelling (Brazil), rather than the Portuguese version, as it’s the only way I have a chance of staying consistent. No offence meant by spelling the name of your country “wrong”. :)



CBO catchup

It’s interesting to watch the CBO evolving and see how an enhancement in one piece of code doesn’t necessarily echo through to all the other places it seems to fit. Here’s an example of an enhancement that spoiled (or, rather, made slightly more complicated) a little demonstration I had been running for about the last 15 years – but (in a fashion akin to another partitioning limitation) it doesn’t always work in exactly the way you might expect.

I wrote a note some time ago about the way that the optimizer could pre-compute the result of what I called a “fixed subquery” (such as “select 9100 from dual”) and take advantage of the value it derived to do a better job of estimating the cardinality for a query. That’s a neat feature (although it may cause some 3rd party applications a lot of pain as plans change on the upgrade to 11.2.0.4 or 12c) but it doesn’t work everywhere you might hope.

I’m going to create two (small) tables with the same data, but one of them is going to be a simple heap table and the other is going to be partitioned by range; then I’m going to run the same queries against the pair of them and show you the differences in execution plans. First the tables:


create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        lpad(rownum,10,'0')     v1,
        rpad('x',100)           padding
from
        generator       v1
where
        rownum <= 1e4
;

create table pt1(
        id, v1, padding
)
partition by range (id) (
        partition p02000 values less than ( 2001),
        partition p04000 values less than ( 4001),
        partition p06000 values less than ( 6001),
        partition p08000 values less than ( 8001),
        partition p10000 values less than (10001)
)
as
select * from t1
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PT1',
                granularity      =>'ALL',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table  t1 add constraint  t1_pk primary key(id);
alter table pt1 add constraint pt1_pk primary key(id) using index local;

create or replace function f(i_in  number)
return number
is
begin
        return i_in;
end;
/

Note that I’ve used ‘ALL’ as my granularity option – for such small tables this should mean that the statistics at the partition and global level are as accurate as they can be. And since the data is defined to be uniform I don’t expect the partitioning to introduce any peculiarities in the optimizer’s calculations of selectivity and cardinality. I’ve created the indexes after gathering stats on the tables – this is 12c (and 11.2.0.4) so the index stats will be collected with a 100% sample as the indexes are created. Finally I’ve created a function that simply returns its numeric input.

Now let’s run a couple of queries against the simple table and check the cardinality (Rows) predicted by the optimizer – the two plans follow the code that generated them:

set serveroutput off

select  max(v1)
from    t1
where   id between (select 500 from dual)
           and     (select 599 from dual)
;

select * from table(dbms_xplan.display_cursor);

select  max(v1)
from    t1
where   id between (select f(500) from dual)
           and     (select f(599) from dual)
;

select * from table(dbms_xplan.display_cursor);

======================
Actual Execution Plans
======================

select max(v1) from t1 where id between (select 500 from dual)
  and     (select 599 from dual)

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   101 |  1515 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_PK |   101 |       |     2   (0)| 00:00:01 |
|   4 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
|   5 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">= AND "ID"<=)

select max(v1) from t1 where id between (select f(500) from dual)
     and     (select f(599) from dual)

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    25 |   375 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_PK |    45 |       |     2   (0)| 00:00:01 |
|   4 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
|   5 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">= AND "ID"<=)

In the first plan the optimizer has recognised the values 500 and 599, so its range-based calculation has produced a (matching, and nearly correct) prediction of 101 rows. In the second plan the function call has hidden the values so the optimizer has had to use the arithmetic for “ranges with unknown values” – which means it uses guesses for the selectivity of 0.45% for the index and 0.25% for the table. Maybe in a future release that f(500) will be evaluated in the same way that we can trigger in-list calculation with the precompute_subquery hint.
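(For anyone who wants to check the arithmetic, this is my working, not part of the original demonstration. With 10,000 rows, 10,000 distinct values and data running from 1 to 10,000, the standard formula for a closed range with known limits gives:

        selectivity = (599 - 500)/(10000 - 1) + 2/10000 = 0.009901 + 0.000200 = 0.010101
        cardinality = 10000 * 0.010101 = 101 rows

while the fixed guesses for unknown values give 10000 * 0.45/100 = 45 rows for the index and 10000 * 0.25/100 = 25 rows for the table.)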

Now we repeat the query, but using the partitioned table – showing only the trimmed output from dbms_xplan.display_cursor():

select max(v1) from pt1 where id between (select 500 from dual)
   and     (select 599 from dual)

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |        |       |       |     4 (100)|          |       |       |
|   1 |  SORT AGGREGATE                             |        |     1 |    15 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |        |   101 |  1515 |     4   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PT1    |   101 |  1515 |     4   (0)| 00:00:01 |   KEY |   KEY |
|*  4 |     INDEX RANGE SCAN                        | PT1_PK |   101 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
|   5 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
|   6 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ID">= AND "ID"<=)

select max(v1) from pt1 where id between (select f(500) from dual)
      and     (select f(599) from dual)

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |        |       |       |     3 (100)|          |       |       |
|   1 |  SORT AGGREGATE                             |        |     1 |    15 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |        |    25 |   375 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PT1    |    25 |   375 |     3   (0)| 00:00:01 |   KEY |   KEY |
|*  4 |     INDEX RANGE SCAN                        | PT1_PK |    45 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
|   5 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
|   6 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ID">= AND "ID"<=)

It’s great to see that the predicted cardinalities match the simple heap version exactly – but can you see anything odd about either of these plans?

Pause for thought …
There’s nothing odd about the second plan, but there’s a little puzzle in the first.

In theory, it seems, the optimizer is aware that the first query covers the range 500 – 599; so why do the pstart/pstop columns for operations 2-4 show KEY-KEY? That combination usually means the optimizer knows it will have some partition key values at run-time and will be able to do run-time partition elimination, but doesn’t know what those key values are at parse time.

In this very simple case it’s (probably) not going to make any difference to the performance, but it may be worth some careful experimentation in more complex cases where you might have been hoping to get strict identification of partitions and partition-wise joins taking place. Yet another topic to put on the to-do list of “pre-emptive investigations”, with a reminder to re-run the tests from time to time.

OTN Tour of Latin America 2015 : AROUG, Argentina – Day 2

I mentioned in the previous post that we were spending the day sightseeing today. Last time I was in Buenos Aires I did a bus tour. You can see some of the photos here.

This time Debra, Mike, Cindy and I went on a private tour in a minibus. I took the GoPro with me, so I’ve got a bunch of super wide angle photos (here). I’m sure anyone who does photography will laugh at them, but they look fine to me, so long as you look at them on a large screen. :)

It was a really nice day. We got back early, so Mike could go back to the conference to do his final session. Debra, Cindy and I spent the rest of the day chilling in the executive lounge at the Hilton, taking advantage of the free internet. It’s good to be with people who have good honors (not honours) status! :)

We were there for a few hours, but at about 20:00 we headed off for the airport. Debra also gets access to the lounge at the airport, so we spent 25 minutes there before heading for the plane.

I’ve already posted all my thank you messages, but once again, thanks Argentina! I hope to see you soon!

On the flight I watched Project Almanac, which was kind-of predictable, but good. After a little over two hours we were in São Paulo. Last time I came to São Paulo the taxi ride to the hotel was very slow, as the traffic was a nightmare. Travelling at 02:00 meant the roads were quiet and we breezed through to the hotel. I was in bed by 03:00. :)

Cheers

Tim…



OTN Tour of Latin America 2015 : AROUG, Argentina – Day 1

I woke up far too early and spent a few hours on the computer.

A number of folks on the tour have built up so many Hilton Honours points they get access to the executive lounge, so I was signed in as a scummy guest and I got breakfast for free. :)

At about 09:00 we walked across to the AROUG event, which ran in the following order.

  • The day started with an introduction to OTN and the ACE program by Pablo Ciccarello.
  • I was up next with my “Pluggable Databases : What they will break and why you should use them anyway!” talk. I had a full room and the response to it was really good. I got some good questions at the end and amazingly, I managed to finish on time. :)
  • Then came Debra Lilley with “PaaS4SaaS”.
  • Next up was “Controlling Execution Plans (without touching the code)” by Kerry Osborne. This was a double session, split by lunch.
  • After that was Mike Dietrich with his “How Oracle Single/Multitenant will change a DBA’s life” talk.
  • I closed the day with “It’s raining data! Oracle Databases in the Cloud”. Once again, a busy room and some good interaction with the audience. I’m not sure how many people feel ready for a move to the cloud, but the level of interest is definitely high.

I got really tired in the middle of the day. At one point I sat in a big chair and started to nod off, so I went out and got a giant cup of coffee, which perked me up. :)

There were a couple of other speakers I was interested in seeing, but they weren’t presenting in English, so no luck there. :)

After the event we walked back to the hotel and I was coerced into drinking alcohol. Well actually, I just fancied it. :) Due to the fact I rarely drink, I felt very drunk very quickly. My alcohol tolerance is truly pathetic. :)

From the bar we wandered over to meet the other speakers and organisers at a barbecue restaurant. You might think that wasn’t the best place for a vegetarian to be, but you would be wrong. There was loads of good stuff I could eat, plus excellent company! :)

After far too much food, we headed back through the rain to the hotel and then to bed.

This really concluded the AROUG event for me. All my sessions were on day 1, so I get day 2 to do a bit of sightseeing. :)

I’d like to send out a big thank you to everyone who came to the event, including the attendees and the speakers. Also, big thanks to the organisers at AROUG and the ACE Program, who made this possible for me.

Cheers

Tim…



OTN Tour of Latin America 2015 : UYOUG, Uruguay – Day 2

I woke up early and started working on the computer. I wrote a couple of blog posts and logged in to work to check some stuff out. I also re-recorded the vocal for the ALL, ANY and SOME video. The mic I was using wasn’t too good, but I was keen to get it done, rather than wait until I got back to the UK. Overall, it was a pretty busy morning.

I then headed out for my last session of the conference. It was a panel session with Ronald, Mike, Nelson and myself. Putting Ronald and me on a panel together pretty much guaranteed nobody else would get to say anything. I thought the panel went really well. Having someone from a MySQL background on the panel added a different dynamic. Most of the discussion was focussed on why you would pick MySQL over Oracle or vice versa. There was a surprising amount of agreement and some good banter. :) The reaction from the crowd was very positive. :)

That marked the end of the UYOUG conference for me. I hope all the attendees enjoyed the event. I know I did. Thanks very much to everyone for organising it and thanks to the Oracle ACE Program for getting me here. Big thanks to Edelweiss and Nelson for looking after us! :)

From there we got some lunch and I went back to the hotel to do some more work.

At about 17:30 we headed off to catch the ferry to Buenos Aires. I was meant to get a little video footage, but I forgot. :) The crossing took a little over 2 hours, then it was a walk across to the hotel. I did remember to get some footage of that! :)

I got to bed a little after midnight, which wasn’t too bad. Tomorrow is the first day of the AROUG conference in Argentina!

Cheers

Tim…



Index compression–working out the compression number

Richard Foote did a series of informative posts on Index Compression, which concludes that there is a whole lot of positives about index compression and very little in the way of negatives. But obviously one critical thing is choosing the right number of leading columns to compress. Is it just “take a guess”? Luckily, Oracle has taken the guesswork out of it.

The ANALYZE command on an index lets you find the optimal compression count. When you run ANALYZE INDEX … VALIDATE STRUCTURE, two critical columns in INDEX_STATS are populated:

SQL> desc index_stats
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 HEIGHT                                 NUMBER
 BLOCKS                                 NUMBER
 NAME                                   VARCHAR2(128)
 PARTITION_NAME                         VARCHAR2(128)
 LF_ROWS                                NUMBER
 LF_BLKS                                NUMBER
 LF_ROWS_LEN                            NUMBER
 LF_BLK_LEN                             NUMBER
 BR_ROWS                                NUMBER
 BR_BLKS                                NUMBER
 BR_ROWS_LEN                            NUMBER
 BR_BLK_LEN                             NUMBER
 DEL_LF_ROWS                            NUMBER
 DEL_LF_ROWS_LEN                        NUMBER
 DISTINCT_KEYS                          NUMBER
 MOST_REPEATED_KEY                      NUMBER
 BTREE_SPACE                            NUMBER
 USED_SPACE                             NUMBER
 PCT_USED                               NUMBER
 ROWS_PER_KEY                           NUMBER
 BLKS_GETS_PER_ACCESS                   NUMBER
 PRE_ROWS                               NUMBER
 PRE_ROWS_LEN                           NUMBER
 OPT_CMPR_COUNT                         NUMBER
 OPT_CMPR_PCTSAVE                       NUMBER

Let’s look at an example – we’ll create a table with 10 columns with all sorts of interesting distributions:

SQL> drop table T purge;

Table dropped.

SQL> create table T as
 2   select
 3     rownum r01,
 4     mod(rownum,10) r02,
 5     mod(rownum,100) r03,
 6     mod(rownum,1000) r04,
 7     trunc(rownum/10) r05,
 8     trunc(rownum/100) r06,
 9     trunc(dbms_random.value(1,100)) r07,
10     trunc(dbms_random.value(1,1000)) r08  ,
11     trunc(mod(rownum,100)/10) r09,
12     mod(trunc(rownum/10),100) r10
13  from
14    ( select 1 from dual connect by level <= 1000 ),
15    ( select 1 from dual connect by level <= 1000 )
16  /

Table created.

We’ve used a technique described here https://www.youtube.com/watch?v=UonikfFgEyM to generate 1,000,000 rows.

Now, with some SQL trickery, we can generate every possible combination of 3-column indexes for this table.

SQL> with inds as ( select rownum r from dual connect by level <= 10 )
  2  select  '(r'||i1.r||',r'||i2.r||',r'||i3.r||')' ix_cols
  3  from inds i1, inds i2, inds i3
  4  where i1.r != i2.r
  5  and   i2.r != i3.r
  6  and   i1.r != i3.r
  7  /

IX_COLS
---------------------------------------------------------------
(r1,r2,r3)
(r1,r2,r4)
(r1,r2,r5)
(r1,r2,r6)
(r1,r2,r7)
(r10,r9,r4)
...
...
(r10,r9,r5)
(r10,r9,r6)
(r10,r9,r7)
(r10,r9,r8)

So, now we’ll create a table called IND_STATS to hold the results of our ANALYZE command, because every time you issue ANALYZE INDEX, the INDEX_STATS view is cleared.

SQL> create table IND_STATS as select name, OPT_CMPR_COUNT,  OPT_CMPR_PCTSAVE from index_stats where 1=0;

Table created.

And now we’ll just loop around all 720 potential indexes and work out the optimal compression.

SQL> begin
  2  for i in (
  3    with inds as ( select rownum r from dual connect by level <= 10 )
  4    select 'IX_'||i1.r||'_'||i2.r||'_'||i3.r ix_name,
  5           '(r'||to_char(i1.r,'fm00')||',r'||to_char(i2.r,'fm00')||',r'||to_char(i3.r,'fm00')||')' ix_cols
  6    from inds i1, inds i2, inds i3
  7    where i1.r != i2.r
  8    and   i2.r != i3.r
  9    and   i1.r != i3.r
 10  )
 11  loop
 12    dbms_application_info.set_client_info(i.ix_name);
 13    begin
 14      execute immediate 'drop index IX';
 15    exception
 16      when others then null;
 17    end;
 18    execute immediate 'create index IX on t '||i.ix_cols;
 19    execute immediate 'analyze index IX validate structure';
 20    insert into IND_STATS select i.ix_cols, OPT_CMPR_COUNT,  OPT_CMPR_PCTSAVE from index_stats;
 21    commit;
 22  end loop;
 23  end;
 24  /

PL/SQL procedure successfully completed.

So now we can take a look at the compression recommendations for all of our potential 3-column indexes. (Note: I’m not suggesting you would actually index all 720 combinations! :) )

SQL> select * from ind_stats;

NAME                                     OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
---------------------------------------- -------------- ----------------
(r01,r02,r03)                                         0                0
(r01,r02,r04)                                         0                0
(r01,r02,r05)                                         0                0
(r01,r02,r06)                                         0                0
(r01,r02,r07)                                         0                0
(r01,r02,r08)                                         0                0
(r01,r02,r09)                                         0                0

...
...


(r02,r06,r05)                                         2               24
(r02,r06,r07)                                         2               26
(r02,r06,r08)                                         2               25
(r02,r06,r09)                                         2               26
(r02,r06,r10)                                         2               26
(r02,r07,r01)                                         2               26
(r02,r07,r03)                                         3               43
(r02,r07,r04)                                         3               39
(r02,r07,r05)                                         2               26
(r02,r07,r06)                                         2               28
(r02,r07,r08)                                         2               28
(r02,r07,r09)                                         3               43
(r02,r07,r10)                                         3               37
(r02,r08,r01)                                         2               29
(r02,r08,r03)                                         3               39

You can see that some indexes probably won’t benefit at all from compression, whereas for others ALL columns become candidates for compression and might save nearly half the size of the index.

So you don’t need to guess at the compression for your indexes – you can work it out using ANALYZE.
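Once you have the number, acting on it is just a create or rebuild with the COMPRESS clause. For example, taking the (r02,r07,r03) row above, where OPT_CMPR_COUNT came back as 3 (this is my illustration, not part of the original test):

SQL> create index IX on t (r02,r07,r03) compress 3;

or, for an index that already exists:

SQL> alter index IX rebuild compress 3;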

Navigating the world of Oracle database documentation

If you head on over to http://docs.oracle.com/en/database/ you’ll be both amazed and dismayed by the volume of documentation you can find about the database. If you’re a seasoned Oracle professional, then you probably don’t think twice about finding PIVOT examples in Chapter 18 of the Data Warehousing guide. :)

But for the novice, whether it be DBA or Developer, it can be a bit overwhelming. One resource that you might find a useful addition to your navigation of the Oracle universe is the interactive quick reference site.

Just head on over to http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/poster.html

You’ll see a screen like this:

[Screenshot: the interactive quick reference poster]

You can then click on the various tabs to get more information, plus links to other useful resources. For example, the multi-tenant tab gives you an overview of the facility:

[Screenshot: the multi-tenant tab]

CloneDB in Oracle 12.1.0.2

I personally really like CloneDB, a way to thin-clone an Oracle database over NFS. This can be quite interesting, and I wanted to update my blog for 12.1.0.2.3 (April PSU). Tim Hall has a good example for 11.2.0.2 and later with further references.

My setup is as follows:

  • server3 (Oracle Linux 7.1) uses Oracle Restart and has database CDB1 registered. I would like to use this as the source for the clone
  • The backup I will take of CDB1 resides in /u01/oraback/CDB1
  • /u01/oraback is NFS-exported on server3 to server4 (Oracle Linux 7.1)
  • This directory is mounted on server4 as /u01/oraback
  • Oracle 12.1.0.2.3 (April 2015 PSU) is used throughout
  • The Oracle accounts on server3 and server4 have been created using the preinstall RPM and have the same user and group IDs. You will find this important, and your Oracle installation standards document should enforce common user and group IDs, not only for RAC where this is mandatory

Step 1: backing up CDB1

In the first step I am taking a full cold backup of CDB1. Data files are in ASM, and there is 1 PDB defined in the CDB. Here is the script I used for the backup:

[oracle@server3 ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Aug 4 11:15:11 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=853309103, not open)

RMAN> run {
2> allocate channel c1 device type disk format '/u01/oraback/CDB1/%U';
3> allocate channel c2 device type disk format '/u01/oraback/CDB1/%U';
4> set nocfau;
5> backup as copy database;
6> }

allocated channel: c1
channel c1: SID=15 device type=DISK

allocated channel: c2
channel c2: SID=252 device type=DISK

executing command: SET NOCFAU

Starting backup at 04-AUG-15
channel c1: starting datafile copy
input datafile file number=00001 name=+DATA/CDB1/DATAFILE/system.273.879591527
...
Finished backup at 04-AUG-15
released channel: c1
released channel: c2

CloneDB requires me to provide the init.ora file as well, which I also place on the NFS mount (“create pfile=’/u01/oraback/CDB1/initCDB1.orig’ from spfile” does the trick).

That should be enough for now, I can resume work on server4.

Setup of server4

The first step is to ensure that dNFS is set up. Instead of repeating myself here I’ll simply link to a previous post where I explained how I did this for 12c. The actual line in /etc/fstab is this:

server3:/u01/oraback/   /u01/oraback    nfs     rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768 1 2

Check MOS and your NFS appliance vendor’s support site for your specific settings.

The oranfstab’s contents are shown here for reference:

[oracle@server4 lib]$ cat $ORACLE_HOME/dbs/oranfstab
server: server1
local: 192.168.100.13
path:  192.168.100.12
export: /u01/oraback mount: /u01/oraback
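
A quick sanity check I like to run once an instance is up and doing I/O over these mounts (not part of the original walk-through) is to query v$dnfs_servers; if the Direct NFS client is engaged, the mounted exports show up there:

SQL> select svrname, dirname, mntport, nfsport from v$dnfs_servers;

If the query returns no rows, the database is going through the kernel NFS client instead.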

Prepare for cloning

The clonedb script requires a few environment variables to be set, as in this example:

export MASTER_COPY_DIR=/u01/oraback/CDB1
export CLONE_FILE_CREATE_DEST=/u01/oradata/CLONEDB
export CLONEDB_NAME=CLONEDB

MASTER_COPY_DIR indicates where the backup resides, CLONE_FILE_CREATE_DEST marks the directory where the CLONEDB data files are going to be located, and finally CLONEDB_NAME is the database name.

With that in place, directories created and permissions set properly, it’s time to call the script, and here’s the first surprise:

[oracle@server4 ~]$ perl $ORACLE_HOME/rdbms/install/clonedb.pl
Missing braces on \o{} at /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/install/clonedb.pl line 245, near "$lne' , '$clonedbdir"

I tried with $ORACLE_HOME/perl/bin/perl and the Oracle Linux 7.1 version of perl, and both showed the same behaviour (the perl provided by the RDBMS home is v5.14.1, the default is v5.16.3). I corrected the script in line 245 and was able to produce the required scripts:

[oracle@server4 ~]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/install/clonedb.pl \
> /u01/oraback/CDB1/initCDB1.orig /tmp/script1.sql /tmp/script2.sql

[oracle@server4 ~]$ cat /tmp/script1.sql

SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100

STARTUP NOMOUNT PFILE=/u01/oradata/CLONEDB/initCLONEDB.ora
CREATE CONTROLFILE REUSE SET DATABASE CLONEDB RESETLOGS
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXINSTANCES 1
    MAXLOGHISTORY 908
LOGFILE
  GROUP 1 '/u01/oradata/CLONEDB/CLONEDB_log1.log' SIZE 100M BLOCKSIZE 512,
  GROUP 2 '/u01/oradata/CLONEDB/CLONEDB_log2.log' SIZE 100M BLOCKSIZE 512
DATAFILE
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-3_0mqdp1mb',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-7_0oqdp1n5',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-9_0nqdp1n5',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-1_0lqdp1mb',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-5_0rqdp1o4',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-8_0qqdp1nl',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-UNDOTBS1_FNO-4_0pqdp1nk',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-10_0sqdp1o4',
'/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-6_0tqdp1oc'
CHARACTER SET WE8DEC;

[oracle@server4 ~]$ cat /tmp/script2.sql
declare
begin
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-3_0mqdp1mb' , '/u01/oradata/CLONEDB//ora_data_CLONEDB0.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-7_0oqdp1n5' , '/u01/oradata/CLONEDB//ora_data_CLONEDB1.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-9_0nqdp1n5' , '/u01/oradata/CLONEDB//ora_data_CLONEDB2.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-1_0lqdp1mb' , '/u01/oradata/CLONEDB//ora_data_CLONEDB3.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-5_0rqdp1o4' , '/u01/oradata/CLONEDB//ora_data_CLONEDB4.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-8_0qqdp1nl' , '/u01/oradata/CLONEDB//ora_data_CLONEDB5.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-UNDOTBS1_FNO-4_0pqdp1nk' , '/u01/oradata/CLONEDB//ora_data_CLONEDB6.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-10_0sqdp1o4' , '/u01/oradata/CLONEDB//ora_data_CLONEDB7.dbf');
dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-6_0tqdp1oc' , '/u01/oradata/CLONEDB//ora_data_CLONEDB8.dbf');
end;
/
show errors;
alter database open resetlogs;
drop tablespace TEMP;
create temporary tablespace TEMP;

[oracle@server4 tmp]$ cat /u01/oradata/CLONEDB/initCLONEDB.ora
CDB1.__data_transfer_cache_size=0
CDB1.__db_cache_size=478150656
CDB1.__java_pool_size=4194304
CDB1.__large_pool_size=125829120
CDB1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
CDB1.__pga_aggregate_target=268435456
CDB1.__sga_target=805306368
CDB1.__shared_io_pool_size=0
CDB1.__shared_pool_size=188743680
CDB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/CDB1/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
control_files=/u01/oradata/CLONEDB/CLONEDB_ctl.dbf
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
db_name=CLONEDB
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=4560m

*.dispatchers='(PROTOCOL=TCP) (SERVICE=CDB1XDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=768m
*.undo_tablespace='UNDOTBS1'
db_create_file_dest=/u01/oradata/CLONEDB/
log_archive_dest=/u01/oradata/CLONEDB/
clonedb=TRUE

The scripts in that form won’t work; bear with me a second.

Fixing the scripts

First of all, the init.ora file needs mending. The final initCLONEDB.ora, located in $CLONE_FILE_CREATE_DEST, looks like this (mandatory directories have to be created):

*.audit_file_dest='/u01/app/oracle/admin/CLONEDB/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files=/u01/oradata/CLONEDB/CLONEDB_ctl.dbf
*.db_block_size=8192
*.db_domain=''
*.db_name=CLONEDB
*.db_recovery_file_dest='/u01/fra'
*.db_recovery_file_dest_size=4560m
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLONEDBXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=768m
*.undo_tablespace='UNDOTBS1'
*.db_create_file_dest=/u01/oradata/CLONEDB/
*.log_archive_dest=/u01/oradata/CLONEDB/
*.clonedb=TRUE

I took out the ASM-related parameters and made sure the FRA was usable. I also tidied everything up a little bit. Take a backup of this file; it will be overwritten next time you run clonedb.pl.

Next, check script1.sql for problems. I initially forgot to remove the reference to the init.ora I placed in /u01/oraback/CDB1 (I thought it was a good idea. It’s not):

SQL> @/tmp/script1
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> STARTUP NOMOUNT PFILE=/u01/oradata/CLONEDB/initCLONEDB.ora
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE CLONEDB RESETLOGS
  2      MAXLOGFILES 32
  3      MAXLOGMEMBERS 2
  4      MAXINSTANCES 1
  5      MAXLOGHISTORY 908
  6  LOGFILE
  7    GROUP 1 '/u01/oradata/CLONEDB/CLONEDB_log1.log' SIZE 100M BLOCKSIZE 512,
  8    GROUP 2 '/u01/oradata/CLONEDB/CLONEDB_log2.log' SIZE 100M BLOCKSIZE 512
  9  DATAFILE
 10  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-3_04qdogkp',
 11  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-7_06qdogmu',
 12  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-9_05qdogmq',
 13  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-1_03qdogkp',
 14  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-5_09qdogpr',
 15  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-8_08qdogon',
 16  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-UNDOTBS1_FNO-4_07qdogom',
 17  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-10_0aqdogpt',
 18  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-6_0bqdogqn',
 19  '/u01/oraback/CDB1/initCDB1.orig'
 20  CHARACTER SET WE8DEC;
CREATE CONTROLFILE REUSE SET DATABASE CLONEDB RESETLOGS
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file '/u01/oraback/CDB1/initCDB1.orig'
ORA-17503: ksfdopn:7 Failed to open file /u01/oraback/CDB1/initCDB1.orig
ORA-27047: unable to read the header block of file
Additional information: 7
Additional information: 210592808
Additional information: 140184619486368

It took me 5 minutes to work out that there isn’t a problem with the pfile used to start the database; there is a problem with the reference to the pfile in line 19…

In a second attempt I was luckier.

[oracle@server4 CLONEDB]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 4 16:09:50 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> @/tmp/script1
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> STARTUP NOMOUNT PFILE=/u01/oradata/CLONEDB/initCLONEDB.ora
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE CLONEDB RESETLOGS
  2      MAXLOGFILES 32
  3      MAXLOGMEMBERS 2
  4      MAXINSTANCES 1
  5      MAXLOGHISTORY 908
  6  LOGFILE
  7    GROUP 1 '/u01/oradata/CLONEDB/CLONEDB_log1.log' SIZE 100M BLOCKSIZE 512,
  8    GROUP 2 '/u01/oradata/CLONEDB/CLONEDB_log2.log' SIZE 100M BLOCKSIZE 512
  9  DATAFILE
 10  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-3_0mqdp1mb',
 11  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-7_0oqdp1n5',
 12  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-9_0nqdp1n5',
 13  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-1_0lqdp1mb',
 14  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-5_0rqdp1o4',
 15  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-8_0qqdp1nl',
 16  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-UNDOTBS1_FNO-4_0pqdp1nk',
 17  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-10_0sqdp1o4',
 18  '/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-6_0tqdp1oc'
 19  CHARACTER SET WE8DEC;

Control file created.

Elapsed: 00:00:02.22

Success! Now I need to continue with script2:

SQL> @/tmp/script2
SQL> declare
  2  begin
  3  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-3_0mqdp1mb' , '/u01/oradata/CLONEDB//ora_data_CLONEDB0.dbf');
  4  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-7_0oqdp1n5' , '/u01/oradata/CLONEDB//ora_data_CLONEDB1.dbf');
  5  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSAUX_FNO-9_0nqdp1n5' , '/u01/oradata/CLONEDB//ora_data_CLONEDB2.dbf');
  6  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-1_0lqdp1mb' , '/u01/oradata/CLONEDB//ora_data_CLONEDB3.dbf');
  7  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-5_0rqdp1o4' , '/u01/oradata/CLONEDB//ora_data_CLONEDB4.dbf');
  8  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-SYSTEM_FNO-8_0qqdp1nl' , '/u01/oradata/CLONEDB//ora_data_CLONEDB5.dbf');
  9  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-UNDOTBS1_FNO-4_0pqdp1nk' , '/u01/oradata/CLONEDB//ora_data_CLONEDB6.dbf');
 10  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-10_0sqdp1o4' , '/u01/oradata/CLONEDB//ora_data_CLONEDB7.dbf');
 11  dbms_dnfs.clonedb_renamefile('/u01/oraback/CDB1/data_D-CDB1_I-853309103_TS-USERS_FNO-6_0tqdp1oc' , '/u01/oradata/CLONEDB//ora_data_CLONEDB8.dbf');
 12  end;
 13  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.05
SQL> show errors;
No errors.
SQL> alter database open resetlogs;

Database altered.

Elapsed: 00:00:22.60
SQL> drop tablespace TEMP;
drop tablespace TEMP
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

Elapsed: 00:00:00.01
SQL> create temporary tablespace TEMP;
create temporary tablespace TEMP
*
ERROR at line 1:
ORA-01543: tablespace 'TEMP' already exists

Elapsed: 00:00:00.02

Not quite success, but close. I added a tempfile to the TEMP tablespace; something along these lines does it (the file name and size here are my assumption, not taken from the original session):
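
SQL> alter tablespace TEMP add tempfile '/u01/oradata/CLONEDB/temp01.dbf' size 100m;

With that in place, a quick test revealed that I could open the PDB, too: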

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
SQL> alter pluggable database PDB1 open;

Pluggable database altered.

Elapsed: 00:00:01.46
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>

Sparse Files

The real beauty lies in the fact that I have space-efficient snapshots:

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/oradata/CLONEDB/ora_data_CLONEDB3.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB0.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB6.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB4.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB8.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB1.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB5.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB2.dbf
/u01/oradata/CLONEDB/ora_data_CLONEDB7.dbf

9 rows selected.

Elapsed: 00:00:00.01
SQL> !ls -lsh /u01/oradata/CLONEDB/
total 211M
   0 drwxr-x---. 4 oracle asmadmin   37 Aug  4 16:14 CLONEDB
8.9M -rw-r-----. 1 oracle asmadmin 8.9M Aug  4 16:16 CLONEDB_ctl.dbf
101M -rw-r-----. 1 oracle asmadmin 101M Aug  4 16:16 CLONEDB_log1.log
101M -rw-r-----. 1 oracle asmadmin 101M Aug  4 16:10 CLONEDB_log2.log
4.0K -rw-r--r--. 1 oracle oinstall  613 Aug  4 16:09 initCLONEDB.ora
4.0K -rw-r--r--. 1 oracle oinstall  613 Aug  4 16:00 initCLONEDB.ora.orig
460K -rw-r-----. 1 oracle asmadmin 761M Aug  4 16:15 ora_data_CLONEDB0.dbf
 64K -rw-r-----. 1 oracle asmadmin 491M Aug  4 16:10 ora_data_CLONEDB1.dbf
 64K -rw-r-----. 1 oracle asmadmin 521M Aug  4 16:15 ora_data_CLONEDB2.dbf
420K -rw-r-----. 1 oracle asmadmin 791M Aug  4 16:15 ora_data_CLONEDB3.dbf
 64K -rw-r-----. 1 oracle asmadmin 251M Aug  4 16:10 ora_data_CLONEDB4.dbf
 64K -rw-r-----. 1 oracle asmadmin 261M Aug  4 16:15 ora_data_CLONEDB5.dbf
540K -rw-r-----. 1 oracle asmadmin 361M Aug  4 16:15 ora_data_CLONEDB6.dbf
 64K -rw-r-----. 1 oracle asmadmin 201M Aug  4 16:15 ora_data_CLONEDB7.dbf
 64K -rw-r-----. 1 oracle asmadmin 5.1M Aug  4 16:10 ora_data_CLONEDB8.dbf

The first column in the output is the actual size on disk, for example 64K for ora_data_CLONEDB4.dbf. A “regular” ls will show the file as 251M, reflected in the above output as well: that is the apparent (original) file size.
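
If you want to see both numbers for a single file side by side, GNU du reports the allocated size by default and the apparent size on request (an illustration using the same file as above; the figures are taken from the ls listing):

[oracle@server4 ~]$ du -h /u01/oradata/CLONEDB/ora_data_CLONEDB4.dbf
64K     /u01/oradata/CLONEDB/ora_data_CLONEDB4.dbf
[oracle@server4 ~]$ du -h --apparent-size /u01/oradata/CLONEDB/ora_data_CLONEDB4.dbf
251M    /u01/oradata/CLONEDB/ora_data_CLONEDB4.dbf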

Upcoming Raspberry Pi Family Coding Day Project

I’ve had a lot of people email and message me asking what I’m doing with the Raspberry Pi, and I wanted to share before the actual RMOUG Quarterly Education Workshop (QEW) this Friday at Elitches.


So a little history: when I was a kid, I was a real handful. My mother used to come into my room and I’d have created “webs” out of yarn strung from every piece of furniture and twisted into patterns. I used to take apart radios and use the parts to create new toys, figuring out how the electronics worked. Dolls suddenly became cyborgs, and we won’t talk about what I used to do to poor, defenseless stuffed animals. My destruction and “recycling” used to exhaust her to no end, but that kind of building new toys from salvaged parts rarely occurs in today’s world, and when the Raspberry Pi was introduced to the stage, I was immediately smitten… :)


As many know, I’m also a grand supporter of STEAM (Science, Technology, Engineering, Art and Math) as part of filling the 1.4 million open positions that we are poised to have available in technology by 2020. That number is only going to grow, and I don’t see our public education system building out the core knowledge, the introduction to logical thinking and coding, that would entice kids into careers in technical fields. Public education just doesn’t have the resources or the vision to do this, and I feel it’s up to those of us who are in the technical arena now.


With that said, it took some convincing to get the QEW to include a family coding event. We were already late getting the new board in place, so we’ve been scrambling ever since! Devoxx4Kids will be onsite, and they’ll be doing a session on how to build Minecraft modules. I’ll be teaching the Raspberry Pi session with attendees. The project will have all areas of STEAM covered, which is quite a challenge to do in one project with kids!

The attendees will bring their Raspberry Pi setups, with their parents [hopefully] in tow, and I’ll be providing small motors, pre-soldered with connectors, to attach to the Pibrella board that was part of the “shopping list” from the website. The Pibrella board is a small add-on board for the Raspberry Pi that makes it easy to enhance projects, and it’s one of my favorite additions to my setup. With the motor and a small Lego base, the kids will then use ART supplies to build a small project; it can be an animal, a flower, a design from stickers, even a spinner or other flyer. The art project can be ENGINEERED any way they want, to either spin, dance or fly off the motor.

Along with art supplies, I’ll have LED lights and small watch batteries that can be used to power the lights and add to their projects. I also have magnets to use to “attach” flyers to bases for launching, which all brings some components of SCIENCE into the project.


Once they finish their creation, we’ll attach it to a motor and start up our Raspberry Pis. Each attendee will open up a text editor and CODE some simple timing and spinning rotations, adding different enhancements to the code depending on what they want their project to do. (I have three coded examples that they can edit with different times, start on execution or use the button on the Pibrella to commence, etc.; a sketch of the idea follows.)
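
As a rough illustration of the idea (my own sketch, not one of the three actual lab scripts), the button-triggered variant might look like this, assuming the motor sits on Pibrella output E:

import time
import pibrella  # Pimoroni's Pibrella add-on board library

def spin(pin):
    # Run the motor for three seconds per press of the big red button;
    # the kids adjust the sleep value to suit their own project.
    pibrella.output.e.on()
    time.sleep(3)
    pibrella.output.e.off()

pibrella.button.pressed(spin)  # register the handler for the button
pibrella.pause()               # keep the script alive, waiting for presses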

They’ll then EXECUTE their code and see if the initial TIMES COMPUTED work with their project, or if they need to adjust the MATH to make the project work.

Once they are satisfied with the project they built, we’ll discuss how they might enhance it further. Would they add lights? Would they add music? Might they change the physical design or code? I want them to use their imagination and their logical thinking skills to see how cool technology can be.


My youngest son, Josh, has been helping me with this project, using his master soldering skills to help me attach the jumper wires to the motors, test out different art projects and flyers to ensure that the code works as expected, and check that my new [non-brand-name] Lego case secures my different card installations correctly.

Thanks for listening to me rant so I can take a break from everything. Hopefully we’ll have a RAGING turnout for the family coding event, people will stick around and listen to me also ramble on about Enterprise Manager Hybrid Cloning to the Cloud, and then we’ll have a great Hands on Lab using an AWR Warehouse I created for this event!

See you Friday!