Oakies Blog Aggregator

COVID-19: The Current Situation in the UK and June.

I’ve not said anything about Covid-19 for much longer than I expected, but really it has been a case of watching the coming peak come and go, pretty much following the pattern of Italy, Spain, Belgium and France. I plan to do a post soon which pulls together the current scientific position, but for now I wanted to record where we are and where my gut feeling (based as ever on reliable scientific sources and not so much on what the daily government updates would like us to think) says we will be in a month or so.

We’ve not done very well in the UK. If you are based in the UK you may not be aware that most of Europe thinks we have, as a nation, been idiots – failing to learn from other countries, late to lock down, a lock-down that was not strict enough, too early to open up, our PPE fiasco… I can’t say I disagree with them. We have one of the highest deaths-per-million-population rates in Europe, exceeded only by Spain and Belgium. But it could have been worse. A lot worse.

I’m truly relieved my predictions in my last post were (for once) too pessimistic. I misjudged when the peak in deaths would be by over a week – it was 9 days earlier than I thought, happening around the 11th April. As a result of coming sooner, the peak was lower than my little model predicted. Even allowing for that, the increase in the number of deaths did not mirror the increase in cases (I used the cases pattern as my template for deaths). I think this is because the UK finally started ramping up its testing rate. The more testing you do, the more of the real cases you detect, so some of the increase in cases was simply better testing rather than continued spread. That’s what happens when the source of your metrics changes: your model loses accuracy.

Deaths are directly related to real case numbers; it does not actually matter how many of those cases you detect. This is part of why detected case numbers are a much poorer metric for epidemics and deaths are a better one. The best metric would be a large, random sample of the population tested for having had the disease – but we still do not have reliable, large-scale antibody or similar tests to tell us this.

If you look at the actual figures and compare them to what I predicted for the peak of deaths, I seem to have been pretty accurate. I said 1,200 to 1,500 around the 20th April, and the peak was 1,172 on the 21st April. But I was predicting hospital deaths only. Up until 29th April that was the number reported each day, but since then the daily number of deaths reported has included community (mostly care home) deaths. The previous figures were altered to reflect this, and the graphs to the right are based on these updated figures. Hospital deaths seem to have peaked at 980 on the 11th April, so I was wrong.

I think it is crucial in science and technology (and actually, just in general) that you are honest when you are wrong – even if (as in this case) I could have made a fallacious claim to have hit the nail on the head.

The bottom line is, we are well past the first peak and it did not overwhelm the NHS. It got really close, and our issues with personal protective equipment were a scandal that must have resulted in more illness and some avoidable deaths among our front-line NHS staff. But, apparently, saying so is Political.

All in all we followed the pattern of the European countries that were impacted by Covid-19 before us and implemented similar country-wide lock-downs.

One difference between us and other European countries that have been hit hard is that our tail of cases is thicker and longer. We have not been as rigorous in our lock-down as those other countries (e.g. we did not have to have written permission to leave or enter an area, and children were not utterly forbidden from leaving home – just two examples of how our lock-down was softer). I know it might not feel like it, but we were not as strict.

What really concerns me is that we are easing lock-down measures so soon in the UK. Our daily new case rate and number of deaths are both still really quite high. The figures always drop over the weekend, especially Sunday and Monday (as the numbers reported are for the day before). Over the last 3 days (Wednesday to Friday) we averaged 1,998 new cases and 371 deaths per day. If you think Covid-19 has gone away, every single day there are 371 families who sadly know different.

I understand that the economy is important – that unless things are being manufactured, services provided, money earned and spent, a large part of our society is not functioning. Maybe I don’t really appreciate how important it is, as economics has always looked to me more like a dark art based on greed than anything logical, but some people feel getting back to normal business is critical and that the long-term impact of not doing so is potentially as serious as Covid-19.

I also know that not being able to go to places, eat out, have a drink in the pub, meet up with friends in a building or in more than small numbers is frustrating. For many, not seeing your family and loved ones who are not in your home is very upsetting.

I’m sure that parents are desperate for kids to go back to school (partly for education and partly as it turns out kids are a lot of work), couples need a bit of time apart, people are missing their jobs. Nearly all of us have never had to spend so much time with a very small number of other people.

But I’m also sure that what we don’t want is, in 4-8 weeks, to have to go into the same level of lock-down we spent most of this spring in. And the next lock-down may need to be even more draconian, because there is a difference now from where we were in the second week of March, when we should first have locked down.

SARS-CoV-2 is now endemic and prevalent across the UK. It is everywhere.

At the start of an epidemic the disease is growing in a small number of places, so usually (as was the case with MERS and SARS) you can contain it with strong isolation and tracking efforts in the areas where it occurs, as most of the population has not been exposed. This is why you cannot contain seasonal ‘flu epidemics by isolating people: it does not work once the disease is widespread enough. ‘Flu simply flows through the population, and in some years it kills a lot of people.

With Covid-19 right now, if our R(e) – the effective reproduction number – goes above 1 anywhere across the UK, Covid-19 cases will rapidly increase in that area. And with restrictions being lifted across the whole UK, and in England especially, I am privately convinced the disease will burst forth again in many, many places and it is going to go very wrong again. I think the government is being utterly disingenuous about the impact of opening up schools, and my friends who are teachers and medics have no doubt this is a significantly more dangerous step than it is being sold as. It might be the right move, but lying about its potential impact is not helpful long-term.

Not only are we relaxing social distancing steps too early, but I feel the government has utterly bolloxed up (technical term meaning “done a jolly poor job of”) the messaging. As examples:

  • The very clear “Stay at Home” became the vacuous “Stay Alert”, which no one seems able to clearly define and everyone seems to interpret differently.
  • We were given contradictory and nonsensical rules, such as: you could see one family member from outside your household in the park, but you could have people come and view your house. So if you want to see your mum & dad at the same time, put your house up for sale and have them view it.
  • Parts of the UK (Wales, Northern Ireland, Scotland) have said they were not consulted on changes, they do not agree with them, and they are doing their own thing. That’s not confusing to people, is it?
  • The whole Cummings affair. Dominic Cummings did break the rules, he acted like a selfish idiot, he lied about what he did, and he had pathetically stupid excuses (“I drove my child around in a car to test my eyesight”, which shows he either does not care at all for other people’s safety or has too low an IQ to be allowed out on his own). The issue is not that one arrogant, self-important person decided the rules do not apply to him. It is that the government fails to understand that not sanctioning him is being interpreted by many to mean they can make up their own minds about which rules apply to them and which they can ignore. Continuing to say “look, get over it” simply comes across as telling us all to bugger off.

To help steer us through this crisis, we really needed a government with both the mandate to introduce new rules and the acceptance of those rules by most of the population, with at least acquiescence from the majority to put up with the limitations placed upon us. What we have now is not just the hard-core “we won’t be told what to do” people, who would always be a negative factor in limiting the spread of a disease, but a large number of angry, confused, worried people across the country. Almost everyone I personally know in the UK feels angry, confused and worried, mostly with a progressively declining respect for the government and its advice.

I know I’m not very good at understanding people, it does not come naturally to me. If someone does not think like I do, I can have a devil of a job working out why. But I’m pretty sure that here in the UK a lot of people are going to start saying “to hell with the lock-down rules, everyone else is ignoring them and I’ve not seen anyone die in front of me…”

I went to see my Mum this week. I had to drive 100+ miles to do it. Unlike in Dominic’s case, it’s allowed now and I have no Covid-19 symptoms. I took a mask, I took my own food, we sat in her garden (I got sunburn, so Covid-19 might not get me but skin cancer might). I assured myself she was OK and that her tech will keep working so we can stay in touch. And I felt a little naughty doing it.

But I made a conscious decision to do it now – as I think SARS-CoV-2 is at about the lowest prevalence in our population right now (end of May 2020) that it is going to be for months. Admissions and deaths are going down, and I expect at least deaths to continue to do so for another week or two. Personally, I am deeply worried that in 4 weeks’ time new cases, hospital admissions, and deaths will be going up again. I don’t want them to, but I’ll be (very happily) surprised if they don’t go up – what we see in cases and deaths at any point in time reflects the level of spread one or two weeks earlier respectively. I suspect that as I type our R(e) number is going up and will exceed 1 this week.

If you don’t agree with me, just keep an eye on what the scientists are saying. Some are already making noises of anxiety, as an article on the BBC is already saying today. Scientists tend to make cautious statements such as “we do not think this is wise” or “we feel there is a danger in this choice of action”. It’s their equivalent of a normal person screaming “Are you bloody idiots?!?”. Once again, the experts are saying we should do one thing and the government is doing another. Ignoring the scientists has not gone too well for us so far.

There is a T-shirt you can get at the moment, which I really must order a dozen of.

“All disaster movies start with someone ignoring a scientist”.


SQLDeveloper’s “mystats.sql” equivalent

It just occurred to me that I haven’t blogged about SQLDeveloper yet, something I really need to change ;) This post covers SQLDeveloper 19.4, and although I don’t know exactly when the feature you are reading about was added, I doubt it is new in the latest release.

A little bit of background first

Sometimes the wait interface isn’t enough, so you need to dig a little deeper into the performance issue. In my opinion there are two layers inside Oracle when it comes to performance troubleshooting:

  • The Wait Interface provides valuable insights about session activity, especially when a session is off-CPU and waiting
  • Session counters provide insights into ongoing operations at a lower level and with finer granularity (a minimal query sketch follows this list)
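
The session counters from the second bullet are exposed in V$MYSTAT, with the counter names in V$STATNAME. What follows is only a minimal sketch of reading a few of them directly – run it before and after a piece of work and diff the values to get the kind of delta that the tools below automate:

SELECT sn.name, ms.value
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name IN ('session logical reads', 'physical reads', 'CPU used by this session');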

Tanel Poder’s post I linked to (yes, it’s from 2010, time flies!) provides an excellent introduction to the topic. His (session) snapper.sql script is one of my favourite and most heavily-used tools, especially when investigating an ongoing issue. Have a look at the Snapper landing page to get some more details and ideas about its many capabilities, it can do a LOT.
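
For reference, a typical Snapper invocation looks something like the line below; the 5-second interval, single snapshot and SID 123 are placeholders, and the full syntax is documented on the Snapper landing page:

SQL> @snapper all 5 1 123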

Adrian Billington’s mystats.sql tracks session counters as well, but it takes a different approach. The mystats.sql script works particularly well when you can sandwich a piece of (PL/)SQL between calls to mystats start and mystats stop. I have been using it extensively for research and even managed to sneak a little improvement in ;)

Here is an example of mystats in use. It captures session counters before and after the execution of a SQL statement and prints the delta. Have a look at the GitHub repository for more information about this very useful utility, as I’m not going into the details here…

SQL> @mystats start s=s

PL/SQL procedure successfully completed.

SQL> SELECT /*+ noparallel gather_plan_statistics  */
  2      oi.order_id,
  3      SUM(oi.unit_price * oi.quantity)
  4  FROM
  5      soe.orders         o,
  6      soe.order_items    oi
  7  WHERE
  8          o.order_id = oi.order_id
  9      AND o.delivery_type = 'Express'
 10  GROUP BY
 11      oi.order_id
 12  HAVING
 13      SUM(oi.unit_price * oi.quantity) > 45000
 14  ORDER BY
 15      2 DESC;

  ORDER_ID SUM(OI.UNIT_PRICE*OI.QUANTITY)
---------- ------------------------------
   1449606                          49324
    577982                          48606
    468019                          47970
   1977685                          46786
   1891120                          45347
   2310221                          45209

6 rows selected.

SQL> @mystats stop t=1

==========================================================================================
MyStats report : 26-MAY-2020 18:44:51
==========================================================================================


------------------------------------------------------------------------------------------
1. Summary Timings
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
TIMER   snapshot interval (seconds)                                                  14.76
TIMER   CPU time used (seconds)                                                       2.38


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    CPU used by this session                                                       239
STAT    CPU used when call started                                                     238
STAT    CR blocks created                                                               13
STAT    DB time                                                                        692
STAT    Effective IO time                                                        4,415,766
STAT    Number of read IOs issued                                                    1,336
STAT    Requests to/from client                                                         20
STAT    SQL*Net roundtrips to/from client                                               20
STAT    application wait time                                                            9
STAT    buffer is not pinned count                                                       7
STAT    bytes received via SQL*Net from client                                      20,586
STAT    bytes sent via SQL*Net to client                                            10,594
STAT    calls to get snapshot scn: kcmgss                                              791
STAT    calls to kcmgcs                                                                 47
STAT    cell physical IO interconnect bytes                                  1,370,603,520
STAT    consistent changes                                                             944
STAT    consistent gets                                                            278,437
STAT    consistent gets direct                                                     167,309
STAT    consistent gets examination                                                    251
STAT    consistent gets examination (fastpath)                                         251
STAT    consistent gets from cache                                                 111,128
STAT    consistent gets pin                                                        110,877
STAT    consistent gets pin (fastpath)                                             110,876
STAT    data blocks consistent reads - undo records applied                            247
STAT    db block changes                                                               838
STAT    db block gets                                                                2,415
STAT    db block gets from cache                                                     2,415
STAT    db block gets from cache (fastpath)                                          1,970
STAT    dirty buffers inspected                                                         33
STAT    enqueue conversions                                                              4
STAT    enqueue releases                                                                 8
STAT    enqueue requests                                                                 8
STAT    enqueue waits                                                                    1
STAT    execute count                                                                   17
STAT    file io wait time                                                       10,452,219
STAT    free buffer inspected                                                          105
STAT    free buffer requested                                                           69
STAT    heap block compress                                                              2
STAT    hot buffers moved to head of LRU                                               193
STAT    index fetch by key                                                               2
STAT    lob writes                                                                     391
STAT    lob writes unaligned                                                           391
STAT    logical read bytes from cache                                          930,144,256
STAT    no work - consistent read gets                                             277,738
STAT    non-idle wait count                                                          1,153
STAT    non-idle wait time                                                             452
STAT    opened cursors cumulative                                                       21
STAT    parse count (hard)                                                               1
STAT    parse count (total)                                                             17
STAT    physical read IO requests                                                    1,337
STAT    physical read bytes                                                  1,370,603,520
STAT    physical read total IO requests                                              1,337
STAT    physical read total bytes                                            1,370,603,520
STAT    physical read total multi block requests                                     1,336
STAT    physical reads                                                             167,310
STAT    physical reads cache                                                             1
STAT    physical reads direct                                                      167,309
STAT    process last non-idle time                                                      15
STAT    recursive calls                                                              2,022
STAT    recursive cpu usage                                                              3
STAT    rollbacks only - consistent read gets                                           26
STAT    rows fetched via callback                                                        1
STAT    session cursor cache count                                                       7
STAT    session cursor cache hits                                                       10
STAT    session logical reads                                                      280,852
STAT    session pga memory                                                       2,276,704
STAT    session pga memory max                                                  65,518,944
STAT    session uga memory                                                         261,952
STAT    session uga memory max                                                  63,036,496
STAT    sorts (memory)                                                                   1
STAT    sorts (rows)                                                                     6
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                   277,764
STAT    table scan disk non-IMC rows gotten                                     31,341,195
STAT    table scan rows gotten                                                  31,341,195
STAT    table scans (direct read)                                                        1
STAT    table scans (long tables)                                                        2
STAT    temp space allocated (bytes)                                             1,048,576
STAT    user I/O wait time                                                             442
STAT    user calls                                                                      28
STAT    workarea executions - optimal                                                    6
STAT    workarea memory allocated                                                       12


------------------------------------------------------------------------------------------
3. Options Used
------------------------------------------------------------------------------------------
- Statistics types : s=s
- Reporting filter : t=1


------------------------------------------------------------------------------------------
4. About
------------------------------------------------------------------------------------------
- MyStats v3.0 by Adrian Billington (http://www.oracle-developer.net)
- Original version based on the SNAP_MY_STATS utility by Jonathan Lewis


==========================================================================================
End of report
==========================================================================================

PL/SQL procedure successfully completed.

I have modified mystats slightly so as to sort output by statistic name, rather than the change in value. I find it easier to read that way, but your mileage may vary.

Each time I look at performance instrumentation in the Oracle database I am pleasantly surprised by the level of detail available. I haven’t blogged about it lately, but going back a bit to 12.2 shows the progress made. And Oracle didn’t stop there, either. Using mystats’s output I can derive all sorts of interesting facts, none of which are of importance for this post as I’m purely interested in reporting the change in session stats ;) When researching how Oracle works, this is quite a wealth of information, and you can find further examples on this blog.

SQLDeveloper can do this, too

If you are using SQLDeveloper and don’t want/need to use the command line, you can get something quite similar as well. Just enter your query into the text window, then hit F6 for “Autotrace”. In the lower part of your Autotrace results, you can see the change in session counters. If not, you may have to swipe the statistics pane into view using the mouse. Figure 1 shows an example:

Figure 1: SQLDeveloper’s Autotrace function displaying change in session statistics for this query

Summary

Session counters are super useful for understanding what Oracle does at a more granular level than the wait interface. For the most part, using the wait interface is enough, yet there is the occasional case where more details are needed. I often consider changes in session statistics when researching a particular problem, and thankfully I don’t have to come up with ways to do so as the community has already lent a helping hand!

8 years and counting…

I just noticed today is the 8 year anniversary of working for my current company.

I wrote a blog post about the interview before I got the job (here).

About 3 months after I started the job I wrote an update post, and clearly wasn’t too happy with the way things were going (here).

There was a rather unenthusiastic post about my four year anniversary (here).

Now here I am after eight years and I’m still having the same arguments about my role on an almost daily basis. It’s like a bad relationship I can’t break free from. Some sort of He Hit Me (And It Felt Like A Kiss) kind-of thing.

Let’s see what this next year brings…

Cheers

Tim…

PS. At least now I’m working from home, which is better.

PPS. Thanks to the people who contacted me out of concern. This is not a cry for help.

Video : Install Podman on Oracle Linux 8 (OL8)

In today’s video we’ll take a look at installing Podman on Oracle Linux 8 (OL8).

This is based on the article here.

You can see more information about containers here.

The star of today’s video is John King. John’s been on the channel a couple of times before. Once to do a message to one of his super-fans, a work colleague of mine who was impressed that I know John, and once for a regular “.com” appearance. I blame the wife for the terrible audio.

Tibero – The AWR-like “Tibero Performance Repository”

By Franck Pachot

In a previous post I introduced Tibero as The most compatible alternative to Oracle Database. Compatibility is one thing, but one day you will want to compare the performance. I’ll not do any benchmark here but show you how we can look at the performance with TPR – the Tibero Performance Repository – the equivalent of AWR, the Oracle Automatic Workload Repository. And, as I needed to run some workload, I attempted to run something that was written with Oracle Database in mind: Kevin Closson’s SLOB – Silly Little Oracle Benchmark. The challenge is to make it run on Tibero and get a TPR report.

SLOB

I’ve downloaded SLOB from:


git clone https://github.com/therealkevinc/SLOB_distribution.git
tar -zxvf SLOB_distribution/2019.11.18.slob_2.5.2.tar.gz

and I’ll detail what I had to change in order to have it running on Tibero.

sqlplus

The client command line is “tbsql” and it has very good compatibility with sqlplus:


[SID=t6a u@h:w]$ tbsql -h
Usage: tbsql [options] [logon] [script]
options
-------
  -h,--help        Displays this information
  -v,--version     Displays version information
  -s,--silent      Enables silent mode. Does not display the 
                   start-up message, prompts and commands
  -i,--ignore      Ignore the login script (eg, tbsql.login)
logon
-----
  [username[/password[@connect_identifier]]]
script
------
  @filename[.ext] [parameter ...]
[SID=t6a u@h:w]$ tbsql
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
SQL> help
HELP
 ----
 Displays the Help.
{H[ELP]|?} topic
where topic is
 ! {exclamation} % {percent} @ {at}   @@ {double at}
 / {slash}       ACCEPT      APPEND   ARCHIVE LOG
 CHANGE          CLEAR       COLUMN   CONNECT
 DEFINE          DEL         DESCRIBE DISCONNECT
 EDIT            EXECUTE     EXIT     EXPORT
 HELP            HISTORY     HOST     INPUT
 LIST            LOADFILE    LOOP     LS
 PASSWORD        PAUSE       PING     PRINT
 PROMPT          QUIT        RESTORE  RUN
 SAVE            SET         SHOW     TBDOWN
 SPOOL           START       UNDEFINE VARIABLE
 WHENEVER

However, there are a few things I had to change.

The silent mode is “-s” instead of “-S”

The “-L” (no re-prompt if the connection fails) doesn’t exist: tbsql does not re-prompt, and leaves you in the CLI rather than exiting.

sqlplus does not show feedback for 5 rows or fewer; tbsql always shows it by default. We can get the same output as sqlplus by setting SET FEEDBACK 6.

tbsql returns “No Errors” where sqlplus returns “No errors”

You cannot pass additional spaces in the connection string: sqlplus ignores them but tbsql complains.

All those were easy to change in the setup.sh and runit.sh scripts.
I actually defined a sqlplus() bash function to handle those:


sqlplus(){
 # strip stray spaces from the arguments and the sqlplus-only -L flag,
 # and map sqlplus -S to tbsql -s
 set -- ${@// /}
 set -- ${@/-L/}
 set -- ${@/-S/-s}
 # prepend SET FEEDBACK 6 to the script coming in on stdin, run it through
 # tbsql, and map "No Errors" back to the sqlplus spelling
 sed \
 -e '1 i SET FEEDBACK 6' \
 | tbsql $* | sed \
 -e 's/No Errors/No errors/'
 # print the bash call stack and the translated command line (for debugging)
 echo "--- call stack ---  ${FUNCNAME[0]}() $(basename $0)#${LINENO}$(f=0;while caller $f;do((f++));done|awk '{printf " <- %s()%s#%s",$2,$3,$1}')" >&2
 echo "--- parameters ---  tbsql $*" >&2
}

As you can see, I’ve added a print of the bash call stack, which I used to find those issues.

tnsping

The equivalent of TNSPING is TBPROBE. It takes a host:port pair, displays nothing, and returns a 0 return code when the connection is ok or 3 when it fails. Note that there are other statuses, such as 1 when the connection is ok but the database is read-only, and 2 when in mount or nomount. You see here an architecture difference with Oracle: there is no listener, it is the database itself that listens on a port.
A little detail of no importance here: my database port is 8629, as mentioned in the previous post, but tbprobe actually connects to 8630:


[SID=t6a u@h:w]$ strace -fyye trace=connect,getsockname,recvfrom,sendto tbprobe localhost:8629
connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
connect(4, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
getsockname(4, {sa_family=AF_NETLINK, nl_pid=41494, nl_groups=00000000}, [12]) = 0
sendto(4, {{len=20, type=RTM_GETADDR, flags=NLM_F_REQUEST|NLM_F_DUMP, seq=1589797671, pid=0}, {ifa_family=AF_UNSPEC, ...}}, 20, 0, {sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, 12) = 20
connect(4, {sa_family=AF_INET, sin_port=htons(8630), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockname(4127.0.0.1:8630]>, {sa_family=AF_INET, sin_port=htons(50565), sin_addr=inet_addr("127.0.0.1")}, [28->16]) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(8630), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
recvfrom(4127.0.0.1:8630]>, "\0\0\0\0\0\0\0@\0\0\0\0\0\0\0\0", 16, 0, NULL, NULL) = 16
recvfrom(4127.0.0.1:8630]>, "\0\0\0\2\0\0\0\17\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\6\0\0\0\0\0\0\0\6"..., 64, 0, NULL, NULL) = 64
sendto(4127.0.0.1:8630]>, "\0\0\0\204\0\0\0\f\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 28, 0, NULL, 0) = 28
recvfrom(4127.0.0.1:8630]>, "\0\0\0\204\0\0\0\f\0\0\0\0\0\0\0\0", 16, 0, NULL, NULL) = 16
recvfrom(4127.0.0.1:8630]>, "\0\0\0e\0\0\0\0\0\0\0\1", 12, 0, NULL, NULL) = 12
+++ exited with 0 +++

The return code is correct. What actually happens is that Tibero does not seem to use out-of-band breaks but another port to communicate when the default port is in use. And this is the next port number, shown in my instance process list as “listener_special_port”:


[SID=t6a u@h:w]$ cat /home/tibero/tibero6/instance/t6a/.proc.list
Tibero 6   start at (2019-12-16 14:03:00) by 54323
shared memory: 140243894161408 size: 3221225472
shm_key: 847723696 1 sem_key: -1837474876 123 listener_pid: 7026 listener_port: 8629 listener_special_port: 8630 epa_pid: -1
7025 MONP
7027 MGWP
7028 FGWP000
7029 FGWP001
7030 PEWP000
7031 PEWP001
7032 PEWP002
7033 PEWP003
7034 AGNT
7035 DBWR
7036 RCWP

This means that when my database listener is on 8629, TBPROBE will return “ok” for 8628 and 8629:


[SID=t6a u@h:w]$ for p in {8625..8635} ; do tbprobe  localhost:$p ; echo "localhost:$p -> $?" ; done
localhost:8625 -> 3
localhost:8626 -> 3
localhost:8627 -> 3
localhost:8628 -> 0
localhost:8629 -> 0
localhost:8630 -> 3
localhost:8631 -> 3
localhost:8632 -> 3
localhost:8633 -> 3
localhost:8634 -> 3
localhost:8635 -> 3

Anyway, this has no importance here and I just ignore the tnsping test done by SLOB:


tnsping(){
 true
}

DCL and DDL

Tibero SQL does not allow creating a user with a GRANT statement; it needs an explicit CREATE USER. In setup.sh I made the following replacement:


--GRANT CONNECT TO $user IDENTIFIED BY $user;
CREATE USER $user IDENTIFIED BY $user;
GRANT CONNECT TO $user;

The implicit creation of a user with a grant statement is an oraclism that is not very useful anyway.
The PARALLEL and CACHE attributes are not allowed in the same place as in Oracle:


-- PARALLEL CACHE PCTFREE 99 TABLESPACE $tablespace
-- STORAGE (BUFFER_POOL KEEP INITIAL 1M NEXT 1M MAXEXTENTS UNLIMITED);
PCTFREE 99 TABLESPACE $tablespace
STORAGE (BUFFER_POOL KEEP INITIAL 1M NEXT 1M MAXEXTENTS UNLIMITED) PARALLEL CACHE;

--NOCACHE PCTFREE 99 TABLESPACE $tablespace
--STORAGE (BUFFER_POOL KEEP INITIAL 1M NEXT 1M MAXEXTENTS UNLIMITED);
PCTFREE 99 TABLESPACE $tablespace
STORAGE (BUFFER_POOL KEEP INITIAL 1M NEXT 1M MAXEXTENTS UNLIMITED);

They just go at the end of the statement, and that’s fine.
Tibero has no SYSTEM user by default, so I create one and, while I’m at it, create the IOPS tablespace:


tbsql sys/tibero <

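The heredoc body did not survive the copy-paste above. A hypothetical reconstruction of what it needs to do would be something like the following – the SYSTEM user and manager password are what the runit.sh calls below use and IOPS is the tablespace name passed to setup.sh, but the DBA grant, datafile path and sizes are my assumptions:

-- create the SYSTEM account used later for the TPR snapshot calls
CREATE USER system IDENTIFIED BY manager;
GRANT DBA TO system;
-- create the tablespace passed to setup.sh (path and sizes are placeholders)
CREATE TABLESPACE IOPS DATAFILE '/u01/tibero/data/iops01.dbf' SIZE 10G AUTOEXTEND ON NEXT 1G;
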
Those are the changes I made to be able to run SLOB’s setup.sh and runit.sh.

I also had to change a few things in slob.sql.

There is no GET_CPU_TIME() in DBMS_UTILITY, so I commented it out:


--v_end_cpu_tm := DBMS_UTILITY.GET_CPU_TIME();
--v_begin_cpu_tm := DBMS_UTILITY.GET_CPU_TIME();

I didn’t check for an equivalent here and just removed it.

The compatibility with Oracle is very good, so the queries on V$SESSION are the same. The only thing I changed is the SID userenv parameter, which is called TID in Tibero:


SELECT ((10000000000 * (SID + SERIAL#)) + 1000000000000) INTO v_my_serial from v$session WHERE sid = ( select sys_context('userenv','tid') from dual);

I got a TBR-11006: Invalid USERENV parameter before this change.

This session ID is larger so I removed the precision:


--v_my_serial NUMBER(16);
v_my_serial NUMBER;

I got a TBR-5111: NUMBER exceeds given precision. (n:54011600000000000, p:16, s:0) before changing it.

The dbms_output was hanging, so I disabled it:


--SET SERVEROUTPUT ON   ;

I didn’t try to understand the reason. That’s a very bad example of troubleshooting, but I just wanted it to run.

Again, without trying to understand further, I replaced all PLS_INTEGER with NUMBER, as I got: TBR-5072: Failure converting NUMBER to or from a native type

setup.sh

In slob.conf I changed only UPDATE_PCT to 0 for a read-only workload and changed “statspack” to “awr”, and I was able to create 8 schemas:


./setup.sh IOPS 8 < /dev/null

(I redirect /dev/null to stdin because my sqlplus() function above reads it)

runit.sh

Now ready to run SLOB.
This is sufficient to run it but I want to collect statistics from the Tibero Performance Repository (TPR).

Tibero Performance Repository is the equivalent of AWR. The package to manage it is DBMS_TPR instead of DBMS_WORKLOAD_REPOSITORY. It has a CREATE_SNAPSHOT procedure, and a REPORT_TEXT_LAST procedure to generate the report between the two last snapshots, without having to look up the snapshot IDs.
I’ve replaced the statistics() calls in runit.sh with:


tbsql system/manager <<<'exec dbms_tpr.create_snapshot;';

before the run, and:


tbsql system/manager <<<'exec dbms_tpr.create_snapshot;';
tbsql system/manager <<<'exec dbms_tpr.report_text_last;';

after.
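
For comparison, the Oracle equivalent of this bracketing would be something along these lines (AWR has no “report the last interval” call, so awrrpt.sql prompts for the begin and end snapshot IDs instead):

exec dbms_workload_repository.create_snapshot;
-- ... run the workload ...
exec dbms_workload_repository.create_snapshot;
-- then generate the report, for example with @?/rdbms/admin/awrrpt.sql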

Now running:


sh runit.sh 4 

and I can see the 4 sessions near 100% on CPU.
Note that sessions are threads in Tibero, so you need to run “top -H” to see the detail.

TPR (Tibero Performance Repository) Report

1 session LIO

Here is the report for 5 minutes of running on one session:


--------------------------------------------------------------------------------
               *** TPR (Tibero Performance Repository) Report ***
--------------------------------------------------------------------------------

              DB Name : t6a
                  TAC : NO
                  TSC : NO
         Instance Cnt : 1
              Release : 6   r166256 ( LINUX_X86_64 )
            Host CPUs : 48

   Interval condition : 758 (snapshot id) (#=1 reported)
Report Snapshot Range : 2020-05-26 20:02:37 ~ 2020-05-26 20:07:38
  Report Instance Cnt : 1 (from Instance NO. 'ALL')
         Elapsed Time : 5.02 (mins)
              DB Time : 4.99 (mins)
       Avg. Session # : 1.00

I’m running a simple installation. No TSC (Tibero Standby Cluster, the Active Data Guard equivalent) and no TAC (the Oracle RAC equivalent).
This report is a run with one session only (DB time = Elapsed time) and Avg. Session # is 1.00.


================================================================================
 2.1 Workload Summary
================================================================================

                            Per Second           Per TX         Per Exec         Per Call
                       ---------------  ---------------  ---------------  ---------------
          DB Time(s):             1.00             7.68             0.00            42.80
           Redo Size:         3,096.93        23,901.92             0.52       133,167.86
       Logical Reads:       394,911.05     3,047,903.26            66.40    16,981,175.29
       Block Changes:            26.60           205.33             0.00         1,144.00
      Physical Reads:             0.00             0.00             0.00             0.00
     Physical Writes:             1.31            10.08             0.00            56.14
          User Calls:             0.02             0.18             0.00             1.00
              Parses:             0.02             0.18             0.00             1.00
         Hard Parses:             0.00             0.00             0.00             0.00
              Logons:             0.01             0.05             0.00             0.29
            Executes:         5,947.03        45,898.85             1.00       255,722.14
           Rollbacks:             0.00             0.00             0.00             0.00
        Transactions:             0.13             1.00             0.00             5.57

394,911 logical reads per second is comparable to what I can get from Oracle on this Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz, but let’s see what happens with some concurrency. I miss a “DB CPU(s)” line, which would quickly show that I am running mostly on CPU, but it can be calculated from other parts of the report (DB CPU time is 299,134 ms, which covers the 5 minutes of elapsed time). The “Per Call” column is interesting as it has more meaning than the “Per Exec” one, which counts the recursive executions.

4 sessions LIO

Another report from the run above with 4 concurrent sessions:


--------------------------------------------------------------------------------
               *** TPR (Tibero Performance Repository) Report ***
--------------------------------------------------------------------------------

              DB Name : t6a
                  TAC : NO
                  TSC : NO
         Instance Cnt : 1
              Release : 6   r166256 ( LINUX_X86_64 )
            Host CPUs : 48

   Interval condition : 756 (snapshot id) (#=1 reported)
Report Snapshot Range : 2020-05-26 19:54:48 ~ 2020-05-26 19:59:49
  Report Instance Cnt : 1 (from Instance NO. 'ALL')
         Elapsed Time : 5.02 (mins)
              DB Time : 19.85 (mins)
       Avg. Session # : 1.00

Ok, there’s a problem in the calculation of “Avg. Session #” which should be 19.85 / 5.02 = 3.95 for my 4 sessions.

About the Host:


================================================================================
 1.1 CPU Usage
================================================================================

                Total       B/G           Host CPU Usage(%)
               DB CPU    DB CPU  ----------------------------------
Instance#    Usage(%)  Usage(%)   Busy   User    Sys   Idle  IOwait
-----------  --------  --------  -----  -----  -----  -----  ------
          0       7.9       7.9    7.9    7.8    0.1   92.1     0.0

mostly idle, as I have 48 vCPUs there.


====================================================================
 1.2 Memory Usage
====================================================================
HOST Mem Size :    322,174M
                      Total SHM Size :      3,072M
                   Buffer Cache Size :      2,048M
               Avg. Shared Pool Size :     288.22M
                  Avg. DD Cache Size :       8.22M
                  Avg. PP Cache Size :      63.44M
                       DB Block Size :          8K
                     Log Buffer Size :         10M

My workload size (80MB, as defined in slob.conf) is much smaller than the buffer cache, and I then expect an average of 4 active sessions on CPU:


====================================================================
 2.1 Workload Summary
====================================================================
Per Second          Per TX        Per Call
                  --------------- --------------- ---------------
      DB Time(s):            3.96           38.41           79.38
       Redo Size:        3,066.07       29,770.52       61,525.73
   Logical Reads:      427,613.99    4,151,993.87    8,580,787.33
   Block Changes:           26.36          255.90          528.87
  Physical Reads:            0.00            0.00            0.00
 Physical Writes:            0.99            9.61           19.87
      User Calls:            0.05            0.48            1.00
          Parses:            0.03            0.29            0.60
     Hard Parses:            0.00            0.00            0.00
          Logons:            0.02            0.16            0.33
        Executes:       67,015.04      650,694.45    1,344,768.53
       Rollbacks:            0.00            0.00            0.00
    Transactions:            0.10            1.00            2.07

3.96 average sessions is not bad. But 427,613 LIOPS is only a bit higher than the single-session run, now with 4 concurrent sessions: 4x the CPU usage for only about 10% more throughput…

At this point I must say that I’m not doing a benchmark here. I’m using the same method as I do with Oracle, which I know quite well, but Tibero is totally new to me. I’ve probably not configured it correctly, and the test I’m doing may not be valid. I’m looking at the numbers here only to understand a bit more about how it works.

The Time Model is much more detailed than Oracle’s:


================================================================================
 2.2 Workload Stats
================================================================================

                                                                         DB         DB
            Category                                       Stat         Time(ms)   Time(%)
--------------------  -----------------------------------------  ---------------  --------
Request Service Time  -----------------------------------------        1,190,760    100.00
                                            SQL processing time        1,200,818    100.84
                           reply msg processing time for others                0      0.00
                                             commit by msg time                0      0.00
                              SQL processing (batchupdate) time                0      0.00
                                           rollback by msg time                0      0.00
                                                   msg lob time                0      0.00
                                                    msg xa time                0      0.00
                                                   msg dpl time                0      0.00
                                            msg dblink 2pc time                0      0.00
                                                  msg tsam time                0      0.00
                                             msg long read time                0      0.00
      SQL Processing  -----------------------------------------        1,200,818    100.84
                                       SQL execute elapsed time          369,799     31.06
                                          csr fetch select time          355,744     29.88
                                             parse time elapsed           83,992      7.05
                                       sql dd lock acquire time           25,045      2.10
                                                ppc search time            5,975      0.50
                                          csr fetch insert time              115      0.01
                                          csr fetch delete time               37      0.00
                                        hard parse elapsed time                2      0.00
                                        total times to begin tx                1      0.00
                                      failed parse elapsed time                1      0.00
                                      sql dml lock acquire time                0      0.00
                                              cursor close time                0      0.00
                        stat load query hard parse elapsed time                0      0.00
                        stat load query soft parse time elapsed                0      0.00
                              csr fetch direct path insert time                0      0.00
                                           csr fetch merge time                0      0.00
                                                 optimizer time                0      0.00
                                          csr fetch update time                0      0.00
                                 stat load query row fetch time                0      0.00
              Select  -----------------------------------------               17      0.00
                                           table full scan time               11      0.00
                                                 hash join time                6      0.00
                                                      sort time                0      0.00
                                        op_proxy execution time                0      0.00
              Insert  -----------------------------------------                1      0.00
                                              tdd mi total time                1      0.00
                                            tdi insert key time                0      0.00
                                            tdd insert row time                0      0.00
                                              tdi mi total time                0      0.00
                                            tdi fast build time                0      0.00
              Update  -----------------------------------------                0      0.00
                                             tdd update rp time                0      0.00
                                            tdd update row time                0      0.00
                                              tdd mu total time                0      0.00
                                    idx leaf update nonkey time                0      0.00
              Delete  -----------------------------------------               15      0.00
                                            tdd delete row time               15      0.00
                                             tdd delete rp time                0      0.00
                                              tdd md total time                0      0.00
                                              tdi md total time                0      0.00
                                            tdi delete key time                0      0.00

I am surprised to spend 7% of the time in parsing, as I expect the few queries to be parsed only once there, which is confirmed by the Workload Summary above.

Having a look at the ratios:


================================================================================
 3.1 Instance Efficiency
================================================================================

                              Value
                           --------
      Buffer Cache Hit %:    100.00
           Library Hit %:    100.00
                PP Hit %:     99.54
             Latch Hit %:     98.44
        Redo Alloc Hit %:    100.00
         Non-Parse CPU %:     92.95

confirms that 7% of the CPU time is about parsing.

We have many statistics. Here are the time-based ones:


================================================================================
 6.1 Workload Stats (Time-based)
================================================================================

                                     Stat         Time(ms)    Avg. Time(ms)              Num             Size
-----------------------------------------  ---------------  ---------------  ---------------  ---------------
                      SQL processing time        1,200,818        120,081.8               10                0
                Inner SQL processing time        1,200,817        120,081.7               10                0
                         req service time        1,190,760         79,384.0               15                0
                              DB CPU time        1,120,908              1.6          705,270                0
                 SQL execute elapsed time          369,799              0.0       20,174,294                0
                    csr fetch select time          355,744              0.0       20,174,264                0
                         tscan rowid time          187,592              0.0       20,174,102                0
               PSM execution elapsed time          129,820              0.0       60,514,594                0
                       parse time elapsed           83,992              0.0       20,174,308                0
                     tdi fetch start time           47,440              0.0       20,175,956        1,660,966
                 sql dd lock acquire time           25,045              0.0       20,171,527                0
                 isgmt get cr in lvl time           25,004              0.0       20,390,418       22,268,417
                        isgmt get cr time           21,500              0.0       22,479,405                0
                tscan rowid pick exb time           18,734              0.0      106,200,851                0
                    tscan rowid sort time           15,529              0.0       20,173,326                0
                         ppc search time            5,975              0.0       20,174,325                0
                           dd search time            4,612              0.0       40,344,560                0
...
                  hard parse elapsed time                2              0.1               13                0
...
                failed parse elapsed time                1              0.1                5                0
...

This parse time is not hard parse time.
The non-time-based statistics are conveniently ordered by count, which is more useful than alphabetical order:


================================================================================
 6.2 Workload Stats (Number-based)
================================================================================

                                     Stat              Num             Size         Time(ms)
-----------------------------------------  ---------------  ---------------  ---------------
                     candidate bh scanned      128,715,142      153,638,765                0
               consistent gets - no clone      128,700,485                0                0
  fast examines for consistent block gets      128,700,414                0                0
                    consistent block gets      128,698,143                0                0
                 block pin - not conflict      128,691,338                0              796
                              block unpin      128,691,332                0              333
                      rowid sort prefetch      106,200,861       18,945,339                0
                     tscan rowid pick exb      106,200,851                0           18,734
                          dd search count       40,344,560                0            4,612
Number of conflict DBA while scanning can       24,917,669                0                0

...
                    tdi fetch start total       20,175,956        1,660,966           47,440
           parse count (for all sessions)       20,174,308                0           83,992
                         csr fetch select       20,174,264                0          355,744
                              tscan rowid       20,174,102                0          187,592
                         tscan rowid sort       20,173,326                0           15,529
               memory tuner prof register       20,171,661       20,171,661              198
                            execute count       20,171,528                0                0
                      sql dd lock acquire       20,171,527                0           25,045
...
                      parse count (total)                9                0                0
...

and this clearly means that I had as many parses as executions. Then I realized that there is nothing about a parse-to-execute ratio in the “Instance Efficiency” section, which is the only ratio I look at in Oracle’s equivalent. Even if the terms are similar, something is different from Oracle.
The only documentation I’ve found is in Korean: https://technet.tmaxsoft.com/download.do?filePath=/nas/technet/technet/upload/kss/tdoc/tibero/2015/02/&fileName=FILE-20150227-000002_150227145000_1.pdf
According to this, “parse time elapsed” is the time spent in “parse count (for all sessions)”, and covers parsing, syntax and semantic analysis – what we call a soft parse in Oracle. “parse count (total)” is parsing, transformation and optimization – what we call a hard parse in Oracle. With this very limited knowledge, it looks like, even when called from PL/SQL, a soft parse occurred for each execution. Look also at the “dd” statistics: “sql dd lock acquire time” is 2.1% of DB time and looks like serialization on the Data Dictionary. And, even if it does not take a lot of time (“dd search time” is low), “dd search count” is incremented twice per execution: the soft parse of the small select has to read the Data Dictionary definitions.
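
To put a number on it, here is the Oracle-style “Execute to Parse %” computed by hand from the section 6.2 figures above (TPR does not print this line itself):

-- Execute to Parse % = 100 * (1 - parses / executions)
SELECT round(100 * (1 - 20174308 / 20171528), 2) AS execute_to_parse_pct FROM dual;
-- about -0.01, i.e. essentially one (soft) parse per execution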

Of course we have wait events (not “timed events”, as this section does not include CPU time):


================================================================================
 3.2 Top 5 Wait Events by Wait Time
================================================================================

                                                  Time          Wait          Avg           Waits       DB
                         Event            Waits  -outs(%)       Time(s)      Wait(ms)           /TX   Time(%)
------------------------------  ---------------  --------  ------------  ------------  ------------  --------
           spinlock total wait          787,684      0.00         45.47          0.06  2,540,916.13      3.82
          dbwr write time - OS                7      0.00          0.16         23.14         22.58      0.01
spinlock: cache buffers chains            9,837      0.00          0.12          0.01     31,732.26      0.01
               redo write time               36      0.00          0.03          0.78        116.13      0.00
     log flush wait for commit               31      0.00          0.03          0.81        100.00      0.00

Only “spinlock total wait” is significant here.
Here is the section about latches:


================================================================================
 7.7 Spinlock(Latch) Statistics
================================================================================

                                            Get    Get     Avg.Slps       Wait           Nowait   Nowait     DB
                          Name          Request   Miss(%)     /Miss       Time(s)       Request   Miss(%)   Time(%)
------------------------------  ---------------  --------  --------  ------------  ------------  --------  --------
         SPIN_SPIN_WAITER_LIST        2,190,429      4.07      3.60         51.69             0      0.00      4.34
                    SPIN_ALLOC      161,485,088     10.40      3.71         41.42             0      0.00      3.48
                SPIN_LC_BUCKET       60,825,921      0.53     25.55          3.31             0      0.00      0.28
               SPIN_ROOT_ALLOC      126,036,981      0.00     79.52          0.16   126,040,272      0.00      0.01
               SPIN_BUF_BUCKET      257,429,840      0.00     15.82          0.12            34      0.00      0.01
               SPIN_RECR_UNPIN      121,038,780      0.00     57.14          0.08        36,039      0.01      0.01
                 SPIN_SQLSTATS       40,383,027      0.36      0.61          0.07             0      0.00      0.01

These look like very generic latches, protected by spinlocks.

While I was there I ran a Brendan Gregg Flamegraph during the run:


perf script -i ./perf.data | FlameGraph/stackcollapse-perf.pl | FlameGraph/flamegraph.pl --width=1200 --hash --cp > /tmp/tibero-slob.svg
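The ./perf.data consumed by this command would have been captured beforehand with a standard on-CPU profile, something along these lines (the sampling frequency and duration are only illustrative, not the exact values used for the run):

perf record -F 99 -a -g -- sleep 60   # sample all CPUs at 99 Hz with call graphs for 60 seconds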

[Image: annotated flame graph of the Tibero SLOB run]

If you have ever traced Oracle PL/SQL call stacks, you can see that Tibero is a completely different implementation. But the features are very similar. TPR is really like AWR, and there is also an ASH equivalent: you may have seen in the previous post that I set ACTIVE_SESSION_HISTORY=Y in the .tip file (Tibero instance parameters), and you can query v$active_session_history.

SQL_TRACE

Given the similarity with Oracle, let’s do a good old ‘tkprof’.
I run the SLOB call with SQL_TRACE enabled:


[SID=t6a u@h:w]$ tbsql user1/user1

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

SQL> alter session set sql_trace=y;

Session altered.

SQL> @slob 1 0 0 300 10240 64 LITE 0 FALSE 8 16 3 0 .1 .5 0

PSM completed.

SQL> alter session set sql_trace=y;

Session altered.

The raw trace has been generated, which contains things like this:


=========================================================
PARSING IN CSR=#8 len=87, uid=172, tim=1590498896169612
SELECT COUNT(c2)
                        FROM cf1
                        WHERE ( custid > ( :B1 - :B2 ) ) AND  (custid < :B1)
END OF STMT
[PARSE] CSR=#8 c=0, et=29, cr=0, cu=0, p=0, r=0, tim=1590498896169655
[EXEC] CSR=#8 c=0, et=48, cr=0, cu=0, p=0, r=0, tim=1590498896169720
[FETCH] CSR=#8 c=0, et=307, cr=66, cu=0, p=0, r=1, tim=1590498896170044
[PSM] OBJ_ID=-1, ACCESS_NO=0 c=0, et=14, cr=0, cu=0, p=0, r=0, tim=1590498896170090
[STAT] CSR=#8 ppid=4826 cnt_in_L=1 cnt=1 dep=0 'column projection (et=2, cr=0, cu=0, co=63, cpu=0, ro=1)'
[STAT] CSR=#8 ppid=4826 cnt_in_L=64 cnt=1 dep=1 'sort aggr (et=6, cr=0, cu=0, co=63, cpu=0, ro=1)'
[STAT] CSR=#8 ppid=4826 cnt_in_L=64 cnt=64 dep=2 'table access (rowid) CF1(3230) (et=258, cr=64, cu=0, co=63, cpu=0, ro=60)'
[STAT] CSR=#8 ppid=4826 cnt_in_L=0 cnt=64 dep=3 'index (range scan) I_CF1(3235) (et=20, cr=2, cu=0, co=2, cpu=0, ro=60)'
CSR_CLOSE #8
[PSM] OBJ_ID=-1, ACCESS_NO=0 c=0, et=62, cr=0, cu=0, p=0, r=0, tim=1590498896170170
=========================================================

And all this can be aggregated by TBPROF:

tbprof tb_sqltrc_17292_63_2107964.trc tb_sqltrc_17292_63_2107964.txt sys=yes sort="prscnt"

Which gets something very similar to Oracle tkprof:


SELECT COUNT(c2)
                        FROM cf1
                        WHERE ( custid > ( :B1 - :B2 ) ) AND  (custid < :B1)

 stage     count       cpu   elapsed   current     query      disk      rows
-----------------------------------------------------------------------------
 parse         1      0.00      0.00         0         0         0         0
  exec    829366     28.41     27.68         0         0         0         0
 fetch    829366    137.36    135.18         0  55057089         0    829366
-----------------------------------------------------------------------------
   sum   1658733    165.77    162.86         0  55057089         0    829366

    rows  u_rows        execution plan
  ----------------------------------------------------------
   829366       -       column projection (et=859685, cr=0, cu=0, co=52250058, cpu=0, ro=829366)
   829366       -        sort aggr (et=3450526, cr=0, cu=0, co=52250058, cpu=0, ro=829366)
   53079424       -       table access (rowid) CF1(3230) (et=106972426, cr=53079424, cu=0, co=52250058, cpu=0, ro=49761960)
   53079424       -        index (range scan) I_CF1(3235) (et=9256762, cr=1977665, cu=0, co=1658732, cpu=0, ro=49761960)

But here I’m puzzled. From the TPR statistics I got the impression that each SQL statement in the PSM (Persistent Stored Module, the PL/SQL-compatible procedural language) was soft parsed, but I was wrong: I see only one parse call here. Is something missing there? I don’t think so. The total time in this profile is 162.86 seconds during a 300-second run without any think time; writing the trace file is not included there. If I compare the logical reads per second during the SQL time, 55057089/162.86=338064, I am near the value I got without sql_trace.

I leave this with unanswered questions about the parse statistics. The most important point, which was the goal of this post, is that there are lots of troubleshooting tools, similar to Oracle’s, and I was able to run something that was really specific to Oracle, with sqlplus, SQL, and PL/SQL, without the need for many changes.

The article 티베로 – The AWR-like “Tibero Performance Repository” first appeared on the Blog dbi services.

Oracle Database 19c RAC On OL8 Using Vagrant

On Sunday 17th May I started the process of putting together a Vagrant build of Oracle 19c RAC on Oracle Linux 8 (OL8.2 + UEK). I figured it would take me about 20 minutes to amend my existing OL7 build, but it took the whole of that Sunday, every evening for the following week, and the whole of the following Saturday and Sunday to complete it. There were some late nights, so in terms of hours it was well over 5 days of work. Most of that time would have been completely unnecessary if I wasn’t an idiot.

First things first. The result of that effort was this build on GitHub, with an associated article on my website describing the build in more detail.

For the remainder of this post I want to describe the comedy of errors that went into this creation. These problems were not indicative of issues with the software. These problems were totally down to me being an idiot.

Changes when using OL8

The vast majority of the build remains the same, but there was one change that was necessary when moving from OL7 to OL8. This became evident pretty quickly. When configuring shared disks and UDEV, I switched from using partprobe to partx. It did look like partprobe was working, but it chucked out loads of errors, and partx didn’t. I found out about this from Uncle Google.
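In practice the switch boils down to something like this (a sketch only; /dev/sdb stands in for one of the shared disks):

# OL7 build: ask the kernel to re-read the partition table (worked, but threw lots of errors on OL8)
partprobe /dev/sdb
# OL8 build: update the kernel's view of the partitions without the errors
partx -u /dev/sdb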

There was also a slight difference in getting the UUID of a regular VirtualBox disk in OL8, but I had already noticed that on single instance builds, so that wasn’t a problem.

Where it all started to go wrong

So with those minor changes in place, all the prerequisites built fine and I was ready to start the Grid Infrastructure (GI) installation. I try to do these builds using the stock releases, so people without Oracle Support contracts can try them. In this case that meant using the 19.3 software. This really marked the point where it all started to go wrong. I knew this build was only certified on 19.7, but I continued anyway…

None of the 19.3 installers recognised OL8, so I had to fake the Linux distribution using the following environment variable.

export CV_ASSUME_DISTID=OEL7.6

The 19.3 GI software refused to install, saying there was a problem with passwordless SSH connectivity. I tested it and it all looked good to me. I searched for solutions to this on Google and MOS, checking out everything I could find that seemed relevant. None of the solutions helped.

For the hell of it I tried older releases of OL8. I had the same issue with OL8.1, but OL8.0 worked fine. I figured this was something to do with OpenSSL or the SSH config, so I searched for more MOS notes and tried everything I could find. After a very long time I reached out to Simon Coter, who came back with an unpublished note (Doc ID 2555697.1), which included a workaround. That solved my problem for the 19.3 GI installation on OL8.2.

The GI configuration step and the DB software-only installation went fine, as they had done on OL8.0 also. Unfortunately the DBCA was failing to create a database, producing errors about passwordless SSH problems. The previous fix wasn’t helping, and I tried every variation I could find, short of totally downgrading OpenSSL.

At this point I pinged Markus Michalewicz, hoping he would be my salvation. In short, he confirmed the build did work fine on OL8.2 if I used the 19.7 software, so the writing was on the wall. Then I stumbled on a MOS note (Doc ID 29529394.8) that explained the DBCA problem on OL8. There was no workaround, and it said the issue was fixed in 19.7. There is always a workaround, even if it means hacking the Linux distribution to death, but the more you do that the less realistic your build is, so at that point I conceded it was not sensible to continue with 19.3.

When I write it down like this it doesn’t seem like a lot, but all this took a long time. I was trying different Vagrant boxes, and eventually built some of my own for OL8.0 and OL8.2, to make sure I knew exactly what was on them. Added to that, many of the tests required full rebuilds, so I was sometimes waiting in excess of an hour to see a success/fail message on the next test. It was soul destroying, and I nearly gave up a few times during the week.

A New Hope

Once I decided to go with the 19.7 patch things moved pretty quickly. I had the 19.3 GI installed and configured and the 19.3 DB software installed, so I added in a script to patch the lot to 19.7, and the database creation worked fine. Job done.

I cleaned things up a bit, pushed it to GitHub and put together the longer description of the build in the form of an article (see links above).

Pretty soon after I put all this live I got a comment from Abdellatif AG asking why I didn’t just use the “-applyRU” parameter in the Grid and DB software installations to apply the patches as part of the installation. At this point I felt a mix of emotions. I was kind of frustrated with myself for wasting so much time trying to get 19.3 working in the first place, then annoyed at myself for being so blinkered by the existing build I hadn’t seen the obvious solution regarding the patching. Why build it all then patch it, when you can do it right first time?

The conversion of the build to use the “-applyRU” parameter with the runInstaller and gridSetup.sh calls was really quick, but the testing took a long time, because these builds take in excess of 90 minutes each try. Things pretty much worked first time.
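For anyone curious, the change amounts to pointing the installers at the unzipped Release Update, roughly like this (the patch directory and response file names are placeholders, not the exact paths from the build):

./gridSetup.sh -silent -applyRU /u01/software/ru_19.7_grid -responseFile /tmp/grid_config.rsp
./runInstaller -silent -applyRU /u01/software/ru_19.7_db -responseFile /tmp/db_install.rsp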

Now that I was effectively using 19.7 software out of the gate I figured many of the tweaks I had put in place when using the 19.3 software were no longer needed. I started to remove these tweaks and everything was good. By the end of that process I was pretty much left with the OL7-type build I started with. The vast majority of my time over the last week has been unnecessary…

Conclusion

The 19c (19.7) RAC build on OL8.2+UEK6 is pretty straightforward and works without any drama.

This process has shown me how stubborn and blinkered I can be at times. Taking a step back and getting a fresh perspective would have saved me a lot of time in the long run.

Thanks to Simon Coter, Markus Michalewicz and Abdellatif AG who all witnessed my descent into madness.

Cheers

Tim…

PS. There are some extra notes at the end of the article, which include some of the MOS notes I tried along the way. They are unnecessary for the build, but I felt like I should record them.

PPS. The image is how I feel at the end of this process.


Oracle Database 19c RAC On OL8 Using Vagrant was first posted on May 26, 2020 at 8:14 am.

Working Remotely Isn’t Just About the Work

With COVID-19, the business world has come to a fork in the road: Down one route, shuttered offices. Down the other, companies embracing remote work, showing us how businesses can survive — and thrive! — with a fully remote workforce. And that includes fostering strong team bonds and employee relationships; just because there’s no physical break room or water cooler doesn’t mean companies can’t create opportunities for colleagues to connect.

Not all businesses can operate with remote employees, and the world’s frontline workers don’t have the luxury of dialing in from home. But for those companies that can function without a central office, there are myriad benefits to a distributed workforce. Automattic has always been fully distributed, and we’ve learned a lot about how to build a productive and happy remote workforce over the past 15 years. (CEO Matt Mullenweg is sharing many of these lessons and chatting with other folks running distributed companies on Distributed.blog and on the Distributed podcast.) Lots of companies find themselves suddenly switching to a remote work environment, and there’s a learning curve.

One of the things that can be seen as challenging is social interaction on the job, which plays a vital role in productivity and mental health. When employees build friendships and strong interpersonal relationships, they enjoy their work more and do a better job overall. And if you’re reading this, thinking you don’t know anyone whose work improves when they have strong social bonds among colleagues, you do now — it’s me! As an extrovert, I get energized when I’m able to see people and faces, especially in person, but also online. I’m a conversational learner, too, so ideas stick better in my brain when I have the chance to discuss and brainstorm topics in real time with teammates.

[Image: social communication during a team meeting]

These days, it seems teleconferencing software is as commonplace as coffee shops in Seattle. “Zoom” has become part of our vocabulary, helping us to remain close with family and friends (albeit not physically), and it’s become fodder for marketing and advertising campaigns.

It’s also our preferred conference tool at Automattic, and we use it for a wide range of work gatherings, like town halls and team meetings. But that’s not all we use it for. Knowing that social communication is vital to ensuring a strong culture of camaraderie, we use Zoom to hang out together: We host open mic nights, break out our pencil crayons and color collectively, or do some chair yoga with one another. We’ll also grab a beverage of choice — coffee, beer, Soylent, tea, a crisp rosé — and join a happy hour.

[Image: a remote-work Zoom meeting]

In addition to maximizing online communication tools, we strengthen our collegial relationships offline. For example, we take advantage of the fact that we are distributed all over the world and send postcards to one another.

Whatever we do, we do it because we know that social connection is important. That we’re not going to work in the same physical office just means that we have to be more intentional about making the time, and carving out the (virtual) space, to connect with coworkers. To help inspire other businesses with newly distributed workforces, we’ve put together a resource that lists the many ways we communicate socially at Automattic. You can find it on this page.

Granularity impact on visualization

The longer the timeline we show in a graph, the more points we have to show or the larger the granularity we have to use.

For example, if I show 1 hour of data, then I can show 60 points at 1 minute granularity.

If I showed a day at 1 minute each then that would be 1440 points.

Say we wanted to limit the number of points to less than 300 so our UI stays responsive no matter the time scale.

Then for 24 hours I can show 5 minute granularity, which is 288 points.

If I show 25 hours, that’s 300 points at 5 minute granularity, so we are no longer under 300 points.

At this time I’d have to jump to the next level of granularity. In my case the next step up is 1 hour.
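That selection boils down to picking the smallest granularity that keeps the point count under the limit. A minimal sketch in shell, assuming the available granularity steps are 1 minute, 5 minutes and 1 hour:

window_minutes=$(( 25 * 60 ))             # a 25 hour window
for gran in 1 5 60; do                    # candidate granularities in minutes
  points=$(( window_minutes / gran ))
  if [ "$points" -lt 300 ]; then          # strictly under the 300 point limit
    echo "use ${gran} minute granularity: ${points} points"
    break
  fi
done
# prints: use 60 minute granularity: 25 points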

Using one hour granularity will look different than 5 minutes.

The following is roughly a day at 5 minute granularity first and 1 hour granularity second.
[Image: roughly a day at 5 minute granularity]

[Image: the same day at 1 hour granularity]

The most common effect of larger granularity is losing the spikes, i.e. the 5 minute granularity could have highs and lows that get averaged out at the larger granularity.
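As a toy illustration with made-up numbers: if one 5 minute sample in an hour spikes to 100 while the other eleven sit at 10, the hourly average barely registers it.

echo "scale=1; (100 + 11*10) / 12" | bc   # hourly average = 17.5, so the spike of 100 all but disappears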

What is noticeable about the 1 hour granularity above is that it is still spiky; the highs and lows are roughly the same. The spikiness comes from the fact that the above graphs are using points. Each point represents the average value for the width of the granularity, but since we are using points, the actual width of the granularity is not visible. It would be more accurate to use bars for each “point”, where the bar is the width of the granularity.

 

Here are a couple of examples from the tool DB Optimizer where the width of the bar represents the granularity:

[Screenshot: DB Optimizer, bar width representing the granularity]

[Screenshot: DB Optimizer, bar width representing the granularity (second example)]

Add a Post Carousel and Embed a Podcast Player in Seconds with Our Two Latest Blocks

The WordPress editor keeps expanding its library of blocks, adding new and exciting functionality to websites on a regular basis. After a crop of new business-related blocks last month, our most recent additions will appeal to three communities we hold close to our heart: podcasters, podcast lovers, and bloggers.

Use the Podcast Player block to spread the word about your favorite episodes

Podcasts have been an unstoppable cultural force for several years now — and the format seems to have only grown in popularity in recent months, as so many of us are at home and looking for entertainment and (occasionally?) enlightenment.

Are you a podcaster? Add the Podcast Player block to share your passion project with your visitors, embedding your podcast’s latest episodes on any post or page on your website. If you haven’t launched a podcast (for now, anyway), but are still an avid listener, you can now embed episodes from others’ shows within your own content so readers can listen to them without ever leaving your site.

Once you choose the Podcast Player block from the available blocks in the editor, you won’t need any special codes or embed links. The only information you’ll have to provide is the podcast’s RSS feed URL (not sure where to find it? It will be readily available on your preferred podcast platform). Here’s what the podcast player will look like on your site; the example below features the latest episode of the Distributed podcast, hosted by Automattic CEO Matt Mullenweg:

class="jetpack-podcast-player "
style=""
>

class="jetpack-podcast-player__current-track-title "
style="" >
Episode 21: Morra Aarons-Mele on Introversion and Anxiety in Remote Work




Distributed, with Matt Mullenweg


You can customize the Podcast Player block in a number of ways, from specifying how many of the most recent episodes to display, to deciding whether or not you’d like to show each episode’s description. Whether you feature your own podcast or someone else’s, you get to design the listener’s experience on your site. Learn more about adding and customizing the Podcast Player block.

Keep readers reading with the Post Carousel block

Sooner or later, both veteran bloggers and newer creators encounter a similar issue: publishing frequently helps energize your audience (and gives you an SEO boost, too), but also makes older posts tougher to find.

The Post Carousel block comes to the rescue, providing an easy gateway to the treasures in your blog’s archives.

Add the Post Carousel block to any post or page on your site to encourage visitors to explore more posts (by default, the block will display your most recent ones). For a post to show up in the carousel, it needs to have a featured image; the idea is to engage and entice readers with strong visuals they’ll want to click on. To get a taste of the carousel in action, here’s one highlighting the five most recent posts on the WordPress.com Blog:

For a more curated experience, you can customize what the post carousel shows. A post about your cauliflower taco recipe might call for a carousel showing other recipes that share the “vegan” tag or category. If you run a multi-author online magazine, consider presenting other stories by the same contributor. You can also configure how many items each carousel displays and whether it automatically slides from one featured post to another (among other options). Learn more about adding and customizing the Post Carousel block.


We hope you give these two new blocks a try — leave a comment if you have any questions, or if you’d like to share a link to a post where you’ve already used them.

Hacking Session: 7 Levels of Hint Invalidity in Oracle SQL

As I promised at the end of the last hacking session (about Oracle SQL Monitoring), I will run another one on 2 June 2020. This one will be a deep dive into Oracle hint usage and various scenarios of hint (in)validity. I have too much material in my full-week Advanced Oracle SQL Tuning training, so I’m moving some “narrow deep dives” out and making them available for free, so that we can spend more time actually tuning SQL during the class.