Oakies Blog Aggregator

Dynamic Sampling

Following on from an OTN posting about dynamic sampling difficulties I had planned to write a blog post about the difference between “not sampling when hinted” and “ignoring the sample” – but Mohamed Houri got there before me.

It’s just worth highlighting a little detail that is often overlooked, though: there are two versions of the dynamic_sampling() hint, the cursor level and the table level, and the number of blocks sampled at a particular level depends on which version you are using.  Level 4 at the cursor level, for example, will sample 64 blocks if and only if a certain condition is met, but at the table level it will sample 256 blocks unconditionally.

So try to be a little more specific when you say “I told the optimizer to use dynamic sampling …”; it’s either:

“I told the optimizer to use cursor level dynamic sampling at level X …”

or

“I told the optimizer to use table level dynamic sampling at level Y for table A and …”
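
In hint terms the difference looks like this (t1 as a stand-in table name):

select /*+ dynamic_sampling(4) */    count(*) from t1;   -- cursor level
select /*+ dynamic_sampling(t1 4) */ count(*) from t1;   -- table level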

Note – apart from the changes to dynamic sampling that allow for a level 11, there’s also a change introduced (I think) in 10g for the sample() clause applied to the table during sampling – it’s the addition of a seed() clause which ensures that when you repeat the same level you generate the same set of random rows.
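
As a quick illustration of that seed() clause (my own example, not tied to the test below): repeating the same percentage and seed returns the same set of rows.

select count(*) from t1 sample block (5) seed (42);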

Addendum

Here’s a little code I wrote some time ago to check the effect of the two options at different levels. I started by creating a (nologging) table from the first 50,000 rows of all_objects, then doubled it up a few times to 400,000 rows in total, and ensured that there were no stats on the table (a sketch of that setup appears after the block below). Then I executed in turn each variant of the following anonymous PL/SQL block (note that I have the execute privilege on the dbms_system package):


declare
	m_ct number;
begin
	execute immediate 'alter session set events ''10053 trace name context forever''';
	for i in 1..10 loop
		sys.dbms_system.ksdwrt(1,'=============');
		sys.dbms_system.ksdwrt(1,'Level ' || i);
		sys.dbms_system.ksdwrt(1,'=============');

		execute immediate 
			'select /*+ dynamic_sampling('    || i || ') */ count(*) from t1 ' ||
--			'select /*+ dynamic_sampling(t1 ' || i || ') */ count(*) from t1 ' ||
			'where owner = ''SYS'' and object_type = ''SYNONYM'''
			into m_ct;
	end loop;
end;
/
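
The table setup itself isn’t shown above; a minimal sketch of it (the doubling mechanics are my assumption, the names and row counts are as described) would be:

create table t1 nologging
as
select * from all_objects where rownum <= 50000;

-- double up three times: 50,000 -> 100,000 -> 200,000 -> 400,000 rows,
-- and don't gather stats at any point
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;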

Obviously I could examine the resulting trace file to pick out bits of each optimisation, but for a quick check a simple grep for “sample block cnt” is almost all I need to do.
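
Something along these lines is all it takes (the trace file name is made up; yours will depend on the instance name and process id):

grep "sample block cnt" test_ora_12345.trc

That gave the following (slightly decorated) results from 11.2.0.4: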


Table level
===========
Level 1
    max. sample block cnt. : 32
    sample block cnt. : 31
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 512
    sample block cnt. : 511
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 2048
    sample block cnt. : 2047
    max. sample block cnt. : 4096
    sample block cnt. : 4095
    max. sample block cnt. : 8192
    sample block cnt. : 8191
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565

Cursor level
============
No sampling at level 1
Level 2
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 4096
    sample block cnt. : 4095
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565


You’ll notice that the cursor level example didn’t do any sampling at level 1. Although the manual doesn’t quite make it clear, sampling will only occur if three conditions are met:

  • The table has no statistics
  • The table has no indexes
  • The table is involved in a join so that a sample could affect the join order and method

If only the first two conditions are met then the execution path will be a full tablescan whatever the sample looks like, and the number of rows returned has no further impact as far as the optimizer is concerned – hence the third requirement (which doesn’t get mentioned explicitly in the manuals). If you do have a query that meets all three requirements then the sample size is 32 (31) blocks.
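
For reference, a minimal sketch of a query that meets all three conditions might look like this (t2 is hypothetical; both tables created without indexes and with no stats gathered):

select /*+ dynamic_sampling(1) */ count(*)
from    t1, t2
where   t1.object_id = t2.object_id
and     t1.owner = 'SYS';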

 

CBO Series

About a year ago I came across a couple of useful articles from Stefan Koehler, which is when I added his name to my blog roll. As an introduction for other readers I’ve compiled an index for a series of articles he wrote about the CBO viewed, largely, from the perspective of using Oracle to run SAP. Today I realised I hadn’t got around to publishing it, and there have been a couple of additions since I first started to compile the list.

 

Testing 12c CDB Resource Plans and a little bit about OEM Express

Inspired by Sue Lee’s presentation at Enkitec’s E4 conference I decided to re-run my test suite to work out how you can use the Database and I/O Resource Manager in Oracle 12.1.0.2.2 to help consolidate databases in the new Multi-Tenant architecture. I should point out briefly that the use of Multi-Tenant as demonstrated in this article requires you to have a license. The tuning tools shown will also require you to be appropriately licensed…

Setup

I have created a Container Database (CDB), named MBACHMT (MBACH – Multi-Tenant), in which I want to run Swingbench to experiment with the I/O Resource Manager. The same Pluggable Databases (PDBs) are also the subject of the new CDB (Database) Resource Manager testing. In order to simplify the task I’ll just create a single PDB for now, install Swingbench’s Order Entry schema, and clone the PDB twice. The CDB is created using dbca on an Exadata X2-2 quarter rack. Once the CDB was in place I could create the first PDB. I don’t think the steps need a lot of explanation, so here they go without much commenting. First the PDB must be created, based on the SEED database; then I create a tablespace and a user account to host the actual data.

SYS:MBACHMT1> create pluggable database swingbench0 admin user admin identified by secret roles=(DBA);

Pluggable database created.

SYS:MBACHMT1> alter pluggable database swingbench0 open instance=all;

Pluggable database altered.

SYS:MBACHMT1> alter session set container = swingbench0;

Session altered.

SYS:MBACHMT1> create tablespace soe datafile size 5g;

Tablespace created.

SYS:MBACHMT1> create user soe identified by soe default tablespace soe;

User created.

SYS:MBACHMT1> grant connect to soe;

Grant succeeded.

SYS:MBACHMT1> grant execute on dbms_lock to soe;

Grant succeeded.

One thing requires an explanation, and that’s the “alter session set container = swingbench0” command. As you can see, the SQL prompt is made up of username – colon – container name. The container name does not change when using the SYS account to switch context from CDB$ROOT to a PDB, hence you continue to see the MBACHMT1 prefix when in fact I am executing commands at the PDB level.

The next step is to create a basic Order Entry schema. I wrote about this before, and so has Dominic Giles on his blog. For your reference, here is the command I used:

oewizard -scale 1 -dbap supersecret -u soe -p soe -cl -cs //enkscan2/swingbench0 -ts SOE -create

SwingBench Wizard
Author  :        Dominic Giles
Version :        2.5.0.949

Running in Lights Out Mode using config file : oewizard.xml

============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.005
Data Generation Time                   0:00:21.380
DDL Creation Time                      0:00:52.221
Total Run Time                         0:01:13.609
Rows Inserted per sec                      566,237
Data Generated (MB) per sec                   46.1
Actual Rows Generated                   13,009,500


Post Creation Validation Report
===============================
The schema appears to have been created successfully.

Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA','INVENTORIES','PRODUCT_INFORMATION',
'PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX','PROD_CATEGORY_IX','INVENTORY_PK',
'INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK','ORD_SALES_REP_IX','ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX',
'ORD_WAREHOUSE_IX','ORDER_ITEMS_PK','ITEM_ORDER_IX','ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX',
'CUSTOMERS_PK','CUST_EMAIL_IX','CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX','ADDRESS_PK','ADDRESS_CUST_IX',
'CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created

With the schema in place I need a PDB Resource Plan, or in other words I need to enable a resource manager plan at PDB level. The PDB Resource Plan is almost identical to non-CDB DBRM plans, with a few restrictions mentioned in the Admin Guide chapter 44, just below figure 44-4. The one that affects me is the lack of multi-level resource plans. For this reason I’m going to use a simple plan based on the RATIO mgmt_mth of dbms_resource_manager.create_plan. Not having multi-level resource plans at your disposal might actually be a good thing: it prevents the creation of incredibly complex plans that are beautifully architected but, for me at least, equally difficult to understand.

To keep it simple, my PDBs have just one purpose: executing Swingbench. As such there won’t be an additional application user; all I care about is the SOE account. I want it to be eligible for the lion’s share of CPU, so here’s the plan. You must make sure that your execution context is the new PDB (swingbench0). You can check by selecting sys_context(’userenv’,’con_name’) from dual.

begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;

 -- create a new resource consumer group to which we will later on add plan directives
 -- a consumer group is a logical construct grouping sessions to a similar/identical workload
 dbms_resource_manager.create_consumer_group('SWINGBENCH_GROUP', 'for swingbench processing');

 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
end;
/

begin
 dbms_resource_manager.create_pending_area();

 -- when logging in as oracle user "SOE", map this session to the SWINGBENCH_GROUP
 dbms_resource_manager.set_consumer_group_mapping(
		dbms_resource_manager.oracle_user, 'SOE', 'SWINGBENCH_GROUP');
 dbms_resource_manager.submit_pending_area();
end;
/

begin
 -- must allow the SOE user to switch from OTHER_GROUPS to SWINGBENCH_GROUP. Forgetting this step
 -- is a common reason for DBRM not to work as expected
 dbms_resource_manager_privs.grant_switch_consumer_group('SOE','SWINGBENCH_GROUP', true);
end;
/

BEGIN
 dbms_resource_manager.clear_pending_area();
 dbms_resource_manager.create_pending_area();
 
 -- now create the plan in the first step. Note the mgmt_mth, which essentially requires you
 -- to think of CPU shares, not percentages. It also enforces the requirement not to use
 -- multi-level plans.
 -- thanks to @fritshoogland for making this obvious to me
 dbms_resource_manager.create_plan(
 	plan => 'ENKITEC_SWINGBENCH_PDB_PLAN',
        mgmt_mth => 'RATIO',
 	comment => 'sample DBRM plan for swingbench'
 );

 -- now define what the plan is about. Give the SYS_GROUP 3 shares
 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_SWINGBENCH_PDB_PLAN',
  comment => 'sys_group is level 1',
  group_or_subplan => 'SYS_GROUP',
  mgmt_p1 => 3);

 -- the SWINGBENCH user gets 7 shares
 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_SWINGBENCH_PDB_PLAN',
  group_or_subplan => 'SWINGBENCH_GROUP',
  comment => 'us before anyone else',
  mgmt_p1 => 7
 );

 -- finally anyone not in a previous consumer group will be mapped to the
 -- OTHER_GROUPS and get 1 share. 
 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_SWINGBENCH_PDB_PLAN',
  group_or_subplan => 'OTHER_GROUPS',
  comment => 'all the rest',
  mgmt_p1 => 1
 );
 
 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
end;
/

If you didn’t get any errors you can enable the plan in the PDB using the familiar “alter system set resource_manager_plan = ENKITEC_SWINGBENCH_PDB_PLAN;” command.
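
For example (switching back into the PDB first, as above):

SYS:MBACHMT1> alter session set container = swingbench0;
SYS:MBACHMT1> alter system set resource_manager_plan = 'ENKITEC_SWINGBENCH_PDB_PLAN';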

The PDB is now ready for cloning, which requires it to be open read only. Once swingbench0 is in the correct open mode, clone it using “create pluggable database swingbench1 from swingbench0;” and “create pluggable database swingbench2 from swingbench0;”. Nice: no “RMAN> duplicate database to swingbench1 from active database” and all this … just a one-liner. Once the cloning is done, open the PDBs.
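
Put together, the sequence might look like this sketch (the instances=all clauses are my assumption for a RAC system such as this one):

SYS:MBACHMT1> alter pluggable database swingbench0 close immediate instances=all;
SYS:MBACHMT1> alter pluggable database swingbench0 open read only instances=all;
SYS:MBACHMT1> create pluggable database swingbench1 from swingbench0;
SYS:MBACHMT1> create pluggable database swingbench2 from swingbench0;
SYS:MBACHMT1> alter pluggable database swingbench1 open instances=all;
SYS:MBACHMT1> alter pluggable database swingbench2 open instances=all;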

The CDB plan

With the PDBs all registered I am now able to define a CDB resource plan in CDB$ROOT. Again, check using “select sys_context(‘userenv’,’con_name’) from dual” that you are in the root, not a PDB. Here is the plan:

begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;

 dbms_resource_manager.create_cdb_plan(
  plan => 'ENKITC_CDB_PLAN',
  comment => 'A CDB plan for 12c'
 );

 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench0',
  shares => 5,
  utilization_limit => 100);

 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench1',
  shares => 3,
  utilization_limit => 50);

 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench2',
  shares => 1,
  utilization_limit => 30);

 dbms_resource_manager.validate_pending_area;
 dbms_resource_manager.submit_pending_area;
end;
/

SQL> alter system set RESOURCE_MANAGER_PLAN = 'FORCE:ENKITC_CDB_PLAN' scope=both sid='*';

System altered.

With this CDB plan I defined CPU shares and utilisation limits. There are other items worth investigating; refer to the documentation for those. Please take note that, except for swingbench0, I capped the maximum utilisation at 50 and 30 percent respectively. This will be interesting later.

Testing

No workload is 100% CPU-bound, and I tried a few iterations before coming up with a suitable model to view the CDB Resource Plan in action. In the end all I needed was CPU-burning nonsense, and I found one way of burning CPU: calculating millions of square roots. I have written a small “launcher” script that can execute a SQL script against a (Pluggable) database x-many times. When exceeding the CPU capacity on the system I should be able to see the effect. So I launched 20 sessions of my CPU-burning for-loops against each of the PDBs in instance two and connected against CDB$ROOT to see the effect:

SYS:MBACHMT2> select count(*), con_id, inst_id, event from gv$session where username = 'SOE' group by con_id, inst_id, event;

   COUNT(*)      CON_ID     INST_ID EVENT
----------- ----------- ----------- ----------------------------------------------------------------
         20           3           2 resmgr:cpu quantum
         20           4           2 resmgr:cpu quantum
         20           5           2 resmgr:cpu quantum

And yes, that looks like it ;)
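
For reference, the CPU-burning SQL script itself doesn’t need to be sophisticated; a sketch of the idea (my reconstruction, not the original script) is simply:

declare
  n number := 0;
begin
  -- burn CPU by calculating millions of square roots
  for i in 1 .. 100000000 loop
    n := n + sqrt(i);
  end loop;
end;
/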

The timings for the executions were:

  • 232 seconds for swingbench0
  • 318 seconds for swingbench1
  • 509 seconds for swingbench2

A little bit of OEM Express

OEM Express was running at the time and it allows you to see the utilisation of your PDBs:

CDB Resource Plan with all 3 PDBs active

You can see the 3 PDBs working along. On the CPU bars to the right you can see the number of running sessions (green) and those waiting (beige). You can also see the entitlement as per shares (black vertical bar in “CPU Resource Limits”) and the utilisation limit (red vertical bar). The output almost perfectly matched the configuration.

 

A new challenge…

After the last 16 years in the contracting and consulting world, for dozens of clients in Australia and other countries around the world, it’s time for a change of direction.  Whilst doing work with Oracle systems has always been (and continues to be) rewarding, I realised that a lot of the motivation for being at client sites was the sharing of information with developers and DBAs at those clients so they could get the most out of their Oracle investment.  Similarly, it’s really cool to speak at conferences to let people explore the Oracle technology in ways they had perhaps not thought of, and likewise to blog about features and idiosyncrasies that we stumble upon.  Whatever the medium (and even whatever the technology, whether it be Oracle or otherwise), the concept of an “IT community”, i.e. people being keen to share, debate, and discuss, has always been one of the things I like about my profession.

So with that in mind…I’ve taken a new role with Oracle Corporation which hopefully will extend this even further.  I’ll be part of the Developer Advocate group, with the aim of helping people get the most out of their Oracle investment, in particular, showing how resilient and powerful the SQL language continues to be.  And hopefully we’ll also attract people unfamiliar with SQL to embrace it rather than fear it :)

12c Parallel Execution New Features: PX SELECTOR

Continuing my series on new 12c Parallel Execution features: I've already mentioned the new PX SELECTOR operator as part of the new Concurrent UNION ALL feature, where it plays a key role. In general, though, starting from 12c this new operator will usually be used when it comes to executing a serial part of the execution plan, like a full scan of an object not marked parallel, or an index-based operation that can't be parallelized.

In pre-12c such serial parts get executed by the Query Coordinator itself; the new PX SELECTOR changes that, so that one of the PX slaves of a PX slave set is selected to execute that serial part. There is not much left to say about that functionality, except that it doesn't always get used - there are still plan shapes possible in 12c, depending on the SQL constructs used and combined, that show the pre-12c plan shape where the Query Coordinator executes the serial part.

Let's have a look at a simple example to see in more detail what difference the new operator makes to the overall plan shape and runtime behaviour:


create table t1 as select * from dba_objects;

exec dbms_stats.gather_table_stats(null, 't1')

alter table t1 parallel;

create table t2 as select * from dba_objects;

exec dbms_stats.gather_table_stats(null, 't2')

create index t2_idx on t2 (object_name);

select /*+ optimizer_features_enable('11.2.0.4') */
*
from
t1
, t2
where
t1.object_id = t2.object_id
and t2.object_name like 'BLUB%'
;

-- 11.2.0.4 plan shape
-----------------------------------------------------------------------------------
| Id  | Operation                          | Name     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |          |        |      |            |
|   1 |  PX COORDINATOR                    |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN                       |          |  Q1,01 | PCWP |            |
|   4 |     BUFFER SORT                    |          |  Q1,01 | PCWC |            |
|   5 |      PX RECEIVE                    |          |  Q1,01 | PCWP |            |
|   6 |       PX SEND BROADCAST            | :TQ10000 |        | S->P | BROADCAST  |
|   7 |        TABLE ACCESS BY INDEX ROWID | T2       |        |      |            |
|*  8 |         INDEX RANGE SCAN           | T2_IDX   |        |      |            |
|   9 |     PX BLOCK ITERATOR              |          |  Q1,01 | PCWC |            |
|* 10 |      TABLE ACCESS FULL             | T1       |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   8 - access("T2"."OBJECT_NAME" LIKE 'BLUB%')
       filter("T2"."OBJECT_NAME" LIKE 'BLUB%')
  10 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"T1"."OBJECT_ID"))

-- 12.1.0.2 plan shape
--------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |          |        |      |            |
|   1 |  PX COORDINATOR                             |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                       | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN                                |          |  Q1,01 | PCWP |            |
|   4 |     JOIN FILTER CREATE                      | :BF0000  |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                             |          |  Q1,01 | PCWP |            |
|   6 |       PX SEND BROADCAST                     | :TQ10000 |  Q1,00 | S->P | BROADCAST  |
|   7 |        PX SELECTOR                          |          |  Q1,00 | SCWC |            |
|   8 |         TABLE ACCESS BY INDEX ROWID BATCHED | T2       |  Q1,00 | SCWC |            |
|*  9 |          INDEX RANGE SCAN                   | T2_IDX   |  Q1,00 | SCWP |            |
|  10 |     JOIN FILTER USE                         | :BF0000  |  Q1,01 | PCWP |            |
|  11 |      PX BLOCK ITERATOR                      |          |  Q1,01 | PCWC |            |
|* 12 |       TABLE ACCESS FULL                     | T1       |  Q1,01 | PCWP |            |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   9 - access("T2"."OBJECT_NAME" LIKE 'BLUB%')
       filter("T2"."OBJECT_NAME" LIKE 'BLUB%')
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"T1"."OBJECT_ID"))

The pre-12c plan shape here shows two significant things that I want to emphasize.

First, this plan shape only requires a single PX slave set, since the Query Coordinator takes over the part that needs to be re-distributed. So although we have a plan shape that requires re-distribution, there's only a single PX slave set involved; in case there is at least one operation that gets executed in parallel and requires re-distribution there will always be two PX slave sets.

Second, the plan shape demonstrates that parts of a Parallel Execution plan that get executed serially by the Query Coordinator require an additional BUFFER SORT operation. The HASH JOIN operation itself is blocking while it is consuming the left row source for building the hash table, so there is no true requirement to add another BUFFER SORT after the PX RECEIVE operation, but it looks like a pretty strict rule that any serial activity that involves the Query Coordinator adds a BUFFER SORT operation after re-distribution - I assume the reasoning for this is that the Query Coordinator isn't available for "coordinating" the PX slaves as long as it is actively involved in executing serial operations, hence the need to block any other parallel activity.

This normally shouldn't be too relevant to performance, since you should only execute serially those operations that are tiny and not worth running in parallel, so buffering them shouldn't add much overhead, but it's just another reason why you see additional BUFFER SORT operations in parallel plans that are not there in serial-only plans.

The 12c plan shape shows the new PX SELECTOR operator that now executes the serial part of the execution plan instead of the Query Coordinator. This also adds new decorators in the IN-OUT column called "SCWC" and "SCWP" respectively, which you won't find in pre-12c plans - they are probably meant to read "Serial Combined With Child/Parent", similar to "PCWC/PCWP".

The good thing about the new PX SELECTOR is that the need for an additional BUFFER SORT operator is now gone. However, one side-effect of the new operator for this particular plan shape is that a second PX slave set is now allocated, although only one PX slave will actually get used at runtime. Note that for other plan shapes that need two PX slave sets anyway this doesn't matter.

Another good thing about the new PX SELECTOR operator is that it avoids an odd bug that sometimes happens with Serial->Parallel redistributions when the Query Coordinator is involved. This bug causes some delay to the overall execution that usually isn't too relevant, since it only adds approximately 1-2 seconds of delay (but it can occur several times per execution, so these seconds can add up) and is therefore rarely noticed when a Parallel Execution typically takes several seconds or minutes anyway. I might cover this bug in a separate blog post.

Unrelated to the PX SELECTOR operator, the 12c plan shape also demonstrates that in 12c the way Bloom filters are shown in the plan has been improved. The 11.2.0.4 version uses the same Bloom filter, as you can see from the "Predicate Information" section of the plan, but doesn't make it that obvious from the plan shape that it is there (and sometimes in pre-12c it doesn't even show up in the "Predicate Information" section but is still used).

Optimizer curiosity in 12.1.0.2

For almost as long as I can remember, the optimizer has had a nifty little trick when you (in effect) try to combine two different usage models within a single SQL statement.  To explain that, I’m referring to the common scenario of: “If a bind value is provided by the caller, then use it; otherwise it shouldn’t limit the result set.”  So we commonly see queries like:

select *
from MY_TABLE
where COL1 = NVL(:mybindvar, COL1)

[For ease of discussion, we’ll assume COL1 is not nullable]

Anyway, the nice little optimizer trick was to optimize the query to handle the two separate use cases, so you see a CONCATENATION step in the execution plan and two FILTERs: one to handle the case where the bind variable is null, and one to handle the case where it is provided.

-----------------------------------------------
| Id  | Operation                             |
-----------------------------------------------
|   0 | SELECT STATEMENT                      |
|   1 |  CONCATENATION                        |
|*  2 |   FILTER                              |
            <path1>
|*  5 |   FILTER                              |
            <path2>
-----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(:B1 IS NULL)
   5 - filter(:B1 IS NOT NULL)

In effect, the optimizer has taken care of the old SQL tuning advice we used to give to developers: rewrite the SQL as a UNION ALL, or even split it into two distinct SQL statements, to cater for each case (see the sketch below).  With that in mind, I picked up a change to this behaviour in 12.1.0.2 (and some additional work by Jonathan Lewis suggests 11.2.0.4 as well), where bind peeking seems to create some confusion.
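
For reference, the manual rewrite we used to recommend looks something like this sketch:

select *
from   MY_TABLE
where  :mybindvar is not null
and    COL1 = :mybindvar
union all
select *
from   MY_TABLE
where  :mybindvar is null;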

Let’s look at an example.  I’ll create a table, populate it with approximately 4 million rows, and focus on two columns of interest:

  • SEQ, ~2 million distinct values and hence highly selective
  • PSEQ, only 2 distinct values, and hence not a great option for an index path

The primary key is the composite of these two columns, and there is a secondary index on PSEQ.

SQL> create table T (
  2    seq int, pseq int, blah char(30),
  3    constraint T_PK primary key (seq,pseq)
  4  )
  5  /

Table created.

SQL> insert into T
  2  select trunc(rownum/2) seq,
  3         mod(rownum,2) pseq,
  4         'x' blah
  5  from
  6    ( select 1 from dual connect by level < 1000 ),
  7    ( select 1 from dual connect by level < 4000 )
  8  /

3995001 rows created.

SQL> create index T_IX2 on T ( pseq );

Index created.

SQL> exec dbms_stats.gather_table_stats('','T',no_invalidate=>false);

PL/SQL procedure successfully completed.

Now we’ll execute a SQL statement of the form previously mentioned, and take the case where the bind variable inside the NVL is null.

SQL> variable b1 number
SQL> variable b2 number
SQL> exec :b1 := null;
SQL> exec :b2 := 1706496;

SQL> select /*+ gather_plan_statistics */ *
  2  from T
  3  WHERE seq = :B2
  4  AND pseq = NVL(:B1 ,pseq)
  5  /

       SEQ       PSEQ BLAH
---------- ---------- ------------------------------
   1706496          0 x
   1706496          1 x

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0caq50rvfkub5, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from T WHERE seq = :B2 AND pseq
= NVL(:B1 ,pseq)

Plan hash value: 3837764478

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |      2 |00:00:00.01 |       6 |
|   1 |  CONCATENATION                        |       |      1 |        |      2 |00:00:00.01 |       6 |
|*  2 |   FILTER                              |       |      1 |        |      2 |00:00:00.01 |       6 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T     |      1 |      2 |      2 |00:00:00.01 |       6 |
|*  4 |     INDEX RANGE SCAN                  | T_PK  |      1 |      2 |      2 |00:00:00.01 |       4 |
|*  5 |   FILTER                              |       |      1 |        |      0 |00:00:00.01 |       0 |
|*  6 |    TABLE ACCESS BY INDEX ROWID BATCHED| T     |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  7 |     INDEX RANGE SCAN                  | T_IX2 |      0 |      1 |      0 |00:00:00.01 |       0 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:B1 IS NULL)
   4 - access("SEQ"=:B2)
       filter("PSEQ" IS NOT NULL)
   5 - filter(:B1 IS NOT NULL)
   6 - filter("SEQ"=:B2)
   7 - access("PSEQ"=:B1)


30 rows selected.

Because :B1 is null, you can see from the Predicate Information, and from the Actual Rows information, that we took the execution path in lines 3-4.  But take a peek (no pun intended) at lines 6-7.  That path, whilst not used, is proposing the use of index T_IX2, which as we know has just 2 distinct keys across 4 million rows.  The problem is … we’ve now loaded that execution plan into our library cache.  So let’s see what happens when we exercise that part of the plan, this time specifying both bind variables.

SQL> exec :b1 := 1;

SQL> select /*+ gather_plan_statistics */ *
  2  from T
  3  WHERE seq = :B2
  4  AND pseq = NVL(:B1 ,pseq)
  5  /

       SEQ       PSEQ BLAH
---------- ---------- ------------------------------
   1706496          1 x

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0caq50rvfkub5, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from T WHERE seq = :B2 AND pseq
= NVL(:B1 ,pseq)

Plan hash value: 3837764478

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |      1 |00:00:00.70 |   28124 |    178 |
|   1 |  CONCATENATION                        |       |      1 |        |      1 |00:00:00.70 |   28124 |    178 |
|*  2 |   FILTER                              |       |      1 |        |      0 |00:00:00.01 |       0 |      0 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T     |      0 |      2 |      0 |00:00:00.01 |       0 |      0 |
|*  4 |     INDEX RANGE SCAN                  | T_PK  |      0 |      2 |      0 |00:00:00.01 |       0 |      0 |
|*  5 |   FILTER                              |       |      1 |        |      1 |00:00:00.70 |   28124 |    178 |
|*  6 |    TABLE ACCESS BY INDEX ROWID BATCHED| T     |      1 |      1 |      1 |00:00:00.70 |   28124 |    178 |
|*  7 |     INDEX RANGE SCAN                  | T_IX2 |      1 |      1 |   1997K|00:00:00.63 |    3898 |    178 |
------------------------------------------------------------------------------------------------------------------

Ouch…nearly 2 million rows scanned in the index.  Since we provided both bind variables, the obvious access path should have been a simple primary key lookup.  But the path derived from our first execution (with :B1 being null) has left behind a "nasty legacy".

The troubling thing about this (in terms of application stability) is that the order in which we run queries now impacts the performance of how they run.  Let’s clear out the cursor by recalculating stats and then reverse the order of execution.

SQL> exec dbms_stats.gather_table_stats('','T',no_invalidate=>false);

PL/SQL procedure successfully completed.

SQL> exec :b1 := 1;
SQL> exec :b2 := 1706496;

PL/SQL procedure successfully completed.

SQL> select /*+ gather_plan_statistics */ *
  2  from T
  3  WHERE seq = :B2
  4  AND pseq = NVL(:B1 ,pseq)
  5  /

       SEQ       PSEQ BLAH
---------- ---------- ------------------------------
   1706496          1 x

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0caq50rvfkub5, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ * from T WHERE seq = :B2 AND pseq
= NVL(:B1 ,pseq)

Plan hash value: 933468988

--------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |      |      1 |        |      1 |00:00:00.01 |       4 |
|   1 |  CONCATENATION                        |      |      1 |        |      1 |00:00:00.01 |       4 |
|*  2 |   FILTER                              |      |      1 |        |      0 |00:00:00.01 |       0 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T    |      0 |      2 |      0 |00:00:00.01 |       0 |
|*  4 |     INDEX RANGE SCAN                  | T_PK |      0 |      2 |      0 |00:00:00.01 |       0 |
|*  5 |   FILTER                              |      |      1 |        |      1 |00:00:00.01 |       4 |
|   6 |    TABLE ACCESS BY INDEX ROWID        | T    |      1 |      1 |      1 |00:00:00.01 |       4 |
|*  7 |     INDEX UNIQUE SCAN                 | T_PK |      1 |      1 |      1 |00:00:00.01 |       3 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:B1 IS NULL)
   4 - access("SEQ"=:B2)
       filter("PSEQ" IS NOT NULL)
   5 - filter(:B1 IS NOT NULL)
   7 - access("SEQ"=:B2 AND "PSEQ"=:B1)

When we optimize the query for the initial case of both bind variables provided, you can see that both sides of the CONCATENATION have yielded a sensible path.

Site maintenance and how to manage changing URLs

After my recent rants about Oracle changing URLs and breaking stuff, I’ve actually made some changes myself. :)

From time to time change is forced on internet content producers. This might be because of platform changes, or changes in the way search engines behave. The important thing is how you handle that change.

Followers of the blog will know I recently made my website responsive. That happened in part because Google recently announced they would downgrade the rankings of sites that weren’t “mobile friendly” and “responsive”. The search ranking changes were only meant to affect mobile searches. What they didn’t say, but many people including myself believe, is that these rankings actually affect normal desktop-based searches as well. When this Google announcement was made, I noticed a drop in my hit rate. Once I changed the site to be responsive, the hit rate went up again somewhat. When I recently corrected about 100 of the remaining non-responsive articles, the hit rate went up again. It could be coincidence, but it certainly seems there was a bleed-over of this ranking change into the desktop side of things, which represents over 95% of my traffic. Those changes affected content, but not the URLs to the content.

Since I’m revisiting almost every article to fix broken links to Oracle docs, I thought I would take the opportunity to do some additional site maintenance, specifically in the following two areas.

  • HTTPS : About 9 months ago I got a certificate for the website to allow it to be accessed using HTTPS. This was also influenced by a Google decision that they would improve the ranking of content that was available over HTTPS, as well as HTTP. It was part of their “HTTPS Everywhere” campaign. Even though the site could handle HTTPS, I did not make it the default. As of a couple of days ago, you may have noticed all pages on oracle-base.com are now delivered over HTTPS. Unfortunately, this represents a URL change as far as the internet is concerned, so it means lots of broken links, unless you handle it properly. More about that later.
  • Removal of “.php” extension : You will notice many blogs and websites don’t display a file extension on pages, or display a generic “.htm” or “.html” extension. It’s pretty easy to do this using URL rewrites in Apache or a “.htaccess” file. For a while, the site could be accessed with or without the “.php” extension. Now it is removed by default. The nice thing about this is you can change the underlying technology at any time, without having to support an inconsistent file extension. Once again, this represents a URL change.

So how do you manage this change without pissing off “the internet”?

The answer is rewrites and redirects done in real web pages, Apache config or “.htaccess” files. Essentially, you are supporting the old URL and redirecting the browser to the new URL, using a 301 redirect, so all search engines know the content has moved location and can be re-indexed in that new location. Over time, all the links from Google will go directly to the new URL.
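
As a sketch, the mod_rewrite rules in a “.htaccess” file might look something like this (my illustration; the rules actually used on the site may differ):

RewriteEngine On

# force HTTPS with a permanent (301) redirect
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

# 301 any explicit ".php" request to the extensionless URL
RewriteCond %{THE_REQUEST} \s/([^\s?]+)\.php[\s?]
RewriteRule ^ /%1 [R=301,L]

# internally map the extensionless URL back onto the .php file
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]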

So that means you can remove the redirects after a while right? NO! People will have links from their website to the old URLs forever. People will have bookmarks in their browsers forever. If you are going to change a URL, the old URL must be maintained forever.

Over the years I’ve made lots of structural changes to the site.

  • When my website started it was written in Active Server Pages, using a “.asp” extension.
  • After a while I switched to PHP, using a “.php” extension.
  • I used to name pages using initcap. A couple of years ago I switched to lower case and “-” separated names.
  • About 9 months ago I removed the “www.” because it seemed pointless in this day and age.
  • I’ve just switched to HTTPS.
  • I’ve just removed the “.php” extension.

If we look at a really old article, probably about 15 years old, we can see the history of all of these changes reflected in the succession of URLs it has been served under.

So all those structural changes over the last 15 years should have resulted in zero broken links, zero pissed off content producers who link to my content and zero uninformed search engines.

Now I’m not perfect, so if anyone finds something that is broken, I will fix it, assuming it’s not your bad typing or copy/pasting. :)

Cheers

Tim…

PS. Any structural changes, regardless of how well you do your 301 redirects, can result in lower search rankings, so it should not be done on a whim if you really care about hitting that top spot on Google. This is my hobby, so I will do whatever I want. :)



EM 12.1.0.5 has been announced!

The latest version of Enterprise Manager, EM 12.1.0.5, has been announced! The announcement can be seen here. Obviously, there will be a number of posts that come out about it over the next few weeks, so I’ll add this post here as a quick notification and will update it with links to more information as it becomes available.

Technical Articles

Managing the Hybrid Cloud with Oracle Enterprise Manager – Demo
A First Technical Look At Hybrid Cloning in EM12c Release 5

Press Coverage

Oracle Enterprise Manager 12c R5 focuses on the hybrid cloud
Oracle offers software to ease move to a hybrid Oracle cloud
Oracle Enterprise Manager 12c R5 Allows Hybrid Cloud Management from a Single Pane of Glass
Unifying Oracle Database Management Across Clouds

Cross-platform database migration

Last weekend, we faced one of the largest challenges of my time at my current client: migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version (12.1.0.2)…and of course, try to do it with as small a disruption to the service as possible.

We are not a Goldengate customer, so pursuing a zero-downtime migration was not considered, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a "quiet time" to do verification checks etc.

I had done a similar exercise many years ago, with a client moving from 9i to 10g.  In those days, migration with low downtime was a massively complex affair.  I remember countless SQL and shell scripts, all designed to do as much work concurrently as possible (multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, etc etc etc), all designed to keep that outage time to a minimum.  High risk days indeed.

In contrast, using cross-platform transportable tablespaces, this current migration ran very smoothly indeed.  Below is a rough guide as to what is needed to perform such a migration – I didn’t want to list our steps precisely, because people would mistakenly treat that as the definitive prescription for how to do it.  This is more of an overview, which will get you started, but hopefully you’ll be doing your own thorough planning and testing when the time comes!

Firstly, you create a fresh new database on your target system.  It will just have the usual tablespaces (SYSTEM, SYSAUX, etc).  It will be the recipient of all of the non-default tablespaces that we will be transporting over from our source system.

On the source system, first you’ll need a datapump of all the object definitions, without unloading any data.

expdp ... dumpfile=metadata_full.dmp full=y content=metadata_only exclude=user,role,role_grant,profile exclude=table_statistics exclude=index_statistics

and for each datafile, you need an image copy made with RMAN, for example:

backup as copy tag 'my_tablespace' datafile 50,60,61 format '/backup_dest/file%f.bkp';

And I do this for all of the user tablespaces, i.e. not those that would be created as part of a fresh db install.  Note: ensure compression is turned off for all RMAN parts of this process.

Notice I excluded statistics in the datapump.  Whilst datapump can also unload the optimizer stats for you, we found impdp to be very slow at bringing those stats back into the target system, so we took control of that ourselves:

exec dbms_stats.create_stat_table(user,'STATS');
exec dbms_stats.export_database_stats('STATS','WHOLE_DB',user);

Now – so far, our source system is still running.  There has been no outage or disruption to service.  Those datafile copy backups we took now need to be converted to the endian format of the target system, once again using RMAN, with a convert command for each file.

convert from platform 'AIX-Based Systems (64-bit)' datafile '/backup_dest/file50.bkp'  format '/target_dest/file50.dbf'; 

So now we have a converted set of files, but of course, they are a "fuzzy" copy of the data since the copies were taken with the source system still active.  This is where the nice part of cross-platform migration comes in.  I can regularly take incrementals from my source system, and convert/apply them to the freshly converted datafile copies.  Thus I can keep my target system up to date with my source system, without taking my source system offline.

So I can find the base checkpoint# from all files, and take an incremental of each tablespace since that point in time

select MIN(checkpoint_change#) from v$datafile; 

backup incremental from scn <scn_from_above> tablespace 'MY_TSPACE' format '/backup_dest/%U';

The incrementals can then be converted/applied to the target system using a little bit of PL/SQL:

DECLARE
  d  varchar2(512);   -- allocated device handle
  h  varchar2(512);   -- restored piece handle (OUT)
  t  varchar2(30);    -- restored piece tag (OUT)
  b1 boolean;         -- "done" flag (OUT)
  b2 boolean;         -- "failover" flag (OUT)
BEGIN
  d := sys.dbms_backup_restore.deviceAllocate;
  sys.dbms_backup_restore.applySetDatafile(check_logical=>false, cleanup=>false);

  -- nominate the datafile copy that the incremental will be applied to
  sys.dbms_backup_restore.applyDatafileTo(
     dfnumber=>50,
     toname =>'/target_dest/file50.dbf',
     fuzziness_hint=>0,
     max_corrupt =>0,
     islevel0=>0,
     recid=>0,
     stamp=>0);

  -- nominate the incremental backup piece to read
  sys.dbms_backup_restore.restoreSetPiece(
     handle=>'/backup_dest/incr12345',
     tag=>null,
     fromdisk=>true,
     recid=>0,
     stamp=>0);

  -- apply the incremental to the datafile copy
  sys.dbms_backup_restore.restoreBackupPiece(
    done=>b1,
    params=>null,
    outhandle=>h,
    outtag=>t,
    failover=>b2);

  sys.dbms_backup_restore.restoreCancel(TRUE);
  sys.dbms_backup_restore.deviceDeallocate;

END;
/

 

This process can be repeated indefinitely until you are ready to cut over to the new system.  When that time comes, the process is similar.

Your source system must now go read-only (and hence this is most likely the commencement of your outage):

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read only'; 
end loop; 
end; 
/

You now take a datapump of the tablespaces you are going to transport over.  We are omitting the statistics, because we already have them.

expdp ... exclude=table_statistics exclude=index_statistics dumpfile=transport.dmp transport_full_check=no transport_tablespaces=tspace1,tspace2,... 

And you take a final incremental backup, and apply it to the target system.  Now all is in readiness.

Create all the required users by datapump-ing across the network

impdp ... network_link=source_db_link full=y include=user,role,role_grant,profile 

and then import the transportable tablespace definitions

impdp dumpfile=transport.dmp transport_datafiles='/target_dest/file10.dbf','/target_dest/file11.dbf','/target_dest/file12.dbf',... 

And then set your tablespaces to read-write

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' and status = 'READ ONLY' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read write'; 
end loop; 
end; 
/

And bring in all of the other bits and pieces (views, PL/SQL, etc etc) using the metadata dump you took earlier

impdp ... dumpfile=metadata_full.dmp full=y exclude=table_statistics exclude=index_statistics

and voila…you will have a migrated database with minimal downtime.  In our case, the transportable datapump export/import, plus the last incremental (thanks to block change tracking), took around 30 minutes, and the import of all of our PL/SQL etc took around 30 minutes as well.  Not too bad for multiple multi-terabyte databases.

We then upgraded our stats table, and used dbms_stats to import the statistics, and started verification testing.
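
The import is the mirror image of the export we took at the start; something along these lines (a sketch, using the same STATS table):

exec dbms_stats.upgrade_stat_table(user, 'STATS');
exec dbms_stats.import_database_stats('STATS', 'WHOLE_DB', user);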

That covers the broad steps you’ll be taking.  There’s plenty of information on MOS and on OTN about the nitty gritty, but all in all, we were very happy with the outcome.

Friday Philosophy – Flippin’ Technology

Sometimes I think I would have been a Luddite or a member of other groups who have trashed new technology in frustration. Some days, I can just scream at it. You would think having worked in Information Technology for so long would make me more of a fan, but it actually makes me worse – as I know there is no need for there to be so much wrong with the electronic dross we all have to deal with day-to-day. And if I, someone who has used (heck, even programmed) computers for 3 decades, have trouble with these damned things, how frustrating must “normal” people find it?

Tesco Challenge – original on RevK Rant’s blog

Take this morning’s source-for-a-rant. Self checkouts in supermarkets. I had popped into Tesco to get the weekend’s rations of baked beans, wine and cat food and there were large queues for all manned (though, I should more correctly say, mostly womanned) tills. And almost no queue for the self-service ones. We all know why almost no queue for the self-service ones, most of us hate them. But I had to get back home for a UKOUG conference call and there was very little chance the three-people-deep queues would be negotiated in time, so I manned up and went to one of the vacant screens.

Have I mentioned I’ve been using computers since before Wayne Rooney, Keira Knightley or Michael Phelps were born? So I have some affinity with, and experience of, navigating screens of information. But, like all of these devices, using them is painful. Given they are designed to be used by everyone including idiots, why is the “UX” so low? Why does the important information appear at different spots on the screen at different times? Why does there seem to be no button to press for a simple, key-word triggered guide (“How to weigh Veg?” Oh, look up there, press the correct icon, look down there and press another and then finally click over here to say “yes I really did ask you to weigh some bananas” – that would be nice). Why does the Waitrose one make me swipe my card to pull up my scanned items but insist I shove the card up the slot to pay? Plus all the times you have to get some human to come over and confirm you are over 18 (I need to be 18 to buy expanding foam?!?) or don’t look suicidal. I’m not being funny but the age check is just not needed: if I am using a credit card it can know I am 18 or over (you cannot have a UK credit card under 18, partly as you are not allowed to sign up for a credit agreement below that age, and if I am using someone else’s credit card, me buying a bottle of wine is the least of the potential issues). To give them their due, at least our local Tesco (unlike many other stores I have used around here) have someone on hand to constantly swipe, press, tap and harass the machines into playing correctly.

I can’t believe how badly these self-service checkouts work. I can’t believe the companies have not tested them extensively with real people (I know, they claim to, but then I’ve seen “user system testing” in big banks and it is shockingly poor). How can they think such terrible systems are good for business? That people still insist on queuing for checkouts with real people rather than use the machines must tell the companies something! Why can’t these systems be better designed? Why are they so painful to use? It can’t be me. Maybe it is me….

Next rant. My internet & email supplier. Hi BT. BT, you are crap at running your internet & email service. I’ll only mention in passing the regular episodes of slow internet, the times mail seems to struggle to keep moving, the harassing emails to buy more allowance as I am reaching my limit…for the prior month (“Hi BT, I think I might pop back in time and download 15 films last March, can I have more allowance for that as I’d used over half my limit”) – but I am going to complain over the fact that for the last 5 weeks now, each weekend you stop accepting valid connection requests from my laptop – but allow them from my iPhone. So my account/pwd is working fine. But no, from the PC you tell me I have to validate my account. So I log on to the web site and as soon as I put in my details you tell me I have to change my password as there have been too many failed attempts at accessing my account? Well, if they failed, I chose a decent password maybe? Trying to force me to change it is likely to make me change it to something simpler maybe? Especially as this is the fourth time this month… but then, usually the system fails to actually process my password change and just hangs. My iPhone still keeps working with the old details and, usually by the next day, the errors have gone and I can access my mail with my old details from a real machine again too. My conclusion has been that it’s their software screwing up. Each. And. Every. Weekend.

It’s got worse, it now fouls up some workdays too. And I made an interesting discovery. When I log in to the web site to validate myself, if I put in a password of “scr3wy0uBT” – it accepts it and puts me into the Change Your Password screen exactly the same as before. No, my password is not “scr3wy0uBT” {it rejected it when I tried…}. So top security there. Whatever is happening, it’s just…..pants {UK phrase, it means “utterly rubbish”. Pants are not trousers, they are undergarments, you strange Americans.}

What is BT doing wrong to have this problem keep happening? Is this a good “UX” experience for me and all the other people who seem to have similar issues? Is it so hard to sort this out?

What was the third rant? Oh yes. Windows 8. Too many people before me have vented spleen and given pieces of their mind on Windows 8 for me to be able to add any more to the pile, but what I cannot fathom is, as soon as they got the almighty ass-kicking that they did for ballsing up the Start Button/Menu, why did they not in the next version just put it straight back as it was? Or put out and advertise a simple “patch” to put back what millions of people were screaming they wanted back? All I can think is someone’s ego was too large to wave their hands in the air and say “Oh boy, did we make a dog’s dinner of that – let us help improve your ‘UX’ and our reputation by fixing that straight away”.

Final rant. Games. Computer Games. I like running around shooting things. It gets rid of some of the IT-In-Daily-Life anger. But I am not very good at it and my broadband connection is slow and a bit laggy, so I am not interested in running around shooting things with friends. The same broadband issues mean I also don’t want to spend 4 hours downloading a game; I want to buy it on a floppy disc…..I mean CD…. Errr, DVD… and play it. So I went to this place called a “shop” and I bought a game on media in a box and checked the packaging. Nowhere did it state I have to have an internet connection. I get the DVD out (hmm, there are actually three), put it in the machine and 30 mins later the software is loaded. And now it FORCES me to register with some crapola online gaming site to register my copy (like, if I had a bootleg version that would not be the first bit they strip out) and that takes an hour to download its own shitty software. That done, it will let me fire up the game – that immediately bombs out to download the latest patch (which I think is the crapola online site’s version) and that takes two hours as it is obviously much, much more than a patch. I suspect it is the whole damned game again. This is not a “UX” I wanted and, you can bet, next time I buy a game, crapola online gaming company is one thing I will be looking to avoid. It does not help that said game won’t fire up without logging into said game site or making me watch a minute of adverts about who wrote the game on whose graphics card using what game engine. Thankfully a few minutes on the net explained how I could avoid all of that. But why do I have to take steps to stop these companies annoying me and, this is the bit that confuses me, what makes these companies think I’ll be impressed by being repeatedly exposed to their ads that I don’t want to see? I’ll just despise them a little bit more each time.

I just don’t get it. The number one thing any IT system needs to achieve is User Acceptance (as I have said before, if you check the link). Why do so many large companies miss this and inflict on the world a seriously sub-standard experience of IT and technology? If someone like me who has driven a screen, a keyboard and a mouse for 3 decades, understands some of the limits to IT and must have at least some brains in his skull, if I get endlessly caught out, befuddled and simply screaming-out-loud-frustrated by crap IT, how is my poor old mum (and everyone’s poor old parents) supposed to cope?

I’m going to become a Lumberjack. Chainsaws do not have screens and keyboards.

No I.T. Hassles Here