Oakies Blog Aggregator

Modern Servers Are Better Than You Think For Oracle Database – Part I. What Problems Actually Need To Be Fixed?

Blog update 2012.02.28: I’ve received countless inquiries about the storage used in the proof points I’m making in this post. I’d like to state clearly that the storage is not a production product, not a glimpse of something that may eventually become a product, nor any such thing. This is a post about CPU, not about storage. That point will be clear as you read the words in the post.

In my recent article entitled How Many Non-Exadata RAC Licenses Do You Need to Match Exadata Performance I brought up the topic of processor requirements for Oracle with and without Exadata. I find the topic intriguing. It is my opinion that anyone influencing how their company’s Oracle-related IT budget is used needs to find this topic intriguing.

Before I can address the poll in the above-mentioned post I have to lay some groundwork. The groundwork I need to lay will come in this and an unknown number of installments in a series.

Exadata for OLTP

There is no value add for Oracle Database on Exadata in the OLTP/ERP use case. Full stop. OLTP/ERP does not offload processing to storage. Your full-rack Exadata configuration has 168 Xeon 5600 cores in the storage grid doing practically nothing in this use case. Or, I should say, the processing that does occur in the Exadata storage cells (in the OLTP/ERP use case) would be better handled in the database host. There simply is no value in introducing off-host I/O handling (and all the associated communication overhead) for random single-block accesses. Additionally, since Exadata cannot scale random writes, it is actually a very weak platform for these use cases. Allow me to explain.

Exadata Random Write I/O
While it is true Exadata offers the bandwidth for upwards of 1.5 million read IOPS (with low latency) in a full rack X2 configuration, the data sheet specification for random writes is a paltry 50,000 gross IOPS—or 25,000 with Automatic Storage Management normal redundancy. Applications do not exhibit 60:1 read to write ratios. Exadata bottlenecks on random writes long before an application can realize the Exadata Smart Flash Cache datasheet random read rates.
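If you want a rough idea of how far your own workload is from a 60:1 profile, a sketch such as the following can help (statistic names from v$sysstat; the figures are cumulative since instance startup, so treat the result as an approximation):

-- approximate instance-wide read:write I/O request ratio since startup
select r.value physical_read_requests,
       w.value physical_write_requests,
       round(r.value / nullif(w.value, 0), 1) read_write_ratio
from  (select value from v$sysstat where name = 'physical read total IO requests') r,
      (select value from v$sysstat where name = 'physical write total IO requests') w;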

Exadata for DW/BI/Analytics

Oracle positions Exadata against products like EMC Greenplum for DW/BI/Analytics workloads. I fully understand this positioning because DW/BI is the primary use case for Exadata. In its inception Exadata addressed very important problems related to data flow. The situation as it stands today, however, is that Exadata addresses problems that no longer exist. Once again, allow me to explain.

The Scourge Of The Front-Side Bus Is Ancient History. That’s Important!
It was not long ago that provisioning ample bandwidth to Real Application Clusters for high-bandwidth scans was very difficult. I understand that. I also understand that, back in those days, commodity servers suffered from internal bandwidth problems limiting a server’s data-ingest capability from storage (PCI->CPU core). I speak of servers in the days before QuickPath Interconnect (i.e., before Nehalem EP). In those days it made little sense to connect more than, say, two active 4GFC fibre channel paths (~800 MB/s) to a server because the data would not flow unimpeded from storage to the processors. The bottleneck was the front-side bus choking off the flow of data from storage to processor cores. This fact essentially forced Oracle’s customers to create larger, more complex clusters for their RAC deployments just to accommodate the needed flow of data (throughput). That is, while some customers toiled with the most basic problems (e.g., storage connectivity), others solved that problem but still required larger clusters to get more front-side buses involved.

It wasn’t really about the processor cores. It was about the bus. Enter Exadata and storage offload processing.

Because the servers of yesteryear had bottlenecks between the storage adapters and the CPU cores (the front-side bus), it was necessary for Oracle to devise a means of reducing payload between storage and RAC host CPUs. Oracle chose to offload the I/O handling (calls to the kernel for physical I/O), filtration and column projection to storage. This functionality is known as a Smart Scan. Let’s just forget for a moment that the majority of CPU-intensive processing in a DW/BI query occurs after filtration and projection (e.g., table joins, sorting, aggregation). Shame on me, I digress.
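For readers who want to see how much of a given workload is actually being offloaded, here is a rough sketch, not from the original post; it queries session statistics I believe exist in 11.2 on Exadata systems (verify the names against your own v$statname):

-- Sketch: bytes eligible for predicate offload vs. bytes returned by Smart Scan
-- for the current session (statistic names assumed from 11.2 on Exadata)
select sn.name, round(st.value/1024/1024) mb
from   v$mystat st, v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');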

All right, so what if modern servers don’t really need the offload-processing “help” offered by Exadata? What if modern servers can actually handle data at extreme rates of throughput from storage, over PCI and into the processor cores, without offloading the lower-level I/O and filtration? Well, the answer to that comes down to how many processor cores are involved with the functionality that is offloaded to Exadata. That is a sophisticated topic, but I don’t think we are ready to tackle it yet because the majority of datacenter folks I interact with suffer from a bit of EarthStillFlat(tm) syndrome. That is, most folks don’t know their servers. They still think it takes lots and lots of processor cores to handle data flow like it did when processor cores were held hostage by front-side bus bottlenecks. In short, we can’t investigate how necessary offload processing is if we don’t know anything about the servers we intend to benefit with said offload. After all, Oracle Database is the same software whether running on a Xeon 5600-based server in an Exadata rack or a Xeon 5600-based server not in an Exadata rack.

Know Your Servers

It is possible to know your servers. You just have to measure.

You might be surprised at how capable they are. Why presume modern servers need the help of offloaded I/O handling and filtration? You license Oracle by the processor core, so it is worthwhile knowing what those cores are capable of. I know my server and what it is capable of. Allow me to share a few things I know about my server’s capabilities.

My server is a very common platform as the following screenshot will show. It is a simple 2s12c24t Xeon 5600 (a.k.a. Westmere EP) server:

My server is attached to very high-performance storage which is presented to an Oracle database via Oracle Managed Files residing in an XFS file system in an md(4) software RAID volume. The following screenshot shows this association/hierarchy, as well as the fact that the files are accessed with direct, asynchronous I/O. The screenshot also shows that the database is able to scan a table with 1 billion rows (206 GB) in 45 seconds (4.7 GB/s table scan throughput):
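As an aside, here is a minimal sketch (not from the original post) of how direct plus asynchronous I/O is typically enabled for datafiles sitting in a filesystem such as XFS:

-- enable both direct and asynchronous I/O for filesystem datafiles;
-- takes effect after an instance restart
alter system set filesystem_io_options = SETALL scope=spfile;
-- after the restart, confirm the setting:
show parameter filesystem_io_options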

The io.sql script accounts for the volume of data that must be ingested to count the billion rows:

$ cat io.sql
set timing off
col physical_reads_GB format 999,999,999
-- cumulative bytes read from disk since instance startup, reported in GB
select value /1024 /1024 /1024 physical_reads_GB from v$sysstat where statistic# =
(select statistic# from v$statname where name like '%physical read bytes%');
set timing on
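
For context, here is a minimal sketch (not from the original post; the table name CARD_TAB is a placeholder of my own) of how io.sql might bracket such a scan test:

@io.sql                           -- cumulative physical read GB before the scan
select count(*) from card_tab;    -- full scan of the billion-row table; elapsed time shown via "set timing on"
@io.sql                           -- cumulative physical read GB after the scan
-- (GB after - GB before) / elapsed seconds = scan throughput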

So this simple test shows that a 2s12c24t server is able to process 392 MB/s per processor core. When Exadata was introduced most data centers used 4GFC fibre channel for storage connectivity. The servers of the day were bandwidth limited. If only I could teleport my 2-socket Xeon 5600 server back in time and put it next to an Exadata V1 box. Once there, I’d be able to demonstrate a 2-socket server capable of handling the flow of data from 12 active 4GFC FC HBA ports! I’d be the talk of the town because similar servers of that era could neither connect as many active FC HBAs nor ingest the data flowing over the wires—the front-side bus was the bottleneck. But, the earth does not remain flat.

The following screenshot shows the results of five SQL statements, explained as follows (representative SQL shapes are sketched after the list):

  1. This SQL scans all 206 GB, locates the 4 char columns (projection) in each row and nibbles the first char of each. The rate of throughput is 2,812 MB/s. There is no filtration.
  2. This SQL ingests all the date columns from all rows and maintains 2,481 MB/s. There is no filtration.
  3. This SQL combines the efforts of the previous two queries which brings the throughput down to 1,278 MB/s. There is no filtration.
  4. This SQL processes the entire data mass of all columns in each row and maintains 1,528 MB/s. There is no filtration.
  5. The last SQL statement introduces filtration. Here we see that the platform is able to scan and selectively discard all rows (based on a date predicate) at the rate of 4,882 MB/s. This would be akin to a fully offloaded scan in Exadata that returns no rows.
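
To make the five statements above concrete, here is a hedged sketch of the kinds of SQL shapes involved; the actual statements are only visible in the screenshot, so the table name CARD_TAB and the column names C1..C4 and D1..D3 below are placeholders of my own, not from the post:

-- 1. Project the four char columns and touch the first character of each; no WHERE clause
select max(substr(c1,1,1)), max(substr(c2,1,1)),
       max(substr(c3,1,1)), max(substr(c4,1,1))
from   card_tab;

-- 2. Ingest all the date columns from all rows; no WHERE clause
select max(d1), max(d2), max(d3) from card_tab;

-- 5. Filtration: a date predicate that discards every row
select count(*) from card_tab where d1 > date '4000-01-01';

Statements 3 and 4 in the list simply combine and extend these projections.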

Summary

This blog series aims to embark on finding good answers to the question I raised in my recent article entitled How Many Non-Exadata RAC Licenses Do You Need to Match Exadata Performance. I’ve explained that offload to Exadata storage consists of payload reduction. I also offered a technical, historical perspective as to why that was so important. I’ve also shown that a small, modern QPI-based server can flow data through processor cores at rates ranging from 407 MB/s per core down to 107 MB/s per core (roughly 4,882 MB/s and 1,278 MB/s spread across the 12 cores) depending on what the SQL is doing (SQL with no predicates, mind you).

Since payload reduction is the primary value add of Exadata I finished this installment in the series with an example of a simple 2s12c24t Xeon 5600 server filtering out all rows at a rate of 4,882 MB/s—essentially the same throughput as a simple count(*) of all rows as I showed earlier in this post. That is to say that, thus far, I’ve shown that my little lab system can sustain nearly 5GB/s disk throughput whether performing a simple count of rows or filtering out all rows (based on a date predicate). What’s missing here is the processor cost associated with the filtration and I’ll get to that soon enough.

We can’t accurately estimate the benefit of offload until we can accurately associate CPU cost to filtration.  I’ll take this blog series to that point over the next few installments—so long as this topic isn’t too boring for my blog readers.

This is part I in the series. At this point I hope you are beginning to realize that modern servers are better than you probably thought. Moreover, I hope my words about the historical impact of the front-side bus on sizing systems for Real Application Clusters are starting to make sense. If not, by all means please comment.

As this blog series progresses I aim to help folks better appreciate the costs of performing certain aspects of Oracle query processing on modern hardware. The more we know about modern servers, the closer we get to answering the poll accurately. You license Oracle by the processor core, so it behooves you to know such things…doesn’t it?

By the way, modern storage networking has advanced far beyond 4GFC (400 MB/s).

Finally, as you can tell by my glee in scanning Oracle data from an XFS file system at nearly 5GB/s (direct I/O), I’m quite pleased at the demise of the front-side bus! Unless I’m mistaken, a cluster of such servers, with really fast storage, would be quite a configuration.

Filed under: oracle, Oracle I/O Performance Tagged: Exadata Xeon 5600 Datawarehousing

Next Advanced RAC Training class scheduled.

Just a quick note: I will be conducting my next 2-week Advanced RAC Training online class on March 26-30 and April 9-13.

You can find the agenda and other details here.

Update 1:
——–
I received a few emails about the training outline. You can find the outline below:

ART_online_week1
ART_online_week2

Update 2:
——–
Yes, I do accept Purchase Orders and can invoice you :)

Announcement: Release 1.1.2 of MySQL Plug-in for Oracle Enterprise Manager 10g/11g

This release is just a quick bug-fix release of the older 1.1.1 version of the plug-in. It’s long overdue, but I managed to fix the “” problem only a couple of weeks ago. I’ve distributed the new version to the folks who reached out to me by email or via the blog reporting the issue in the [...]

Geek Stuff

A recent post on the OTN database forum raises a problem with v$sql_shared_memory:

query to V$SQL_SHARED_MEMORY don’t return rows

please explain why ?

A follow-up posting then describes how the OP picked the view definition from v$fixed_view_definitions and used the text of that query instead of the view itself – and still got no rows returned:

SELECT * FROM
(select /*+use_nl(h,c)*/ c.inst_id, kglnaobj, kglfnobj, kglnahsh, kglobt03, kglobhd6,
        rtrim(substr(ksmchcom, 1, instr(ksmchcom, ':', 1, 1) - 1)),
        ltrim(substr(ksmchcom, -(length(ksmchcom) - (instr(ksmchcom, ':', 1, 1))),
                     (length(ksmchcom) - (instr(ksmchcom, ':', 1, 1)) + 1))),
        ksmchcom, ksmchptr, ksmchsiz, ksmchcls, ksmchtyp, ksmchpar
   from x$kglcursor c, x$ksmhp h
  where ksmchds = kglobhd6
    and kglhdadr != kglhdpar)
WHERE ROWNUM < 3;

The answer is quite simple – but in two parts. First, the developer who wrote the view definition doesn’t understand how to use hints. Secondly, x$ksmhp is (as far as I can tell) the result set from a call to a subroutine that takes a heap descriptor as an input and returns details of that subheap formatted to look like an ordinary row source.

The query is supposed to report sub-heap 6 of every child cursor – but to get any results you need to obtain the descriptor for the sub-heap from a child cursor before making the call for the sub-heap, which means the execution plan for the query has to do a nested loop join based on the predicate ksmchds = kglobhd6, and it has to visit x$kglcursor (which is one of the derived versions of x$kglob) before it visits x$ksmhp.

So here’s the execution plan from a copy of 10g that I have handy where the query select * from V$sql_shared_memory returns no rows, followed by the plan from a copy of 11g where the query returns thousands of rows. Spot the difference:

Execution Plan (10g)
----------------------------------------------------------
Plan hash value: 2632394999

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     1 |  2604 |     0   (0)| 00:00:01 |
|   1 |  NESTED LOOPS     |             |     1 |  2604 |     0   (0)| 00:00:01 |
|   2 |   FIXED TABLE FULL| X$KSMHP     |     1 |    54 |            |          |
|*  3 |   FIXED TABLE FULL| X$KGLCURSOR |     1 |  2550 |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("C"."INST_ID"=USERENV('INSTANCE') AND
              "KGLHDADR"<>"KGLHDPAR" AND "KSMCHDS"="KGLOBHD6")

Execution Plan (11g)
----------------------------------------------------------
Plan hash value: 1141239260

--------------------------------------------------------------------------------------------
| Id  | Operation                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                 |     1 |  2604 |     0   (0)| 00:00:01 |
|   1 |  NESTED LOOPS            |                 |     1 |  2604 |     0   (0)| 00:00:01 |
|*  2 |   FIXED TABLE FULL       | X$KGLCURSOR     |     1 |  2550 |     0   (0)| 00:00:01 |
|*  3 |   FIXED TABLE FIXED INDEX| X$KSMHP (ind:1) |     1 |    54 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("C"."INST_ID"=USERENV('INSTANCE') AND "KGLHDADR"<>"KGLHDPAR")
   3 - filter("KSMCHDS"="KGLOBHD6")

In both cases the queries have followed the hint supplied – the plan is a nested loop, and that’s all that the hint requires. However, the 10g plan starts by scanning x$ksmhp without supplying a heap descriptor, so that part of the plan can’t return any rows, so the nested loop returns no rows. The 11g plan scans the x$kglcursor structure to find the heap descriptors, and calls x$ksmhp for each descriptor in turn.

If you want to work around the problem of no data, you could try hinting the query against v$sql_shared_memory (or create your own replacement view) so that the join is hinted properly. (Note that the current hint on the view definition simply says: if h is the second table in the join use a nested loop, if c is the second table use a nested loop; the hint does NOT mean “use a nested loop with h as the first table and c as the second”.) Here’s a possible hint set that works on 10.2.0.3 – I identified the necessary query block name by using the ‘outline’ option of dbms_xplan to check the execution plan:


select
	/*+
		leading(@sel$5c160134 c@sel$3 h@sel$3 )
		use_nl( @sel$5c160134 h@sel$3)
	*/
	*
from
	v$sql_shared_memory
;
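
For anyone who wants to see the query block names themselves, here is a hedged sketch of one way to pull the outline for the statement just executed (exact format keywords can vary by version):

-- run the query first, then display the plan of the last statement executed
-- in this session, including the outline with its query block names
select * from table(dbms_xplan.display_cursor(null, null, 'OUTLINE'));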

Repairman Jack: Legacies…

Legacies is the second book in the Repairman Jack series by F. Paul Wilson.

This book was more like a straight detective story based around a new scientific discovery. There was no supernatural aspect to the case, which was a bit disappointing. The story itself was fine, but the supernatural thing piques my interest that bit more. I’m still happy to keep reading the books in the series, especially since they are only £1.97 on Kindle. :)

Cheers

Tim…




Full Table Scans and the Buffer Cache in 11.2 – What is Wrong with this Quote?

February 26, 2012 (Modified February 27, 2012) I found another interesting quote in the “Oracle Database 11gR2 Performance Tuning Cookbook“, this time related to tables and full table scans.  This quote is found on page 170 of the book: “If we do an FTS [full table scan], database buffers are used to read all the [...]

UKOUG

Information for members of the UKOUG:

A little while ago, the UKOUG re-organised its management to split the board of directors into a board and a council. The board is responsible solely for handling the company side of the UKOUG while the council focuses solely on the services that the user group supplies to members.

Elections for seats on the council will be starting next week (27th Feb) and I have put forward a nomination in my own name. This includes a description of my involvement with Oracle and the user group, and a description of what I could offer to the user group. The nomination also includes a manifesto, which I have reproduced below.

If you are a member of the user group (and the nominated recipient of your company’s membership) then you will have the opportunity to vote for who should be on the new council. If you think my manifesto makes sense and would result in a better user group, please vote for me.

Note: if you believe your company is a member of the UKOUG but you have no idea who your nominated contact is, please get in touch with the UKOUG; it’s possible that the nominated contact is someone in the accounts department, or even someone who has left the company, leaving you with a membership that you can’t take full advantage of.

Manifesto

The role of the user group is two-fold. To magnify the voice of the users so that it can be heard by Oracle Corporation, and to help the users to share their experiences.

I believe the first task is addressed very well by the user group, but the second task is a constantly moving target that we need to keep under review so that the users can get the greatest possible benefit from being members of the group.

In recent years the volume of information about Oracle available on the internet has grown exponentially. This growth has made it appear to employers as if the SIG meetings have less importance, and this perception will eventually weaken the user group. As a frequent participant in the online forums, newsgroups, blogosphere, et al., my perception is that the internet doesn’t really do much to give people the depth of understanding that they can achieve through face-to-face human interaction.

Because of this, I would like to work with the user group to see how we can enhance the structure of SIGs to increase the amount of discussion that takes place when users come together. The internet is an easy source of reading material and recorded presentation material, and it’s easy to get answers to specific questions on forums (although I often see that answers are slow in arriving, irrelevant, misleading or downright wrong). What the internet does not offer is a chance for people to discuss (or argue) different points of view in anything like real-time – and this is an area where the SIG can be of greatest benefit.

If you think of the “round-table” sessions that the user group has arranged at the annual conference for the last few years, you will see the general direction that I would like to take SIGs if I am elected to the council.

About me:  I have been involved in the IT industry for more than 27 years and have been using the Oracle RDBMS for more than 23 years. I’ve written three books about Oracle, and have contributed to three others. I’ve written 650 blog articles in the last 5 years, and lectured about Oracle in more than 50 different countries in the last 7 years. I’ve been a director of the UKOUG for about 10 years out of the last 15, and was a SIG chairman before becoming a director.

Upgrades again

I’ve just spent a couple of days in Switzerland presenting seminar material to an IT company based in Delemont (about 50 minutes drive from Basle), and during the course I picked up a number of new stories about interesting things that have gone wrong at client sites. Here’s one that might be of particular interest to people thinking of upgrading from 10g to 11g – even if you don’t hit the bug described in the blog, the fact that the new feature has been implemented may leave you wondering where all your machine resources are going during the overnight run.

Footnote: my hosts for the event gave me a couple of souvenirs of the visit – and because we were in Switzerland, one of them had to be a little Swiss Army pen-knife. Since I only travel with cabin-luggage I thought this might be a problem, but the blade was only two inches long and the rules about such things have changed over time so I decided to risk taking it to the airport. On the land-side of the security scanners I showed the pen-knife to the security guard and asked if it was okay to take it through – he said yes, so I dropped it into the little plastic box along with my toothpaste; but by the time it came out at the other end of the scanner the rules must have changed because the security guard on the air-side took it, demanded my boarding pass, disappeared for a couple of minutes, and came back with just the boarding pass. (If I don’t show up at ODTUG this year, it will probably be because the Swiss have put my name down on a list of international arms smugglers – alternatively, maybe there’s just a couple of Swiss security agents who have a side-line in second-hand Swiss Army pen-knives.)

 

DST in Russia

Daylight Saving Time in Russia was changed last year. Oracle published a FAQ on the support site about this: Russia abandons DST in 2011 – Impact on Oracle RDBMS [ID 1335999.1].

In short, if you are using DATEs and TIMESTAMPs without time zone in your application, you are almost “safe” and there’s most likely no need to do anything in particular at the database software level. In our application we use DATE and TIMESTAMP without time zone, so I thought it would all be fine. That was until today, when I found a difference of 1 hour between a stored TIMESTAMP and the current date. It turns out to be a client-side issue (not a rare situation). The client is a WebLogic server running Oracle JDK version 6u26 (1.6.0_26). And, unlike the Oracle RDBMS, the Oracle JRE is not consistent with the OS time, so when you create a new java.sql.Timestamp from the current time, it takes DST into account and returns the value accordingly – this was news to me. In my case, without the time zone patch, it means 1 hour behind the actual time. The time zone data is fixed in JDK update 31 (release date 14 Feb 2012); it can also be patched easily with the help of the Timezone Updater Tool:

$ ./bin/java -jar tzupdater.jar -V
tzupdater version 1.3.45-b01
JRE time zone data version: tzdata2011g
Embedded time zone data version: tzdata2011n
$ ./bin/java -jar tzupdater.jar -u
$ ./bin/java -jar tzupdater.jar -V
tzupdater version 1.3.45-b01
JRE time zone data version: tzdata2011n
Embedded time zone data version: tzdata2011n

Oracle’s internal JVM (aka Aurora) in Oracle 11.2.0.3 looks to be unaffected and works correctly. It reports itself as version 1.5.0_10:

SQL> create or replace and compile java source named "test" as
  2  import java.util.Date;
  3  public class test {
  4      public static String getCurrentDate() {
  5          return "" + new Date();
  6      }
  7  };
  8  /

Java created.

SQL> sho err
No errors.
SQL>
SQL> create or replace function fnc_get_date return varchar2 as
  2  language java name 'test.getCurrentDate() return java.lang.String';
  3  /

Function created.

SQL> sho err
No errors.

SQL> select fnc_get_date, sysdate from dual;

FNC_GET_DATE                             SYSDATE
---------------------------------------- -----------------
Fri Feb 24 17:59:25 GMT+04:00 2012       20120224 17:59:25

And one more thing on the subject. I’d be happy if somebody would explain an anomaly with V$ACTIVE_SESSION_HISTORY.SAMPLE_TIME:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select startup_time from v$instance;

STARTUP_TIME
-----------------
20111020 16:38:21

SQL> sho parameter nls

NAME_COL_PLUS_SHOW_PARAM                 TYPE        VALUE
---------------------------------------- ----------- ----------------------------
nls_calendar                             string      GREGORIAN
nls_comp                                 string      BINARY
nls_currency                             string      $
nls_date_format                          string      YYYYMMDD HH24:MI:SS
nls_date_language                        string      AMERICAN
nls_dual_currency                        string      $
nls_iso_currency                         string      AMERICA
nls_language                             string      AMERICAN
nls_length_semantics                     string      BYTE
nls_nchar_conv_excp                      string      FALSE
nls_numeric_characters                   string      .,
nls_sort                                 string      BINARY
nls_territory                            string      AMERICA
nls_time_format                          string      HH.MI.SSXFF AM
nls_time_tz_format                       string      HH.MI.SSXFF AM TZR
nls_timestamp_format                     string      DD-MON-RR HH.MI.SSXFF AM
nls_timestamp_tz_format                  string      DD-MON-RR HH.MI.SSXFF AM TZR

-- this is the question: v$active_session_history.sample_time lags 1 hour behind
SQL> select systimestamp, sysdate, max(sample_time) from v$active_session_history;

SYSTIMESTAMP                                                                SYSDATE           MAX(SAMPLE_TIME)
--------------------------------------------------------------------------- ----------------- -------------------------
24-FEB-12 06.27.25.685962 PM +04:00                                         20120224 18:27:25 24-FEB-12 05.27.22.510 PM

Filed under: Java, Oracle, WLS Tagged: ASH, DST, timestamp

Friday Philosophy – The Inappropriate Use of Smart Phones

I’m kind of expecting to get a bit of a comment-kicking over this one…

I never much liked mobile phones – Yes they are incredibly useful, yes they allow countries that lack a ground-based telephony network to create a nationwide system, yes they allow communication all the time from almost anywhere. That last point is partly why I dislike them. {Actually, I don’t like normal phones much, or how some people {like my wife} will interrupt a conversation to dash across the room to answer it. It’s just a person on the phone, it will take a message if someone wants to say something significant. If someone calls your name out in a crowd, do you abandon the people you are talking to, dash across the room and listen to them exclusively? No, so why act that way over a phone?}.

However, I hold a special level of cynical dislike for “smart” phones. Why? Because people seem to be slaves to them and they seem to use them in a very antisocial way in social and even business situations. It is no longer just speaking or texting that people do, it’s checking and sending email, it’s twittering and blogging, it’s surfing the net and looking things up. I have no problem with any of this, I do all of these things on my desktop, laptop, netbook. But I don’t do them to the detriment of people who are there in the flesh – whilst supposedly in a conversation with mates at the pub or carrying out a transaction in a shop or using the coffee machine at work or, basically, standing in the bloody way staring at a little screen or rudely ignoring people who I am supposed to be interacting with.

The below is my phone. It makes calls, it sends texts, it might even be able to work as an alarm clock (I am not sure). It does not do anything else much and it was ten quid {actually the below might be the version up from the really cheap thing I have}:

I was pondering this rude (ab)use of Smart Phones in a meeting this week. It was a meeting to discuss a program of work, what needed doing and by whom. It was a meeting where everyone in the room was involved, each person’s opinion was important and we all had a vested interest in the outcome of the meeting. So why did over half of the people not only have their Smart Phone out but were tapping away, scrolling through stuff, looking at some asinine rubbish on Facebook {yes, I saw you}? One or two people in the room might have been able to argue that they needed to keep an eye out for important emails or calls – but really? Are things so incredibly important and only you can deal with them that you can’t just play your full part in a meeting for an hour? I was so annoyed by this that I missed half the meeting internally moaning about it…

I just see it as rude. It’s saying “while you people are talking, I can’t be bothered listening and I certainly don’t need to give you my full attention. And I don’t even care that I’m making it so obvious”. Or “I am buying this item from you and we need to deal with the transaction but you are so inconsequential I don’t even have to pause this conversation about which cafe to meet in next week. You do not deserve more than 15% of my attention”.

I suppose that is what really gets my blood slowly heating up; it’s that it has become accepted to be so rude. Just walk down the street, head down and eyes fixed on your glowing little screen, making no attempt to navigate with your fellow city dwellers. I made a decision 2 {correction, 3} years ago that, if you are walking along staring at your phone and you are going to collide with me, you ARE going to collide with me if you do not become aware of me and make allowances – and I am lower down than you, I have braced my shoulder, and I am going to win this one. If they are so fixated on that bl00dy screen that they pay no attention to others, people ping off me like they’ve been thumped by a tree stump. It now happens a lot and I always “win”. I’m surprised no one has punched me yet.

If I was a manager again I would introduce a simple rule: no Smart Phone in your hand unless you have a stated reason for doing so. There are many valid reasons, which will all be related to the meeting. Otherwise you are just being disrespectful. If you feel the meeting does not apply to you or this section is not relevant, fine. Sit still and listen anyway. You might actually find it useful to know what everyone else is doing. Stop playing bl00dy mental chickens or whatever, or updating your status to “bored”.

I will hold strongly to these opinions. Right up until the minute I finally buy that iphone I’ve been considering getting. I really want to be able to check my twitter account during meetings, you see.