The first tutorial will show how to create a Jira issue directly from the database. We will create two types of issue: a bug, and a subtask of an existing issue. .... Read more on my blog
I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter, so it is a little like the Oracle security round table sessions....[Read More]
Posted by Pete On 05/03/14 At 10:17 AM
The OTN Yathra 2014 tour is over and I’m back home now. Here are all the blog posts from the tour.
Although I come from the second biggest city in the UK, Birmingham has a very slow pace in comparison to other UK cities. Friends had told me how busy India was, so I was quite nervous about this trip and how I would cope with it. My initial fears were confirmed during my first taxi ride from Amritsar airport to Jalandhar. Getting ill on the first morning of the tour wasn’t a good omen either. Once the kind folks at the Lovely Professional University sorted me out with some medical attention, things started to get better and I started to believe I might make it to the end of the tour alive.
As the tour progressed I got into my stride and really started to enjoy the whole process. As I’ve said many times, I’m not a fan of travelling, but I like being at different places. The travelling part of this tour was very strenuous, which was my own fault for choosing to do all 7 events, but that was easily outweighed by getting the opportunity to connect with the attendees and speakers in all the cities.
Here come the much deserved thank you messages:
My lasting memories of India will be:
Until next time…
Application developers frequently experiment to achieve higher-quality builds: they try things out, make mistakes and find fixes. Database design can be like that too, but rapidly changing the schema in a traditional shared environment tends to break the application.
In this webinar, Kyle Hailey will explore how different types of cloning technology work and their benefits and limitations. The webinar will also demonstrate how you can set up and coordinate risk-free database experiments using Delphix and Red Gate’s Oracle tools.
Kyle Hailey and James Murtagh
Once upon a time, Oracle Support had a note called Script: Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1), which, let's just say, I didn't view in a particularly positive light :) Mainly because it gave dubious advice, which included that indexes should be rebuilt if: Deleted entries represent 20% or more of […]
Next Tuesday, 4 March, Oracle is organising an Oracle EMEA Virtual Developer Day. You can register for free and, should you have the opportunity, follow it live over the internet. You can also take part yourself, via two VirtualBox virtual demo machine environments that have been made available, or just the
This month's project will be along the same lines as last month's. It will be an Oracle integration with the issue tracking system Jira from Atlassian. There are two big use cases for me in this. One is that a lot of my customers have bought the Confluence->Jira pack, so being able to integrate the database directly with issues and tasks can make a lot of things easier and more effective. .... read more on my blog
In the first part, I explained that Incremental Statistics are designed to allow a partitioned table's Global Statistics to be updated based on a combination of
1) Statistics gathered by analysing the contents of one or more partitions that have just been loaded or have been updated (and see this blog post for more depth on what 'updated' means!)
2) The statistics of existing partitions which are represented by synopses that are already stored in the SYSAUX tablespace.
Using this combination, we can avoid scanning every partition in the table every time we want to update the Global Stats, which is an expensive operation that is likely to be infeasible every time we load some data into a large table.
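To see the state of the combination described above, you can compare the global row of a table's statistics against its per-partition rows. This is a sketch against a hypothetical partitioned table SH.SALES (the owner and table name are illustrative, not from the original post):

```sql
-- Check when global and per-partition statistics were last gathered.
-- The row with a NULL partition_name holds the table-level (global) stats.
SELECT partition_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SH'
AND    table_name = 'SALES'
ORDER  BY partition_name NULLS FIRST;
```

A global LAST_ANALYZED that lags far behind the newest partitions is the classic symptom that Incremental Statistics are intended to address.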
For me, the key word here is Incremental. Global Statistic updates are an incremental process, building on previous statistics (represented by the synopses) and only updating the Global Statistics based on the changes introduced by loading new partitions.
Understanding this might clear up another area of confusion I keep coming across. After upgrading their database to 11g, people often want to try out Incremental Global Stats on one of their existing large tables because they've always struggled to keep their Global Stats consistent and up to date. Maybe it's just the sites I work at but I'd say this is the most popular use case. Incrementals for a planned large partitioned table in your new systems might be a sensible idea, but there are a lot more existing systems out there with Global Stats collection problems that people have struggled with for years.
Most people I've spoken to initially had the impression that they simply flick the INCREMENTAL switch and perhaps modify some of the parameters to their existing DBMS_STATS calls so that GRANULARITY is AUTO and they use AUTO sampling sizes. All of which is discussed in the various white papers and blog posts out there.
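The switch-flicking described above can be sketched as follows. The owner and table name (SH.SALES) are illustrative assumptions; the preference and parameter names are the documented DBMS_STATS ones:

```sql
-- Enable incremental statistics maintenance for the table.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'SH',
    tabname => 'SALES',
    pname   => 'INCREMENTAL',
    pvalue  => 'TRUE');
END;
/

-- Gather with the settings incremental stats require:
-- AUTO granularity and the AUTO sampling size.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    granularity      => 'AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

-- Confirm the preference took effect.
SELECT DBMS_STATS.GET_PREFS('INCREMENTAL', 'SH', 'SALES') FROM dual;
```

Note that it is this very first GATHER_TABLE_STATS call after enabling INCREMENTAL that triggers the long-running baseline gather discussed next.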
Then they get a hell of a surprise when the very first gather runs for ages! How long is ages? I don't know in your particular case, but I've seen this running for hours and hours and hours, and people are crying into their keyboards wondering why something that was supposed to make things run more quickly is so much slower than their usual stats calls.
The best way I've found to explain this phenomenon is to concentrate on the synopses that describe the existing partitions. Where do you think they come from? How are they calculated and populated if you don't ask Oracle to look at the existing data in your enormous table? That's what needs to happen. In order to make future updates to your Global Stats much more efficient, we first need to establish the baseline describing your existing data that Oracle will use as the foundation for the later incremental updates.
Generating the synopses as the baseline for future improvements will be relatively painful for the largest tables (if it wasn't, you probably wouldn't be so interested in Incrementals), but it only has to happen once. You just need to understand that it does have to happen and plan for it as part of your migration.
My personal suggestion is usually to just delete all of the existing stats and start from scratch with modern default parameter values and tidy up any stats-related junk that might be lingering around large, critical tables. Painful but probably worth it!
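The start-from-scratch suggestion can be sketched like this, again against a hypothetical SH.SALES table. ESTIMATE_PERCENT is just one example of an old non-default preference worth resetting; run the whole thing in a maintenance window, because the regather will scan the entire table:

```sql
-- Drop the existing table, partition and column statistics.
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SH', tabname => 'SALES');
END;
/

-- Reset a lingering non-default preference back to the default.
BEGIN
  DBMS_STATS.DELETE_TABLE_PREFS(
    ownname => 'SH',
    tabname => 'SALES',
    pname   => 'ESTIMATE_PERCENT');
END;
/

-- Re-enable incremental stats and gather a fresh baseline
-- with modern default parameter values.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'INCREMENTAL', 'TRUE');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'SALES');
END;
/
```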