I wrote a paper about Oracle Database Vault in 12c for SANS last year, and it was published by SANS on their website in January 2015. I also prepared and delivered a webinar about this paper with SANS. The paper…
Posted by Pete on 30/06/15 at 05:38 PM
I was looking for something in the New Features Manual and I had a total WTF moment when I saw this stuff.
If you look at the final section of the article, you can see that in some cases these just get transformed into regular joins and outer joins, but there is certainly something else going on under the hood, as shown by the pipelined table function example.
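For reference, a minimal sketch of the kind of syntax involved, assuming the features in question are the 12c CROSS APPLY / OUTER APPLY (and LATERAL) joins; the dept and emp tables and their columns are illustrative only and not taken from the article:

-- CROSS APPLY: the inline view can reference columns of the
-- preceding table, and in simple cases the optimizer transforms
-- this into a regular join.
SELECT d.dname, e2.ename
FROM   dept d
       CROSS APPLY (SELECT e.ename
                    FROM   emp e
                    WHERE  e.deptno = d.deptno) e2;

-- OUTER APPLY behaves like a left outer join: departments with no
-- matching employees are still returned, with NULL for ename.
SELECT d.dname, e2.ename
FROM   dept d
       OUTER APPLY (SELECT e.ename
                    FROM   emp e
                    WHERE  e.deptno = d.deptno) e2;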
I think it’s going to take me a long time before I think of using these in my regular SQL…
Cheers
Tim…
Here’s the video of the E4 keynote we delivered together with Kerry Osborne a few weeks ago.
It explains, at a high level, what we see coming, from the viewpoint of long-time Oracle database professionals and using database terminology (as the E4 audience is all Oracle users like us).
However, this change is not really about the Oracle database world; it’s about a much wider shift in enterprise computing: modern Hadoop data lakes and clouds are here to stay. They are already taking over many workloads traditionally executed on in-house RDBMS systems on SAN storage arrays, especially all kinds of reporting and analytics. Oracle is just one of the many vendors affected by all this, and they have also jumped onto the Hadoop bandwagon.
A lot of blog posts and other internet publications have been written about the full segment scan behaviour of a serial process starting from Oracle version 11gR2. This behaviour is the Oracle engine making a decision between scanning the blocks of a segment into the Oracle buffer cache or scanning those blocks into the process’ private process global area (PGA). This decision is even more important on the Exadata platform, because the Oracle engine must have decided to read the blocks into the process’ PGA in order to be able to do a smart scan. This means that if you are already on Oracle 11gR2 and thinking about moving to the Exadata platform, the wait event ‘direct path read’ gives you an indication of how much could potentially be offloaded on Exadata, if you keep all the settings the same.
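As a rough illustration (not from the original post), the time spent on this wait event can be checked from the standard wait-event views; the SID value below is just an example:

-- Per-session time spent on 'direct path read' waits, a rough
-- indicator of full segment scans that bypassed the buffer cache.
SELECT event, total_waits, time_waited_micro
FROM   v$session_event
WHERE  sid = 42
AND    event = 'direct path read';

-- The same wait event, aggregated instance-wide.
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event = 'direct path read';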
OK, time to reveal how a couple of simple deletes can cause an index to double in size. If we go back and look at the tree dump before the delete operation:

----- begin tree dump
branch: 0x180050b 25167115 (0: nrow: 19, level: 1)
   leaf: 0x180050c 25167116 (-1: row:540.540 avs:4)
   leaf: 0x180050d 25167117 (0: row:533.533 […]
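For anyone wanting to produce a tree dump like the one quoted above, a minimal sketch follows; the index name IX_TEST and the object_id 12345 are illustrative placeholders, not the ones used in the post:

-- Find the object_id of the index to be dumped.
SELECT object_id
FROM   user_objects
WHERE  object_name = 'IX_TEST';

-- Substitute the object_id returned above for 12345; the dump is
-- written to the session trace file.
ALTER SESSION SET EVENTS 'immediate trace name treedump level 12345';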