Oakies Blog Aggregator

Partitioned Clusters

In case you hadn’t noticed it, partitioning has finally reached clusters in 12c – specifically 12.1.0.2. They’re limited to hash clusters with range partitioning, but it may be enough to encourage more people to use the technology. Here’s a simple example of the syntax:


create cluster pt_hash_cluster (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
-- single table
hashkeys 10000
hash is id
size 700
partition by range (d_date) (
        partition p2011Jan values less than (to_date('01-Feb-2011','dd-mon-yyyy')),
        partition p2011Feb values less than (to_date('01-Mar-2011','dd-mon-yyyy')),
        partition p2011Mar values less than (to_date('01-Apr-2011','dd-mon-yyyy'))
)
;
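A table then has to be created in the cluster before any data can be loaded. Here's a sketch of what that might look like (the table name is made up, and I'm assuming the table repeats the cluster's partitioning clause; check the 12.1.0.2 SQL Reference for the exact requirements):


```sql
-- Hypothetical table in the partitioned cluster; the name and the repeated
-- partition clause are assumptions, not a tested example.
create table pt_hash (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
cluster pt_hash_cluster (id)
partition by range (d_date) (
        partition p2011Jan values less than (to_date('01-Feb-2011','dd-mon-yyyy')),
        partition p2011Feb values less than (to_date('01-Mar-2011','dd-mon-yyyy')),
        partition p2011Mar values less than (to_date('01-Apr-2011','dd-mon-yyyy'))
)
;
```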

I’ve been waiting for them to appear ever since 11.2.0.1 and the TPC-C benchmark that Oracle did with them – they’ve been a long time coming (check the partition dates – that gives you some idea of when I wrote this example).

Just to add choice (a.k.a. confusion) 12.1.0.2 has also introduced attribute clustering, so you can cluster data in single tables without creating clusters – but only while doing direct path loads or table moves. The performance intent is similar, though the technology and circumstances of use are different.
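A minimal sketch of the attribute clustering syntax (the table and column names are illustrative):


```sql
-- Attribute clustering: rows are ordered by the clustering columns,
-- but only during direct path loads or table moves.
create table orders (
        order_id     number,
        customer_id  number,
        order_date   date
)
clustering by linear order (customer_id, order_date);
```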

The Future of PL/SQL : My Opinion

Although a lot of my effort at the moment is focused on DBA features, I have written some articles on PL/SQL enhancements. There are a few neat new features for PL/SQL developers in 12c, but you could be forgiven for thinking it is a little underwhelming. There are two ways to look at this:

  1. OMG. Oracle really don’t care about PL/SQL any more. If they did, there would be loads of new features.
  2. Wow. PL/SQL is so mature and cool that there is really not much more to add.

From a base language perspective, I think option 2 is closer to the mark. PL/SQL is a really stable, fast and mature language. There really isn’t very much that you can’t do with PL/SQL these days. So what is the future of PL/SQL in my opinion?

As part of his role as PL/SQL evangelist, Steven Feuerstein contacted a number of people about their opinions of PL/SQL. When he asked me about my wish list, I suggested all functionality in the Alexandria PL/SQL Utility Library should really be in PL/SQL. A quick look at the Alexandria site shows it includes code that supports a large variety of functionality from a variety of authors. Among other things, this library includes:

  • Generating PDF files.
  • Generating Excel files.
  • Generating RTF files.
  • Microsoft Office Integration (OOXML).
  • Zip and Unzip functionality (separate to UTL_COMPRESS).
  • Parse CSV files.
  • Parse RSS feeds.
  • Generate JSON files.
  • FTP support.
  • Email support (SMTP, POP, IMAP, Exchange).
  • Integration with Google services (Google Maps, Google Calendar, Google Translate).
  • Integration with Amazon Web Services.
  • Integration with PayPal.
  • Integration with Twitter.
  • Consuming and publishing SOAP and REST web services.
  • Logging and debugging APIs and frameworks.

You may see some things in that list that look like duplication of functionality we already have. Oracle 10g introduced UTL_MAIL for sending emails from PL/SQL, but the functionality is so limited that you invariably end up coding your own APIs using UTL_SMTP (like this). Oracle 10g also introduced UTL_DBWS for consuming SOAP web services, but once again, it is often easier to do it yourself directly, or to use a simpler SOAP_API based on UTL_HTTP. We don’t even have any reasonable tracing functionality. Instead we have to write our own wrappers for DBMS_OUTPUT, or use someone else’s. So although the PL/SQL language is great, when it comes to integration with other technologies you end up having to do a lot of the heavy lifting yourself, or rely on someone else’s unsupported solution.
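The sort of home-grown UTL_SMTP wrapper referred to above might be sketched like this (the procedure name, mail host and port are assumptions for illustration; a production version would need error handling, authentication and probably MIME support):


```sql
-- Hypothetical minimal mail wrapper over UTL_SMTP; host and port are placeholders.
create or replace procedure send_mail (
  p_from    in varchar2,
  p_to      in varchar2,
  p_subject in varchar2,
  p_message in varchar2
) as
  l_conn  utl_smtp.connection;
begin
  l_conn := utl_smtp.open_connection('smtp.example.com', 25);
  utl_smtp.helo(l_conn, 'example.com');
  utl_smtp.mail(l_conn, p_from);
  utl_smtp.rcpt(l_conn, p_to);
  utl_smtp.open_data(l_conn);
  utl_smtp.write_data(l_conn, 'From: '    || p_from    || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'To: '      || p_to      || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, 'Subject: ' || p_subject || utl_tcp.crlf);
  utl_smtp.write_data(l_conn, utl_tcp.crlf || p_message);
  utl_smtp.close_data(l_conn);
  utl_smtp.quit(l_conn);
end;
/
```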

So in my opinion, the future for PL/SQL is not in major changes to the language itself, but in bringing these sort of support and integration packages into the database. I think we should avoid forcing overly complex frameworks on people. I’m very much talking about simple utility packages. This could be done in one of two ways:

  1. It is literally baked into PL/SQL. The problem with this approach is you will have to wait for a DB upgrade to get the latest and greatest functionality. That would be a shame since most of the functionality works for multiple DB versions.
  2. Oracle produce their own internal version of the Alexandria PL/SQL Utility Library. So you have an Oracle supplied, supported and maintained library of functionality you can download and use, independent of database version. This means updates are independent of major database version changes.

I think option 2 would make a lot of sense. If you think about it, we almost have a precedent for this in the form of APEX.

  • APEX is built on PL/SQL.
  • It ships separately to the main database releases.
  • Each release supports a variety of DB versions.
  • It brings with it a bunch of utility packages. They are there to support APEX, but there is nothing to stop you using them for your own applications, like this example of using APEX_JSON.
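As an illustration of that last point, generating JSON from plain PL/SQL with APEX_JSON can be as simple as this sketch (assuming a database with a recent APEX release installed; the values are made up):


```sql
begin
  apex_json.initialize_clob_output;
  apex_json.open_object;                 -- {
  apex_json.write('name', 'banana');     --   "name":"banana"
  apex_json.write('quantity', 42);       --   ,"quantity":42
  apex_json.close_object;                -- }
  dbms_output.put_line(apex_json.get_clob_output);
  apex_json.free_output;
end;
/
```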

Imagine how exciting it would be if part of the Oracle 12cR2 or Oracle 13c announcement included a huge library of support packages like this! :)

Cheers

Tim…


The Future of PL/SQL : My Opinion was first posted on September 21, 2014 at 4:18 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Site Hosting Update : Two Months Down the Line

About two months ago I moved my website between servers within the same hosting company. The old service was being hammered, so I moved to a better dedicated server blah, blah, blah. There was some fallout from this, mostly focussed on lost emails, but I figured it was pretty much over.

Yesterday I received two emails from totally separate people asking why I was blocking their company proxies from accessing my website. I wasn’t. One of the guys got his company to do “something” to their proxy (squid) and they were able to access my site again. The other is waiting for his network folks to check their proxy.

That incident prompted this quick post, just to let people know how things are…

The website is available from these URLs.

If you try to use the following URL, you will get an SSL security warning, but if you accept it, you will be bounced across to the proper HTTPS address.

From a Google indexing perspective, the main (canonical) URL is still “http://www.oracle-base.com”.

Q1: Do I plan to move over to pure HTTPS?

A1: Not at this time. It’s taken about 2 months to get my page views back to where they were before the move. That’s not super important for me, but if that loss in page views was because people and search engines were having issues with the site, that is not great. As I’ve said many times before, I do the site for me and the fact that other people like it is a bonus. Having said that, I would rather not annoy people by changing stuff every five minutes. :) At some point in the future that move will probably happen, but it should be seamless. Redirects are a wonderful thing…
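For what it's worth, the eventual move really should be seamless with a permanent redirect. A sketch, assuming an Apache front end (I don't know the actual server setup, so treat this as illustrative only):

```apache
# Hypothetical: send all plain-HTTP traffic to the HTTPS address with a 301.
<VirtualHost *:80>
    ServerName www.oracle-base.com
    Redirect permanent / https://www.oracle-base.com/
</VirtualHost>
```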

Q2: Does everything work on the HTTPS address?

A2: Almost. It’s not a super important issue for me at this time. As far as I can see, the main website and forum work fine. Currently, links to the blog fail on HTTPS. WordPress is kind-of anal about the main URL. You have to define the base URL and if you try and access it in any other way it is not happy. I did a quick search for solutions to this, but none were forthcoming. I’m sure it is solvable, but I can’t be bothered to spend any more time on it at this point. :) There will no doubt be lots of internal links that use HTTP, so you might jump back to that by accident. I’ll fix these over time, but like I said, low priority at this time. :)

Q3: Does the site work with IPv6?

A3: Yes. I got an email soon after the move saying that IPv6 access was broken. The web configuration wasn’t the problem, since it was listening on the IPv6 address as well as the IPv4 one. I had simply forgotten to open the IPv6 ports for HTTP and HTTPS. Once that was done, life was good.

So as far as I know, there are no outstanding problems on my side that would prevent anyone accessing the site. If you think there are issues (you probably can’t read this) I would like to know, but I would suggest you check on your side first. By that I mean, check your browser cache and company proxy cache are not the problem.

Anyway, enough of the boring stuff. It’s the company financial year end today, so I have to make sure we don’t blow any tablespaces and do the odd backup… Fun, fun, fun! :)

Cheers

Tim…


Site Hosting Update : Two Months Down the Line was first posted on September 20, 2014 at 9:52 am.

How to Configure EM12c to NOT Use Load Balancers

This may not come up very often, but for some reason, an administrator might have to reconfigure an EM12c environment to NOT use load balancers.

This could be due to:

1. Hardware issues on the load balancers
2. Mis-configured load balancers
3. Re-allocation of load balancer hardware for other purposes

Whatever the reason may be, I noted that the instructions for setting up the load balancers are easy to find, but instructions for configuring the EM12c environment to stop using them once they’ve been put in place are not so easy to find. To revert back to a non-load balancer configuration:

  • You will need the SYSMAN password to perform the following tasks.
  • First step will require you to reconfigure/secure the OMS without the load balancers.
  • Second step is to secure each agent without the wallet credentials and load balancer information.

Tip:  If you have a large number of agents that will need to be re-secured, scripting the task may be advisable to limit the downtime and the overhead.
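A scripted approach might look like the following sketch, which just emits the commands for review rather than running them (the hosts file name and AGENT_HOME path are illustrative assumptions):

```shell
#!/bin/sh
# Sketch only: print the emctl commands needed to re-secure each agent,
# so they can be reviewed (or piped to ssh) rather than run blindly.
# Note: "emctl secure agent" prompts for the registration password.
HOSTS_FILE=${HOSTS_FILE:-agent_hosts.txt}
AGENT_HOME=${AGENT_HOME:-/u01/app/oracle/agent/agent_inst}

# Demo placeholder so the sketch runs standalone; replace with your own list.
[ -f "$HOSTS_FILE" ] || printf 'agent-host-1\nagent-host-2\n' > "$HOSTS_FILE"

while read -r host; do
  echo "ssh $host '$AGENT_HOME/bin/emctl secure agent'"
done < "$HOSTS_FILE"
```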

Reconfigure the OMS

Log onto the OMS host

Proceed to the $OMS_HOME and secure the OMS, telling it to bypass the load balancers in the parameters:

cd $OMS_HOME

./emctl stop oms -all
./emctl secure oms -no_slb
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Securing OMS... Started.
Enter Enterprise Manager Root (SYSMAN) Password :
Enter Agent Registration Password :
Securing OMS... Succeeded.

Verify Changes

Check the details of your OMS to ensure that the change has taken place:

./emctl status oms -details
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
....
OMS is not configured with SLB or virtual hostname
Agent Upload is unlocked.
...

Re-secure the Agents

Each agent (other than the one on the OMS host, which was secured without the load balancers when you ran the command in the last step) will need to be secured via the following command:

cd $AGENT_HOME

./emctl secure agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Agent is already stopped...   Done.
Securing agent...   Started.
Enter Agent Registration Password :
Securing agent...   Succeeded.

That is all there is to reverting your EM12c environment to pre-load balancer configuration!

 

 



Copyright © DBA Kevlar [How to Configure EM12c to NOT Use Load Balancers], All Right Reserved. 2014.

Oracle OpenWorld 2014 – Bloggers Meetup

Guess what? You all know that it’s coming, when it’s coming and where… That’s right! The Annual Oracle Bloggers Meetup, one of your top favourite events of OpenWorld, is happening at the usual place and time.

What: Oracle Bloggers Meetup 2014

When: Wed, 1-Oct-2014, 5:30pm

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if coming — we need to know the attendance numbers.


Traditionally, Oracle Technology Network and Pythian sponsor the venue and drinks. We will also have some cool things happening and a few prizes.

In the age of Big Data and Internet of Things, our mingling activity this year will be virtual — using an app we wrote specifically for this event, so bring your iStuff and Androids to participate and win. Hope this will work! :)

As usual, vintage t-shirts, ties, or bandanas from previous meetups will make you look cool — feel free to wear them.

For those of you who don’t know the history: The Bloggers Meetup during Oracle OpenWorld was started by Mark Rittman and continued by Eddie Awad, and then I picked up the flag in 2009 (gosh…  6 years already?) The meetups have been a great success for making new friends and catching up with old, so let’s keep them this way! To give you an idea, here are the photos from the OOW08 Bloggers Meetup (courtesy of Eddie Awad) and OOW09 meetup blog post update from myself, and a super cool video by a good blogging friend, Bjorn Roest from OOW13.

While the initial meetings were mostly targeted at Oracle database folks, guys and gals from many Oracle technologies — Oracle database, MySQL, Apps, Sun technologies, Java and more — now join in the fun. All bloggers are welcome. We expect to gather around 150 bloggers.


If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all! Make sure you comment here if you are attending so that we have enough room, food, and (most importantly) drinks.

Of course, do not forget to blog and tweet about this year’s bloggers meetup. See you there!

Oracle Open World: Oaktable speakers at Delphix booth



Planning to attend Oracle OpenWorld this year? 

If so, stop by to visit Delphix at booth 821 and hear from Oracle industry luminaries such as Jonathan Lewis, Tim Gorman, Kyle Hailey, Ben Prusinski and the Oracle Alchemist, Steve Karam.

  • Taking Back Your Time: The Power of Virtual Databases, Jonathan Lewis
  • Virtual Data Platform: Revolutionizing Database Cloning, Kyle Hailey
  • Virtual Data for the High Performance Warehouse, Steve Karam
  • Transforming IT Infrastructure, Tim Gorman
  • Oracle E-Business Suite Updates with Delphix, Ben Prusinski

Join your peers at #CloneAttack #RepAttack and receive a free 30-day trial version of Delphix and Dbvisit Replicate.

DATE:  Monday, September 29
TIME:  3:30-5:00 PM
LOCATION:  OTN Lounge

If you are unable to make it to the OTN Lounge, visit the Children’s Creativity Museum (upstairs from Moscone South) anytime on Tuesday, September 30 to get your demo kit and start running your own private Delphix environment.


Visit Delphix in booth 821 to view a demo of their Virtual Data Platform or to meet the team!

 


Oracle Open World 2014

 

Delphix Booth Speaking Schedule

 

Monday, September 29, 2014

10:00 Jonathan Lewis Taking Back Your Time: The Power of Virtual Databases
11:00 Steve Karam Virtual Data for the High Performance Warehouse
1:00 Tim Gorman Transforming IT Infrastructure
2:00 Hubert Sun Agile Data + Agile Masking
3:00 Rick Caccia Delphix for Copy Data Management
4:00 Ben Prusinski Oracle E-Business Suite Upgrades with Delphix
Tuesday, September 30, 2014
10:00 Jonathan Lewis Taking Back Your Time: The Power of Virtual Databases
11:00 Ben Prusinski Oracle E-Business Suite Upgrades with Delphix
1:00 Tim Gorman Transforming IT Infrastructure
2:00 Hubert Sun Agile Data + Agile Masking
3:00 Charles Moore Public Clouds – How to Eliminate Security Risks and Decrease Costs
4:00 Steve Karam Everybody Gets a VDB – Including DBAs
Wednesday, October 1, 2014
10:00 Tim Gorman Transforming IT Infrastructure
11:00 Ben Prusinski Oracle E-Business Suite Upgrades with Delphix
1:00 Steve Karam Virtual Data for the High Performance Warehouse
2:00 Hubert Sun Agile Data + Agile Masking
3:00 Kyle Hailey Virtual Data Platform: Revolutionizing Database Cloning
4:00 Ansh Patnik Multiple Oracle EBS Environments with the Power of Delphix Technology

New Oracle CEOs “Hurd ‘n’ Catz”: This should be great

“Hurd’n’Catz” – I’ve always liked Larry, and especially in the old unscripted public discussions of technology. The best one I was at was at the Fairmont in 1994, but I’m biased because I was the MC. Larry and Ron Wohl were on hand for a general question and answer session instead of having a keynote talk at a particularly robust OAUG conference. After a few questions from the audience it turned into a snappy debate between Ron and Larry about how the future of some pretty doggone important things to the entire audience were going to go. I barely had to egg them on. It was real and it was useful. I think Larry was genuinely disappointed we had to stop when it was time for the vendor sponsored cocktail reception.

I wish Larry well and I would not begrudge him for a second letting go of pain in the butt day to day control. I would also not underestimate the value of Larry just having fun taking a look at the technology base he controls. And he’ll still be in charge of the overall strategy – just not stuck with the day to day pain in the butt execution details. This all seems perfect to me. Mark Hurd and Safra Catz run their portfolios like well-oiled machines. By giving up being CEO, Larry can be the Chairman of the Board and still run the hardware and software development pieces he’ll have fun with. I’m surprised the stock didn’t go up!

And Larry, if you ever want me to moderate a talk like that again I’m pretty sure I can clear up my dance card. All the best!

#OakTable World at Oracle OpenWorld 2014

Where:  Children’s Creativity Museum, 221 4th St, San Francisco

When:  Mon-Tue, 29-30 September, 08:30 – 17:00 PDT

For the third year in a row at the same fantastic location right in the heart of the bustling Oracle OpenWorld 2014 extravaganza, OakTable World 2014 is bringing together the top geeks of the worldwide Oracle community to present on the topics not approved for the OpenWorld conference.  At the OpenWorld conference.  For free.

The beauty of this unconference is its ad-hoc nature.  In 2010, weary of flying from Europe to endure marketing-rich content, Mogens Norgaard conceived Oracle ClosedWorld as an informal venue for those who wanted to talk about cool deep-technical topics.  Oracle ClosedWorld was first held in the back dining room at Chevy’s Fresh Mex on 3rd and Howard, fueled by Mogens’ credit card holding an open tab.  The following year in 2011, ClosedWorld was moved a little ways down Howard Street to the upstairs room at the Thirsty Bear, once again fueled by Mogens’ (and other) credit cards keeping a tab open at the bar.

In 2012, Kyle Hailey took the lead, found a fantastic venue, herded all the cats to make a 2-day agenda, and arranged for corporate sponsorship from Delphix, Pythian, and Enkitec, who have continued to sponsor OakTable World each year since.

If you’re coming to Oracle OpenWorld 2014 and are hungry for good deep technical content, stop by at OakTable World 2014, located right between Moscone South and Moscone West, and get your mojo recharged.

If you’re local to the Bay Area but can’t afford Oracle OpenWorld, and you like deep technical stuff about Oracle database, stop by and enjoy the electricity of the largest Oracle conference in the world, and the best Oracle unconference right in the heart of it all.

OakTable World 2014 – driven by the OakTable Network, an informal society of drinkers with an Oracle problem.

Back to technology: Is it effective to place Oracle REDO on SSD?

Ever since I was asked to improve the throughput of an actual general ledger posting job involving Oracle in December 1993 on some hardware where solid state disk (SSD) was available (at high cost relative to “spinning rust” or hard disk drives [HDD]), I have been trying to explain the overall advantage of placing different types of the different Oracle storage selectively on SSD.

When FLASH SSD arrived on the scene, studies quickly arose showing that writing to FLASH SSD is often not as fast as writing to disk drives dedicated to receiving those writes.

Today I’ll try to explain why I don’t care.

While there was some advantage to writing to SSD in my tests (which were to RAM based SSD on a VAX), the write speed to online REDO was not a significant part of the advantage of placing online REDO on SSD.

As Kevin Closson has repeatedly and carefully explained, the write speed of online REDO is rarely the problem: http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/

There are two things about moving online REDO to SSD (even the relatively slower FLASH kind) that are a big performance and cost advantage most of the time:

1) Most writes to online REDO are small and frequent. This generates a constant stream of seeks to find the correct place to write. On HDD that means either you dedicate a chunk of HDD (usually two, four, or eight whole trays, because we stripe and duplex by tray in actual big systems, many folks insist on both hardware duplexing and multiple members of each REDO log group on storage that fails separately, and you might need to ping-pong your REDO log groups so that REDO is written on distinct drives from where ARCH reads REDO), or you degrade the performance of the HDD containing the online REDO for other purposes, because you pester it with constant seeks away from the other work it is supporting.

2) Reads from online REDO are big drinks by ARCH which demands bandwidth. On HDD that means you either dedicate a chunk of HDD (as above, usually an expensive chunk) to the online REDO or you consume some of the read bandwidth from that HDD that would otherwise be available to all the oracle readers whenever ARCH is running.

Normally the amount of storage acreage required for online REDO is modest.

Thus, the cost calculation for deploying online REDO on SSD should be for the size of SSD big enough to do the job (times two or four, perhaps for duplexing and multiple members, but never times eight because of seek irrelevancy on SSD) versus the cost of deploying the online REDO on isolated chunks of HDD if overall performance is an issue.

The central value of putting online REDO on SSD is to de-heat the rest of the disk farm.

Unless you are in a rare situation where writing to online REDO is your pacing resource, and it is the pacing resource due to the write speed of the media (not available CPU or DIMM channel speed and availability), the relatively reduced speed of writing to some kinds of SSD compared with writing to a dedicated HDD presumably waiting to swallow the write at the correct seek location is of zero concern. (If you are in that situation, it is probably time to invest in a small amount of RAM based SSD – or you are doing a laboratory test just driving REDO, which is an interesting test not directly related to the production throughput of most real systems.)

Let’s review: If your actual problem is the speed of writing to online REDO log or log file sync, you are not likely to solve that problem by moving online REDO to slower SSD. (There is some possibility that the concomitant de-heating of the disk farm may have that net effect, but you could also achieve that by isolating online REDO on independently operating HDD.)

On the other hand, if you have a hot disk farm that is the pacing resource for your throughput and you can remove a lot of the heat for a modest investment in SSD, that is an effective use of SSD.

Leaping to the conclusion that moving online REDO to SSD is meant to speed up log writing or log file sync makes it seem as though a laboratory test showing slower writes to some kinds of SSD proves that putting online REDO on SSD is wrong.

I hope today I have helped explain why it is often a good investment to place online REDO on SSD.

Shrink Tablespace

A recent question on the OTN database forum raised the topic of returning free space in a tablespace to the operating system by rebuilding objects to fill the gaps near the start of files and leave the empty space at the ends of files so that the files could be resized downwards.

This isn’t a process that you’re likely to need frequently, but I have written a couple of notes about it, including a sample query to produce a map of the free and used space in a tablespace. While reading the thread, though, it crossed my mind that recent versions of Oracle introduced a feature that can reduce the amount of work needed to get the job done, so I thought I’d demonstrate the point here.

When you move a table its indexes become unusable and have to be rebuilt; but when an index becomes unusable, the more recent versions of Oracle will drop the segment. Here’s a key point – if the index becomes unusable because the table has been moved the segment is dropped only AFTER the move has completed. Pause a minute for thought and you realise that the smart thing to do before you move a table is to make its indexes unusable so that they release their space BEFORE you move the table. (This strategy is only relevant if you’ve mixed tables and indexes in the same tablespace and if you’re planning to do all your rebuilds into the same tablespace rather than moving everything into a new tablespace.)

Here are some outputs demonstrating the effect in a 12.1.0.2 database. I have created (and loaded) two tables in a tablespace of 1MB uniform extents, 8KB block size; then I’ve created indexes on the two tables. Running my ts_hwm.sql script I get the following results for that tablespace:


FILE_ID    BLOCK_ID   END_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 TEST_USER  T1              TABLE
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1_I1           INDEX
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free
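The script itself isn't reproduced here, but a simplified query along the same lines can be sketched from dba_extents and dba_free_space (a sketch only, not the actual ts_hwm.sql; the tablespace name is an assumption):


```sql
-- Simplified sketch: map used and free extents in one tablespace.
select  file_id, block_id, block_id + blocks - 1 as end_block,
        owner, segment_name, segment_type
from    dba_extents
where   tablespace_name = 'TEST_8K'
union all
select  file_id, block_id, block_id + blocks - 1,
        'free', 'free', null
from    dba_free_space
where   tablespace_name = 'TEST_8K'
order by file_id, block_id;
```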

Notice that it was a nice new tablespace, so I can see the two tables followed by the two indexes at the start of the tablespace. If I now move table t1 and re-run the script this is what happens:


alter table t1 move;

FILE_ID    BLOCK_ID   END_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 free       free
                512         639 TEST_USER  T2_I1           INDEX
                640         767 TEST_USER  T1              TABLE
                768      65,535 free       free

Table t1 is now situated past the previous tablespace highwater mark and I have two gaps in the tablespace where t1 and the index t1_i1 used to be.

Repeat the experiment from scratch (drop the tables, purge, etc. to empty the tablespace) but this time mark the index unusable before moving the table and this is what happens:
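The second pass differs only in marking the index unusable first, along these lines (using the object names from the listings):


```sql
alter index t1_i1 unusable;
alter table t1 move;
```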


FILE_ID    BLOCK_ID   END_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1              TABLE
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free

Table t1 has moved into the space vacated by index t1_i1, so the tablespace highwater mark has not moved up.

If you do feel the need to reclaim space from a tablespace by rebuilding objects, you can find it quite hard to decide the order in which the objects should be moved/rebuilt to minimise the work you (or rather, Oracle) have to do; if you remember that any table you move will release its index space anyway, and insert a step to mark those indexes unusable before you move the table, you may find it much easier to work out a good order for moving the tables.

Footnote: I appreciate that some readers may already take advantage of the necessity of rebuilding indexes by dropping indexes before moving tables – but I think it’s a nice feature that we can now make them unusable and get the same benefit without introducing a risk of error when using a script to recreate an index we’ve dropped.