Saturday, October 20, 2012

Major Pain - How to convert from 6k blocksize to 8k blocksize

OK, if you've been reading a few of my previous posts, you'll see I have a major problem. I tried to get the cleanest possible upgrade of a live production database from Oracle 9.2.0.7 x86 to 10.2.0.5 x64, so I used the method of recovering a hot-copy backup. The problem now is that the database has a DB_BLOCK_SIZE of 6K, which is causing problems: I can't use FLASHBACK DATABASE, and compressed RMAN backups taken from it can't be restored (bug 9789190). I'm supposed to create a Data Guard standby database from this db, and now I'm not even sure that will work, since Data Guard depends so heavily on archive logs, and FLASHBACK DATABASE is highly recommended for a standby.

Anyway, I would like to change to an 8k block size to reduce the hassle of dealing with these problems and to actually be able to use compressed backups and FLASHBACK DATABASE.

Here's how I intend on doing it:
  1. Check for Transportable Tablespace violations. Clean up any violations.
  2. Define a DB_8K_CACHE_SIZE.
  3. Create equivalent 8K tablespaces. Resize the new tablespaces to cope with the table transfers.
  4. LONG / LONG RAWs. Many application versions ago, some columns were supposed to be converted from LONG RAW to BLOB. But at this customer they weren't, and now it's my headache. Oracle can convert LONG RAW to BLOB with the ALTER TABLE ... MODIFY clause (sketched after this list), and I've tested that the application still works with BLOB (PowerBuilder handles it internally, according to the devs).
  5. Convert some of the tables' columns to BLOB.
  6. For those columns where I can't convert to LOB and am forced to keep them as LONG or LONG RAW: export, drop, re-create (in the new tablespace) and import. About 4 tables, which isn't too much work.
  7. Move the LOB data to 8k-tablespaces. I wrote a script for this.
  8. Move all tables to the 8k-equivalent tablespaces.
  9. Rebuild all indexes to a new 8k tablespace.
  10. Set all 8k tablespaces to read only.
  11. Export using expdp for transport.
  12. Export the software code by using exp rows=n.
  13. Create a shiny, new db with 8k blocksize.
  14. Create the application-specific db roles and schema users.
  15. Create the directories.
  16. Attach the transportable tablespaces (application-specific).
  17. Change the default tablespaces per schema user.
  18. Import the software code fromuser / touser on the 8k db.
  19. Recreate the application user Oracle accounts.
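To make the plan a bit more concrete, here is a rough SQL sketch of the main steps. All the tablespace, schema, table names and cache sizes below are placeholders rather than this application's real objects, so treat it as an outline, not the actual script:

-- 1. Check the candidate tablespaces for transportable tablespace violations
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA_8K,APP_INDEX_8K', TRUE);
SELECT * FROM transport_set_violations;

-- 2. Give the instance a buffer cache for 8k blocks
ALTER SYSTEM SET db_8k_cache_size = 512M SCOPE=BOTH;

-- 3. Create the 8k equivalents of the application tablespaces
CREATE TABLESPACE app_data_8k DATAFILE '+DG1' SIZE 10G
  BLOCKSIZE 8K EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

-- 4/5. Convert a LONG RAW column to BLOB
ALTER TABLE app_owner.documents MODIFY (doc_image BLOB);

-- 7. Relocate the LOB segment into an 8k tablespace
ALTER TABLE app_owner.documents MOVE LOB (doc_image) STORE AS (TABLESPACE app_data_8k);

-- 8/9. Move the tables and rebuild the indexes into the 8k tablespaces
ALTER TABLE app_owner.documents MOVE TABLESPACE app_data_8k;
ALTER INDEX app_owner.documents_pk REBUILD TABLESPACE app_index_8k;

-- 10/11. Make the 8k tablespaces read only, then export the metadata for transport
ALTER TABLESPACE app_data_8k READ ONLY;
ALTER TABLESPACE app_index_8k READ ONLY;
-- expdp system DIRECTORY=data_pump_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=APP_DATA_8K,APP_INDEX_8K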
Now, I could do a full export and a fromuser/touser import into a new db with an 8k block size. It would probably take around 9 hours. But that wouldn't fix the problem of LONG columns being where they shouldn't be.

I have checked that a table can have its segments split across 6k and 8k blocksize tablespaces at the same time, e.g. the table itself still in a 6k tablespace while its LOB segments already live in an 8k tablespace.
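For example (the owner name here is hypothetical), you can see the split placement by comparing the table and LOB tablespaces:

SELECT l.table_name,
       t.tablespace_name AS table_tablespace,
       l.tablespace_name AS lob_tablespace
FROM   dba_lobs l
JOIN   dba_tables t ON t.owner = l.owner AND t.table_name = l.table_name
WHERE  l.owner = 'APP_OWNER';

Tables still sitting in a 6k tablespace quite happily show their LOB segments in the new 8k tablespaces.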

Bug 9789190 - RMAN RESTORE raises ORA-19660 from a compressed backup taken from an 6k DB_BLOCK_SIZE [ID 9789190.8]

I finally found the issue with my migrated database. It's a bug - it prevents RMAN compressed backups made with a 6k blocksize from being usable. Yup, my backups were completely unusable. Thank goodness I discovered that BEFORE needing a backup.

I'm not going to reproduce the text of the bug. Its Bug ID is 9789190 - you can look it up.

The bug describes the cause as compressed backups created using an 'unusual' block size. Well, OK, is 6k an unusual block size? I thought in 9i it was pretty common. It's been fixed in 11.2.0.1, and it definitely affects 10.2.0.4 and 10.2.0.5.

I discovered it by accident. After converting the training db, I backed it up, and then messed around, then restored it. It didn't restore. I wasn't concerned about the restore per se, since I still had the offline backup from the 9i db. But, I was concerned that a freshly baked backup didn't work. That's how I found this bug. I was not a happy bunny when I found it, since that customer's live db has the same problem.

But, there are workarounds if you discover this issue the easy way. Don't use compressed backups or don't use an 'unusual' blocksize. Sigh.

I tested uncompressed RMAN backups, and yes, those work. I would also highly recommend taking a standard export in circumstances like these.
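For reference, the workaround on my side is simply to leave out the AS COMPRESSED BACKUPSET clause (or the equivalent CONFIGURE setting) until the blocksize is fixed. Roughly, assuming disk channels:

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;    # i.e. not COMPRESSED BACKUPSET
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;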

In a separate blog I'll detail how I'm going to deal with this permanently.

Tuesday, October 16, 2012

Lessons from migrating Oracle 9i to 10g


So I had to migrate a 32-bit Oracle 9.2.0.7 database on Windows 2000 to a new platform - 64-bit 10.2.0.5 on Windows 2008. Now, when doing this sort of thing, you generally have two options:

1. Database export & import. Generally, this tends to be messy, long and tedious. I dislike it, but in this case I probably should've done it this way. But I'm getting ahead of myself.

2. RMAN recovery. I wasn't sure about this method, since generally the RMAN versions have to be the same, i.e. an RMAN backup has to be restored with the same RMAN version. I thought around this and decided to try something different.

I found that if I took a hot-copy backup, I could re-create the instance on the new server, put the copied files in place and apply the archive logs! Voila! A nice, clean method of migrating using RMAN. I could then run catupgrd.sql at my leisure.
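In outline, the idea looks something like this. This is only a sketch, not the exact commands I ran; the paths and steps depend entirely on how the hot backup and controlfile are brought across:

-- instance created under the 10.2.0.5 home, copied 9i datafiles and controlfile in place
STARTUP MOUNT
-- apply the archive logs copied across from the 9i server, then CANCEL
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- open with RESETLOGS in upgrade mode and run the catalog upgrade
ALTER DATABASE OPEN RESETLOGS UPGRADE;
@?/rdbms/admin/catupgrd.sql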

Here are some lessons from this experience:

1. Controlfile restore - don't use RMAN.

I tried to restore the controlfile to an ASM volume, and here we go:

SQL> $rman target /

Recovery Manager: Release 10.2.0.5.0 - Production on Fri Oct 12 14:46:43 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: live (not mounted)

RMAN> restore controlfile from 'E:\backup321\CONTROL.BAK';

Starting restore at 12-OCT-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=158 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/12/2012 14:46:58
ORA-00600: internal error code, arguments: [krbydd_lbszXinbuf], [1048576], [6144], [], [], [], [], []
ORA-19600: input file is control file  (+DG1/live/controlfile/current.270.796488417)
ORA-19601: output file is control file  (E:\LIVE\CONTROLFILE\O1_MF_%U_.CTL)

This error is not documented anywhere. I think it has to do with using RMAN 10g to restore a 9i controlfile.

Lesson: copy the controlfile into place without RMAN.

Update: it has to do with the DB_BLOCK_SIZE  - look at the second argument, it's the DB_BLOCK_SIZE.
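So the simple workaround is to do it outside RMAN: point CONTROL_FILES at filesystem paths, copy the backup controlfile there at OS level, and mount. Something like this (the paths are placeholders, not my real ones):

SQL> ALTER SYSTEM SET control_files='E:\LIVE\CONTROL01.CTL' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> HOST copy E:\backup321\CONTROL.BAK E:\LIVE\CONTROL01.CTL
SQL> STARTUP MOUNT

It's not elegant, and you give up the controlfile-in-ASM placement for the moment, but it gets the instance mounted so the recovery can carry on.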

2. Check your blocksize.

This 9i db has a block size of 6k, which is unusual in the 10g world. Turning on FLASHBACK DATABASE works, but then try to open the db! This is what you get:



SQL> alter database flashback on;

Database altered.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38701: Flashback database log 1 seq 1 thread 1:
"E:\LIVE\FLASHBACK\O1_MF_87TK8985_.FLB"
ORA-27091: unable to queue I/O
ORA-27067: size of I/O buffer is invalid
OSD-04026: Invalid parameter passed. (OS 1048576)


SQL> show parameters db_rec

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
db_recovery_file_dest                string      E:\
db_recovery_file_dest_size           big integer 200G
db_recycle_cache_size                big integer 0
SQL> alter system set db_recovery_file_dest='+DG1' scope=both;

System altered.

SQL> alter database flashback off;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.


There's nothing about this on the Oracle Support site. I haven't been able to resolve it, but the workaround (as seen above) is to store the flashback logs on an ASM volume.

This error has something to do with the way the flashback log writer (RVWR) interacts with the OS; I suspect the blocksize is the cause.

This is a bit of a kick in the teeth for me, since FLASHBACK DATABASE is one of the most useful features of 10g. If I had used export / import, I'd have bypassed this issue.

Anyway, hope this helps someone out there.