Saturday, October 20, 2012

Major Pain - How to convert from 6k blocksize to 8k blocksize

OK, if you've been reading a few of my previous posts, you'll see I have a major problem. I tried to get the cleanest upgrade of a live production database from Oracle 9.2.0.7 x86 to 10.2.0.5 x64, so I used the method of recovering a hot-copy backup. But the problem now is that the database has a DB_BLOCK_SIZE of 6K, which is causing problems, namely being unable to use FLASHBACK DATABASE and unable to create compressed RMAN backups (bug 9789190). I'm supposed to create a Data Guard standby database from this db, but now I'm not even sure it'll work, since Data Guard is so highly dependent on archive logs, and flashback database is highly recommended.

Anyway, I would like to change to an 8k block size to reduce the hassle of dealing with these problems and to actually use compressed backups and FLASHBACK DATABASE.

Here's how I intend on doing it:
  1. Check for Transportable Tablespace violations. Clean up any violations.
  2. Define a DB_8K_CACHE_SIZE.
  3. Create equivalent 8K tablespaces. Resize the new tablespaces to cope with the table transfers.
  4. LONG / LONG RAWs. Many application versions ago, some columns were supposed to be converted from LONG RAW to BLOB. But at this customer, they weren't, and now it's my headache. Oracle can convert LONG RAW to BLOB using the ALTER TABLE MOVE clause, and I've tested that the application still works with BLOB (PowerBuilder handles it internally, according to the devs).
  5. Convert some of the tables' columns to BLOB.
  6. For those columns where I can't convert to LOB and I'm forced to keep LONG or LONG RAW, export, drop, re-create (in a new tablespace) and import. About 4 tables, which isn't too much work.
  7. Move the LOB data to 8k-tablespaces. I wrote a script for this.
  8. Move all tables to the 8k-equivalent tablespaces.
  9. Rebuild all indexes to a new 8k tablespace.
  10. Set all 8k tablespaces to read only.
  11. Export using expdp for transport.
  12. Export the software code by using exp rows=n.
  13. Create a shiny, new db with 8k blocksize.
  14. Create the application-specific db roles and schema users.
  15. Create the directories.
  16. Attach the transportable tablespaces (application-specific).
  17. Change the default tablespaces per schema user.
  18. Import the software code fromuser / touser on the 8k db.
  19. Recreate the application user Oracle accounts.
Now, I could do a full export and a fromuser/touser import into a new db with an 8k block size. It would probably take around 9 hours. But that wouldn't fix the problem of LONG columns being where they shouldn't be.
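The heart of steps 2-5 can be sketched in SQL like this. A sketch only - the tablespace, table and column names (appdata_8k, app.documents, doc_body) are made up for illustration, and the sizes need tuning for your data:

```sql
-- Step 2: define a buffer cache for 8k blocks
ALTER SYSTEM SET db_8k_cache_size = 256M SCOPE=BOTH;

-- Step 3: create an 8k-blocksize equivalent tablespace
CREATE TABLESPACE appdata_8k
  DATAFILE '+DG1' SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 8K;

-- Steps 4/5: convert a LONG RAW column to BLOB, then move the
-- table and its LOB segment into the 8k tablespace
ALTER TABLE app.documents MODIFY (doc_body BLOB);
ALTER TABLE app.documents MOVE TABLESPACE appdata_8k
  LOB (doc_body) STORE AS (TABLESPACE appdata_8k);
```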

I have verified that a single table can have columns residing simultaneously in 6k and 8k blocksize tablespaces (e.g. the table segment in a 6k tablespace and its LOB segment in an 8k one).
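The transport itself (steps 1, 10-11 and 16 above) boils down to the following - again a sketch with illustrative names; DATA_PUMP_DIR is the default 10.2 directory object:

```sql
-- Step 1: check for transportable tablespace violations
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APPDATA_8K', TRUE);
SELECT * FROM transport_set_violations;

-- Steps 10-11: freeze the tablespace and export its metadata
ALTER TABLESPACE appdata_8k READ ONLY;
HOST expdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_tablespaces=APPDATA_8K

-- Step 16: after copying the datafiles, plug into the new 8k db
HOST impdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_datafiles='+DG1/new8k/appdata_8k.dbf'
```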

Bug 9789190 - RMAN RESTORE raises ORA-19660 from a compressed backup taken from a 6k DB_BLOCK_SIZE [ID 9789190.8]

I finally found the issue with my migrated database. It's a bug - it prevents RMAN compressed backups made with a 6k blocksize from being usable. Yup, my backups were completely unusable. Thank goodness I discovered that BEFORE needing a backup.

I'm not going to reproduce the text of the bug. Its Bug ID is 9789190 - you can look it up.

The bug describes it as affecting compressed backups created using an 'unusual' block size. Well, OK, is 6k an unusual block size? I thought it was pretty common in 9i. It's been fixed in 11.2.0.1; it definitely affects 10.2.0.4 and 10.2.0.5.

I discovered it by accident. After converting the training db, I backed it up, and then messed around, then restored it. It didn't restore. I wasn't concerned about the restore per se, since I still had the offline backup from the 9i db. But, I was concerned that a freshly baked backup didn't work. That's how I found this bug. I was not a happy bunny when I found it, since that customer's live db has the same problem.

But, there are workarounds if you discover this issue the easy way: don't use compressed backups, or don't use an 'unusual' blocksize. Sigh.

I tested uncompressed RMAN backups, and yes, they work. I would also highly recommend a standard export in such circumstances.
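For reference, the workaround is simply to drop the COMPRESSED keyword - a plain backupset works fine on a 6k DB_BLOCK_SIZE:

```sql
RMAN> BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;
```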

In a separate blog I'll detail how I'm going to deal with this permanently.

Tuesday, October 16, 2012

Lessons from migrating Oracle 9i to 10g


So I had to migrate a 32-bit Oracle 9.2.0.7 database on Windows 2000 to a new platform - 64-bit 10.2.0.5 on Windows 2008. Now, when doing this sort of thing, you generally have two options:

1. Database export & import. Generally, this tends to be messy, long and tedious. I dislike it, but in this case, I probably should've done it this way. But I'm getting ahead of myself.

2. RMAN recovery. I wasn't sure about this method, since generally the RMAN versions have to be the same, i.e. an RMAN backup has to be restored with the same RMAN version. I thought my way around this and decided to try something different.

I found that if I took a hot-copy backup, I could re-create the instance and apply the archive logs! Voila! A nice, clean method of migrating using RMAN. I could then run catupgrd.sql at my leisure.
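In outline, the recovery side looks something like this - a sketch only, not the exact commands I ran; paths are omitted and the UNTIL CANCEL / UPGRADE options should be checked against the Upgrade Guide for your release:

```sql
-- 9i hot-copy datafiles already in place under the 10g instance
STARTUP MOUNT;

-- Roll the copied files forward with the 9i archive logs
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

-- Open for upgrade, then run the upgrade script
ALTER DATABASE OPEN RESETLOGS UPGRADE;
@?/rdbms/admin/catupgrd.sql
```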

Here are some lessons from this experience:

1. Controlfile restore - don't use RMAN.

I tried to restore the controlfile to an ASM volume, and here we go:

SQL> $rman target /

Recovery Manager: Release 10.2.0.5.0 - Production on Fri Oct 12 14:46:43 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: live (not mounted)

RMAN> restore controlfile from 'E:\backup321\CONTROL.BAK';

Starting restore at 12-OCT-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=158 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/12/2012 14:46:58
ORA-00600: internal error code, arguments: [krbydd_lbszXinbuf], [1048576], [6144], [], [], [], [], []
ORA-19600: input file is control file  (+DG1/live/controlfile/current.270.796488417)
ORA-19601: output file is control file  (E:\LIVE\CONTROLFILE\O1_MF_%U_.CTL)

This error is not documented anywhere. I think it has to do with using RMAN 10g to restore a 9i controlfile.

Lesson: copy the controlfile into place without RMAN.

Update: it has to do with the DB_BLOCK_SIZE - look at the second numeric argument of the ORA-600: 6144 is the DB_BLOCK_SIZE.

2. Check your blocksize.

This 9i db has a block size of 6k, which is unusual in the 10g world. Turning on FLASHBACK DATABASE works, but then try to open the db! This is what you get:



SQL> alter database flashback on;

Database altered.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38701: Flashback database log 1 seq 1 thread 1:
"E:\LIVE\FLASHBACK\O1_MF_87TK8985_.FLB"
ORA-27091: unable to queue I/O
ORA-27067: size of I/O buffer is invalid
OSD-04026: Invalid parameter passed. (OS 1048576)


SQL> show parameters db_rec

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
db_recovery_file_dest                string      E:\
db_recovery_file_dest_size           big integer 200G
db_recycle_cache_size                big integer 0
SQL> alter system set db_recovery_file_dest='+DG1' scope=both;

System altered.

SQL> alter database flashback off;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.


There's nothing about this on the Oracle Support site. I have been unable to resolve it, but the workaround (as seen above) is to store the flashback logs on an ASM volume.

This error has something to do with the way the flashback log writer (RVWR) is interacting with the OS. I think it has to do with the blocksize.

This is a bit of a kick in the teeth for me, since FLASHBACK DATABASE is one of the most useful features of 10g. If I had used export / import, I'd have bypassed this issue.

Anyway, hope this helps someone out there.

Tuesday, September 18, 2012

Rename a Database

Here's a cool tip - how to actually rename a database. You can always change SERVICE_NAMES, but if you want to change the internal name of the db, here's a quick tutorial using the utility NID:



SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount --(must be mounted)
ORACLE instance started.

Total System Global Area 1677721600 bytes
Fixed Size                  2066240 bytes
Variable Size             318769344 bytes
Database Buffers         1342177280 bytes
Redo Buffers               14708736 bytes
Database mounted.

 Now, we’re going to change the database name to SBY02. We assume ORACLE_SID=<OLDSID>:

SQL> $nid target=/ DBNAME=SBY02 setname=YES

DBNEWID: Release 10.2.0.4.0 - Production on Thu Jul 29 15:11:25 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to database LIVE (DBID=551896809)

Connected to server version 10.2.0

Control Files in database:
    +DG1/sby02/controlfile/control01.ctl

Change database name of database LIVE to SBY02? (Y/[N]) => Y

Proceeding with operation
Changing database name from LIVE to SBY02
    Control File +DG1/sby02/controlfile/control01.ctl - modified
    Datafile +DG1/sby02/datafile/system.5391.725630121 - wrote new name
    Datafile +DG1/sby02/datafile/undotbs1.2314.725630121 - wrote new name
    Datafile +DG1/sby02/datafile/sysaux.2311.725630121 - wrote new name
    Datafile +DG1/sby02/datafile/users.2323.725630125 - wrote new name
      Datafile +DG1/sby02/tempfile/temp.2213.725641409 - wrote new name
    Control File +DG1/sby02/controlfile/control01.ctl - wrote new name
    Instance shut down

Database name changed to SBY02.
Modify parameter file and generate a new password file before restarting.
Succesfully changed database name.
DBNEWID - Completed succesfully.


SQL> disconn
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64
bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 1677721600 bytes
Fixed Size                  2066240 bytes
Variable Size             318769344 bytes
Database Buffers         1342177280 bytes
Redo Buffers               14708736 bytes
SQL> alter system set db_name=SBY02 scope=spfile;

System altered.

SQL> shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup

The other cool thing about this is that DB_UNIQUE_NAME is changed automatically; only DB_NAME needs changing. I leave it to you to generate a new password file.
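Generating the password file is one command with orapwd - a sketch; the file name follows the Windows PWD&lt;SID&gt;.ora convention and the password shown is a placeholder:

```shell
C:\> orapwd file=%ORACLE_HOME%\database\PWDSBY02.ora password=MyNewSysPwd entries=5
```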

Sunday, September 16, 2012

DDL Detection

Being an Oracle DBA is not always a pleasant task. There are unscrupulous third parties out there who steal your employer's customers, and then try to steal their intellectual property.

Here's one script I wrote after encountering the darker side of DBA-ing. When an untrusted third party starts messing about in your employer's databases, then things "get real".

In this set of scripts, we create a little trap for our third-party friends - if they run DDL, you log it and email it to yourself.

This is a precursor - a backup for another, much more advanced defence against unwanted visitors, which I'll post in another blog entry.

This was written for 10g, YMMV for other versions.

Yes, stuff is written to the SYS schema, which is a no-no; I'll re-write it to use another schema in a later blog entry.

-- clear the previous table
DROP TABLE SYS.ddl_log;


CREATE TABLE SYS.ddl_log
    (audit_id                       NUMBER NOT NULL,
    audit_date                     DATE NOT NULL,
    sessionid                      NUMBER,
    username                       VARCHAR2(30),
    osuser                         VARCHAR2(30),
    terminal                       VARCHAR2(16),
    HOST                           VARCHAR2(64),
    ip                             VARCHAR2(15),
    program                        VARCHAR2(64),
    module                         VARCHAR2(48),
    operation                      VARCHAR2(30),
    obj_owner                      VARCHAR2(30),
    object_name                    VARCHAR2(30),
    sql_text                       VARCHAR2(64)
    )
/

-- Indexes for SYS.ddl_log

CREATE UNIQUE INDEX SYS.pk_ddl_log ON SYS.ddl_log
  (
    audit_id                        ASC
  )
/

-- Comments for SYS.ddl_log

COMMENT ON COLUMN SYS.ddl_log.audit_id IS 'Primary Key'
/
-- End of DDL Script for Table SYS.DDL_LOG

-- We need sequences for the trigger
DROP SEQUENCE SYS.sq_ddl_log;
CREATE SEQUENCE SYS.sq_ddl_log
 START WITH  1
 INCREMENT BY  1
 NOCACHE
/


-- Start of DDL Script for Trigger SYS.DDL_DETECTION

create or replace trigger SYS.DDL_DETECTION
 before
  create or alter or drop
 on database
DECLARE
   v_audit_rec       ddl_log%ROWTYPE;
   oper              ddl_log.operation%TYPE;
   sql_text          ora_name_list_t;
   i                 PLS_INTEGER;
   v_db_instance     VARCHAR2 (8);
   v_email_subject   VARCHAR2 (32);
   v_email_body      VARCHAR2 (4000);
   crlf              VARCHAR2 (2)             := CHR (13) || CHR (10);
   PRAGMA AUTONOMOUS_TRANSACTION; /* only way to commit within a trigger */

BEGIN
/* no checking v$session for detection avoidance
no intention to block - only log and email*/
   SELECT ora_sysevent
     INTO oper
     FROM DUAL;

   BEGIN
      SELECT program
        INTO v_audit_rec.program
        FROM SYS.v_$session
       WHERE audsid = SYS_CONTEXT ('USERENV', 'SESSIONID')
         -- AUDSID is the same for all SYS connections
         AND SID = SYS_CONTEXT ('USERENV', 'SID')
         -- so we use SID to differentiate
         AND ROWNUM = 1;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         v_audit_rec.program := 'SESSIONID_NOT_FOUND';
   END;

-- get all audit info first
   i := ora_sql_txt (sql_text);

   IF oper IN ('CREATE', 'DROP', 'ALTER')
   THEN
      SELECT ora_sysevent, ora_dict_obj_owner,
             ora_dict_obj_name, sql_text (1)
        INTO v_audit_rec.operation, v_audit_rec.obj_owner,
             v_audit_rec.object_name, v_audit_rec.sql_text
        FROM DUAL;
   END IF;

   SELECT SYS_CONTEXT ('USERENV', 'SESSIONID'),
          SYS_CONTEXT ('USERENV', 'SESSION_USER'),
          SYS_CONTEXT ('USERENV', 'OS_USER'),
          SYS_CONTEXT ('USERENV', 'TERMINAL'),
          SYS_CONTEXT ('USERENV', 'HOST'),
          SYS_CONTEXT ('USERENV', 'IP_ADDRESS'),
          SYS_CONTEXT ('USERENV', 'MODULE'),
          SYS_CONTEXT ('USERENV', 'DB_UNIQUE_NAME')
     INTO v_audit_rec.sessionid,
          v_audit_rec.username,
          v_audit_rec.osuser,
          v_audit_rec.terminal,
          v_audit_rec.HOST,
          v_audit_rec.ip,
          v_audit_rec.module,
          v_db_instance
     FROM DUAL;

--determine if we need to log or filter out stuff we're not interested in
   IF (    (v_audit_rec.sessionid > 0)
       AND (v_audit_rec.object_name NOT LIKE ('SYS_TEMP%'))
       AND (v_audit_rec.terminal NOT IN ('MYWORKSTATION', 'FRIENDLYWORKSTATION'))
      )
   THEN                                                           -- let's log
      SELECT sq_ddl_log.NEXTVAL
        INTO v_audit_rec.audit_id
        FROM DUAL;

      v_audit_rec.audit_date := SYSDATE;

      INSERT INTO SYS.ddl_log
           VALUES v_audit_rec;

      COMMIT;
      -- and email it
      v_email_subject := v_db_instance || ': DDL Notification';
      v_email_body :=
            'Timestamp   : '
         || TO_CHAR (v_audit_rec.audit_date, 'dd/mm/yyyy hh24:mi:ss')
         || crlf
         || 'Session ID  : '
         || v_audit_rec.sessionid
         || crlf
         || 'Username    : '
         || v_audit_rec.username
         || crlf
         || 'OS User     : '
         || v_audit_rec.osuser
         || crlf
         || 'Host        : '
         || v_audit_rec.terminal
         || crlf
         || 'IP Address  : '
         || v_audit_rec.ip
         || crlf
         || 'Module      : '
         || v_audit_rec.module
         || crlf
         || 'Program     : '
         || v_audit_rec.program
         || crlf
         || 'Operation   : '
         || v_audit_rec.operation
         || crlf
         || 'Schema Owner: '
         || v_audit_rec.obj_owner
         || crlf
         || 'Object      : '
         || v_audit_rec.object_name
         || crlf
         || 'Statement   : '
         || v_audit_rec.sql_text;

      BEGIN
         mailit.mailusers ('kmohammed', '', v_email_subject, v_email_body);
      EXCEPTION
         --silently fail email errors, don't alert our visitors
         WHEN OTHERS
         THEN
            NULL;
      END;
   END IF;
END DDL_DETECTION;
/


-- End of DDL Script for Trigger SYS.DDL_DETECTION

If you don't require email-ing of the DDL, then that's easily removed from the script.
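A quick way to confirm the trigger is wired up - a sketch, where SCOTT is just an example schema, run from a terminal that isn't in the exclusion list:

```sql
-- Any DDL from a non-excluded session should now land in the log
CREATE TABLE scott.trigger_test (n NUMBER);

SELECT audit_date, username, operation, object_name, sql_text
  FROM SYS.ddl_log
 ORDER BY audit_id DESC;

DROP TABLE scott.trigger_test;
```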

Let me explain one of the most important parts of this script:


       SELECT program
        INTO v_audit_rec.program
        FROM SYS.v_$session
       WHERE audsid = SYS_CONTEXT ('USERENV', 'SESSIONID')
         -- AUDSID is the same for all SYS connections
         AND SID = SYS_CONTEXT ('USERENV', 'SID')
         -- so we use SID to differentiate
         AND ROWNUM = 1;



This appears convoluted because:
  1. Not all clients populate the MODULE column of V$SESSION. Toad is especially guilty of this, and I suspected the third party was using Toad. Hence the decision to use the PROGRAM column.
  2. People can mess with V$SESSION. Yes, dear reader, there are unscrupulous people out there who mess with V$SESSION in an effort to avoid simpler defences against their unauthorised intrusions.
  3. Detecting the correct SYS session accurately is much harder than detecting an average session.

That's why I take the following steps:
  1. Select from the underlying table, i.e. V_$SESSION.
  2. Check the client's SESSIONID ( SYS_CONTEXT ('USERENV', 'SESSIONID') ) against the AUDSIDs in V_$SESSION. If the client's SESSIONID does not show up in V_$SESSION, now that's an interesting login.
  3. Have further SYS-specific checks.
One problem with SYS is that if your SQLNET.ORA has:

SQLNET.AUTHENTICATION_SERVICES= (NTS)

And your account is a member of the local group ORA_DBA, then you can connect using OS Authentication (on Windows) without using a password, and "AS SYSDBA". We have to live with that.

In later posts, I'll update the script to use a non-SYS schema, and I'll post my advanced scripts to defend against those unscrupulous people.

Execute ANY Host command

I had a problem at one customer - I couldn't log in to the Windows host. Their MS techie had created a local admin user for me, but the account had been locked out or the password had been changed, and now I had no way to use it - the desk jockey wasn't responding to emails fast enough.

Anyway, I could still log in to the Oracle DB as SYS. This little script allowed me to change the password of the host account:


-- run as SYS, and change the command in the 3rd argument
BEGIN
   DBMS_SCHEDULER.create_job ('t1',
                              job_action               => 'C:\WINDOWS\SYSTEM32\CMD.EXE',
                              number_of_arguments      => 3,
                              job_type                 => 'executable',
                              enabled                  => FALSE
                             );
   DBMS_SCHEDULER.set_job_argument_value ('t1', 1, '/q');
   DBMS_SCHEDULER.set_job_argument_value ('t1', 2, '/c');
   DBMS_SCHEDULER.set_job_argument_value ('t1',
                                          3,
                                          'C:\WINDOWS\SYSTEM32\NET.EXE user localadmin password1234'
                                         );
   DBMS_SCHEDULER.ENABLE ('t1');
END;
/

The third argument is the key - with it, you can run any command as the built-in user LocalSystem. In this example, the command resets the localadmin account with the password password1234. It effectively gives you complete control of the host.

It's not a backdoor. You must know the SYS password, and it works this way because the Oracle services run as LocalSystem (by default) on a Windows host.

This also works for Windows 2008, but I had to specify the full path to the NET.EXE file. You can troubleshoot any failed commands by looking at the output of DBMS_SCHEDULER.
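If a job fails, the scheduler's run history usually tells you why - STATUS and ADDITIONAL_INFO are the interesting columns (note that job names are stored in upper case):

```sql
SELECT log_date, status, additional_info
  FROM dba_scheduler_job_run_details
 WHERE job_name = 'T1'
 ORDER BY log_date DESC;
```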

RMAN Restore by Command-Line


Overview: I will explain RMAN fundamentals and work through an example to demonstrate an RMAN restore.

Fundamentals


An RMAN restore is really just two commands:

  1. RESTORE DATABASE. This command instructs RMAN to search the controlfile for a list of valid database backups, select the most recent and issue a restore. A restore is simply the placing of database files on disk.

  2. RECOVER DATABASE. This command tells RMAN to make the database consistent. Before a database can be opened, it must be consistent. Unless directed by Oracle Support, this rule cannot be overridden, and it is always enforced by the database.
In Oracle terms, consistency means all database files must be at the same System Change Number (SCN). The SCN is the way Oracle keeps internal time - every change to the database (INSERT, UPDATE, DELETE, ALTER, COMMIT or a system checkpoint) is assigned a unique SCN.

Now, a consistent restore is not necessarily complete. In Oracle terms, if a backup was taken while the database was open (hence using archivelog mode), any recovery from such a backup will be incomplete.

A backup taken from an open database in archivelog mode cannot back up the online redo logs. So any changes recorded only in an online redo log will not be backed up, which is why the restore is referred to as incomplete. This is not a problem, since for most intents and purposes a restore to the last available archivelog is what's expected.

Example


We will use RMAN to restore a database called LIVE as the example process. We assume:
·         The database was in archivelog mode
·         Autobackup turned on
·         The backup & autobackup are available on disk

In this example, commands to be typed are shown in the context of the command output. Steps should be followed in the order given, since later steps depend on successful completion of previous ones.


  1. This step may be optional: it is necessary if the restore is to be done on a new server; otherwise the instance may already exist. Create a new Oracle instance called LIVE using these commands in a new Command Prompt window:

CD /D C:\oracle\product\10.2.0\db_1\BIN

oradim -NEW -SID LIVE -SYSPWD password_live

  2. In this step, we will use RMAN to restore the latest spfile and controlfile from what is known as an autobackup. An autobackup contains the controlfile and spfile of the LIVE database.

HUMAN INTERVENTION NECESSARY: You need to identify the appropriate autobackup file. It is usually located in the latest-dated sub-folder with an Oracle-generated name. When typing the commands below, substitute the appropriate file names.

You will need to determine the appropriate controlfile path & name for this step. Note that the entire file name is enclosed in single quotes.

  3. Perform the restore:
    1. Open a Command prompt and type:

set oracle_sid=LIVE

   rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Oct 27 16:18:41 2011

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\
SERVER\DATABASE\INITXE.ORA'

starting Oracle instance without parameter file for retrival of spfile
Oracle instance started

Total System Global Area     159383552 bytes

Fixed Size                     1286292 bytes
Variable Size                 58724204 bytes
Database Buffers              96468992 bytes
Redo Buffers                   2904064 bytes

RMAN> restore spfile from 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\AUTOBAC
KUP\2011_10_27\O1_MF_S_765644418_7BLS3FWT_.BKP';

Starting restore at 27-OCT-11
using channel ORA_DISK_1

channel ORA_DISK_1: autobackup found: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA
\XE\AUTOBACKUP\2011_10_27\O1_MF_S_765644418_7BLS3FWT_.BKP
channel ORA_DISK_1: SPFILE restore from autobackup complete
Finished restore at 27-OCT-11

RMAN>

RMAN> shutdown abort;

ORACLE instance shut down.

RMAN> startup nomount;

ORACLE instance started.

Total System Global Area 1677721600 bytes
Fixed Size                  2066240 bytes
Variable Size             318769344 bytes
Database Buffers         1342177280 bytes
Redo Buffers               14708736 bytes


RMAN> restore controlfile from 'E:\DSLD01\FRA\LIVE\AUTOBACKUP\2010_07_28\O1_MF_S
_725565668_650RLONZ_.BKP';

Starting restore at 29-JUL-10
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
output filename=+DG1/sby02/controlfile/control01.ctl
Finished restore at 29-JUL-10


RMAN> shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
RMAN> startup mount
ORACLE instance started.

Total System Global Area 1677721600 bytes
Fixed Size                  2066240 bytes
Variable Size             318769344 bytes
Database Buffers         1342177280 bytes
Redo Buffers               14708736 bytes
Database mounted.

HUMAN INTERVENTION NECESSARY: Note the following command requires the correct folder/date to be substituted (similar to commands above).

RMAN> catalog start with 'E:\FRA\LIVE\BACKUPSET\2010_07_28\' noprompt;

Starting implicit crosscheck backup at 29-JUL-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=479 devtype=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 29-JUL-10

Starting implicit crosscheck copy at 29-JUL-10
using channel ORA_DISK_1
Finished implicit crosscheck copy at 29-JUL-10

searching for all files in the recovery area
cataloging files...
no files cataloged

[Not all output shown, ignore any errors]

  4. Here we perform the restore and recover. The RECOVER command may produce some errors - ignore them.

RMAN> RUN
{ allocate channel disk1 type disk;

  RESTORE DATABASE;
  RECOVER DATABASE;
}


allocated channel: disk1
channel disk1: sid=479 devtype=DISK

Starting restore at 29-JUL-10

channel disk1: starting datafile backupset restore
channel disk1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DG1/live/datafile/system.256.619109535
restoring datafile 00002 to +DG1/live/datafile/undotbs1.258.619109535
restoring datafile 00003 to +DG1/live/datafile/sysaux.257.619109535
restoring datafile 00004 to +DG1/live/datafile/users.259.619109535
...
channel disk1: reading from backup piece E:\FRA\LIVE\BACKUPSET\2010_07_23
\O1_MF_NNNDF_TAG20100723T180022_64MLO798_.BKP

Make a cup of tea or two. This will take some time.


  5. At this point, we should have an incompletely recovered database. RMAN should say:
...
archive log filename=E:\FRA\LIVE\ARCHIVELOG\2010_07_28\O1_MF_1_4607_651BY
17T_.ARC thread=1 sequence=4607
media recovery complete, elapsed time: 00:01:23
Finished recover at 29-JUL-10
released channel: disk1

Continue with the process:

SQL> alter database set standby database to maximize performance;

Database altered.

SQL> alter database open resetlogs;

Database altered.

SQL> QUIT;


That’s the end of the RMAN restore process.


Thursday, February 16, 2012

Oracle 7 Export Gotchas


At work, I had to migrate an Oracle 7 database on HP-UX to 64-bit Oracle 10g on Red Hat Linux. The easiest way, given the circumstances, was to stop updates to the db, then export it consistently and import it into a new db with the appropriate tablespaces, roles and schema users pre-created.

Sounds good in theory. Here are some gotchas to look out for when exporting an Oracle 7 database.

The export kept running out of space with that old chestnut, the "snapshot too old" error. I tried using DIRECT=Y (a faster export that needs less rollback), CONSISTENT=N and INDEXES=N. I refused to stoop to table-by-table or partial exports, since that approach gets messy quickly (CONSISTENT=N was acceptable for testing purposes).

Not a problem: just add a large tablespace, create a large rollback segment (call it 'BIGRBS'), disable the old, small rollback segments, and that should be it.

Hmm, no, it still doesn't work - the export still fails with that error. Well, not a problem; I'll just add more space to the tablespace and drop & re-create BIGRBS with a higher MAXEXTENTS.

NO!  Oracle 7 still thwarts the export. I wonder if the db is ‘possessed’ and doesn’t want to be migrated, lol.

But, by using the GUI Oracle Enterprise Manager (OEM), I found out why this kept happening. When an export transaction requests rollback space, Oracle 7 allocates up to MINEXTENTS and then denies any further extents while MAXEXTENTS is still far away!

The solution to this problem is elegant - create BIGRBS with the same value for MINEXTENTS and MAXEXTENTS.

Ah, here is the second gotcha: if you have 500 MB allocated to a tablespace, you cannot use all 500 MB to create a rollback segment. Because of internal overheads, you MUST use LESS than 500 MB for your extents. But you are not warned, dear reader; you are left to taste the bitterness of defeat yet again.

To add insult to injury, when you attempt to create a rollback segment on Oracle 7 and the statement fails, all the extents remain allocated! The only way to clear them is to drop the tablespace - whereupon, lo and behold, the database decides to drop the unused (but allocated) extents one by one. To rub salt in the wound, it takes longer to drop the unused extents than it took to allocate them in the first place!

I finally created BIGRBS in a new tablespace. But the crucial point is this: create the rollback segment with a large INITIAL and NEXT (so extent allocation is rapid), ensure MINEXTENTS equals MAXEXTENTS, and use at least 100 MB less than the full tablespace space available.
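Putting those rules together, the working rollback segment looked something like this - a sketch; the tablespace name and extent sizes are illustrative, the point being MINEXTENTS = MAXEXTENTS and headroom below the tablespace size:

```sql
-- Tablespace is ~4 GB; 76 x 50M = 3800M leaves well over 100 MB free
CREATE ROLLBACK SEGMENT bigrbs
  TABLESPACE rbs_big
  STORAGE (INITIAL 50M NEXT 50M MINEXTENTS 76 MAXEXTENTS 76);

ALTER ROLLBACK SEGMENT bigrbs ONLINE;
```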

Finally, I got an export :) And the first import of the dmp file worked :) Hurrah!