The only approach I think you should use is exp/imp.
Alternatively, create a database link to prod and insert the data from prod, if you don't need to sync the views/packages/functions/procedures from prod.
I strongly recommend using the same OS version and platform for the test system.
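If you go the database-link route, the sketch looks something like this; note that prod_link, the TNS alias PROD, the scott account, and the emp table are all made-up names for illustration:

```sql
-- Run on the test database; every name here is hypothetical
CREATE DATABASE LINK prod_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'PROD';                       -- TNS alias for the production DB

-- Copy rows from production into an identical local table
INSERT INTO emp SELECT * FROM emp@prod_link;
COMMIT;
```

This only moves data, which is why it is not enough when you also need the views/packages/functions/procedures.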

There are several methods to do this; let's discuss a few here.
But we need to know how big and critical your production system is,
e.g. what is the size of the database, and does production have
after-hours downtime?
1. If the production DB is small, go for a full export, copy the dump
to the test server, drop the test DB, create the test DB afresh, and do
a full import from the export dump taken from production.

2. If it is big but can have a downtime, shut it down cleanly and take a
cold backup by copying the datafiles (assuming they are on filesystems
and not raw devices) to the test server, create the controlfiles afresh,
and bring up the test DB.
3. If it is big and cannot have a downtime, and it is in ARCHIVELOG
mode, try a hot backup: put the production tablespaces into begin backup
mode, copy all the datafiles to the test server, revert the production
tablespaces to end backup mode (check the status in v$backup), copy all
the archivelogs to the test server, create the controlfiles afresh for
the test DB, and open it after recovering by applying the necessary
archivelogs.
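On the production side, the hot-backup commands in step 3 look roughly like this; the tablespace name USERS is just an example, and you would repeat the begin/end pair for each tablespace in the database:

```sql
-- On production, per tablespace (USERS is an example name)
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy this tablespace's datafiles to the test server here ...
ALTER TABLESPACE users END BACKUP;

-- Verify nothing is left in backup mode: all rows should be NOT ACTIVE
SELECT file#, status FROM v$backup;

-- Force the current redo out to an archivelog before copying archivelogs
ALTER SYSTEM ARCHIVE LOG CURRENT;
```

Forgetting the END BACKUP step is a classic mistake, which is why the v$backup check is worth doing every time.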

Could you help me clear something up? I would like to take a nightly
hot backup of my database on Node A, with the control file and archive
files, copy it to an identical server, Node B, and throughout the day
copy archive files to Node B as they are created on Node A. Then if
Node A fails,

mount the database on Node B and recover using the archive files. Does
this sound correct? How would the control file copied over from the hot
backup know that the archivelogs created during the day need to be
applied?

Since the test server and the production server are different operating
systems, copying the datafiles will not work. The headers in the files are
platform specific.

Short term, imp/exp is pretty much your only option.

A couple of things you can do to make this smoother in the future:

1. Get a test system that reflects your production system. The test
environment isn't really valid if the platforms are different, because you
can occasionally see platform-specific issues; i.e., if there is an issue
specific to Solaris, your test system won't catch it.

2. If you can't do that, you should at least upgrade your databases to 10G
so that you can use DBCA to create a template of the production database
without shutting it down. This will ensure that you have all the current
tables, users, roles, packages, etc. and will make for a smoother import.

3. In any event, if at all possible you should upgrade your database anyway;
8.1.7 has been desupported, unless you have an Extended Support contract,
and even then support will only be provided until Dec. 31st, 2006. Even
that does not include bug fixes, escalation, or response-time adherence
(i.e., they'll get to you when they can, you're not a top priority).

Why not set up a physical standby database with RMAN and let Dataguard manage
the transmission of redo? That way you can failover to a database that is
already in sync with the primary database or nearly so, depending on what
level of protection you choose... with max performance, you may have a bit
of a lag if there are a lot of transactions going on, but you'll still be
closer than if you had to recover to the last hot backup. You have the
added benefit of being able to use the standby database as a reporting
instance as well.
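For reference, a max-performance standby is mostly a handful of init.ora parameters on the primary (9i-style syntax shown here; the service name STDBY and the archive path are assumptions, not something from your system):

```
log_archive_dest_1 = 'LOCATION=/u01/arch'
log_archive_dest_2 = 'SERVICE=STDBY LGWR ASYNC'   # ship redo to the standby
log_archive_dest_state_2 = ENABLE
standby_file_management = AUTO                    # auto-add new datafiles on the standby
```

With ASYNC transport you accept the small lag mentioned above in exchange for no impact on primary commit times.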

Here is a quick overview of Dataguard... you don't say what version of
database you are using, but Dataguard has been around since at least 8i.
You can find earlier documentation on OTN if you're not at 10G yet.

Thank you for the reply. That would be too simple... and this is a 7.x
environment, and I am looking for a short-term fix. I'm just confusing
the basic recovery process, making something that is simple very
complicated, sort of thing.

- Take a nightly hot backup on Node A: datafiles, controlfile, archive logfiles
- Copy to Node B
- About 5 archivelogs are created daily; a crontab job checks the directory
  and, when a new one is created, copies it to Node B.
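The crontab job could be as simple as this sketch; the paths and the .arc extension are assumptions, and for a real remote Node B you would swap the cp for rcp/scp or write into an NFS mount:

```shell
#!/bin/sh
# sync_arch SRC DEST: copy any *.arc archivelog present in SRC but not yet
# in DEST. Safe to run repeatedly from cron; already-copied logs are skipped.
sync_arch() {
  src=$1
  dest=$2
  for f in "$src"/*.arc; do
    [ -f "$f" ] || continue                        # no logs yet, glob didn't match
    base=$(basename "$f")
    [ -f "$dest/$base" ] || cp "$f" "$dest/$base"  # scp for a real remote node
  done
}

# Cron would invoke something like (hypothetical paths):
# sync_arch /u01/arch /nfs/nodeb/arch
```

Run it every few minutes; since only about 5 archivelogs are generated a day, the loop is cheap.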

*** So now I should be able to do an incomplete recovery up to the time
of the corruption or loss. Right?

When I mount the database on Node B and do a recovery, how will the
control file know to apply log files that were created after the control
file in use was created (the one copied over during the hot backup)?

Or will I have to create a new control file every time I copy an archive
file?
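The sequence I'm picturing on Node B is something like this (a sketch only, not verified on 7.x; my understanding is that the USING BACKUP CONTROLFILE clause is what tells Oracle the control file is older than the redo, so it keeps prompting for later archivelogs instead of stopping at the control file's checkpoint):

```sql
-- On Node B, with the datafiles and control file from the hot backup in place
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- supply the copied archivelogs as prompted, then type CANCEL
ALTER DATABASE OPEN RESETLOGS;
```

If that is right, I would not need to recreate the control file for every archivelog copied over.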

And if anyone could offer a better suggestion, I'd truly appreciate it!!!