I would like to enable the Archivelog mode in my db. I understand the importance of having archive logs. However as the nature of every db is different, my concern is:

1] What are those configurations that I need to test first BEFORE implementing archivelog mode? (redo log settings, transactions used per day etc.?)
2] How can I reduce the impact on my running production database performance as much as possible, when I implement archivelog mode?
3] What are the things that I need to check and monitor AFTER implementing archivelog mode? (for example, Archivelog disk spaces?)

You don't really need to "configure" anything. However, you must monitor:

"redo size" statistic --- you can get this from StatsPack reports. This will indicate the volume of ArchiveLogs that will be generated.

I/O speed of the target archivelog volume/filesystem --- slow I/O means that the Archiver will not be fast enough and may cause LGWR to wait on ARCH, resulting in occasional "freezing" of database transactions

- check what volume of log you write each day, and if there are peak periods (batches etc)
- check for log switch interval and consider if the log size is optimal
- define what your archivelog backup and delete policy will be (how long to keep on disk, how often to back up, how many backups to take before deleting)
- write and test your backup / delete scripts (rman ?)
- check that you have sufficient disk to handle unexpected events
- check that you have monitoring enabled to detect disk filling up
- consider using FRA instead of archiving to a directory
- consider whether to archive locally, or to a remote server or NFS share
- check for objects or transactions with nologging (select force_logging from v$database)
- consider if you will duplex archivelogs or not
- consider how many archiver processes you will need (log_archive_max_processes)
- consider what you want to happen if archival fails (log_archive_min_succeed_dest)
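As a sketch of the "log switch interval" check above: the view below is a standard Oracle dynamic performance view, but the seven-day window and the idea of "a few switches per hour" as a comfortable rate are just illustrative assumptions -- judge against your own workload.

```sql
-- Count log switches per hour over the last 7 days; a sustained high
-- rate at peak times suggests the online redo logs may be undersized.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

Run it a few times across the week so you catch batch windows as well as normal daytime load.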

The ARCH processes only copy files, so impact on performance can easily be simulated, and will depend on the volume of files being copied, and number of parallel copies.
Archivelogs are useless unless you have a database backup, so you need to include that in your scripting and testing, along with restore tests.
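A minimal sketch of such a backup/delete job, wrapping RMAN in a shell script: the ORACLE_SID value and the seven-day on-disk retention are assumptions you must replace with your own policy, and the whole thing should be proven with restore tests before you rely on it.

```shell
#!/bin/sh
# Nightly backup plus archivelog housekeeping (sketch).
# PROD and the 7-day retention below are placeholder assumptions.
export ORACLE_SID=PROD
rman target / <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';
EOF
```

Deleting outside of RMAN (e.g. with plain `rm`) leaves the controlfile/catalog out of sync, which is why the delete belongs in the same RMAN session.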

'user commits' and 'user rollbacks' will not show you the redo volume. One transaction may commit 1 row update, another transaction may commit a 1000 row delete. The redo generation by the latter transaction will be much more than the former.

You should query v$sysstat 'redo size' (which is expressed in bytes).
Note: Particularly with 32-bit implementations, you will find the statistic wraps around after 4 billion (which isn't very large for 'redo size'). So querying across 'n' days would not be useful. You could query every hour and compute the incremental value, or you could use StatsPack, which includes redo size as one of the elements it reports.
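The query itself is trivial -- the point is that the value is cumulative since instance startup, so you sample it periodically and difference the samples:

```sql
-- Cumulative redo generated (in bytes) since instance startup.
-- Capture this every hour and subtract consecutive samples to get
-- the hourly redo volume (and hence expected archivelog volume).
SELECT value AS redo_bytes
FROM   v$sysstat
WHERE  name = 'redo size';
```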

The main reason I need to ask questions pertaining to Archivelog Mode performance is that this production DB (11gR2 64-bit) is currently in noarchivelog mode. I want to move it to archivelog mode, but it seems this is not an easy move. However, the consequences of an instance crash or a logical error (loss of data) would be too grave if this DB stays in noarchivelog mode. My management (senior IT staff and my boss) are not taking this matter seriously because they claim that ShadowProtect (posted in another thread) is more than enough. However, my responsibility is to start researching, testing and implementing it to prepare for the worst.
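For reference, the switch itself is the standard sequence below, run as SYSDBA in SQL*Plus; it requires a short outage for the clean shutdown, so rehearse it on a non-production copy first:

```sql
-- Requires a clean bounce of the instance (brief outage).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Verify the new mode and the archive destination:
ARCHIVE LOG LIST;
```

Take a full backup immediately afterwards -- backups taken in noarchivelog mode do not give you point-in-time recovery going forward.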