Sunday, 4 August 2013

ACME Guide to 4a-ing a Factory Fresh NetApp System

There are circumstances when you might want to 4a (clean configuration and initialize all disks) a brand new, out-of-the-factory NetApp FAS system running Clustered Data ONTAP or 7-Mode. This post is not going to go into those circumstances, just the how-to.

Note 1: Sample output is contained in Appendix B below.

Note 2: ‘To 4a a system’ comes from pre-Data ONTAP 8 days, when the boot menu had option (4) ‘Clean configuration’ and option (4a) ‘Clean configuration and initialize all disks’.

Walkthrough

1. Boot both controllers

2. Press Ctrl-C for the Boot Menu when prompted

3. Select option 5 ‘Maintenance mode boot’

4. Answer ‘y’ to ‘Continue with boot?’

5. Run the following command to get the system ID:

disk show -n

6. Run the following command to find the existing root aggregate disks:

18. Answer ‘y’ to ‘This will erase all the data on the disks, are you sure?’

19. On the partner node, repeat steps 12 to 18

20. The End!
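For reference, the ownership check from step 5 looks something like this at the maintenance-mode (*>) prompt. The disk names, system ID and serial numbers below are made-up placeholders, and the exact column layout can vary between Data ONTAP releases:

*> disk show -n
Local System ID: 1234567890
  DISK         OWNER          POOL   SERIAL NUMBER
  0a.00.0      Not Owned      NONE   XXXXXXXX
  0a.00.1      Not Owned      NONE   XXXXXXXX

Note the ‘Local System ID’ line in the banner - that is the system ID you need for the later steps.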

Appendix A: Disk Zeroing Times

Image: NetApp Disk Zeroing Times for SSD/FC/SAS/SATA Disks

Appendix B: Example output from one head

*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.

Selection (1-8)? 5

You have selected the maintenance boot option:

The system has booted in maintenance mode allowing the following operations to be performed:

?            disk
key_manager  fcadmin
fcstat       sasadmin
sasstat      acpadmin
halt         help
ifconfig     raid_config
storage      sesdiag
sysconfig    vmservices
version      vol
aggr         sldiag
dumpblock    environment
systemshell  vol_db
led_on       led_off
sata         acorn
stsb         scsi
nv8          disk_list
ha-config    fctest
disktest     diskcopy
vsa          xortest
disk_mung

Type "help <command>" for more details.

In a High Availability configuration, you MUST ensure that the partner node is (and remains) down, or that takeover is manually disabled on the partner node, because High Availability software is not started or fully enabled in Maintenance mode.

FAILURE TO DO SO CAN RESULT IN YOUR FILESYSTEMS BEING DESTROYED

NOTE: It is okay to use 'show/status' sub-commands, such as 'disk show' or 'aggr status', in Maintenance mode while the partner is up