Taken straight from NetApp's site.
https://kb.netapp.com/support/index?page=content&id=3013873&locale=en_US
In clustered Data ONTAP 8.2, it has become a lot easier to move the root volume to a new root aggregate than it was in earlier releases. Still, there are a number of steps involved, especially if the maintenance is to be non-disruptive.
Warning: Although not required, it is recommended to upgrade to 8.2.1P1 or 8.2.2RC1 (or a later release, not yet available as of this writing) before doing this maintenance, to avoid Burt #810014. Due to the nature of this bug, avoid moving the root aggregate more than once if possible.
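If you want to confirm which release is currently running before planning such an upgrade, the clustershell can show it; this is only an illustrative check, not a step from the KB itself (the node name node01 is taken from the example below):
::> version
::> system node image show -node node01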
A node that is down for the duration of this maintenance can affect the other nodes in the cluster, so the first steps differ for 2-node, 4-node, and larger clusters; they exist to protect data access in the rest of the cluster.
In the example below, root aggregate aggr0_node1 on node01 is going to be hosted on 3 manually specified drives.
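The actual creation of the new root aggregate is covered further on in the full KB procedure. Purely as a sketch of what specifying drives manually can look like, with the aggregate name and disk names as placeholders (disk naming varies by release and platform):
::> storage aggregate create -aggregate new_aggr0_node1 -node node01 -disklist 1.0.10,1.0.11,1.0.12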
Warning: The pre-steps below (marked in green in the original KB article) are very important to perform first. If the steps for 2-node or 4-node clusters are not performed, you risk an outage of all nodes in the cluster and will need to contact support to get data access recovered, which can take hours. Users who have skipped these pre-steps have experienced real outages as a result.
Important: If you are running a 2-node cluster, make sure to disable cluster HA prior to maintenance (this will not disable failover but will change quorum voting for 2-node clusters) and make sure the node that will not be changed (the partner node of the system being worked on) is set to be epsilon.
::> cluster ha modify -configured false
::> set adv
::*> cluster modify -node node01 -epsilon false
::*> cluster modify -node node02 -epsilon true
::*> cluster show
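As a quick sanity check (an illustration, not an additional KB step), the HA change can be verified before continuing; "High Availability Configured" should now report false:
::*> cluster ha show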
The commands above allow halting node01 to maintenance mode without a takeover (required for some of the steps later on), and prevent node02 from going out of quorum as a result. Prior to this halt, the relevant storage will be moved to node02 so it can continue to be served during the steps below. If the above steps are not followed in a 2-node cluster, the surviving node hosting all the storage will not serve any of its own or its partner's data, and there will be an outage of all data on both nodes of the 2-node cluster.
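Those later steps are described in the remainder of the KB; purely as a rough sketch of what moving storage to the partner and halting without a takeover tends to look like (the data aggregate name aggr1_node1 is a placeholder):
::> storage aggregate relocation start -node node01 -destination node02 -aggregate-list aggr1_node1
::> storage aggregate relocation show
::> system node halt -node node01 -inhibit-takeover true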
If you are running a 4-node cluster, run the following prior to maintenance: check whether the node you are working on holds epsilon, and if it does, move epsilon to a different node in the cluster to reduce the risk of losing cluster quorum should an unexpected failover occur elsewhere in the cluster.
::> set adv
::*> cluster show
::*> cluster modify -node node01 -epsilon false
::*> cluster modify -node node03 -epsilon true
::*> cluster show
If you are running a cluster with more than 4 nodes, no additional protections are necessary.
Important: Check the administration documentation for the root volume size requirements of the platform you are using. If the drives for the new root aggregate are smaller than the current ones, you may need a root aggregate of more than 3 disks to accommodate the required space.
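To get a feel for whether the new drives are large enough, the current root volume size can be compared against the available spares; this is an illustrative check, assuming the default node root volume name vol0 and the node name from this example:
::> volume show -node node01 -volume vol0 -fields size
::> storage disk show -container-type spare -owner node01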
If you are running a cluster with only a single node, please refer to KB 1014615, "How to move mroot to a new root aggregate in a single node cluster". Single-node cluster root volume migration cannot be done non-disruptively.