1) Shut down CRS on all the nodes:
# crsctl stop crs
2) Then start the clusterware in exclusive mode on node #1:
# crsctl start crs -excl -nocrs
Note: On release 11.2.0.1, use the following command instead:
# crsctl start crs -excl
3) Connect to the +ASM1 instance and make sure all the diskgroups are mounted, including the OCRVOTE diskgroup:
SQL> select name, state from v$asm_diskgroup;
4) If not, then mount them (example):
SQL> alter diskgroup OCRVOTE mount;
SQL> select name, state from v$asm_diskgroup;
5) Then shut down the clusterware on node #1:
# crsctl stop crs -f
6) Now, start the clusterware in exclusive mode on node #2:
# crsctl start crs -excl -nocrs
Note: On release 11.2.0.1, use the following command instead:
# crsctl start crs -excl
7) Connect to the +ASM2 instance and make sure all the diskgroups are mounted, including the OCRVOTE diskgroup:
SQL> select name, state from v$asm_diskgroup;
8) If not, then mount them:
SQL> alter diskgroup OCRVOTE mount;
SQL> select name, state from v$asm_diskgroup;
9) Then shut down the clusterware on node #2:
# crsctl stop crs -f
10) Repeat the same steps (exclusive start, diskgroup check/mount, forced stop) on any additional nodes, one node at a time.
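The per-node sequence above can be sketched as a dry-run shell loop. This is only an illustration, not a script to run as-is: the node names in NODES and the run_on helper (which prints what would be executed instead of executing it) are assumptions; in a real rolling procedure each crsctl command runs as root on the node in question, and the SQL check is done interactively in the local ASM instance.

```shell
#!/bin/sh
# Dry-run sketch of steps 1-10 above. Nothing here touches a cluster;
# run_on only echoes the command it would issue on each node.

NODES="node1 node2"   # assumption: substitute your actual node names

run_on() {            # hypothetical helper for illustration only
  node=$1; shift
  echo "[$node] $*"
}

dry_run() {
  # Step 1: stop CRS on all nodes first
  for n in $NODES; do
    run_on "$n" crsctl stop crs
  done
  # Steps 2-9: exclusive start, diskgroup check, forced stop -- one node at a time
  for n in $NODES; do
    run_on "$n" crsctl start crs -excl -nocrs   # on 11.2.0.1 use -excl only
    run_on "$n" sqlplus / as sysasm             # then: select name, state from v$asm_diskgroup;
    run_on "$n" crsctl stop crs -f
  done
}

dry_run
```

The point the sketch makes explicit is the ordering: CRS is stopped everywhere before any node is started in exclusive mode, and each node is returned to a stopped state before the next one begins.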