ASM Commands : 2 — Migrating a DiskGroup to New Disk(s)

In 11gR2 Grid Infrastructure and RAC

After the previous demonstration of adding a new DiskGroup, I now demonstrate migrating the DiskGroup to a new pair of disks.

First, I create a table in the Tablespace on that DiskGroup.

[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:17:28 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table new_tbs_tbl
2 tablespace new_tbs
3 as select * from dba_objects
4 /

Table created.

SQL> select segment_name, bytes/1048576
2 from user_segments
3 where tablespace_name = 'NEW_TBS'
4 /

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
NEW_TBS_TBL
9


SQL> select file_name, bytes/1048576
2 from dba_data_files
3 where tablespace_name = 'NEW_TBS'
4 /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>

Next, I verify that the DiskGroup currently resides on disk asmdisk.7 and that the two new disks I plan to migrate it to are available as asmdisk.8 and asmdisk.9 (yes, unfortunately, they are on /fra rather than /data1 or /data2, because I have run out of disk space in /data1 and /data2).
This I do from node1 :

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ exit
logout

[root@node1 ~]#
[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:22:32 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7


SQL>
SQL> !sh
sh-3.2$ ls -l /fra/asmdisk*
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:06 /fra/asmdisk.8
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:07 /fra/asmdisk.9
sh-3.2$

Note how the ownership and permissions are set for the two new disks (see my previous post).
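
For reference, a minimal sketch of how such a disk file might have been prepared on the NFS mount (run as the grid user so that ownership defaults to grid:oinstall; the dd sizing is an assumption chosen to match the ~1GB files listed above):

sh-3.2$ # create a zero-filled file to serve as an ASM disk (~1GB here)
sh-3.2$ dd if=/dev/zero of=/fra/asmdisk.8 bs=1000000 count=1024
sh-3.2$ # mode 775 gives the oinstall group write access, which the database owner needs
sh-3.2$ chmod 775 /fra/asmdisk.8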

I now add the two new disks.

sh-3.2$ exit
exit

SQL> show parameter power

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit integer 1
SQL> alter diskgroup data3 add disk '/fra/asmdisk.8', '/fra/asmdisk.9';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 1 101 60
1


SQL>

With ASM_POWER_LIMIT set to 1, Oracle ASM automatically starts a REBALANCE operation. However, since I did *not* drop the existing asmdisk.7, Oracle will continue to use it as well.
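
If I wanted the rebalance to complete faster, I could raise the power for this one operation or for the ASM instance as a whole; a hedged example (the value 4 is arbitrary):

SQL> alter diskgroup data3 rebalance power 4;
SQL> -- or, for all subsequent rebalances on this ASM instance:
SQL> alter system set asm_power_limit=4;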

After a while, I confirm that the REBALANCE has completed. I can now drop asmdisk.7. Unfortunately, this will trigger a new REBALANCE !

SQL> l
1* select * from v$asm_operation
SQL> /

no rows selected

SQL>
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7

DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> alter diskgroup data3 drop disk '/data1/asmdisk.7';
alter diskgroup data3 drop disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DATA1/ASMDISK.7" does not exist in diskgroup "DATA3"


SQL> alter diskgroup data3 drop disk 'DATA3_0000';

Diskgroup altered.

SQL>
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 2 102 120
0


SQL>
SQL> l
1* select * from v$asm_operation
SQL>
SQL> /

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
3 REBAL RUN 1 1 47 101 95
0


SQL> /

no rows selected

SQL>

NOTE : Note how I must specify the Disk NAME (not the PATH) for the DROP. When I added disks asmdisk.8 and asmdisk.9, I could have given them meaningful names as well; since I did not, Oracle named them automatically.
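
If I had wanted meaningful names, I could have supplied them in the ADD DISK command itself; a hedged sketch (the disk names here are my own invention):

SQL> alter diskgroup data3 add disk '/fra/asmdisk.8' name data3_disk8, '/fra/asmdisk.9' name data3_disk9;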

Ideally, I should have issued the ADD and the DROP as a single command. That way, only a single-pass REBALANCE would have been required.
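
A hedged sketch of that combined command (a single rebalance then covers both the ADD and the DROP; the REBALANCE POWER clause is optional and the value 4 is arbitrary):

SQL> alter diskgroup data3 add disk '/fra/asmdisk.8', '/fra/asmdisk.9' drop disk DATA3_0000 rebalance power 4;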

After a while, I run my validation queries on node2.

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:42:39 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select d.name, d.path
from v$asm_disk d, v$asm_diskgroup g
where d.group_number=g.group_number
and g.name = 'DATA3' 2 3 4
5
SQL> l
1 select d.name, d.path
2 from v$asm_disk d, v$asm_diskgroup g
3 where d.group_number=g.group_number
4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node2 ~]# su - oracle
-sh-3.2$
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:44:10 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select count(*) from new_tbs_tbl;

COUNT(*)
----------
72460

SQL>

I have now accessed the table, tablespace, diskgroup and disks from node2 successfully. Disk asmdisk.7 is no longer part of the DiskGroup.

I can now physically remove disk asmdisk.7 from the storage.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node1 ~]# cd /data1
[root@node1 data1]# ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 24 22:39 asmdisk.7
[root@node1 data1]# rm asmdisk.7
rm: remove regular file `asmdisk.7'? y
[root@node1 data1]#
[root@node1 data1]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:50:18 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set pages 60
SQL> col name format a15
SQL> col path format a20
SQL> select group_number, name, path
2 from v$asm_disk
3 order by 1,2;

GROUP_NUMBER NAME PATH
------------ --------------- --------------------
0 /crs/voting.disk
0 /data1/votedisk.1
0 /data2/votedisk.2
0 /fra/votedisk.3
1 DATA1_0000 /data1/asmdisk.1
1 DATA1_0001 /data2/asmdisk.4
2 DATA2_0000 /data1/asmdisk.2
2 DATA2_0001 /data2/asmdisk.5
2 DATA2_0002 /data2/asmdisk.6
3 DATA3_0001 /fra/asmdisk.8
3 DATA3_0002 /fra/asmdisk.9
4 DATA_0000 /crs/ocr.configurati
on

5 FRA_0000 /fra/fradisk.3
5 FRA_0001 /fra/fradisk.2
5 FRA_0002 /fra/fradisk.1
5 FRA_0003 /fra/fradisk.4

16 rows selected.

SQL>

The disk asmdisk.7 is no longer part of the storage. (Remember : All my disks here are on NFS).
.
.
.

ASM Commands : 1 — Adding and Using a new DiskGroup for RAC

In 11gR2 Grid Infrastructure and RAC

On node1, I discover and add a disk to ASM. NFS “devices” asmdisk.1 to asmdisk.6 are present as ASM Disks. asmdisk.7 has been added on NFS mount point /data1. (Disks asmdisk.3 to asmdisk.6 are on /data2)

I start on node1 in my Cluster :

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring string /crs/*, /data1/*, /data2/*, /f
ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL>
SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
5 DATA3 1953

SQL>

I now have a new DiskGroup using External Redundancy with a single disk. Is it visible at node2 ?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
0 DATA3 0

SQL>

Why is the size not visible yet ? Because, although the CREATE on node1 also MOUNTed the DiskGroup there, it hasn't been mounted on node2 yet.
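
The mount state can be confirmed from node2 before issuing the mount; a minimal check (run as SYSASM):

SQL> select name, state from v$asm_diskgroup where name = 'DATA3';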

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
5 DATA3 1953

SQL>

Can I confirm the underlying disk ?

SQL> select group_number, disk_number, header_status, state, total_mb
2 from v$asm_disk
3 where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE TOTAL_MB
------------ ----------- ------------ -------- ----------
5 0 MEMBER NORMAL 1953

SQL>

What happens when I create a tablespace/datafile in this DiskGroup, from the instance on node1 ?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL>

Why do I get this error ? I could create a DiskGroup on the ASM Disk, yet I cannot add a datafile to it. Let me check the permissions.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password:
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$

The oinstall group, to which the “oracle” user belongs, did not have write permission on asmdisk.7. Let me go back to the oracle session now that the permission has been granted.

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL>

The CREATE TABLESPACE has succeeded. I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
2 from v$asm_file
3 where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE REDUND
---------------------------------------------------------------- ------
5 256 100.007813
DATAFILE UNPROT


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
2 where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
100


SQL>

Now, I have the new DataFile on the new DiskGroup visible both in ASM and in the Database.
.
.
.

GI Commands : 2 — Managing the Local and Cluster Registry

In 11gR2

There are essentially two registries, the Local Registry and the Cluster Registry.

Let’s check the Local Registry :

[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0/cdata/node1.olr
crs_home=/u01/app/grid/11.2.0
[root@node1 ~]#
[root@node1 ~]# file /u01/app/grid/11.2.0/cdata/node1.olr
/u01/app/grid/11.2.0/cdata/node1.olr: data
[root@node1 ~]#

So, like the Cluster Registry, the Local Registry is a binary file. It is on a local filesystem on the node, not on ASM/NFS/CFS. Each node in the cluster has its own Local Registry.

The Local Registry can be checked for consistency (corruption) using ocrcheck with the “-local” flag. Note : As demonstrated in my previous post, the root account must be used for the check.

[root@node1 ~]# su - grid
-sh-3.2$ su
Password:
[root@node1 grid]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2696
Available space (kbytes) : 259424
ID : 1388021147
Device/File Name : /u01/app/grid/11.2.0/cdata/node1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#
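
The contents of the OLR can also be dumped to a readable text file for inspection; a hedged example (the output file name is my own choice):

[root@node1 grid]# ocrdump -local /tmp/olr_contents.txt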

Now let’s look at the Cluster Registry :

[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : /fra/ocrfile
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

The Cluster Registry is distributed across two ASM DiskGroups (+DATA and +FRA) and one filesystem (/fra/ocrfile). Yes, this is a special case that I’ve created to distribute the OCR in this manner.

I cannot add the OCR to a location that is an ASM DiskGroup with a lower compatible.asm attribute.

[root@node1 grid]# ocrconfig -add +DATA2
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +DATA2.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

[root@node1 grid]#
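
If I really wanted +DATA2 to hold an OCR copy, I would first have to raise its compatible.asm attribute; a hedged sketch (run as SYSASM; note that this attribute can only be advanced, never lowered):

SQL> alter diskgroup DATA2 set attribute 'compatible.asm' = '11.2.0.0.0';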

I now remove the filesystem copy of the OCR.

[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

Note, however, that the ocrconfig delete doesn’t actually remove the filesystem file that I had created.

[root@node1 grid]# ls -l /fra/ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:27 /fra/ocrfile
[root@node1 grid]# rm /fra/ocrfile
rm: remove regular file `/fra/ocrfile'? yes
[root@node1 grid]#

I will now add a filesystem location for the OCR.

[root@node1 grid]# touch /fra/new_ocrfile
[root@node1 grid]# ocrconfig -add /fra/new_ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded
Device/File Name : /fra/new_ocrfile
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]# ls -l /fra/new_ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:30 /fra/new_ocrfile
[root@node1 grid]#

What about OCR Backups ? (Note : Oracle does frequent automatic backups of the OCR, but *not* of the OLR).
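
Since the OLR is not backed up automatically, it is worth taking a manual backup of it; a hedged example, run as root:

[root@node1 grid]# ocrconfig -local -manualbackup
[root@node1 grid]# ocrconfig -local -showbackup
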
N.B. : This listing doesn’t show all the OCR backups you’d expect because I don’t have my cluster running continuously through all the days.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr

node1 2011/11/09 23:20:25 /u01/app/grid/11.2.0/cdata/rac/backup_20111109_232025.ocr
[root@node1 grid]#

Let me run an additional backup from node2.

[root@node2 grid]# ocrconfig -manualbackup

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node2 grid]#

We can see that the backup taken today (03-Aug) is listed at the top. Let's check the listing from node1.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node1 grid]#

Yes, the backup of 03-Aug is also listed. But, wait ! Why is it on node1 ? Let's go back to node2 and do a filesystem listing.

[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
ls: /u01/app/grid/11.2.0/cdata/rac/backup*: No such file or directory
[root@node2 grid]#

Yes, as the showbackup listing indicated, the backup file does not actually exist on node2.

[root@node1 grid]# ls -lt /u01/app/grid/11.2.0/cdata/rac/
total 114316
-rw------- 1 root root 8024064 Aug 3 21:37 backup_20140803_213717.ocr
-rw------- 1 root root 8003584 Jul 6 22:39 backup_20140706_223955.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 day.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 week.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 backup00.ocr
-rw------- 1 root root 8003584 Jul 5 17:30 backup_20140705_173025.ocr
-rw------- 1 root root 7708672 Jun 16 22:15 backup_20140616_221507.ocr
-rw------- 1 root root 7708672 Jun 16 22:14 backup_20140616_221405.ocr
-rw------- 1 root root 7688192 Nov 9 2011 backup_20111109_232025.ocr
-rw------- 1 root root 7667712 Nov 9 2011 backup_20111109_230940.ocr
-rw------- 1 root root 7647232 Nov 9 2011 backup_20111109_230916.ocr
-rw------- 1 root root 7626752 Nov 9 2011 backup_20111109_224725.ocr
-rw------- 1 root root 7598080 Nov 9 2011 backup_20111109_222941.ocr
-rw------- 1 root root 7593984 Oct 22 2011 backup01.ocr
-rw------- 1 root root 7593984 Oct 21 2011 backup02.ocr
[root@node1 grid]#

Yes, *ALL* the OCR backups to date have been created on node1, even when the backup command was executed from node2. As long as node1 is up and running, it remains the “master” node for OCR backups. I now shut down Grid Infrastructure on node1.

[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.new_svc.svc' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.gns' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2676: Start of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.new_svc.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.gns' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'node1'
CRS-2677: Stop of 'ora.gns.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.gns.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2676: Start of 'ora.gns' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node2'
CRS-2676: Start of 'ora.oc4j' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]#

So, all the Grid Infrastructure services are down on node1. I will now run an OCR backup from node2 and verify its location.

[root@node2 grid]# ocrconfig -manualbackup

node2 2014/08/03 21:49:02 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr
[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
-rw------- 1 root root 8024064 Aug 3 21:49 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr
[root@node2 grid]#

Yes, the backup got created on node2 now.

Question : Would there have been a way to create a backup on node2 without shutting down node1 ?

.
.
.

GI Commands : 1 — Monitoring Status of Resources

In 11gR2

Listing the Status of Resources

[root@node1 ~]# su - grid
-sh-3.2$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.DATA1.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.DATA2.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.FRA.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.asm
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.registry.acfs
ONLINE ONLINE node1
ONLINE ONLINE node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node1
ora.cvu
1 ONLINE ONLINE node1
ora.gns
1 ONLINE ONLINE node1
ora.gns.vip
1 ONLINE ONLINE node1
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.oc4j
1 ONLINE ONLINE node1
ora.racdb.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
ora.racdb.new_svc.svc
1 ONLINE ONLINE node1
2 ONLINE ONLINE node2
ora.scan1.vip
1 ONLINE ONLINE node2
ora.scan2.vip
1 ONLINE ONLINE node1
ora.scan3.vip
1 ONLINE ONLINE node1
-sh-3.2$

So we see that :
a) The Cluster consists of two nodes node1 and node2
b) There are 4 ASM DiskGroups DATA, DATA1, DATA2 and FRA
c) GSD is offline as expected — it is required only for 9i Databases
d) There is a database racdb and a service new_svc (see my previous post)
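
The status of a single resource can also be queried by name; a hedged example:

-sh-3.2$ crsctl status resource ora.racdb.db -t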

Listing the status of SCAN Listeners

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
-sh-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
-sh-3.2$

So we see that
a) There are 3 SCAN Listeners
b) Since this is a 2-node cluster, two of the SCAN Listeners run on one node, node1
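
If I wanted to spread them out again, a SCAN listener can be relocated; a hedged example (moving SCAN listener 2 to node2):

-sh-3.2$ srvctl relocate scan_listener -i 2 -n node2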

Listing the status of the OCR

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : /fra/ocrfile
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

-sh-3.2$ su root
Password:
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : /fra/ocrfile
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

So we see that :
a) The OCR is in 3 locations +DATA, +FRA and NFS filesystem /fra/ocrfile
b) A Logical corruption check of the OCR can only be done by root, not by grid

Listing the status of the Vote Disk

-sh-3.2$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
2. ONLINE 0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
Located 2 voting disk(s).
-sh-3.2$

So we see that :
a) There are 2 votedisk copies (yes, two — not the recommended three !)
b) Both are on filesystem
How do I happen to have 2 votedisk copies ? I actually had 3 but removed one. DON’T TRY THIS ON YOUR PRODUCTION CLUSTER. I am adding the third one back now :

-sh-3.2$ crsctl add css votedisk /data2/votedisk.2
Now formatting voting disk: /data2/votedisk.2.
CRS-4603: Successful addition of voting disk /data2/votedisk.2.
-sh-3.2$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
2. ONLINE 0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
3. ONLINE 41c24037b51c4f97bf4cb7002649aee4 (/data2/votedisk.2) []
Located 3 voting disk(s).
-sh-3.2$

There, I am now back to 3 votedisk copies.
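
For completeness, the earlier removal would have used the matching delete command; a hedged sketch (again, not something to try casually on a production cluster):

-sh-3.2$ crsctl delete css votedisk /data2/votedisk.2
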
.
.
.

RAC Commands : 2 — Updating Configuration for Services

In 11gR2

NOTE : This is in a Policy Managed configuration

Adding a database service

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl add service -d RACDB -s NEW_SVC -g RACSP -c SINGLETON
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC
-sh-3.2$

Since this is a Policy Managed database in the RACSP Server Pool, I added the service with the appropriate parameters. The SINGLETON cardinality means that it will run on only one instance. (See the previous post for the service MY_RAC_SVC with the cardinality UNIFORM).

Let’s verify the alert.log entry.

-sh-3.2$ cd /u01/app/oracle/diag/rdbms/racdb/RACDB_1
-sh-3.2$ cd trace
-sh-3.2$ tail alert_RACDB_1.log
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:01:20 2014
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$

The SID='RACDB_1' clause of the ALTER SYSTEM limits the change to this one instance, while SCOPE=MEMORY means it is not persisted to the SPFILE. (Note : MY_RAC_SVC had already been added to RACDB_2 earlier).
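
The services actually running on each instance can be cross-checked from the database; a minimal query, run from a SQL*Plus session on either instance:

SQL> select inst_id, name from gv$active_services order by inst_id, name;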

Removing a database service

Now, let’s remove a database service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
PRCR-1025 : Resource ora.racdb.my_rac_svc.svc is still running

I need to first stop the service.

-sh-3.2$ srvctl stop service -d RACDB -s MY_RAC_SVC
-sh-3.2$
-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$ tail alert_RACDB_1.log
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
Tue Jul 15 22:17:48 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$

I can now remove the service.

-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
-sh-3.2$ srvctl config service -d RACDB
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

Now, only the new service is listed.

Modifying the cardinality of a service

-sh-3.2$ srvctl modify service -d RACDB -s NEW_SVC -c UNIFORM
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC

The service has been modified from SINGLETON (single instance) to UNIFORM (all instances).
Verifying this on node2.

-sh-3.2$ pwd
/u01/app/oracle/diag/rdbms/racdb/RACDB_2/trace
-sh-3.2$ tail -2 alert_RACDB_2.log
Tue Jul 15 22:27:36 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_2';
-sh-3.2$

The service has been enabled on RACDB_2 as well now.

.
.
.

RAC Commands : 1 — Viewing Configuration

In 11gR2

Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances:
Disk Groups: DATA1,FRA,DATA2
Mount point paths:
Services: MY_RAC_SVC
Type: RAC
Database is policy managed
-sh-3.2$

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts
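
The static configuration can be complemented with the runtime status; a hedged example:

-sh-3.2$ srvctl status database -d RACDB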

Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long running sessions)

Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names:
-sh-3.2$

So we see that :
a) The RACSP server pool is the only created (named) server pool
b) This server pool has a max of 2 nodes

Passed the 11g RAC and Grid Expert Exam

I passed the 11g RAC and Grid Expert Exam yesterday.
.
For those who are interested :

You must absolutely read the documentation on ASM, Grid Infrastructure and RAC.

I also recommend 3 books
1) Pro Oracle Database 11g RAC on Linux — by Steve Shaw and Martin Bach [Apress Publishing]
2) Oracle 11g R1/R2 Real Application Clusters Essentials — by Ben Prusinsky and Syed Jaffer Hussain [Packt Publishing]
OR
2) Oracle 11g R1/R2 Real Application Clusters Handbook — by Ben Prusinsky, Guenad Jilveski and Syed Jaffer Hussain [Packt Publishing]
3) Oracle Database 11g Release 2 High Availability — by Scott Jesse, Bill Burton and Bryan Vongray [Oracle Press]

The 11gR2 Grid and RAC Accelerated training at Oracle University is also recommended but expensive.
.
.
.