1) The following command should return 'Yes' as the asmdeactivationoutcome for all grid disks listed:

cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
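
If you prefer to script this check rather than eyeball the output, a small filter over the same CellCLI command can flag anything unsafe (a sketch; the awk field position assumes the three attributes are printed in the order given above):

# Print the offending disks and exit non-zero if any asmdeactivationoutcome is not Yes
cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome | \
  awk '$3 != "Yes" {print "NOT SAFE:", $0; bad=1} END {exit bad}'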

2) Run the following CellCLI command to inactivate all grid disks on the cell you wish to power down or reboot:

cellcli -e alter griddisk all inactive

3) Confirm that the griddisks are now offline by performing the following actions:

(a) Execute the command below. Once the disks are offline in ASM, the output should show either asmmodestatus=OFFLINE or asmmodestatus=UNUSED, and asmdeactivationoutcome=Yes, for every grid disk. Only then is it safe to proceed with shutting down or restarting the cell:

cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome

(There has also been a reported case of asmmodestatus=OFFLINE, which means Oracle ASM has taken the grid disk offline. This status is also fine, and you can proceed with the remaining instructions.)

(b) List the griddisks to confirm all now show inactive:

cellcli -e list griddisk
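
If you manage more than one cell, the same confirmation can be run across the whole storage grid with dcli before touching each cell in turn (a sketch, assuming a cell_group file listing the cell hostnames one per line, with SSH equivalence to the cells already set up):

# Run the status check on every cell listed in cell_group
dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"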

4) Shut down the cell services and the ocrvottargetd service using the following commands:

# cellcli -e alter cell shutdown services all
# service ocrvottargetd stop

5) Use the ipconf utility to change the DNS settings using the following command:

# /usr/local/bin/ipconf

>>> LOG
[root@dm01cel01 ~]# /usr/local/bin/ipconf
Logging started to /var/log/cellos/ipconf.log
Interface ib0 is Linked.  hca: mlx4_0
Interface ib1 is Linked.  hca: mlx4_0
Interface eth0 is Linked.  driver/mac: igb/00:21:28:a5:8a:c8
Interface eth1 is … Unlinked.  driver/mac: igb/00:21:28:a5:8a:c9
Interface eth2 is … Unlinked.  driver/mac: igb/00:21:28:a5:8a:ca
Interface eth3 is … Unlinked.  driver/mac: igb/00:21:28:a5:8a:cb

Network interfaces
Name     State      IP address      Netmask         Gateway         Net type     Hostname       
ib0      Linked                                                                                 
ib1      Linked                                                                                 
eth0     Linked                                                                                 
eth1     Unlinked                                                                               
eth2     Unlinked                                                                               
eth3     Unlinked                                                                               
Warning. Some network interface(s) are disconnected. Check cables and swicthes and retry
Do you want to retry (y/n) [y]: n

The current nameserver(s): 10.50.128.60 10.235.1.60 10.229.156.60 10.236.1.52(custom) 10.235.1.52(custom)
Do you want to change it (y/n) [n]: y
Nameserver: 10.235.1.52
Add more nameservers (y/n) [n]: y
Nameserver: 10.236.1.52
Add more nameservers (y/n) [n]: n
The current timezone: America/Chicago
Do you want to change it (y/n) [n]:
The current NTP server(s): wmntp01p.prod.wedorac.com
Do you want to change it (y/n) [n]:

Network interfaces
Name     State      IP address      Netmask         Gateway         Net type     Hostname       
eth0     Linked     10.253.49.34    255.255.255.0   10.253.49.1     Management   dm01cel01.prod.wedorac.com
eth1     Unlinked                                                                               
eth2     Unlinked                                                                               
eth3     Unlinked                                                                               
bondib0  ib0,ib1    192.168.10.5    255.255.252.0                   Private      dm01cel01-priv.prod.wedorac.com
Select interface name to configure or press Enter to continue:

Select canonical hostname from the list below
1: dm01cel01.prod.wedorac.com
2: dm01cel01-priv.prod.wedorac.com
Canonical fully qualified domain name [1]:

Select default gateway interface from the list below
1: eth0
Default gateway interface [1]:

Canonical hostname: dm01cel01.prod.wedorac.com
Nameservers: 10.235.1.52 10.236.1.52
Timezone: America/Chicago
NTP servers: wmntp01p.prod.wedorac.com
Default gateway device: eth0
Network interfaces
Name     State      IP address      Netmask         Gateway         Net type     Hostname       
eth0     Linked     10.253.49.34    255.255.255.0   10.253.49.1     Management   dm01cel01.prod.wedorac.com
eth1     Unlinked                                                                               
eth2     Unlinked                                                                               
eth3     Unlinked                                                                               
bondib0  ib0,ib1    192.168.10.5    255.255.252.0                   Private      dm01cel01-priv.prod.wedorac.com
Is this correct (y/n) [y]:

Do you want to configure basic ILOM settings (y/n) [y]:
Loading basic configuration settings from ILOM …
ILOM Fully qualified hostname [dm01cel01-ilom.prod.wedorac.com]:
ILOM IP discovery (static/dhcp) [static]:
ILOM IP address [10.253.49.45]:
ILOM Netmask [255.255.255.0]:
ILOM Gateway or none [10.253.49.1]:
ILOM Nameserver or none [10.50.128.60]: 10.236.1.52
ILOM Use NTP Servers (enabled/disabled) [enabled]:
ILOM First NTP server. Fully qualified hostname or ip address or none [10.235.33.31]: 10.116.230.105
ILOM Second NTP server. Fully qualified hostname or ip address or none [none]: 10.116.230.108

Basic ILOM configuration settings:
Hostname             : dm01cel01-ilom.prod.wedorac.com
IP Discovery         : static
IP Address           : 10.253.49.45
Netmask              : 255.255.255.0
Gateway              : 10.253.49.1
DNS servers          : 10.236.1.52
Use NTP servers      : enabled
First NTP server     : 10.116.230.105
Second NTP server    : 10.116.230.108
Timezone (read-only) : America/Chicago

Is this correct (y/n) [y]: y
Connected. Use ^D to exit.
-> set /SP/clients/dns nameserver=10.236.1.52
Set 'nameserver' to '10.236.1.52'

-> Session closed
Disconnected
Connected. Use ^D to exit.
-> set /SP/clients/ntp/server/1 address=10.116.230.105
Set 'address' to '10.116.230.105'

-> Session closed
Disconnected
Connected. Use ^D to exit.
-> set /SP/clients/ntp/server/2 address=10.116.230.108
Set 'address' to '10.116.230.108'

-> Session closed
Disconnected

Info. Run /opt/oracle.cellos/validations/init.d/saveconfig
Info. Custom changes have been detected in /etc/resolv.conf
Info. Original file will be saved in /etc/resolv.conf.backupbyExadata

Warning. You modified DNS name server.
         Ensure you also update the Infiniband Switch DNS server
         if the same DNS server was also used by the Infiniband switch.

<<< EOF

6) Restart the server using the following command:

# shutdown -r now

7) Once the cell comes back online, you will need to reactivate the grid disks:

cellcli -e alter griddisk all active

8) Issue the command below and all disks should show 'active':

cellcli -e list griddisk

9) Verify grid disk status:

(a) Verify all grid disks have been successfully put online using the following command:

cellcli -e list griddisk attributes name, asmmodestatus

(b) Wait until asmmodestatus is ONLINE for all grid disks. Each disk will go to a 'SYNCING' state first, then 'ONLINE'. The following is an example of the output:
DATA_CD_00_dm01cel01 ONLINE
DATA_CD_01_dm01cel01 SYNCING
DATA_CD_02_dm01cel01 OFFLINE
DATA_CD_03_dm01cel01 OFFLINE
DATA_CD_04_dm01cel01 OFFLINE
DATA_CD_05_dm01cel01 OFFLINE
DATA_CD_06_dm01cel01 OFFLINE
DATA_CD_07_dm01cel01 OFFLINE
DATA_CD_08_dm01cel01 OFFLINE
DATA_CD_09_dm01cel01 OFFLINE
DATA_CD_10_dm01cel01 OFFLINE
DATA_CD_11_dm01cel01 OFFLINE

(c) Oracle ASM synchronization is only complete when all grid disks show asmmodestatus=ONLINE.

(Please note: this operation uses ASM Fast Mirror Resync, which does not trigger an ASM rebalance. The resync restores only the extents that would have been written while the disk was offline.)
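
Rather than re-running the step 9(a) query by hand, a small polling loop on the cell can wait for synchronization to complete (a sketch; it assumes every grid disk on the cell belongs to a mounted disk group, so none will sit in UNUSED):

# Poll once a minute until no grid disk is still SYNCING or OFFLINE
while cellcli -e list griddisk attributes name,asmmodestatus | grep -Eq 'SYNCING|OFFLINE'; do
  echo "$(date): grid disks still syncing..."
  sleep 60
done
echo "All grid disks are ONLINE."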

 

We ran into a situation where Oracle ACS delivered an Exadata Quarter Rack with 20% of the space allocated to the DATA disk group and 80% to RECO; we had requested the reverse. Resizing ASM disk groups on Exadata is not as straightforward as it is in non-Exadata environments.

We followed MOS Note 1245494.1 to resize the ASM disk groups.

SQL>  select name, total_mb, free_mb, required_mirror_free_mb from v$asm_diskgroup;

NAME                 TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB
------------------------------ ---------- ---------- -----------------------
DBFS_DG               1038240    1036984              346080
DATA_EXA             15593472    8392296             5197824
RECO_EXA             86155200   82385996            28718400
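
To see the DATA/RECO split as percentages, a quick analytic query over the same view works (a sketch; note that it counts DBFS_DG in the total):

-- Share of raw ASM space held by each disk group
select name,
       round(total_mb/1024) total_gb,
       round(100 * ratio_to_report(total_mb) over (), 1) pct_of_total
from v$asm_diskgroup;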

SQL> select name,total_mb,free_mb
from v$asm_disk
where mount_status='CACHED' and (name like 'DATA%' or name like 'RECO%') order by 1;

NAME                 TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA_EXA_CD_00_EXACEL01       433152     233152
DATA_EXA_CD_00_EXACEL02       433152     233180
DATA_EXA_CD_00_EXACEL03       433152     233124
DATA_EXA_CD_01_EXACEL01       433152     233116
DATA_EXA_CD_01_EXACEL02       433152     233312
DATA_EXA_CD_01_EXACEL03       433152     233244
DATA_EXA_CD_02_EXACEL01       433152     233056
DATA_EXA_CD_02_EXACEL02       433152     233208
DATA_EXA_CD_02_EXACEL03       433152     233096
DATA_EXA_CD_03_EXACEL01       433152     233024
DATA_EXA_CD_03_EXACEL02       433152     233092
DATA_EXA_CD_03_EXACEL03       433152     233100
DATA_EXA_CD_04_EXACEL01       433152     233200
DATA_EXA_CD_04_EXACEL02       433152     233216
DATA_EXA_CD_04_EXACEL03       433152     233000
DATA_EXA_CD_05_EXACEL01       433152     233040
DATA_EXA_CD_05_EXACEL02       433152     233012
DATA_EXA_CD_05_EXACEL03       433152     233176
DATA_EXA_CD_06_EXACEL01       433152     233152
DATA_EXA_CD_06_EXACEL02       433152     233184
DATA_EXA_CD_06_EXACEL03       433152     233064
DATA_EXA_CD_07_EXACEL01       433152     232968
DATA_EXA_CD_07_EXACEL02       433152     232928
DATA_EXA_CD_07_EXACEL03       433152     233124
DATA_EXA_CD_08_EXACEL01       433152     233128
DATA_EXA_CD_08_EXACEL02       433152     233176
DATA_EXA_CD_08_EXACEL03       433152     232984
DATA_EXA_CD_09_EXACEL01       433152     233140
DATA_EXA_CD_09_EXACEL02       433152     233148
DATA_EXA_CD_09_EXACEL03       433152     233348
DATA_EXA_CD_10_EXACEL01       433152     233036
DATA_EXA_CD_10_EXACEL02       433152     233084
DATA_EXA_CD_10_EXACEL03       433152     233188
DATA_EXA_CD_11_EXACEL01       433152     233104
DATA_EXA_CD_11_EXACEL02       433152     233032
DATA_EXA_CD_11_EXACEL03       433152     233136
RECO_EXA_CD_00_EXACEL01      2393200    2288452
RECO_EXA_CD_00_EXACEL02      2393200    2288392
RECO_EXA_CD_00_EXACEL03      2393200    2288496
RECO_EXA_CD_01_EXACEL01      2393200    2288332
RECO_EXA_CD_01_EXACEL02      2393200    2288372
RECO_EXA_CD_01_EXACEL03      2393200    2288512
RECO_EXA_CD_02_EXACEL01      2393200    2288504
RECO_EXA_CD_02_EXACEL02      2393200    2288492
RECO_EXA_CD_02_EXACEL03      2393200    2288520
RECO_EXA_CD_03_EXACEL01      2393200    2288452
RECO_EXA_CD_03_EXACEL02      2393200    2288572
RECO_EXA_CD_03_EXACEL03      2393200    2288528
RECO_EXA_CD_04_EXACEL01      2393200    2288576
RECO_EXA_CD_04_EXACEL02      2393200    2288428
RECO_EXA_CD_04_EXACEL03      2393200    2288460
RECO_EXA_CD_05_EXACEL01      2393200    2288620
RECO_EXA_CD_05_EXACEL02      2393200    2288500
RECO_EXA_CD_05_EXACEL03      2393200    2288428
RECO_EXA_CD_06_EXACEL01      2393200    2288456
RECO_EXA_CD_06_EXACEL02      2393200    2288592
RECO_EXA_CD_06_EXACEL03      2393200    2288588
RECO_EXA_CD_07_EXACEL01      2393200    2288392
RECO_EXA_CD_07_EXACEL02      2393200    2288412
RECO_EXA_CD_07_EXACEL03      2393200    2288448
RECO_EXA_CD_08_EXACEL01      2393200    2288628
RECO_EXA_CD_08_EXACEL02      2393200    2288456
RECO_EXA_CD_08_EXACEL03      2393200    2288440
RECO_EXA_CD_09_EXACEL01      2393200    2288448
RECO_EXA_CD_09_EXACEL02      2393200    2288540
RECO_EXA_CD_09_EXACEL03      2393200    2288532
RECO_EXA_CD_10_EXACEL01      2393200    2288604
RECO_EXA_CD_10_EXACEL02      2393200    2288512
RECO_EXA_CD_10_EXACEL03      2393200    2288584
RECO_EXA_CD_11_EXACEL01      2393200    2288584
RECO_EXA_CD_11_EXACEL02      2393200    2288528
RECO_EXA_CD_11_EXACEL03      2393200    2288560

72 rows selected.

CellCLI> list griddisk attributes name,size,status
     DATA_EXA_CD_00_EXAcel01     423G             active
     DATA_EXA_CD_01_EXAcel01     423G             active
     DATA_EXA_CD_02_EXAcel01     423G             active
     DATA_EXA_CD_03_EXAcel01     423G             active
     DATA_EXA_CD_04_EXAcel01     423G             active
     DATA_EXA_CD_05_EXAcel01     423G             active
     DATA_EXA_CD_06_EXAcel01     423G             active
     DATA_EXA_CD_07_EXAcel01     423G             active
     DATA_EXA_CD_08_EXAcel01     423G             active
     DATA_EXA_CD_09_EXAcel01     423G             active
     DATA_EXA_CD_10_EXAcel01     423G             active
     DATA_EXA_CD_11_EXAcel01     423G             active
     DBFS_DG_CD_02_EXAcel01       33.796875G       active
     DBFS_DG_CD_03_EXAcel01       33.796875G       active
     DBFS_DG_CD_04_EXAcel01       33.796875G       active
     DBFS_DG_CD_05_EXAcel01       33.796875G       active
     DBFS_DG_CD_06_EXAcel01       33.796875G       active
     DBFS_DG_CD_07_EXAcel01       33.796875G       active
     DBFS_DG_CD_08_EXAcel01       33.796875G       active
     DBFS_DG_CD_09_EXAcel01       33.796875G       active
     DBFS_DG_CD_10_EXAcel01       33.796875G       active
     DBFS_DG_CD_11_EXAcel01       33.796875G       active
     RECO_EXA_CD_00_EXAcel01     2337.109375G     active
     RECO_EXA_CD_01_EXAcel01     2337.109375G     active
     RECO_EXA_CD_02_EXAcel01     2337.109375G     active
     RECO_EXA_CD_03_EXAcel01     2337.109375G     active
     RECO_EXA_CD_04_EXAcel01     2337.109375G     active
     RECO_EXA_CD_05_EXAcel01     2337.109375G     active
     RECO_EXA_CD_06_EXAcel01     2337.109375G     active
     RECO_EXA_CD_07_EXAcel01     2337.109375G     active
     RECO_EXA_CD_08_EXAcel01     2337.109375G     active
     RECO_EXA_CD_09_EXAcel01     2337.109375G     active
     RECO_EXA_CD_10_EXAcel01     2337.109375G     active
     RECO_EXA_CD_11_EXAcel01     2337.109375G     active

SQL> select distinct failgroup from v$asm_disk;

FAILGROUP
------------------------------
EXACEL01
EXACEL02
EXACEL03

-- For the DATA disk group, get the list of disk names in failgroup EXACEL01

SQL> select name,header_status,mount_status,failgroup from v$asm_disk where group_number=2 and failgroup='EXACEL01';

NAME                   HEADER_STATU MOUNT_S FAILGROUP
------------------------------ ------------ ------- --------------------
DATA_EXA_CD_08_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_05_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_09_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_10_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_07_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_03_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_01_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_00_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_06_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_02_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_04_EXACEL01      MEMBER        CACHED  EXACEL01
DATA_EXA_CD_11_EXACEL01      MEMBER        CACHED  EXACEL01

12 rows selected.

-- Now use the command below to drop all disks in failgroup EXACEL01

SQL> alter diskgroup DATA_EXA drop disks in failgroup EXACEL01 rebalance power 11 NOWAIT;

-- Check the status of the rebalance using the following SQL:

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- -----------
           2 REBAL RUN          11         11     592583     869386       9634          28
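
Because the grid disks must not be dropped at the cell until the rebalance has fully drained, it can help to block on gv$asm_operation instead of polling by hand (a sketch; it assumes ORACLE_SID/ORACLE_HOME point at the local ASM instance and OS authentication as SYSASM is available):

# Wait until no rebalance operation remains before moving to the next step
while :; do
  cnt=$(sqlplus -s / as sysasm <<'EOF' | tr -d '[:space:]'
set heading off feedback off pagesize 0
select count(*) from gv$asm_operation;
EOF
)
  [ "$cnt" = "0" ] && break
  echo "$(date): rebalance still running ($cnt operation(s))..."
  sleep 60
done
echo "Rebalance complete."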

-- Once the rebalance completes, check the header_status column in v$asm_disk by running the SQL below. It should show FORMER for the dropped disks.
 
set linesize 300
column path format a40
select name,path,header_status,mount_status
from v$asm_disk
where group_number=0
order by 2;

NAME                   PATH                    HEADER_STATU MOUNT_S
------------------------------ ---------------------------------------- ------------ -------
                   o/192.168.10.3/DATA_EXA_CD_00_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_01_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_02_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_03_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_04_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_05_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_06_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_07_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_08_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_09_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_10_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_11_EXAcel01 FORMER         CLOSED

12 rows selected.

-- Now perform the same steps for the RECO disk group. Get the list of disk names in the RECO disk group for failgroup EXACEL01.

column failgroup format a20
set pages 200
set linesize 200
select name,header_status,mount_status,group_number,failgroup
from v$asm_disk where group_number=3 and failgroup='EXACEL01';

NAME                   HEADER_STATU MOUNT_S GROUP_NUMBER FAILGROUP
------------------------------ ------------ ------- ------------ --------------------
RECO_EXA_CD_10_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_09_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_07_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_00_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_08_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_02_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_11_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_01_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_06_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_05_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_03_EXACEL01      MEMBER        CACHED           3 EXACEL01
RECO_EXA_CD_04_EXACEL01      MEMBER        CACHED           3 EXACEL01

12 rows selected.

-- Now use the command below to drop all disks in failgroup EXACEL01
 
alter diskgroup RECO_EXA drop disks in failgroup EXACEL01 rebalance power 11 NOWAIT;

Diskgroup altered.

-- Check whether the rebalance operation has started. Use rebalance_progress.sh, as shown below, to monitor the progress of the ASM rebalance, since v$asm_operation will not give you a reliable estimate.
 
select * from v$asm_operation;
SQL> l
  1* select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION
SQL> /

   INST_ID OPERA STAT       POWER      SOFAR   EST_WORK     EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
     1 REBAL RUN          11       9137    470223         7659       60
     2 REBAL WAIT          11

-- Once the rebalance completes, check the header_status column in v$asm_disk. It should show FORMER for the dropped disks.
 
set linesize 300
column path format a40
select name,path,header_status,mount_status from v$asm_disk where group_number=0 order by 2;

NAME                   PATH                    HEADER_STATU MOUNT_S
------------------------------ ---------------------------------------- ------------ -------
                   o/192.168.10.3/DATA_EXA_CD_00_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_01_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_02_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_03_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_04_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_05_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_06_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_07_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_08_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_09_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_10_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/DATA_EXA_CD_11_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_00_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_01_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_02_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_03_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_04_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_05_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_06_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_07_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_08_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_09_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_10_EXAcel01 FORMER         CLOSED
                   o/192.168.10.3/RECO_EXA_CD_11_EXAcel01 FORMER         CLOSED

24 rows selected.

-- Drop and re-create the grid disks on storage cell EXACEL01 with the desired sizes.

[celladmin@EXAcel01 ~]$ cellcli
CellCLI: Release 11.2.3.1.1 – Production on Sun Oct 21 08:23:03 CDT 2012

Copyright (c) 2007, 2011, Oracle.  All rights reserved.
Cell Efficiency Ratio: 1,026

CellCLI> list griddisk attributes name,cellDisk,size,status;
     DATA_EXA_CD_00_EXAcel01     CD_00_EXAcel01     423G             active
     DATA_EXA_CD_01_EXAcel01     CD_01_EXAcel01     423G             active
     DATA_EXA_CD_02_EXAcel01     CD_02_EXAcel01     423G             active
     DATA_EXA_CD_03_EXAcel01     CD_03_EXAcel01     423G             active
     DATA_EXA_CD_04_EXAcel01     CD_04_EXAcel01     423G             active
     DATA_EXA_CD_05_EXAcel01     CD_05_EXAcel01     423G             active
     DATA_EXA_CD_06_EXAcel01     CD_06_EXAcel01     423G             active
     DATA_EXA_CD_07_EXAcel01     CD_07_EXAcel01     423G             active
     DATA_EXA_CD_08_EXAcel01     CD_08_EXAcel01     423G             active
     DATA_EXA_CD_09_EXAcel01     CD_09_EXAcel01     423G             active
     DATA_EXA_CD_10_EXAcel01     CD_10_EXAcel01     423G             active
     DATA_EXA_CD_11_EXAcel01     CD_11_EXAcel01     423G             active
     DBFS_DG_CD_02_EXAcel01       CD_02_EXAcel01     33.796875G       active
     DBFS_DG_CD_03_EXAcel01       CD_03_EXAcel01     33.796875G       active
     DBFS_DG_CD_04_EXAcel01       CD_04_EXAcel01     33.796875G       active
     DBFS_DG_CD_05_EXAcel01       CD_05_EXAcel01     33.796875G       active
     DBFS_DG_CD_06_EXAcel01       CD_06_EXAcel01     33.796875G       active
     DBFS_DG_CD_07_EXAcel01       CD_07_EXAcel01     33.796875G       active
     DBFS_DG_CD_08_EXAcel01       CD_08_EXAcel01     33.796875G       active
     DBFS_DG_CD_09_EXAcel01       CD_09_EXAcel01     33.796875G       active
     DBFS_DG_CD_10_EXAcel01       CD_10_EXAcel01     33.796875G       active
     DBFS_DG_CD_11_EXAcel01       CD_11_EXAcel01     33.796875G       active
     RECO_EXA_CD_00_EXAcel01     CD_00_EXAcel01     2337.109375G     active
     RECO_EXA_CD_01_EXAcel01     CD_01_EXAcel01     2337.109375G     active
     RECO_EXA_CD_02_EXAcel01     CD_02_EXAcel01     2337.109375G     active
     RECO_EXA_CD_03_EXAcel01     CD_03_EXAcel01     2337.109375G     active
     RECO_EXA_CD_04_EXAcel01     CD_04_EXAcel01     2337.109375G     active
     RECO_EXA_CD_05_EXAcel01     CD_05_EXAcel01     2337.109375G     active
     RECO_EXA_CD_06_EXAcel01     CD_06_EXAcel01     2337.109375G     active
     RECO_EXA_CD_07_EXAcel01     CD_07_EXAcel01     2337.109375G     active
     RECO_EXA_CD_08_EXAcel01     CD_08_EXAcel01     2337.109375G     active
     RECO_EXA_CD_09_EXAcel01     CD_09_EXAcel01     2337.109375G     active
     RECO_EXA_CD_10_EXAcel01     CD_10_EXAcel01     2337.109375G     active
     RECO_EXA_CD_11_EXAcel01     CD_11_EXAcel01     2337.109375G     active

# Log in to the EXACEL01 cell server and start CellCLI.

CellCLI> ALTER GRIDDISK DATA_EXA_CD_00_EXACEL01,DATA_EXA_CD_01_EXACEL01,DATA_EXA_CD_02_EXACEL01,DATA_EXA_CD_03_EXACEL01,DATA_EXA_CD_04_EXACEL01,DATA_EXA_CD_05_EXACEL01,DATA_EXA_CD_06_EXACEL01,DATA_EXA_CD_07_EXACEL01,DATA_EXA_CD_08_EXACEL01,DATA_EXA_CD_09_EXACEL01,DATA_EXA_CD_10_EXACEL01,DATA_EXA_CD_11_EXACEL01 INACTIVE
GridDisk DATA_EXA_CD_00_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_01_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_02_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_03_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_04_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_05_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_06_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_07_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_08_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_09_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_10_EXAcel01 successfully altered
GridDisk DATA_EXA_CD_11_EXAcel01 successfully altered

CellCLI> DROP GRIDDISK ALL PREFIX=DATA_EXA;
GridDisk DATA_EXA_CD_00_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_01_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_02_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_03_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_04_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_05_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_06_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_07_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_08_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_09_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_10_EXAcel01 successfully dropped
GridDisk DATA_EXA_CD_11_EXAcel01 successfully dropped

-- Inactivate and drop the grid disks for RECO

CellCLI> ALTER GRIDDISK  RECO_EXA_CD_00_EXACEL01,RECO_EXA_CD_01_EXACEL01,RECO_EXA_CD_02_EXACEL01,RECO_EXA_CD_03_EXACEL01,RECO_EXA_CD_04_EXACEL01,RECO_EXA_CD_05_EXACEL01,RECO_EXA_CD_06_EXACEL01,RECO_EXA_CD_07_EXACEL01,RECO_EXA_CD_08_EXACEL01,RECO_EXA_CD_09_EXACEL01,RECO_EXA_CD_10_EXACEL01,RECO_EXA_CD_11_EXACEL01 INACTIVE
GridDisk RECO_EXA_CD_00_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_01_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_02_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_03_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_04_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_05_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_06_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_07_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_08_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_09_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_10_EXAcel01 successfully altered
GridDisk RECO_EXA_CD_11_EXAcel01 successfully altered

CellCLI> DROP GRIDDISK ALL PREFIX=RECO_EXA;
GridDisk RECO_EXA_CD_00_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_01_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_02_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_03_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_04_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_05_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_06_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_07_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_08_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_09_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_10_EXAcel01 successfully dropped
GridDisk RECO_EXA_CD_11_EXAcel01 successfully dropped

CellCLI> CREATE GRIDDISK ALL PREFIX=DATA_EXA, size=2337.10G;
Cell disks were skipped because they had no freespace for grid disks: FD_00_EXAcel01, FD_01_EXAcel01, FD_02_EXAcel01, FD_03_EXAcel01, FD_04_EXAcel01, FD_05_EXAcel01, FD_06_EXAcel01, FD_07_EXAcel01, FD_08_EXAcel01, FD_09_EXAcel01, FD_10_EXAcel01, FD_11_EXAcel01, FD_12_EXAcel01, FD_13_EXAcel01, FD_14_EXAcel01, FD_15_EXAcel01.
GridDisk DATA_EXA_CD_00_EXAcel01 successfully created
GridDisk DATA_EXA_CD_01_EXAcel01 successfully created
GridDisk DATA_EXA_CD_02_EXAcel01 successfully created
GridDisk DATA_EXA_CD_03_EXAcel01 successfully created
GridDisk DATA_EXA_CD_04_EXAcel01 successfully created
GridDisk DATA_EXA_CD_05_EXAcel01 successfully created
GridDisk DATA_EXA_CD_06_EXAcel01 successfully created
GridDisk DATA_EXA_CD_07_EXAcel01 successfully created
GridDisk DATA_EXA_CD_08_EXAcel01 successfully created
GridDisk DATA_EXA_CD_09_EXAcel01 successfully created
GridDisk DATA_EXA_CD_10_EXAcel01 successfully created
GridDisk DATA_EXA_CD_11_EXAcel01 successfully created

CellCLI> CREATE GRIDDISK ALL PREFIX=RECO_EXA, size=423G;
Cell disks were skipped because they had no freespace for grid disks: FD_00_EXAcel01, FD_01_EXAcel01, FD_02_EXAcel01, FD_03_EXAcel01, FD_04_EXAcel01, FD_05_EXAcel01, FD_06_EXAcel01, FD_07_EXAcel01, FD_08_EXAcel01, FD_09_EXAcel01, FD_10_EXAcel01, FD_11_EXAcel01, FD_12_EXAcel01, FD_13_EXAcel01, FD_14_EXAcel01, FD_15_EXAcel01.
GridDisk RECO_EXA_CD_00_EXAcel01 successfully created
GridDisk RECO_EXA_CD_01_EXAcel01 successfully created
GridDisk RECO_EXA_CD_02_EXAcel01 successfully created
GridDisk RECO_EXA_CD_03_EXAcel01 successfully created
GridDisk RECO_EXA_CD_04_EXAcel01 successfully created
GridDisk RECO_EXA_CD_05_EXAcel01 successfully created
GridDisk RECO_EXA_CD_06_EXAcel01 successfully created
GridDisk RECO_EXA_CD_07_EXAcel01 successfully created
GridDisk RECO_EXA_CD_08_EXAcel01 successfully created
GridDisk RECO_EXA_CD_09_EXAcel01 successfully created
GridDisk RECO_EXA_CD_10_EXAcel01 successfully created
GridDisk RECO_EXA_CD_11_EXAcel01 successfully created

-- Check the grid disk information

CellCLI>  list griddisk attributes name,cellDisk,size,status;
     DATA_EXA_CD_00_EXAcel01     CD_00_EXAcel01     2337.09375G     active
     DATA_EXA_CD_01_EXAcel01     CD_01_EXAcel01     2337.09375G     active
     DATA_EXA_CD_02_EXAcel01     CD_02_EXAcel01     2337.09375G     active
     DATA_EXA_CD_03_EXAcel01     CD_03_EXAcel01     2337.09375G     active
     DATA_EXA_CD_04_EXAcel01     CD_04_EXAcel01     2337.09375G     active
     DATA_EXA_CD_05_EXAcel01     CD_05_EXAcel01     2337.09375G     active
     DATA_EXA_CD_06_EXAcel01     CD_06_EXAcel01     2337.09375G     active
     DATA_EXA_CD_07_EXAcel01     CD_07_EXAcel01     2337.09375G     active
     DATA_EXA_CD_08_EXAcel01     CD_08_EXAcel01     2337.09375G     active
     DATA_EXA_CD_09_EXAcel01     CD_09_EXAcel01     2337.09375G     active
     DATA_EXA_CD_10_EXAcel01     CD_10_EXAcel01     2337.09375G     active
     DATA_EXA_CD_11_EXAcel01     CD_11_EXAcel01     2337.09375G     active
     DBFS_DG_CD_02_EXAcel01       CD_02_EXAcel01     33.796875G      active
     DBFS_DG_CD_03_EXAcel01       CD_03_EXAcel01     33.796875G      active
     DBFS_DG_CD_04_EXAcel01       CD_04_EXAcel01     33.796875G      active
     DBFS_DG_CD_05_EXAcel01       CD_05_EXAcel01     33.796875G      active
     DBFS_DG_CD_06_EXAcel01       CD_06_EXAcel01     33.796875G      active
     DBFS_DG_CD_07_EXAcel01       CD_07_EXAcel01     33.796875G      active
     DBFS_DG_CD_08_EXAcel01       CD_08_EXAcel01     33.796875G      active
     DBFS_DG_CD_09_EXAcel01       CD_09_EXAcel01     33.796875G      active
     DBFS_DG_CD_10_EXAcel01       CD_10_EXAcel01     33.796875G      active
     DBFS_DG_CD_11_EXAcel01       CD_11_EXAcel01     33.796875G      active
     RECO_EXA_CD_00_EXAcel01     CD_00_EXAcel01     423G            active
     RECO_EXA_CD_01_EXAcel01     CD_01_EXAcel01     423G            active
     RECO_EXA_CD_02_EXAcel01     CD_02_EXAcel01     423G            active
     RECO_EXA_CD_03_EXAcel01     CD_03_EXAcel01     423G            active
     RECO_EXA_CD_04_EXAcel01     CD_04_EXAcel01     423G            active
     RECO_EXA_CD_05_EXAcel01     CD_05_EXAcel01     423G            active
     RECO_EXA_CD_06_EXAcel01     CD_06_EXAcel01     423G            active
     RECO_EXA_CD_07_EXAcel01     CD_07_EXAcel01     423G            active
     RECO_EXA_CD_08_EXAcel01     CD_08_EXAcel01     423G            active
     RECO_EXA_CD_09_EXAcel01     CD_09_EXAcel01     423G            active
     RECO_EXA_CD_10_EXAcel01     CD_10_EXAcel01     423G            active
     RECO_EXA_CD_11_EXAcel01     CD_11_EXAcel01     423G            active

-- The last step is to add the disks back to the ASM disk groups and rebalance.

SQL> ALTER DISKGROUP DATA_EXA ADD DISK 'o/192.168.10.3/DATA_EXA_CD*' rebalance power 11 NOWAIT;

Diskgroup altered.

SQL> select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION;

   INST_ID OPERA STAT       POWER      SOFAR   EST_WORK     EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
     1 REBAL RUN          11       1897    1796448         9344      192
     2 REBAL WAIT          11

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[oracle@EXAdb01 ~]$ cd scripts/
[oracle@EXAdb01 scripts]$ ./rebalance_progress.sh
######################################################################
This script will monitor Phase 1 (rebalance) file by file and Phase 2
(compaction) disk by disk. Both phases should increment, showing progress.
This script will *not* estimate how long the rebalance will take.
######################################################################
 
Diskgroup being rebalanced is DATA_EXA
ASM file numbers for databases start at 256.
Default check interval is 600 seconds. This run using 600 seconds…
 
Sun Oct 21 08:42:16 CDT 2012: PHASE 1 (of 2): Processing file 261 out of 605
Sun Oct 21 09:28:10 CDT 2012: PHASE 1 (of 2): Processing file 562 out of 605
Sun Oct 21 09:32:11 CDT 2012: PHASE 1 (of 2): Processing file 588 out of 605
Sun Oct 21 10:08:39 CDT 2012: PHASE 1 (of 2): Processing file 596 out of 605
Sun Oct 21 10:24:57 CDT 2012: PHASE 1 (of 2): Processing file 605 out of 605
*******************************************************
Sun Oct 21 10:27:27 CDT 2012: PHASE 1 (of 2) complete.
*******************************************************

SQL> ALTER DISKGROUP RECO_EXA ADD DISK 'o/192.168.10.3/RECO_EXA_CD*' rebalance power 11 NOWAIT;

Diskgroup altered.

Diskgroup being rebalanced is DATA_EXA
ASM file numbers for databases start at 256.
Default check interval is 600 seconds. This run using 600 seconds…
 
Sun Oct 21 10:31:59 CDT 2012: PHASE 1 (of 2): Processing file 315 out of 605
Sun Oct 21 10:41:59 CDT 2012: PHASE 1 (of 2): Processing file 592 out of 605
Sun Oct 21 10:51:59 CDT 2012: PHASE 1 (of 2): Processing file 605 out of 605
*******************************************************
Sun Oct 21 11:02:29 CDT 2012: PHASE 1 (of 2) complete.
*******************************************************

Problem:

The exachk report showed:
 
WARNING! The data collection activity appears to be incomplete for this exachk run. Please review the “Killed Processes” and / or “Skipped Checks” section and refer to “Appendix A – Troubleshooting Scenarios” of the “Exachk User Guide” for corrective actions. 

On the InfiniBand switch, no checks were reported, but "Killed Processes" and/or "Skipped Checks" sections exist.
 
Exachk has a “watchdog” process that monitors exachk execution and will kill commands that exceed default timeouts to prevent “hangs”. Occasionally on a busy system, checks may be killed simply because the target of the check has not responded within the default timeout. These environment variables can be used to lengthen the default timeouts. The most common timeout environment variables are:

  • RAT_TIMEOUT (default 90 seconds, non-root individual commands)
  • RAT_ROOT_TIMEOUT (default 300 seconds, root userid command sets)
  • RAT_PASSWORDCHECK_TIMEOUT (default 1 second, ssh login DNS handshake)

 
Solution:

First verify the connectivity between the database server and the InfiniBand switches. Then rerun exachk at a quiet time when the system is less busy, or raise the default timeouts:
 
export RAT_TIMEOUT=120
export RAT_ROOT_TIMEOUT=600
export RAT_PASSWORDCHECK_TIMEOUT=10
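
For a one-off run, the overrides can also be set inline on the command line (a sketch; it assumes exachk is launched from its unzipped directory):

# Raise the watchdog timeouts for this invocation only
RAT_TIMEOUT=120 RAT_ROOT_TIMEOUT=600 RAT_PASSWORDCHECK_TIMEOUT=10 ./exachk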

Removing ASM Disks

February 2, 2010

If you try to release an ASM disk, it sometimes fails with the following error:

[root]# /etc/init.d/oracleasm deletedisk ASM3

Removing ASM disk "ASM3":                                  [FAILED]

To get around this problem, it is necessary to overwrite the ASM header information on the disk. This can be achieved with the UNIX dd command. The following command writes 100 blocks of 1024 bytes each to the start of the specified device:

[root]# dd if=/dev/zero of=/dev/mapper/asm3p1 bs=1024 count=100
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.003025 seconds, 33.9 MB/s

If you now retry deletedisk, it should succeed:

[root]# /etc/init.d/oracleasm deletedisk ASM3
Removing ASM disk "ASM3":                                  [  OK  ]

If you get a "device or resource busy" message in /var/log/oracleasm:

[root]# tail -f /var/log/oracleasm
Clearing disk header: oracleasm-write-label: Unable to open device "/dev/oracleasm/disks/ASM3": Device or resource busy
failed

Check who is using the ASM device:

[root]# fuser /dev/mapper/asm3p1
/dev/mapper/asm3p1:  27612

[root]# ps -ef | grep 27612
root      5076 24373  0 17:06 pts/3    00:00:00 grep 27612
oracle   27612     1  0 16:02 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

If this is the case, you will have to take your ASM instance down to release the device.
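
On a clustered system, the instance can be stopped with srvctl before retrying the delete (a sketch; racnode1 is a hypothetical node name, and on releases where the OCR/voting files live in ASM you may need to stop the whole stack with crsctl instead):

# Stop the ASM instance holding the device open, delete the disk, restart ASM
srvctl stop asm -n racnode1 -f
/etc/init.d/oracleasm deletedisk ASM3
srvctl start asm -n racnode1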

Files are an integral part of modern database applications. By storing business data files in the database, you keep them in sync with the relational data, and you also get transactional consistency, unified security, backup, and search.

Oracle Database File System (DBFS) provides a file system interface to files stored in database tables. DBFS enables existing file-based tools to access database files through familiar pathnames, directories, and links. Files in DBFS are kept either in a dedicated file store or in existing application tables.

DBFS provides unified backup, disaster recovery, and management of relational data and files alike. DBFS also adds compression, deduplication, and encryption for files.

The DBFS Content Store allows each database user to create one or more file systems that can be mounted by clients. Each file system has its own dedicated tables that hold the file system content. The DBFS Content API is the PL/SQL interface to the content store in the Oracle RDBMS.

How does it work?

DBFS is a shared file system like NFS and consists of a server (the Oracle Database) and a client (dbfs_client on Linux, or an internal DB client). dbfs_client provides a command interface that allows files to be easily copied in and out of the database from any host on the network. On Linux platforms, dbfs_client can also mount the database file system on a regular mount point, using the "Filesystem in Userspace" (FUSE) module. This allows Linux machines to access DBFS like any other physical file system.

The application makes file calls. The Linux FUSE module receives these calls and forwards them to the dbfs_client executable, which makes the remote OCI, LOB, and SQL calls to the DBFS content store in the database.
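
For reference, creating and mounting a DBFS store on Linux typically looks like the sketch below (the user, password, service, tablespace, file system name, and mount point are all examples; the dbfs_create_filesystem.sql script ships under $ORACLE_HOME/rdbms/admin):

# Create a file system named 'staging' in tablespace DBFS_TS (run as the DBFS schema owner)
sqlplus dbfs_user/dbfs_passwd@ORCL @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql DBFS_TS staging

# Mount it through FUSE; dbfs_client reads the schema password from stdin
nohup dbfs_client dbfs_user@ORCL /mnt/dbfs < passwd.txt &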

High Availability

The DBFS Linux client offers HA by leveraging RAC technology. The failure of a database instance is detected via FAN notifications; you will have to configure an extra service for failover, as shown below. dbfs_client transparently redirects file access to the surviving RAC instances on node failure, and any outstanding transaction is replayed against a surviving instance.
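
The extra failover service can be created with srvctl (a sketch; the database, instance, and service names are examples):

# A service available on both instances for dbfs_client connections
srvctl add service -d orcl -s dbfs_svc -r orcl1,orcl2
srvctl start service -d orcl -s dbfs_svc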

DBFS Limitations

  • Does not support asynchronous I/O.
  • Cannot be used when the database is not running.
  • Cannot be re-exported over NFS.

GPnP in 11gR2

December 20, 2009

Grid Plug and Play (GPnP)

  • GPnP makes it easy to add, replace, or remove nodes in a cluster.
  • It allows the cluster to manage its own virtual IP addresses, so there is no need to go back to the network administrator.

In the past, adding or removing servers in a cluster required extensive manual preparation. With this release, you can continue to configure server nodes manually, or use Grid Plug and Play to configure them dynamically as nodes are added or removed from the cluster.

Grid Plug and Play reduces the costs of installing, configuring, and managing server nodes by starting a grid naming service within the cluster to allow each node to perform the following tasks dynamically:

  • Negotiating appropriate network identities for itself
  • Acquiring additional information it needs to operate from a configuration profile
  • Configuring or reconfiguring itself using profile data, making hostnames and addresses resolvable on the network.

Because servers perform these tasks dynamically, the number of steps required to add or remove nodes is minimized.

Grid Naming Service (GNS)

  • Lets the cluster manage its own network
  • Support DHCP for IPs and VIPs
  • No need to go back to the Network Admin

The Grid Naming Service (GNS) is a part of the Grid Plug and Play feature of Oracle RAC 11g Release 2. It provides name resolution for the cluster. If you have a larger cluster or a requirement to have a dynamic cluster (you expect to add or remove nodes in the cluster), then you should implement GNS. If you are implementing a small cluster, you do not need to add GNS.

The GNS virtual IP address is a static IP address configured in the DNS. The DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name resolution requests at that address.

Within the subdomain, the GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map hostnames and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.

To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com), and delegate DNS requests for that subdomain to the GNS virtual IP address for the cluster, which GNS will serve. The set of IP addresses is provided to the cluster through DHCP, which must be available on the public network for the cluster.
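
The delegation itself is ordinary DNS (a sketch of corporate zone entries in BIND syntax; the subdomain and address are examples):

; Glue record for the GNS VIP, then delegation of the cluster subdomain to it
gns-vip.example.com.   IN A   192.0.2.155
grid.example.com.      IN NS  gns-vip.example.com.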

SCAN

Single Client Access Name (SCAN) is a single name that allows client connections to connect to any database in an Oracle cluster independently of which node in the cluster the database (or service) is currently running. The SCAN should be used in all client connection strings and does not change when you add/remove nodes from the cluster.
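
In practice, client connect strings reference only the SCAN (a sketch of a tnsnames.ora entry; the host and service names are examples):

MYSVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan.grid.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mysvc.example.com))
  )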

Oracle Database 11g release 2 clients connect to the database using SCANs. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

The SCAN is a virtual IP name, similar to the names used for virtual IP addresses, such as node1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and associated with multiple IP addresses, not just one address.

The SCAN works by resolving to multiple IP addresses, reflecting multiple listeners in the cluster that handle public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently, without any explicit configuration required in the client.

During installation, listeners are created on nodes for the SCAN IP addresses. Oracle Net Services routes application requests to the least loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.

If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle grid infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN Name is mycluster-scan.grid.example.com.

Clients configured to use IP addresses for Oracle Database releases prior to Oracle Database 11g release 2 can continue to use their existing connection addresses; using SCANs is not required. When you upgrade to Oracle Clusterware 11g release 2 (11.2), the SCAN becomes available, and you should use the SCAN for connections to Oracle Database 11g release 2 or later databases. When an earlier version of Oracle Database is upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to connect to that database. The database registers with the SCAN listener through the remote listener parameter in the init.ora file.

Oracle Clusterware and Oracle RAC work with both static and DHCP-assigned hostnames. When you use GNS, Oracle uses DHCP for the VIPs, including the node VIPs and the SCAN VIPs.
