11 Appendix 1: Output: ./cluvfy stage -pre nodeadd
#############################################################
Output:
./cluvfy stage -pre nodeadd -n m-lrkdb3
#############################################################

[grid@m-lrkdb2:bin]$ ./cluvfy stage -pre nodeadd -n m-lrkdb3
Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "m-lrkdb2"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.

Checking shared resources...
Checking CRS home location...
Path "/u01/app/11.2.0/grid_3" either already exists or can be successfully created on nodes: "m-lrkdb3"
Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "10.19.62.0"
Node connectivity passed for subnet "10.19.62.0" with node(s) m-lrkdb3,m-lrkdb2
TCP connectivity check passed for subnet "10.19.62.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) m-lrkdb2,m-lrkdb3
TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.19.62.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check failed
Check failed on nodes:
        m-lrkdb2,m-lrkdb3
Free disk space check failed for "m-lrkdb2:/usr,m-lrkdb2:/var,m-lrkdb2:/etc,m-lrkdb2:/u01/app/11.2.0/grid_3,m-lrkdb2:/sbin,m-lrkdb2:/tmp"
Check failed on nodes:
        m-lrkdb2
Free disk space check failed for "m-lrkdb3:/usr,m-lrkdb3:/var,m-lrkdb3:/etc,m-lrkdb3:/u01/app/11.2.0/grid_3,m-lrkdb3:/sbin,m-lrkdb3:/tmp"
Check failed on nodes:
        m-lrkdb3
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed

WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "2.6.18-164.el5" found on nodes: m-lrkdb2.
Kernel version = "2.6.39-200.24.1.el6uek.x86_64" found on nodes: m-lrkdb3.
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed

Group existence check passed for "asmadmin"
Group existence check passed for "asmdba"

Checking OCR integrity...
OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Check of common NTP Time Server passed
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed

Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes.
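Only two checks fail in this run: swap space and free disk space (the PRVF-7524 kernel-version warning is reported as a warning, and that check still passes). As the detailed tables in Appendix 2 show, cluvfy requires swap equal to total RAM on these roughly 11.7 GB nodes, which matches the documented 11gR2 sizing rule (swap equal to RAM for hosts up to 16 GB, 16 GB of swap above that). A minimal sketch of that rule follows, so the shortfall can be computed before re-running the stage check; `required_swap_kb` is an illustrative helper of my own, not a cluvfy interface, and the exact lower bound of the sizing band is an assumption.

```shell
# Hedged sketch of the 11gR2 swap-sizing rule cluvfy appears to apply here.
# Assumption: for RAM up to 16 GB, required swap equals RAM; above 16 GB,
# 16 GB of swap suffices. required_swap_kb is a hypothetical helper name.
required_swap_kb() {
  ram_kb=$1
  cap_kb=$((16 * 1024 * 1024))   # 16 GB cap, expressed in KB
  if [ "$ram_kb" -gt "$cap_kb" ]; then
    echo "$cap_kb"
  else
    echo "$ram_kb"               # up to 16 GB of RAM: swap must equal RAM
  fi
}

# m-lrkdb3 reports 1.2331028E7 KB of RAM in Appendix 2, so:
required_swap_kb 12331028        # prints 12331028 (~11.76 GB required)

# The disk-space failures all resolve to the root filesystem, which needs
# at least 7.9635GB free on each node; compare with the live numbers via:
#   df -h /usr /var /etc /u01/app/11.2.0/grid_3 /sbin /tmp
#   free -k    # current swap vs. the figure computed above
```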
12 Appendix 2: Output: ./cluvfy stage -pre crsinst
#############################################################
Output:
./cluvfy stage -pre crsinst -n m-lrkdb1,m-lrkdb3 -fixup -verbose
#############################################################

[grid@m-lrkdb1:bin]$ ./cluvfy stage -pre crsinst -n m-lrkdb1,m-lrkdb3 -fixup -verbose
Performing pre-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "m-lrkdb1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  m-lrkdb1                              yes
  m-lrkdb3                              yes
Result: Node reachability check passed from node "m-lrkdb1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
Verification of the hosts config file successful
Interface information for node "m-lrkdb1"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  -----  ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.64      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.66      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.67      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.49      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth1   192.168.1.133    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth1   169.254.162.124  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth2   192.168.0.133    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3F  1500

Interface information for node "m-lrkdb3"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  -----  ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.53      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth0   10.19.62.86      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth1   192.168.1.135    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth2   192.168.0.135    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:F3:9A  1500
Check: Node connectivity using interfaces on subnet "10.19.62.0"
Check: Node connectivity of subnet "10.19.62.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.66]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb1[10.19.62.66]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb1[10.19.62.66]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.66]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.66]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb3[10.19.62.53]           m-lrkdb3[10.19.62.86]           yes
Result: Node connectivity passed for subnet "10.19.62.0" with node(s) m-lrkdb1,m-lrkdb3

Check: TCP connectivity of subnet "10.19.62.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb1:10.19.62.64            m-lrkdb1:10.19.62.67            passed
  m-lrkdb1:10.19.62.64            m-lrkdb1:10.19.62.49            passed
  m-lrkdb1:10.19.62.64            m-lrkdb1:10.19.62.66            passed
  m-lrkdb1:10.19.62.64            m-lrkdb3:10.19.62.53            passed
  m-lrkdb1:10.19.62.64            m-lrkdb3:10.19.62.86            passed
Result: TCP connectivity check passed for subnet "10.19.62.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb3[192.168.1.135]         m-lrkdb1[192.168.1.133]         yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) m-lrkdb3,m-lrkdb1

Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb3:192.168.1.135          m-lrkdb1:192.168.1.133          passed
Result: TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.19.62.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Check: Total memory
  Node Name     Available                  Required                   Status
  ------------  -------------------------  -------------------------  ----------
  m-lrkdb1      11.7301GB (1.2299908E7KB)  4GB (4194304.0KB)          passed
  m-lrkdb3      11.7598GB (1.2331028E7KB)  4GB (4194304.0KB)          passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                  Required                   Status
  ------------  -------------------------  -------------------------  ----------
  m-lrkdb1      6.9644GB (7302676.0KB)     50MB (51200.0KB)           passed
  m-lrkdb3      11.5275GB (1.2087508E7KB)  50MB (51200.0KB)           passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                  Required                   Status
  ------------  -------------------------  -------------------------  ----------
  m-lrkdb1      9.9968GB (1.0482372E7KB)   11.7301GB (1.2299908E7KB)  failed
  m-lrkdb3      9.999GB (1.0484732E7KB)    11.7598GB (1.2331028E7KB)  failed
Result: Swap space check failed

Check: Free disk space for "m-lrkdb1:/usr,m-lrkdb1:/var,m-lrkdb1:/etc,m-lrkdb1:/u01/app/11.2.0/grid_3,m-lrkdb1:/sbin,m-lrkdb1:/tmp"
  Path                    Node Name     Mount point   Available     Required      Status
  ----------------------  ------------  ------------  ------------  ------------  ------------
  /usr                    m-lrkdb1      /             4.1719GB      7.9635GB      failed
  /var                    m-lrkdb1      /             4.1719GB      7.9635GB      failed
  /etc                    m-lrkdb1      /             4.1719GB      7.9635GB      failed
  /u01/app/11.2.0/grid_3  m-lrkdb1      /             4.1719GB      7.9635GB      failed
  /sbin                   m-lrkdb1      /             4.1719GB      7.9635GB      failed
  /tmp                    m-lrkdb1      /             4.1719GB      7.9635GB      failed
Result: Free disk space check failed for "m-lrkdb1:/usr,m-lrkdb1:/var,m-lrkdb1:/etc,m-lrkdb1:/u01/app/11.2.0/grid_3,m-lrkdb1:/sbin,m-lrkdb1:/tmp"

Check: Free disk space for "m-lrkdb3:/usr,m-lrkdb3:/var,m-lrkdb3:/etc,m-lrkdb3:/u01/app/11.2.0/grid_3,m-lrkdb3:/sbin,m-lrkdb3:/tmp"
  Path                    Node Name     Mount point   Available     Required      Status
  ----------------------  ------------  ------------  ------------  ------------  ------------
  /usr                    m-lrkdb3      /             6.4492GB      7.9635GB      failed
  /var                    m-lrkdb3      /             6.4492GB      7.9635GB      failed
  /etc                    m-lrkdb3      /             6.4492GB      7.9635GB      failed
  /u01/app/11.2.0/grid_3  m-lrkdb3      /             6.4492GB      7.9635GB      failed
  /sbin                   m-lrkdb3      /             6.4492GB      7.9635GB      failed
  /tmp                    m-lrkdb3      /             6.4492GB      7.9635GB      failed
Result: Free disk space check failed for "m-lrkdb3:/usr,m-lrkdb3:/var,m-lrkdb3:/etc,m-lrkdb3:/u01/app/11.2.0/grid_3,m-lrkdb3:/sbin,m-lrkdb3:/tmp"
Check: User existence for "grid"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  m-lrkdb1      passed                    exists(502)
  m-lrkdb3      passed                    exists(502)
Checking for multiple users with UID value 502
Result: Check for multiple users with UID value 502 passed
Result: User existence check passed for "grid"

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  m-lrkdb1      passed                    exists
  m-lrkdb3      passed                    exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  m-lrkdb1      passed                    exists
  m-lrkdb3      passed                    exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name     User Exists   Group Exists  User in Group  Primary       Status
  ------------  ------------  ------------  -------------  ------------  ------------
  m-lrkdb1      yes           yes           yes            yes           passed
  m-lrkdb3      yes           yes           yes            yes           passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"
  Node Name     User Exists   Group Exists  User in Group  Status
  ------------  ------------  ------------  -------------  ----------------
  m-lrkdb1      yes           yes           yes            passed
  m-lrkdb3      yes           yes           yes            passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      3                         3,5                       passed
  m-lrkdb3      3                         3,5                       passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
  Node Name     Type          Available     Required      Status
  ------------  ------------  ------------  ------------  ----------------
  m-lrkdb1      hard          65536         65536         passed
  m-lrkdb3      hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name     Type          Available     Required      Status
  ------------  ------------  ------------  ------------  ----------------
  m-lrkdb1      soft          65536         1024          passed
  m-lrkdb3      soft          65536         1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name     Type          Available     Required      Status
  ------------  ------------  ------------  ------------  ----------------
  m-lrkdb1      hard          16384         16384         passed
  m-lrkdb3      hard          96203         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name     Type          Available     Required      Status
  ------------  ------------  ------------  ------------  ----------------
  m-lrkdb1      soft          2047          2047          passed
  m-lrkdb3      soft          2047          2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      x86_64                    x86_64                    passed
  m-lrkdb3      x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                      Required                  Status
  ------------  -----------------------------  ------------------------  ----------
  m-lrkdb1      2.6.18-164.el5                 2.6.18                    passed
  m-lrkdb3      2.6.39-200.24.1.el6uek.x86_64  2.6.18                    passed

WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "2.6.18-164.el5" found on nodes: m-lrkdb1.
Kernel version = "2.6.39-200.24.1.el6uek.x86_64" found on nodes: m-lrkdb3.
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      250           250           250           passed
  m-lrkdb3      250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      32000         32000         32000         passed
  m-lrkdb3      32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      100           100           100           passed
  m-lrkdb3      100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      128           128           128           passed
  m-lrkdb3      128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      68719476736   68719476736   6297552896    passed
  m-lrkdb3      68719476736   68719476736   6313486336    passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      4096          4096          4096          passed
  m-lrkdb3      4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      4294967296    4294967296    1229990       passed
  m-lrkdb3      4294967296    4294967296    1233102       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      6815744       6815744       6815744       passed
  m-lrkdb3      6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name     Current               Configured            Required              Status
  ------------  --------------------  --------------------  --------------------  ------------
  m-lrkdb1      between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
  m-lrkdb3      between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      262144        262144        262144        passed
  m-lrkdb3      262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      4194304       4194304       4194304       passed
  m-lrkdb3      4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      262144        262144        262144        passed
  m-lrkdb3      262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      1048576       1048576       1048576       passed
  m-lrkdb3      1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name     Current       Configured    Required      Status        Comment
  ------------  ------------  ------------  ------------  ------------  ------------
  m-lrkdb1      1048576       1048576       1048576       passed
  m-lrkdb3      1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      make-3.81-3.el5           make-3.81                 passed
  m-lrkdb3      make-3.81-20.el6          make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "binutils"
  Node Name     Available                      Required                  Status
  ------------  -----------------------------  ------------------------  ----------
  m-lrkdb1      binutils-2.17.50.0.6-14.el5    binutils-2.17.50.0.6      passed
  m-lrkdb3      binutils-2.20.51.0.2-5.34.el6  binutils-2.17.50.0.6      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2         passed
  m-lrkdb3      gcc(x86_64)-4.4.6-4.el6   gcc(x86_64)-4.1.2         passed
Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                      Required                  Status
  ------------  -----------------------------  ------------------------  ----------
  m-lrkdb1      libaio(x86_64)-0.3.106-5       libaio(x86_64)-0.3.106    passed
  m-lrkdb3      libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.106    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                         Required                  Status
  ------------  --------------------------------  ------------------------  ----------
  m-lrkdb1      glibc(x86_64)-2.5-65              glibc(x86_64)-2.5-58      passed
  m-lrkdb3      glibc(x86_64)-2.12-1.132.el6_5.2  glibc(x86_64)-2.5-58      passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                                 Required                           Status
  ------------  ----------------------------------------  ---------------------------------  ----------
  m-lrkdb1      compat-libstdc++-33(x86_64)-3.2.3-61      compat-libstdc++-33(x86_64)-3.2.3  passed
  m-lrkdb3      compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                               Required                  Status
  ------------  --------------------------------------  ------------------------  ----------
  m-lrkdb1      glibc-devel(x86_64)-2.5-65              glibc-devel(x86_64)-2.5   passed
  m-lrkdb3      glibc-devel(x86_64)-2.12-1.132.el6_5.2  glibc-devel(x86_64)-2.5   passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "gcc-c++(x86_64)"
  Node Name     Available                     Required                  Status
  ------------  ----------------------------  ------------------------  ----------
  m-lrkdb1      gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2     passed
  m-lrkdb3      gcc-c++(x86_64)-4.4.6-4.el6   gcc-c++(x86_64)-4.1.2     passed
Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                            Required                      Status
  ------------  -----------------------------------  ----------------------------  ----------
  m-lrkdb1      libaio-devel(x86_64)-0.3.106-5       libaio-devel(x86_64)-0.3.106  passed
  m-lrkdb3      libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                    Required                  Status
  ------------  ---------------------------  ------------------------  ----------
  m-lrkdb1      libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2      passed
  m-lrkdb3      libgcc(x86_64)-4.4.6-4.el6   libgcc(x86_64)-4.1.2      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                       Required                  Status
  ------------  ------------------------------  ------------------------  ----------
  m-lrkdb1      libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2   passed
  m-lrkdb3      libstdc++(x86_64)-4.4.6-4.el6   libstdc++(x86_64)-4.1.2   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                             Required                       Status
  ------------  ------------------------------------  -----------------------------  ----------
  m-lrkdb1      libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
  m-lrkdb3      libstdc++-devel(x86_64)-4.4.6-4.el6   libstdc++-devel(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      sysstat-7.0.2-11.el5      sysstat-7.0.2             passed
  m-lrkdb3      sysstat-9.0.4-20.el6      sysstat-7.0.2             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      ksh-20100202-1.el5_6.6    ksh-...                   passed
  m-lrkdb3      ksh-20120801-10.el6_5.6   ksh-...                   passed
Result: Package existence check passed for "ksh"

Check: Package existence for "nfs-utils"
  Node Name     Available                   Required                  Status
  ------------  --------------------------  ------------------------  ----------
  m-lrkdb1      nfs-utils-1.0.9-66.el5      nfs-utils-1.0.9-60        passed
  m-lrkdb3      nfs-utils-1.2.3-39.el6_5.3  nfs-utils-1.0.9-60        passed
Result: Package existence check passed for "nfs-utils"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?
  ------------------------------------  ------------------------
  m-lrkdb1                              yes
  m-lrkdb3                              yes
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  m-lrkdb1                              yes
  m-lrkdb3                              yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?
  ------------------------------------  ------------------------
  m-lrkdb1                              yes
  m-lrkdb3                              yes

NTP common Time Server Check started...
NTP Time Server "193.67.79.202" is common to all nodes on which the NTP daemon is running
NTP Time Server "37.34.57.190" is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Checking on nodes "[m-lrkdb1, m-lrkdb3]"...
Check: Clock time offset from NTP Time Server

Time Server: 193.67.79.202
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      -0.518                    passed
  m-lrkdb3      -0.234                    passed
Time Server "193.67.79.202" has time offsets that are within permissible limits for nodes "[m-lrkdb1, m-lrkdb3]".

Time Server: 37.34.57.190
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      0.614                     passed
  m-lrkdb3      0.302                     passed
Time Server "37.34.57.190" has time offsets that are within permissible limits for nodes "[m-lrkdb1, m-lrkdb3]".
Clock time offset check passed

Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  m-lrkdb1      passed                    does not exist
  m-lrkdb3      passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb1      0022                      0022                      passed
  m-lrkdb3      0022                      0022                      passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed

Check: Time zone consistency
Result: Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Check: Daemon "avahi-daemon" not configured
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      no                        passed
  m-lrkdb3      no                        passed
Daemon not configured check passed for process "avahi-daemon"

Check: Daemon "avahi-daemon" not running
  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      no                        passed
  m-lrkdb3      no                        passed
Daemon not running check passed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount ...
Check for /boot mount passed

Starting check for zeroconf check ...
Check for zeroconf check passed

NOTE:
No fixable verification failures to fix

Pre-check for cluster services setup was unsuccessful on all the nodes.
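In this run the pre-check was reported unsuccessful only because of the swap space and free disk space checks shown at the start of the appendix. The 11.2 addNode.sh repeats the pre-checks itself and will stop on the same findings; if they have been reviewed and accepted, Oracle's documented IGNORE_PREADDNODE_CHECKS environment variable instructs it to skip that repetition. A minimal sketch, run as the grid software owner on an existing node (the Grid home path is taken from the output above; verify the variable against your exact patch level):

```shell
# Only after the individual pre-check failures (here: swap space and
# free disk space) have been reviewed and consciously accepted.
export IGNORE_PREADDNODE_CHECKS=Y
cd /u01/app/11.2.0/grid_3/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={m-lrkdb3}" \
             "CLUSTER_NEW_VIRTUAL_HOSTNAMES={m-lrkdb3-vip}"
```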
13 Appendix 3: Output: addNode.sh

[grid@m-lrkdb2:bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={m-lrkdb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={m-lrkdb3-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 9820 MB    Passed

Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes m-lrkdb1,m-lrkdb3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid_3
   New Nodes
Space Requirements
   New Nodes
      m-lrkdb3
         /u01: Required 17.54GB : Available 36.32GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Tuesday, July 22, 2014 12:55:07 PM CEST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, July 22, 2014 12:55:11 PM CEST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, July 22, 2014 1:33:55 PM CEST)
.                                                               100% Done.
Save inventory complete

WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'm-lrkdb3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes m-lrkdb3
/u01/app/11.2.0/grid_3/root.sh #On nodes m-lrkdb3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid_3 was successful.
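The two configuration scripts listed by the installer can be run in a single session on the new node; a minimal sketch using the paths printed above (orainstRoot.sh is run first, since root.sh expects a registered inventory):

```shell
# On the new node m-lrkdb3, as root.
/u01/app/oraInventory/orainstRoot.sh   # register the new central inventory
/u01/app/11.2.0/grid_3/root.sh         # configure and start Clusterware on this node
```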
14 Appendix 4: Output: cluvfy stage -post nodeadd

[grid@m-lrkdb2:bin]$ ./cluvfy stage -post nodeadd -n m-lrkdb3 -verbose
Performing post-checks for node addition

Checking node reachability...
Check: Node reachability from node "m-lrkdb2"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  m-lrkdb3                              yes
Result: Node reachability check passed from node "m-lrkdb2"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb3                              passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb2                              passed
  m-lrkdb1                              passed
  m-lrkdb3                              passed
Verification of the hosts config file successful
Interface information for node "m-lrkdb2"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.65      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth0   10.19.62.51      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth0   10.19.62.68      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth1   192.168.1.134    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:40  1500
  eth1   169.254.177.108  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:40  1500
  eth2   192.168.0.134    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:41  1500

Interface information for node "m-lrkdb1"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.64      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.67      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.49      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth1   192.168.1.133    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth1   169.254.162.124  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth2   192.168.0.133    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3F  1500

Interface information for node "m-lrkdb3"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.53      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth0   10.19.62.86      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth0   10.19.62.66      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth1   192.168.1.135    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth1   169.254.55.52    169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth1   169.254.134.165  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth2   192.168.0.135    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:F3:9A  1500
Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2[10.19.62.65]           m-lrkdb2[10.19.62.51]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb2[10.19.62.68]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb2[10.19.62.68]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb3[10.19.62.53]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb3[10.19.62.53]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb3[10.19.62.86]           m-lrkdb3[10.19.62.66]           yes
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "10.19.62.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2:10.19.62.65            m-lrkdb2:10.19.62.51            passed
  m-lrkdb2:10.19.62.65            m-lrkdb2:10.19.62.68            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.64            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.67            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.49            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.53            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.86            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.66            passed
Result: TCP connectivity check passed for subnet "10.19.62.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.19.62.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...
Checking subnet "10.19.62.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.19.62.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking cluster integrity...
  Node Name
  ------------------------------------
  m-lrkdb2
  m-lrkdb1
  m-lrkdb3
Cluster integrity check passed

Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "m-lrkdb2"
The Oracle Clusterware is healthy on node "m-lrkdb1"
The Oracle Clusterware is healthy on node "m-lrkdb3"
CRS integrity check passed

Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid_3" is not shared
Result: Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb2                              passed
  m-lrkdb1                              passed
  m-lrkdb3                              passed
Verification of the hosts config file successful

Interface information for node "m-lrkdb2"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.65      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth0   10.19.62.51      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth0   10.19.62.68      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3D  1500
  eth1   192.168.1.134    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:40  1500
  eth1   169.254.177.108  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:40  1500
  eth2   192.168.0.134    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:41  1500

Interface information for node "m-lrkdb1"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.64      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.67      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth0   10.19.62.49      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:A6:00:3C  1500
  eth1   192.168.1.133    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth1   169.254.162.124  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3E  1500
  eth2   192.168.0.133    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:A6:00:3F  1500

Interface information for node "m-lrkdb3"
  Name   IP Address       Subnet           Gateway          Def. Gateway     HW Address         MTU
  ------ ---------------  ---------------  ---------------  ---------------  -----------------  ------
  eth0   10.19.62.53      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth0   10.19.62.86      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth0   10.19.62.66      10.19.62.0       0.0.0.0          10.19.62.254     00:50:56:8F:44:FC  1500
  eth1   192.168.1.135    192.168.1.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth1   169.254.55.52    169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth1   169.254.134.165  169.254.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:40:F8  1500
  eth2   192.168.0.135    192.168.0.0      0.0.0.0          10.19.62.254     00:50:56:8F:F3:9A  1500
Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2[192.168.1.134]         m-lrkdb1[192.168.1.133]         yes
  m-lrkdb2[192.168.1.134]         m-lrkdb3[192.168.1.135]         yes
  m-lrkdb1[192.168.1.133]         m-lrkdb3[192.168.1.135]         yes
Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2:192.168.1.134          m-lrkdb1:192.168.1.133          passed
  m-lrkdb2:192.168.1.134          m-lrkdb3:192.168.1.135          passed
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2[10.19.62.65]           m-lrkdb2[10.19.62.51]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb2[10.19.62.68]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.65]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb2[10.19.62.68]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.51]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.64]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb2[10.19.62.68]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.67]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.64]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb1[10.19.62.49]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.67]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.53]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb1[10.19.62.49]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb3[10.19.62.53]           m-lrkdb3[10.19.62.86]           yes
  m-lrkdb3[10.19.62.53]           m-lrkdb3[10.19.62.66]           yes
  m-lrkdb3[10.19.62.86]           m-lrkdb3[10.19.62.66]           yes
Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "10.19.62.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  m-lrkdb2:10.19.62.65            m-lrkdb2:10.19.62.51            passed
  m-lrkdb2:10.19.62.65            m-lrkdb2:10.19.62.68            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.64            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.67            passed
  m-lrkdb2:10.19.62.65            m-lrkdb1:10.19.62.49            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.53            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.86            passed
  m-lrkdb2:10.19.62.65            m-lrkdb3:10.19.62.66            passed
Result: TCP connectivity check passed for subnet "10.19.62.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.19.62.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...
Checking subnet "10.19.62.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.19.62.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb2      yes                       yes                       passed
  m-lrkdb1      yes                       yes                       passed
  m-lrkdb3      yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb2      yes                       yes                       passed
  m-lrkdb1      yes                       yes                       passed
  m-lrkdb3      yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb2      no                        no                        exists
  m-lrkdb1      no                        no                        exists
  m-lrkdb3      no                        no                        exists
GSD node application is offline on nodes "m-lrkdb2,m-lrkdb1,m-lrkdb3"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdb2      no                        yes                       passed
  m-lrkdb1      no                        yes                       passed
  m-lrkdb3      no                        yes                       passed
ONS node application check passed
Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName    Port          Running?
  ----------------  ------------  ------------  --------------  ------------  ------------
  m-lrkdbacc        m-lrkdb2      true          LISTENER_SCAN1  1521          true
  m-lrkdbacc        m-lrkdb3      true          LISTENER_SCAN2  1521          true
  m-lrkdbacc        m-lrkdb1      true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  m-lrkdb2      LISTENER_SCAN1            yes
  m-lrkdb2      LISTENER_SCAN2            yes
  m-lrkdb2      LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "m-lrkdbacc"...
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  m-lrkdbacc    10.19.62.66               passed
  m-lrkdbacc    10.19.62.67               passed
  m-lrkdbacc    10.19.62.68               passed

Verification of SCAN VIP and Listener setup passed
Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  m-lrkdb3      passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb3                              passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  m-lrkdb3                              Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed

Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  m-lrkdb3                              yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  m-lrkdb3                              yes
Result: NTP daemon slewing option check passed

Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  m-lrkdb3                              yes
Result: NTP daemon's boot time configuration check for slewing option passed

Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?
  ------------------------------------  ------------------------
  m-lrkdb3                              yes

NTP common Time Server Check started...
NTP Time Server "213.136.0.252" is common to all nodes on which the NTP daemon is running
NTP Time Server "193.79.237.14" is common to all nodes on which the NTP daemon is running
NTP Time Server "37.34.57.190" is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[m-lrkdb3]"...
Check: Clock time offset from NTP Time Server

Time Server: 213.136.0.252
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  m-lrkdb3      -0.258                    passed
Time Server "213.136.0.252" has time offsets that are within permissible limits for nodes "[m-lrkdb3]".

Time Server: 193.79.237.14
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  m-lrkdb3      -0.794                    passed
Time Server "193.79.237.14" has time offsets that are within permissible limits for nodes "[m-lrkdb3]".

Time Server: 37.34.57.190
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  m-lrkdb3      -250.64                   passed
Time Server "37.34.57.190" has time offsets that are within permissible limits for nodes "[m-lrkdb3]".

Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.