1 Introduction
1.1 What we will do
We have a two-node RAC cluster on Oracle Linux 5.7 that we want to upgrade to Oracle Linux 6.3.
- We will add a new node m-lrkdb3 with OEL 6.3 to the existing OEL 5.7 cluster which has two nodes m-lrkdb1 and m-lrkdb2.
- Next we will remove the old cluster nodes m-lrkdb1 and m-lrkdb2 sequentially.
- Then we will add another new node m-lrkdb4 with OEL 6.3 to the cluster.
1.2 Software versions
Oracle Enterprise Linux versions:
Existing cluster nodes m-lrkdb1 and m-lrkdb2:
[oracle@m-lrkdb1:bin]$ uname -a
Linux m-lrkdb1.lrk.org 2.6.18-164.el5 #1 SMP Thu Sep 3 04:15:13 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[oracle@m-lrkdb1:bin]$ cat /etc/oracle-release
Oracle Linux Server release 5.7
[oracle@m-lrkdb2:bin]$ uname -a
Linux m-lrkdb2.lrk.org 2.6.18-164.el5 #1 SMP Thu Sep 3 04:15:13 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[oracle@m-lrkdb2:bin]$ cat /etc/oracle-release
Oracle Linux Server release 5.7
Node to be added:
[oracle@m-lrkdb3:.ssh]$ uname -a
Linux m-lrkdb3.lrk.org 2.6.39-200.24.1.el6uek.x86_64 #1 SMP Sat Jun 23 02:39:07 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
[oracle@m-lrkdb3:.ssh]$ cat /etc/oracle-release
Oracle Linux Server release 6.3
Oracle Grid Infrastructure and Real Application Clusters versions:
[grid@m-lrkdb1:OPatch]$ /u01/app/11.2.0/grid_3/OPatch/opatch lsinventory
<...>
Oracle Grid Infrastructure 11.2.0.3.0
There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch 16056266 : applied on Thu Apr 18 12:05:35 CEST 2013
Unique Patch ID: 15962803
Patch description: "Database Patch Set Update : 11.2.0.3.6 (16056266)"
Created on 12 Mar 2013, 02:14:47 hrs PST8PDT
Sub-patch 14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
Sub-patch 14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Sub-patch 13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch 13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch 13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
<...>
Unique Patch ID: 15966967
Patch description: "Grid Infrastructure Patch Set Update : 11.2.0.3.6 (16083653)"
Created on 1 Apr 2013, 03:41:20 hrs PST8PDT
1.3 Is it possible?
In My Oracle Support note "RAC: Frequently Asked Questions (Doc ID 220970.1)" the following is noted:
"The Oracle Clusterware and Oracle Real Application Clusters both support rolling upgrades of the OS software when the version of the Oracle Database is certified on both releases of the OS (and the OS is the same, no Linux and Windows or AIX and Solaris, or 32 and 64 bit etc.). This can apply a patch to the operating system, a patchset (such as EL4u4 to EL4u6) or a release (EL4 to EL5). Stay within a 24 hours of upgrade window and fully test this path as it's not possible for Oracle to test all these different paths and combinations."
Oracle Database - Enterprise Edition - Version 9.2.0.1 to 12.1.0.1 [Release 9.2 to 12.1]
Information in this document applies to any platform.
So Red Hat and Oracle are telling us this is possible, but that we should test it first (we planned that) and keep the upgrade window as short as possible.
1.4 Documentation
Adding a Cluster Node on Linux and UNIX Systems
Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, E41961-05
Deleting a Cluster Node on Linux and UNIX Systems
Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, E41961-05
Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)
by Jeff Hunter, Sr. Database Administrator
Remove a Node from an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)
by Jeff Hunter, Sr. Database Administrator
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle11gRAC/CLUSTER_24.shtml
My Oracle Support note "RAC: Frequently Asked Questions (Doc ID 220970.1)"
2 Prepare the new node
2.1 Install Oracle Linux 6
The new nodes to be added must be prepared and preinstalled with Oracle Enterprise Linux 6.3. Do not run an update, because this will update the system to version 6.5.
[grid@m-lrkdb3:etc]$ cat /etc/oracle-release
Oracle Linux Server release 6.3
2.2 Configure Network
Also prepare and add the network configuration for the new nodes. Update /etc/hosts on all nodes in the cluster and on the new nodes to be added.
[grid@m-lrkdb3:etc]$ cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
10.19.62.64     m-lrkdb1.lrk.org m-lrkdb1
10.19.62.65     m-lrkdb2.lrk.org m-lrkdb2
10.19.62.53     m-lrkdb3.lrk.org m-lrkdb3
10.19.62.54     m-lrkdb4.lrk.org m-lrkdb4
10.19.62.49     m-lrkdb1-vip.lrk.org m-lrkdb1-vip
10.19.62.51     m-lrkdb2-vip.lrk.org m-lrkdb2-vip
10.19.62.86     m-lrkdb3-vip.lrk.org m-lrkdb3-vip
10.19.62.87     m-lrkdb4-vip.lrk.org m-lrkdb4-vip
192.168.1.133   m-lrkdb1-priv.lrk.org m-lrkdb1-priv
192.168.1.134   m-lrkdb2-priv.lrk.org m-lrkdb2-priv
192.168.1.135   m-lrkdb3-priv.lrk.org m-lrkdb3-priv
192.168.1.136   m-lrkdb4-priv.lrk.org m-lrkdb4-priv
Also add the hostname m-lrkdb3.lrk.org and the host VIP m-lrkdb3-vip.lrk.org to DNS:
[grid@m-lrkdb3:etc]$ nslookup m-lrkdb3.lrk.org
Server:  10.19.55.1
Address: 10.19.55.1#53
Name:    m-lrkdb3.lrk.org
Address: 10.19.62.53
[grid@m-lrkdb3:etc]$ nslookup m-lrkdb3-vip.lrk.org
Server:  10.19.55.1
Address: 10.19.55.1#53
Name:    m-lrkdb3-vip.lrk.org
Address: 10.19.62.86
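As a quick sanity check (not strictly required at this point), the existing nodes should be reachable from the new node over both the public and the private network, for example:
[grid@m-lrkdb3:etc]$ ping -c 1 m-lrkdb1
[grid@m-lrkdb3:etc]$ ping -c 1 m-lrkdb1-priv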
2.3 Configure Access to ASM
Make sure the nodes to be added can access the ASM disks.
[grid@m-lrkdb3:etc]$ mount
...
192.168.0.126:/oracleasm on /u02 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm2 on /u03 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm3 on /u04 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm4 on /u05 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
[grid@m-lrkdb3:etc]$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
...
192.168.0.126:/oracleasm    30G   30G     0 100% /u02
192.168.0.126:/oracleasm2  9.9G  9.9G     0 100% /u03
192.168.0.126:/oracleasm3   20G   16G  2.9G  85% /u04
192.168.0.126:/oracleasm4   31G   30G     0 100% /u05
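For reference, /etc/fstab entries matching the mount options above could look like this (a sketch; adapt to your storage setup):
192.168.0.126:/oracleasm   /u02  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
192.168.0.126:/oracleasm2  /u03  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
192.168.0.126:/oracleasm3  /u04  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
192.168.0.126:/oracleasm4  /u05  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0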
2.4 Create users and directories
Create the Oracle home software owner user oracle and the Grid Infrastructure software owner user grid, with groups identical to those on the existing cluster nodes:
[oracle@m-lrkdb1:oracle]$ id oracle
uid=501(oracle) gid=502(oinstall) groups=501(dba),504(asmdba),502(oinstall)
[oracle@m-lrkdb1:oracle]$ id grid
uid=502(grid) gid=502(oinstall) groups=502(oinstall),501(dba),504(asmdba),503(asmadmin)
Create directories with the correct privileges:
[root@m-lrkdb3 ~]# mkdir -p /u01/app/grid
[root@m-lrkdb3 ~]# mkdir -p /u01/app/11.2.0/grid_3
[root@m-lrkdb3 ~]# chown -R grid:oinstall /u01
[root@m-lrkdb3 ~]# mkdir -p /u01/app/oracle
[root@m-lrkdb3 ~]# chown oracle:oinstall /u01/app/oracle
[root@m-lrkdb3 ~]# chmod -R 775 /u01
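For reference, a minimal sketch of the group and user creation on the new node, with the UIDs and GIDs taken from the id output above:
[root@m-lrkdb3 ~]# groupadd -g 502 oinstall
[root@m-lrkdb3 ~]# groupadd -g 501 dba
[root@m-lrkdb3 ~]# groupadd -g 504 asmdba
[root@m-lrkdb3 ~]# groupadd -g 503 asmadmin
[root@m-lrkdb3 ~]# useradd -u 501 -g oinstall -G dba,asmdba oracle
[root@m-lrkdb3 ~]# useradd -u 502 -g oinstall -G dba,asmdba,asmadmin grid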
3 Cluster Verification
We downloaded the most recent version of cluvfy (Linux (x86-64), Readme (December 2013)) from:
http://www.oracle.com/technetwork/database/options/clustering/downloads/cvu-download-homepage-099973.html
[grid@m-lrkdb2]$ cd /staging/oracle-sw
[grid@m-lrkdb2]$ unzip cvupack_Linux_x86_64.zip
[grid@m-lrkdb2]$ cd cluvfy/bin
[grid@m-lrkdb2]$ ./cluvfy stage -pre nodeadd -n m-lrkdb3
[grid@m-lrkdb2]$ ./cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -verbose
See output in appendix 1 and 2.
3.1 Issues running cluvfy
Running cluvfy to add the new node m-lrkdb3 resulted in many failures. In the following paragraphs we explain how we solved these issues.
Appendix 1 and 2 list the output of cluvfy after the issues were solved.
3.1.1 Issue 1: Ssh must work to all nodes without password
Enable ssh without a password for the users oracle and grid from and to all nodes, including the local node.
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
/home/oracle/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
3e:d7:73:29:53:7f:25:19:91:58:14:1b:20:23:92:a2 oracle@m-lrkdb3
The key's randomart image is:
+--[ RSA 2048]----+
|  ... o .=*o     |
| . .. . o. .+    |
|    . . o        |
|     E o         |
|      S + .      |
|     . . . +.    |
|      o . = o o  |
|         o = .   |
|                 |
+-----------------+
$ ssh-copy-id -i ~/.ssh/id_rsa.pub grid@m-lrkdb1.lrk.org
Not sure if this was necessary, but on all nodes as root:
# ln -s /usr/bin/ssh /usr/local/bin/ssh
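Since the key has to be copied for both oracle and grid to every node (including each node to itself, see Issue 3 below), a loop like this sketch saves some typing:
$ for node in m-lrkdb1 m-lrkdb2 m-lrkdb3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub grid@${node}.lrk.org
  done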
3.1.2 Issue 2: Do not run cluvfy as root
# cluvfy stage -pre crsinst -n m-lrkdb1,m-lrkdb3 -fixup -verbose
You must NOT be logged in as root (uid=0) when running ./cluvfy.sh.
3.1.3 Issue 3: Ssh without password even within the same node
[grid@m-lrkdb3:bin]$ cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -fixup -verbose
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb2                              passed
  m-lrkdb3                              failed
PRVG-2019 : Check for equivalence of user "grid" from node "m-lrkdb3" to node "m-lrkdb3" failed
PRKC-1044 : Failed to check remote command execution setup for node m-lrkdb3 using shells /usr/bin/ssh and /usr/bin/rsh
File "/usr/bin/rsh" does not exist on node "m-lrkdb3"
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Solution
User grid must be able to ssh without a password from m-lrkdb3 to m-lrkdb3:
[grid@m-lrkdb3:bin]$ ssh-copy-id -i ~/.ssh/id_rsa.pub grid@m-lrkdb3.lrk.org
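A quick way to confirm that local equivalence now works (the date should print without a password prompt):
[grid@m-lrkdb3:bin]$ ssh grid@m-lrkdb3.lrk.org date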
3.1.4 Issue 4: Run cluvfy from an existing cluster node
[grid@m-lrkdb3:bin]$ ./cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -fixup -verbose
Check: Kernel version
  Node Name     Available                      Required  Status
  ------------  -----------------------------  --------  ----------
  m-lrkdb2      2.6.18-164.el5                 2.6.32    failed
  m-lrkdb3      2.6.39-200.24.1.el6uek.x86_64  2.6.32    passed
WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "2.6.18-164.el5" found on nodes: m-lrkdb2.
Kernel version = "2.6.39-200.24.1.el6uek.x86_64" found on nodes: m-lrkdb3.
Result: Kernel version check failed
Solution
Now we run this from m-lrkdb1 (an existing cluster node):
[grid@m-lrkdb1:bin]$ ./cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -fixup -verbose
Check: Kernel version
  Node Name     Available                      Required  Status
  ------------  -----------------------------  --------  ----------
  m-lrkdb1      2.6.18-164.el5                 2.6.18    passed
  m-lrkdb3      2.6.39-200.24.1.el6uek.x86_64  2.6.18    passed
WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "2.6.18-164.el5" found on nodes: m-lrkdb1.
Kernel version = "2.6.39-200.24.1.el6uek.x86_64" found on nodes: m-lrkdb3.
Result: Kernel version check passed
Remark: Now we get no failures; the kernel version check only gives a warning that the kernel version is not consistent across all nodes, which is fine.
3.1.5 Issue 5: Daemon avahi-daemon should not be running / configured
[grid@m-lrkdb3:bin]$ ./cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -fixup -verbose
Checking daemon "avahi-daemon" is not configured and running
Check: Daemon "avahi-daemon" not configured
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      yes                       failed
  m-lrkdb3      yes                       failed
Daemon not configured check failed for process "avahi-daemon"
Check: Daemon "avahi-daemon" not running
  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  m-lrkdb1      no                        passed
  m-lrkdb3      yes                       failed
Daemon not running check failed for process "avahi-daemon"
Solution
Disable the avahi-daemon on all nodes:
[root@m-lrkdb1 ~]# service avahi-daemon stop
Shutting down Avahi daemon: Failed to kill daemon: No such file or directory [FAILED]
[root@m-lrkdb1 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off 1:off 2:off 3:off 4:on 5:on 6:off
[root@m-lrkdb1 ~]# chkconfig avahi-daemon off
[root@m-lrkdb1 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@m-lrkdb2 ~]# service avahi-daemon stop
Shutting down Avahi daemon: Failed to kill daemon: No such file or directory [FAILED]
[root@m-lrkdb2 ~]# chkconfig avahi-daemon off
[root@m-lrkdb2 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@m-lrkdb3 ~]# service avahi-daemon stop
Shutting down Avahi daemon: [ OK ]
[root@m-lrkdb3 ~]# chkconfig avahi-daemon off
[root@m-lrkdb3 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off 1:off 2:off 3:off 4:off 5:off 6:off
3.1.6 Issue 6: Parameter NOZEROCONF must be set to YES
The parameter NOZEROCONF must be set to yes in /etc/sysconfig/network.
[grid@m-lrkdb1:bin]$ ./cluvfy stage -pre crsinst -n m-lrkdb2,m-lrkdb3 -fixup -verbose
ERROR:
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to yes in file "/etc/sysconfig/network" on node "m-lrkdb3"
Check for zeroconf check failed
[root@m-lrkdb3 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m-lrkdb3
[root@m-lrkdb1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=m-lrkdb1.lrk.org
GATEWAY=10.19.62.254
NOZEROCONF=yes
Solution
[root@m-lrkdb3 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m-lrkdb3.lrk.org
NOZEROCONF=yes
[root@m-lrkdb3 ~]# screen
[root@m-lrkdb3 ~]# /etc/init.d/network restart
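The parameter can also be appended non-interactively, for example with a sketch like this (assuming the file is not managed by a configuration tool):
[root@m-lrkdb3 ~]# grep -q '^NOZEROCONF=' /etc/sysconfig/network || echo 'NOZEROCONF=yes' >> /etc/sysconfig/network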
3.1.7 Issue 7: File /etc/resolv.conf must be identical on all nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
PRVF-5622 : search entry does not exist in file "/etc/resolv.conf" on nodes: "m-lrkdb3"
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" failed
Solution
[grid@m-lrkdb1:grid]$ cat /etc/resolv.conf
search lrk.org
nameserver 10.19.55.1
nameserver 10.19.55.20
[grid@m-lrkdb3:grid]$ cat /etc/resolv.conf
nameserver 10.19.55.1
nameserver 10.19.55.20
The search entry is missing from /etc/resolv.conf on m-lrkdb3.lrk.org and must be added.
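One way to add it (a sketch; assuming the file is not regenerated by the network scripts):
[root@m-lrkdb3 ~]# echo 'search lrk.org' >> /etc/resolv.conf
After adding the entry, the check passes: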
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  m-lrkdb1                              passed
  m-lrkdb3                              passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
3.1.8 Issue 8: We ignore free disk space messages
Check: Swap space
  Node Name     Available                  Required                   Status
  ------------  -------------------------  -------------------------  ----------
  m-lrkdb1      9.9968GB (1.0482372E7KB)   11.7301GB (1.2299908E7KB)  failed
  m-lrkdb3      9.999GB (1.0484732E7KB)    11.7598GB (1.2331028E7KB)  failed
Result: Swap space check failed
Check: Free disk space for "m-lrkdb1:/usr,m-lrkdb1:/var,m-lrkdb1:/etc,m-lrkdb1:/u01/app/11.2.0/grid_3,m-lrkdb1:/sbin,m-lrkdb1:/tmp"
  Path                    Node Name  Mount point  Available  Required  Status
  ----------------------  ---------  -----------  ---------  --------  ------
  /usr                    m-lrkdb1   /            4.1719GB   7.9635GB  failed
  /var                    m-lrkdb1   /            4.1719GB   7.9635GB  failed
  /etc                    m-lrkdb1   /            4.1719GB   7.9635GB  failed
  /u01/app/11.2.0/grid_3  m-lrkdb1   /            4.1719GB   7.9635GB  failed
  /sbin                   m-lrkdb1   /            4.1719GB   7.9635GB  failed
  /tmp                    m-lrkdb1   /            4.1719GB   7.9635GB  failed
Result: Free disk space check failed for "m-lrkdb1:/usr,m-lrkdb1:/var,m-lrkdb1:/etc,m-lrkdb1:/u01/app/11.2.0/grid_3,m-lrkdb1:/sbin,m-lrkdb1:/tmp"
Check: Free disk space for "m-lrkdb3:/usr,m-lrkdb3:/var,m-lrkdb3:/etc,m-lrkdb3:/u01/app/11.2.0/grid_3,m-lrkdb3:/sbin,m-lrkdb3:/tmp"
  Path                    Node Name  Mount point  Available  Required  Status
  ----------------------  ---------  -----------  ---------  --------  ------
  /usr                    m-lrkdb3   /            6.4492GB   7.9635GB  failed
  /var                    m-lrkdb3   /            6.4492GB   7.9635GB  failed
  /etc                    m-lrkdb3   /            6.4492GB   7.9635GB  failed
  /u01/app/11.2.0/grid_3  m-lrkdb3   /            6.4492GB   7.9635GB  failed
  /sbin                   m-lrkdb3   /            6.4492GB   7.9635GB  failed
  /tmp                    m-lrkdb3   /            6.4492GB   7.9635GB  failed
Result: Free disk space check failed for "m-lrkdb3:/usr,m-lrkdb3:/var,m-lrkdb3:/etc,m-lrkdb3:/u01/app/11.2.0/grid_3,m-lrkdb3:/sbin,m-lrkdb3:/tmp"
Solution
We decided to ignore these disk space errors. Luckily, we indeed did not run into any disk space issues.
4 Add node to cluster
See My Oracle Support note [ID 1267569.1]:
"If the error is preventing a node from being added to a cluster, please set this environment variable prior to starting addNode.sh: $ export IGNORE_PREADDNODE_CHECKS=Y"
We did not need to set this when running addNode.sh.
4.1 Pre Validation
[grid@m-lrkdb2:bin]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_GIR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_KTB.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_LRK.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_OA_GIRR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_OA_KTBR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_OA_LRKR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_OCR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.DATA_TEST.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.FRA.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.LISTENER.lsnr
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.asm
ONLINE ONLINE m-lrkdb1 Started
ONLINE ONLINE m-lrkdb2 Started
ora.gsd
OFFLINE OFFLINE m-lrkdb1
OFFLINE OFFLINE m-lrkdb2
ora.net1.network
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.ons
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ora.registry.acfs
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE m-lrkdb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE m-lrkdb1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE m-lrkdb1
ora.cvu
1 ONLINE ONLINE m-lrkdb1
ora.m-lrkdb1.vip
1 ONLINE ONLINE m-lrkdb1
ora.m-lrkdb2.vip
1 ONLINE ONLINE m-lrkdb2
ora.mgirupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.mktbupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.mlrkupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.oagirr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.oaktbr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.oalrkr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
ora.oc4j
1 ONLINE ONLINE m-lrkdb1
ora.scan1.vip
1 ONLINE ONLINE m-lrkdb2
ora.scan2.vip
1 ONLINE ONLINE m-lrkdb1
ora.scan3.vip
1 ONLINE ONLINE m-lrkdb1
4.2 Issues running addNode.sh
Running addNode.sh resulted in a few issues before it ran successfully. In the following paragraphs we highlight these issues:
[grid@m-lrkdb2:bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={m-lrkdb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={m-lrkdb3-vip}"
Appendix 3 lists the output of addNode.sh after the issues were solved.
4.2.1 Issue 1: Must be able to write in /u01
"/u01/app/oraInventory/logs/oraInstall2014-07-22_11-17-05AM.err"
oracle.ops.mgmt.cluster.SharedDeviceException:
PRKC-1025 : Failed to create a file under the
filepath /u01 because the filepath is not executable or writable
at
oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1623)
at
oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:1109)
Solution
Not sure if this was nessessary, but the problem
was gone.
[root@m-lrkdb3
/]# chown root:oinstall /u01
[root@m-lrkdb3
/]# chmod -R 777 /u01
"The
Oracle Clusterware and Oracle Real Application Clusters both support
rolling upgrades
of the OS software when the version of the Oracle Database is
certified on both
releases of the OS (and the OS is the same, no Linux and Windows or
AIX and Solaris, or
32 and 64 bit etc.). This can apply a patch to the operating system,
a patchset (such
as EL4u4 to EL4u6) or a release (EL4 to EL5). Stay within a 24 hours
of upgrade
window
and fully test this path as it's not possible for Oracle to test all
these different paths and combinations".
So,
Red Hat and Oracle are telling me this is possible, but that I should
test it first (planned
that) and that I should keep the window on the updates as short as
possible.
Note: After successful installation, all files under /u01 must have the correct privileges:
[grid@m-lrkdb3:grid]$ ls -l /
...
drwxr-xr-x. 4 root oinstall 4096 Jul 21 12:37 u01
[grid@m-lrkdb3:grid]$ ls -l /u01
total 20
drwxr-xr-x  6 root oinstall  4096 Jul 22 13:33 app
drwxrwxrwx. 2 grid oinstall 16384 Jun 11 10:59 lost+found
4.2.2 Issue 2: Out of memory. Java needs more memory
[grid@m-lrkdb2:bin]$ more /u01/app/oraInventory/logs/oraInstall2014-07-22_12-05-01PM.err
oracle.ops.mgmt.cluster.SharedDeviceException: PRKC-1025 : Failed to create a file under the filepath /u01 because the filepath is not executable or writable
        at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1623)
        at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:1109)
        at oracle.sysman.oii.oiip.oiipg.OiipgCFSDriveCheck.isDriveOnCFS(OiipgCFSDriveCheck.java:655)
        at oracle.sysman.oii.oiic.OiicAddNodeSummaryInformation.isVolumeOnCFS(OiicAddNodeSummaryInformation.java:164)
        at oracle.sysman.oii.oiic.OiicAddNodeSummaryInformation.computeSpaceInfo(OiicAddNodeSummaryInformation.java:459)
        at oracle.sysman.oii.oiic.OiicAddNodeSummaryInformation.initializeAddNodeSession(OiicAddNodeSummaryInformation.java:383)
        at oracle.sysman.oii.oiic.OiicAddNodeSummaryInformation.<init>(OiicAddNodeSummaryInformation.java:140)
        at oracle.sysman.oii.oiif.oiifw.OiifwAddNodeSummaryWCDE.writeSummaryInformation(OiifwAddNodeSummaryWCDE.java:212)
        at oracle.sysman.oii.oiif.oiifw.OiifwAddNodeSummaryWCDE.logDialog(OiifwAddNodeSummaryWCDE.java:204)
        at oracle.sysman.oii.oiif.oiifb.OiifbWizChainDlgElem.doOperation(OiifbWizChainDlgElem.java:702)
        at oracle.sysman.oii.oiif.oiifw.OiifwAddNodeSummaryWCDE.doOperation(OiifwAddNodeSummaryWCDE.java:180)
        at oracle.sysman.oii.oiif.oiifb.OiifbCondIterator.iterate(OiifbCondIterator.java:171)
        at oracle.sysman.oii.oiic.OiicPullSession.doOperation(OiicPullSession.java:1380)
        at oracle.sysman.oii.oiic.OiicSessionWrapper.doOperation(OiicSessionWrapper.java:294)
        at oracle.sysman.oii.oiic.OiicInstaller.run(OiicInstaller.java:579)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:969)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:906)
Exception in thread "Thread-39" java.lang.OutOfMemoryError: Java heap space
        at java.lang.StringCoding$CharsetSE.encode(StringCoding.java:334)
        at java.lang.StringCoding.encode(StringCoding.java:378)
        at java.lang.String.getBytes(String.java:812)
        at java.io.UnixFileSystem.setLastModifiedTime(Native Method)
        at java.io.File.setLastModified(File.java:1227)
        at oracle.sysman.oii.oiit.OiitLockHeartbeat.touchFile(OiitLockHeartbeat.java:270)
        at oracle.sysman.oii.oiit.OiitLockHeartbeat.update(OiitLockHeartbeat.java:288)
        at oracle.sysman.oii.oiit.OiitLockHeartbeat$HeartBeatThread.run(OiitLockHeartbeat.java:136)
Solution
The first error is the misleading one; the real cause of the problem is the second error, OutOfMemoryError: Java heap space.
Increasing JRE_MEMORY_OPTIONS solves the OutOfMemoryError. Set JRE_MEMORY_OPTIONS=" -mx1024m" (or a greater value) in the oraparam.ini located in $GRID_HOME/oui/:
[grid@m-lrkdb2]$ vi $ORACLE_HOME/oui/oraparam.ini
...
#JRE_MEMORY_OPTIONS=" -mx150m"
JRE_MEMORY_OPTIONS=" -mx1024m"
...
4.3 Run root scripts
4.3.1 Issue 1: ASM disks mounted READ ONLY on node m-lrkdb3
[root@m-lrkdb3 grid_3]# /u01/app/oraInventory/orainstRoot.sh
...
The commandline is not formed properly. Please type "asmca -h" to get the command line syntax.
Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0/grid_3/crs/install/crsconfig_lib.pm line 6833.
/u01/app/11.2.0/grid_3/perl/bin/perl -I/u01/app/11.2.0/grid_3/perl/lib -I/u01/app/11.2.0/grid_3/crs/install /u01/app/11.2.0/grid_3/crs/install/rootcrs.pl execution failed
[root@m-lrkdb3 bin]# cd /u01/app/grid/cfgtoollogs/asmca
[root@m-lrkdb3 asmca]# ls -l
total 4
-rw-r----- 1 grid oinstall 3686 Jul 22 13:52 asmca-140722PM015242.log
oracle.sysman.assistants.usmca.cli.UsmcaCmdLineParser.process(UsmcaCmdLineParser.java:843)
oracle.sysman.assistants.usmca.Usmca.execute(Usmca.java:131)
oracle.sysman.assistants.usmca.Usmca.main(Usmca.java:369)
[main] [ 2014-07-22 13:52:43.224 CEST ] [UsmcaLogger.logException:173] SEVERE:method oracle.sysman.assistants.usmca.Usmca:execute
[main] [ 2014-07-22 13:52:43.225 CEST ] [UsmcaLogger.logException:174] The commandline is not formed properly. Please type "asmca -h" to get the command line syntax.
[main] [ 2014-07-22 13:52:43.226 CEST ] [UsmcaLogger.logException:175] oracle.sysman.assistants.usmca.exception.InvalidCmdLineArgException: The commandline is not formed properly. Please type "asmca -h" to get the command line syntax.
[grid@m-lrkdb3:crsconfig]$ kfod asm_diskstring='/u02/nfsdg/*,/u03/nfsdg/*,/u04/nfsdg/*,/u05/nfsdg/*' disks=all
KFOD-00313: No ASM instances available. CSS group services were successfully initilized by kgxgncin
KFOD-00311: Error scanning device /u05/nfsdg/disk10
ORA-15025: could not open disk "/u05/nfsdg/disk10"
Linux-x86_64 Error: 30: Read-only file system
Additional information: 42
Additional information: 12252479
Additional information: 1598903119
KFOD-00311: Error scanning device /u05/nfsdg/disk9
[grid@m-lrkdb3:crsconfig]$ mount
192.168.0.126:/oracleasm on /u02 type nfs (ro,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm2 on /u03 type nfs (ro,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm3 on /u04 type nfs (ro,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm4 on /u05 type nfs (ro,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
[grid@m-lrkdb2:bin]$ kfod asm_diskstring='/u02/nfsdg/*,/u03/nfsdg/*,/u04/nfsdg/*,/u05/nfsdg/*' disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:       3000 Mb /u02/nfsdg/disk1                         grid     oinstall
   2:       3000 Mb /u02/nfsdg/disk2                         grid     oinstall
   3:       3000 Mb /u02/nfsdg/disk3                         grid     oinstall
   4:       7000 Mb /u02/nfsdg/disk4                         grid     oinstall
   5:       7000 Mb /u02/nfsdg/disk5                         grid     oinstall
   6:       7000 Mb /u02/nfsdg/disk6                         grid     oinstall
   7:       9900 Mb /u03/nfsdg/disk7                         grid     oinstall
   8:       3000 Mb /u04/nfsdg/disk21                        grid     oinstall
   9:       3000 Mb /u04/nfsdg/disk22                        grid     oinstall
  10:       3000 Mb /u04/nfsdg/disk23                        grid     oinstall
  11:       7000 Mb /u04/nfsdg/disk24                        grid     oinstall
  12:      10000 Mb /u05/nfsdg/disk10                        grid     oinstall
  13:      10000 Mb /u05/nfsdg/disk8                         grid     oinstall
  14:      10000 Mb /u05/nfsdg/disk9                         grid     oinstall
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
[grid@m-lrkdb2:bin]$ mount
192.168.0.126:/oracleasm on /u02 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm2 on /u03 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm3 on /u04 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm4 on /u05 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0,addr=192.168.0.126)
Solution
The ASM disks, which are mounted read-only over NFS on the new server m-lrkdb3, must be remounted in read-write mode:
192.168.0.126:/oracleasm on /u02 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm2 on /u03 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm3 on /u04 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
192.168.0.126:/oracleasm4 on /u05 type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,addr=192.168.0.126)
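A remount along these lines should do it (a sketch; also change ro to rw in /etc/fstab or wherever the mounts are defined, so the setting survives a reboot):
[root@m-lrkdb3 ~]# for fs in /u02 /u03 /u04 /u05; do mount -o remount,rw $fs; done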
4.4 Rerun root.sh
By running rootcrs.pl -deconfig -force on nodes where you encounter an installation error, you can deconfigure Oracle Clusterware on those nodes, correct the cause of the error, and then run root.sh again.
[root@m-lrkdb3 bin]# ./crsctl stop crs -f
[root@m-lrkdb3 bin]# /u01/app/11.2.0/grid_3/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/11.2.0/grid_3/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'm-lrkdb3'
CRS-2673: Attempting to stop 'ora.crf' on 'm-lrkdb3'
CRS-2677: Stop of 'ora.crf' on 'm-lrkdb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'm-lrkdb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Make sure ohasd is not running:
[root@m-lrkdb3 bin]# ps -ef|grep init.ohasd
root     18831     1  0 14:52 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
The High Availability Services daemon ohasd is not stopped by ./crsctl stop crs -f; it only stopped when we ran rootcrs.pl -deconfig -force -verbose:
[root@m-lrkdb3 bin]# ps -ef|grep init.ohasd
Now run root.sh again:
[root@m-lrkdb3 bin]# /u01/app/11.2.0/grid_3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid_3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid_3/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node m-lrkdb2, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
4.5 Cluster verification
[root@m-lrkdb3 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@m-lrkdb3:bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora...._GIR.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora...._KTB.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora...._LRK.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....GIRR.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....KTBR.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....LRKR.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora...._OCR.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....TEST.dg ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    m-lrkdb2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    m-lrkdb3
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    m-lrkdb1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    m-lrkdb1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    m-lrkdb1
ora....B1.lsnr application    0/5    0/0    ONLINE    ONLINE    m-lrkdb1
ora....db1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....db1.ons application    0/3    0/0    ONLINE    ONLINE    m-lrkdb1
ora....db1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    m-lrkdb2
ora....B2.lsnr application    0/5    0/0    ONLINE    ONLINE    m-lrkdb2
ora....db2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....db2.ons application    0/3    0/0    ONLINE    ONLINE    m-lrkdb2
ora....db2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb2
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    m-lrkdb3
ora....B3.lsnr application    0/5    0/0    ONLINE    ONLINE    m-lrkdb3
ora....db3.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....db3.ons application    0/3    0/0    ONLINE    ONLINE    m-lrkdb3
ora....db3.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb3
ora.mgirupg.db ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora.mktbupg.db ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora.mlrkupg.db ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora.oagirr.db  ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora.oaktbr.db  ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora.oalrkr.db  ora....se.type 0/2    0/1    ONLINE    ONLINE    m-lrkdb2
ora.oc4j       ora.oc4j.type  0/5    0/0    ONLINE    ONLINE    m-lrkdb1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    m-lrkdb1
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    m-lrkdb1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb3
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    m-lrkdb1
[grid@m-lrkdb3:bin]$ olsnodes -n
m-lrkdb2        1
m-lrkdb1        2
m-lrkdb3        3
[grid@m-lrkdb3:bin]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER
[grid@m-lrkdb3:bin]$ srvctl status asm -a
ASM is running on m-lrkdb2,m-lrkdb3,m-lrkdb1
ASM is enabled.
[grid@m-lrkdb3:bin]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3488
         Available space (kbytes) :     258632
         ID                       :  982593816
         Device/File Name         :  +DATA_OCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
[grid@m-lrkdb3:bin]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name          Disk group
--  -----    -----------------                ---------          ---------
 1. ONLINE   1048c830b3614fbcbf9f97e4d5bd0e07 (/u02/nfsdg/disk1) [DATA_OCR]
 2. ONLINE   520718ade42e4ffcbfc0d6e00fd87f06 (/u02/nfsdg/disk2) [DATA_OCR]
 3. ONLINE   53ab808324774fe0bf7a421d687fca5d (/u02/nfsdg/disk3) [DATA_OCR]
Located 3 voting disk(s).
[oracle@m-lrkdb3:oracle]$ ps -ef|grep pmon
grid     23064     1  0 15:22 ?        00:00:02 asm_pmon_+ASM3
oracle   29814     1  0 17:18 ?        00:00:00 ora_pmon_mgirupg3
oracle   30132     1  0 17:22 ?        00:00:00 ora_pmon_mktbupg3
oracle   30464     1  0 17:25 ?        00:00:00 ora_pmon_mlrkupg3
oracle   30897     1  0 17:33 ?        00:00:00 ora_pmon_oagirr3
oracle   31235     1  0 17:37 ?        00:00:00 ora_pmon_oaktbr3
oracle   31613     1  0 17:42 ?        00:00:00 ora_pmon_oalrkr3
oracle   31736 31700  0 17:42 pts/0    00:00:00 grep pmon
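As a further check, srvctl can confirm per database on which nodes instances are running, for example:
[oracle@m-lrkdb3:oracle]$ srvctl status database -d mgirupg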
5 Clone Oracle binaries
5.1 Clone binaries with addNode.sh
[oracle@m-lrkdb2:oracle]$ cd $ORACLE_HOME/oui/bin
[oracle@m-lrkdb2:bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={m-lrkdb3}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "m-lrkdb2"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "m-lrkdb3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 9658 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes m-lrkdb1,m-lrkdb3 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
  Source: /u01/app/oracle/11.2.0/db_3
  New Nodes
Space Requirements
  New Nodes
    m-lrkdb3
      /u01: Required 4.93GB : Available 27.90GB
Installed Products
  Product Names
Installed
Products
Product
Names
Oracle
Database 11g 11.2.0.3.0
Sun
JDK 1.5.0.30.03
Installer
SDK Component 11.2.0.3.0
Oracle
One-Off Patch Installer 11.2.0.1.7
Oracle
Universal Installer 11.2.0.3.0
Oracle
USM Deconfiguration 11.2.0.3.0
Oracle
Configuration Manager Deconfiguration 10.3.1.0.0
Oracle
DBCA Deconfiguration 11.2.0.3.0
Oracle
RAC Deconfiguration 11.2.0.3.0
Oracle
Database Deconfiguration 11.2.0.3.0
Oracle
Configuration Manager Client 10.3.2.1.0
Oracle
Configuration Manager 10.3.5.0.1
Oracle
ODBC Driverfor Instant Client 11.2.0.3.0
LDAP
Required Support Files 11.2.0.3.0
SSL
Required Support Files for InstantClient 11.2.0.3.0
Bali
Share 1.1.18.0.0
Oracle
Extended Windowing Toolkit 3.4.47.0.0
Oracle
JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle
Real Application Testing 11.2.0.3.0
Oracle
Database Vault J2EE Application 11.2.0.3.0
Oracle
Label Security 11.2.0.3.0
Oracle
Data Mining RDBMS Files 11.2.0.3.0
Oracle
OLAP RDBMS Files 11.2.0.3.0
Oracle
OLAP API 11.2.0.3.0
Platform
Required Support Files 11.2.0.3.0
Oracle
Database Vault option 11.2.0.3.0
Oracle
RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus
Required Support Files 11.2.0.3.0
Oracle
Display Fonts 9.0.2.0.0
Oracle
Ice Browser 5.2.3.6.0
Oracle
JDBC Server Support Package 11.2.0.3.0
Oracle
SQL Developer 11.2.0.3.0
Oracle
Application Express 11.2.0.3.0
XDK
Required Support Files 11.2.0.3.0
RDBMS
Required Support Files for Instant Client 11.2.0.3.0
SQLJ
Runtime 11.2.0.3.0
Database
Workspace Manager 11.2.0.3.0
RDBMS
Required Support Files Runtime 11.2.0.3.0
Oracle
Globalization Support 11.2.0.3.0
Exadata
Storage Server 11.2.0.1.0
Provisioning
Advisor Framework 10.2.0.4.3
Enterprise
Manager Database Plugin -- Repository Support 11.2.0.3.0
Enterprise
Manager Repository Core Files 10.2.0.4.4
Enterprise
Manager Database Plugin -- Agent Support 11.2.0.3.0
Enterprise
Manager Grid Control Core Files 10.2.0.4.4
Enterprise
Manager Common Core Files 10.2.0.4.4
Enterprise
Manager Agent Core Files 10.2.0.4.4
RDBMS
Required Support Files 11.2.0.3.0
regexp
2.1.9.0.0
Agent
Required Support Files 10.2.0.4.3
Oracle
11g Warehouse Builder Required Files 11.2.0.3.0
Oracle
Notification Service (eONS) 11.2.0.3.0
Oracle
Text Required Support Files 11.2.0.3.0
Parser
Generator Required Support Files 11.2.0.3.0
Oracle
Database 11g Multimedia Files 11.2.0.3.0
Oracle
Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle
Multimedia Annotator 11.2.0.3.0
Oracle
JDBC/OCI Instant Client 11.2.0.3.0
Oracle
Multimedia Locator RDBMS Files 11.2.0.3.0
Precompiler
Required Support Files 11.2.0.3.0
Oracle
Core Required Support Files 11.2.0.3.0
Sample
Schema Data 11.2.0.3.0
Oracle
Starter Database 11.2.0.3.0
Oracle
Message Gateway Common Files 11.2.0.3.0
Oracle
XML Query 11.2.0.3.0
XML
Parser for Oracle JVM 11.2.0.3.0
Oracle
Help For Java 4.2.9.0.0
Installation
Plugin Files 11.2.0.3.0
Enterprise
Manager Common Files 10.2.0.4.3
Expat
libraries 2.0.1.0.1
Deinstallation
Tool 11.2.0.3.0
Oracle
Quality of Service Management (Client) 11.2.0.3.0
Perl
Modules 5.10.0.0.1
JAccelerator
(COMPANION) 11.2.0.3.0
Oracle
Containers for Java 11.2.0.3.0
Perl
Interpreter 5.10.0.0.2
Oracle
Net Required Support Files 11.2.0.3.0
Secure
Socket Layer 11.2.0.3.0
Oracle
Universal Connection Pool 11.2.0.3.0
Oracle
JDBC/THIN Interfaces 11.2.0.3.0
Oracle
Multimedia Client Option 11.2.0.3.0
Oracle
Java Client 11.2.0.3.0
Character
Set Migration Utility 11.2.0.3.0
Oracle
Code Editor 1.2.1.0.0I
PL/SQL
Embedded Gateway 11.2.0.3.0
OLAP
SQL Scripts 11.2.0.3.0
Database
SQL Scripts 11.2.0.3.0
Oracle
Locale Builder 11.2.0.3.0
Oracle
Globalization Support 11.2.0.3.0
SQL*Plus
Files for Instant Client 11.2.0.3.0
Required
Support Files 11.2.0.3.0
Oracle
Database User Interface 2.2.13.0.0
Oracle
ODBC Driver 11.2.0.3.0
Oracle
Notification Service 11.2.0.3.0
XML
Parser for Java 11.2.0.3.0
Oracle
Security Developer Tools 11.2.0.3.0
Oracle
Wallet Manager 11.2.0.3.0
Cluster
Verification Utility Common Files 11.2.0.3.0
Oracle
Clusterware RDBMS Files 11.2.0.3.0
Oracle
UIX 2.2.24.6.0
Enterprise
Manager plugin Common Files 11.2.0.3.0
HAS
Common Files 11.2.0.3.0
Precompiler
Common Files 11.2.0.3.0
Installation
Common Files 11.2.0.3.0
Oracle
Help for the Web 2.0.14.0.0
Oracle
LDAP administration 11.2.0.3.0
Buildtools
Common Files 11.2.0.3.0
Assistant
Common Files 11.2.0.3.0
Oracle
Recovery Manager 11.2.0.3.0
PL/SQL
11.2.0.3.0
Generic
Connectivity Common Files 11.2.0.3.0
Oracle
Database Gateway for ODBC 11.2.0.3.0
Oracle
Programmer 11.2.0.3.0
Oracle
Database Utilities 11.2.0.3.0
Enterprise
Manager Agent 10.2.0.4.3
SQL*Plus
11.2.0.3.0
Oracle
Netca Client 11.2.0.3.0
Oracle
Multimedia Locator 11.2.0.3.0
Oracle
Call Interface (OCI) 11.2.0.3.0
Oracle
Multimedia 11.2.0.3.0
Oracle
Net 11.2.0.3.0
Oracle
XML Development Kit 11.2.0.3.0
Database
Configuration and Upgrade Assistants 11.2.0.3.0
Oracle
JVM 11.2.0.3.0
Oracle
Advanced Security 11.2.0.3.0
Oracle
Internet Directory Client 11.2.0.3.0
Oracle
Enterprise Manager Console DB 11.2.0.3.0
HAS
Files for DB 11.2.0.3.0
Oracle
Net Listener 11.2.0.3.0
Oracle
Text 11.2.0.3.0
Oracle
Net Services 11.2.0.3.0
Oracle
Database 11g 11.2.0.3.0
Oracle
OLAP 11.2.0.3.0
Oracle
Spatial 11.2.0.3.0
Oracle
Partitioning 11.2.0.3.0
Enterprise
Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Tuesday, July 22, 2014 4:40:38 PM CEST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Tuesday, July 22, 2014 4:40:44 PM CEST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Tuesday, July 22, 2014 4:47:02 PM CEST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'm-lrkdb3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes m-lrkdb3
/u01/app/oracle/11.2.0/db_3/root.sh #On nodes m-lrkdb3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/11.2.0/db_3 was successful.
Please check '/tmp/silentInstall.log' for more details.
5.2 Execute root scripts
[root@m-lrkdb3 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@m-lrkdb3 ~]# /u01/app/oracle/11.2.0/db_3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/11.2.0/db_3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
5.3 Change Group Ownership of 'oracle' Binary
If your Oracle RAC is configured using Job Role Separation, the $ORACLE_HOME/bin/oracle binary may not have the proper group ownership after extending the Oracle Database software on the new node. This will prevent the Oracle Database software owner (oracle) from accessing the ASMLib driver or ASM disks on the new node, as stated in My Oracle Support [ID 1084186.1] and [ID 1054033.1]. For example, after extending the Oracle Database software, the oracle binary on the new node is owned by group oinstall instead of asmadmin:
On the new node to be added:
[oracle@m-lrkdb3:oracle]$ cd /u01/app/oracle/11.2.0/db_3/bin
[oracle@m-lrkdb3:bin]$ ls -l oracle
-rwsr-s--x 1 oracle oinstall 232538606 Jul 22 16:45 oracle
On an existing node:
[oracle@m-lrkdb2:bin]$ cd /u01/app/oracle/11.2.0/db_3/bin
[oracle@m-lrkdb2:bin]$ ls -l oracle
-rwsr-s--x 1 oracle asmadmin 232538606 Apr 18 2013 oracle
As the grid user, run the setasmgidwrap command to set the $ORACLE_HOME/bin/oracle binary to the proper group ownership:
[grid@m-lrkdb3:grid_3]$ cd bin
[grid@m-lrkdb3:bin]$ ./setasmgidwrap o=/u01/app/oracle/11.2.0/db_3/bin/oracle
[grid@m-lrkdb3:bin]$ ls -l /u01/app/oracle/11.2.0/db_3/bin/oracle
-rwsr-s--x 1 oracle asmadmin 232538606 Jul 22 16:45 /u01/app/oracle/11.2.0/db_3/bin/oracle
5.4 Post verification
See the output of the post nodeadd verification with cluvfy in appendix 4.
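That check is run from an existing node, along these lines:
[grid@m-lrkdb1:bin]$ ./cluvfy stage -post nodeadd -n m-lrkdb3 -verbose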
[root@m-lrkdb3 bin]# ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_GIR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_KTB.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_LRK.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_GIRR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_KTBR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_LRKR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OCR.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_TEST.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.FRA.dg
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.LISTENER.lsnr
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.asm
ONLINE ONLINE m-lrkdb1 Started
ONLINE ONLINE m-lrkdb2 Started
ONLINE ONLINE m-lrkdb3 Started
ora.gsd
OFFLINE OFFLINE m-lrkdb1
OFFLINE OFFLINE m-lrkdb2
OFFLINE OFFLINE m-lrkdb3
ora.net1.network
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.ons
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.registry.acfs
ONLINE ONLINE m-lrkdb1
ONLINE ONLINE m-lrkdb2
OFFLINE OFFLINE m-lrkdb3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE m-lrkdb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE m-lrkdb3
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE m-lrkdb1
ora.cvu
1 ONLINE ONLINE m-lrkdb1
ora.m-lrkdb1.vip
1 ONLINE ONLINE m-lrkdb1
ora.m-lrkdb2.vip
1 ONLINE ONLINE m-lrkdb2
ora.m-lrkdb3.vip
1 ONLINE ONLINE m-lrkdb3
ora.mgirupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.mktbupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.mlrkupg.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oagirr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oaktbr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oalrkr.db
1 ONLINE ONLINE m-lrkdb2 Open
2 ONLINE ONLINE m-lrkdb1 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oc4j
1 ONLINE ONLINE m-lrkdb1
ora.scan1.vip
1 ONLINE ONLINE m-lrkdb2
ora.scan2.vip
1 ONLINE ONLINE m-lrkdb3
ora.scan3.vip
1 ONLINE ONLINE m-lrkdb1
[root@m-lrkdb3 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
6 Test client connection
[tiabd@ubuntu.12.04:~/Dropbox/taot/sql]$ grep oaktbr3 tns*
tnsnames.ora.ictulrk_rac:oaktbr3=(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=m-lrkdb3-vip.lrk.org)(PORT=1521))(CONNECT_DATA=(UR=A)(SERVER=DEDICATED)(SERVICE_NAME=oaktbr.lrk.org)))
[tiabd@ubuntu.12.04:~/Dropbox/taot/sql]$ sqlplus ictu_dba@oaktbr3
SQL*Plus: Release 11.2.0.3.0 Production on Wed Jul 23 08:56:34 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select instance_name, host_name from v$instance;
INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
oaktbr3          m-lrkdb3.lrk.org
1 row selected.