7 Remove cluster targets from Enterprise Manager
8 Remove cluster node
In this chapter we remove cluster node m-lrkdb1. All steps in this chapter were also successfully executed for node m-lrkdb2.
8.1 Remove database instances
First we remove the instances with DBCA, as instructed in the Oracle documentation:
- Start DBCA on a node other than the node that hosts the instance that you want to delete. The database and the instance that you plan to delete should be running during this step.
- On the DBCA Welcome page, select Oracle Real Application Clusters Database and click Next. DBCA displays the Operations page.
- On the DBCA Operations page, select Instance Management and click Next. DBCA displays the Instance Management page.
- On the DBCA Instance Management page, select the instance to be deleted, select Delete Instance, and click Next.
- On the List of Cluster Databases page, select the Oracle RAC database from which to delete the instance, as follows:
- On the List of Cluster Database Instances page, DBCA displays the instances that are associated with the Oracle RAC database that you selected and the status of each instance. Select the instance to delete.
- Enter a user name and password for a database user that has SYSDBA privileges. Click Next.
- Click OK on the Confirmation dialog to proceed to delete the instance. DBCA displays a progress dialog showing that DBCA is deleting the instance. During this operation, DBCA removes the instance and the instance's Oracle Net configuration. When DBCA completes this operation, DBCA displays a dialog asking whether you want to perform another operation. Click No and exit DBCA, or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.
- Verify that the dropped instance's redo thread has been removed by using SQL*Plus on an existing node to query the GV$LOG view. If the redo thread is not disabled, then disable the thread. For example:
  SQL> ALTER DATABASE DISABLE THREAD 2;
- Verify that the instance has been removed from OCR by running the following command, where db_unique_name is the database unique name for your Oracle RAC database:
  srvctl config database -d db_unique_name
- If you are deleting more than one node, then repeat these steps to delete the instances from all the nodes that you are going to delete.
SYS@mgirupg>select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES
---------- ---------- ---------- ----------
1 1 3305 52428800
2 1 3304 52428800
5 3 1 52428800
6 3 2 52428800
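The v$log output above shows redo groups only for threads 1 and 3, so the redo thread of the dropped instance (thread 2) is gone. As an extra check you could also query the thread status directly and disable the thread if DBCA left it enabled; a minimal sketch, reusing the statement from the documentation excerpt above:
SQL> select thread#, status, enabled from v$thread;
SQL> -- only if thread 2 (the thread of the deleted instance) is still listed and reported as enabled:
SQL> alter database disable thread 2;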
[oracle@m-lrkdb3:oracle]$ srvctl config database -d mgirupg
Database unique name: mgirupg
Database name: mgirupg
Oracle home: /u01/app/oracle/11.2.0/db_3
Oracle user: oracle
Spfile: +DATA_GIR/mgirupg/spfilemgirupg.ora
Domain: lrk.org
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: mgirupg
Database instances: mgirupg1,mgirupg3
Disk Groups: DATA_GIR,FRA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
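The instance list no longer contains the deleted instance mgirupg2. As an optional extra check, you can verify which instances are actually running; a sketch, run against the same database from any remaining node:
# expected to report only the remaining instances (mgirupg1 and mgirupg3)
[oracle@m-lrkdb3:oracle]$ srvctl status database -d mgirupg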
8.2 Remove Oracle binaries
8.2.1 Remove listeners
Remove local listeners running from the Oracle home.
[oracle@m-lrkdb1:oracle]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid_3 on node(s) m-lrkdb1,m-lrkdb2,m-lrkdb3
End points: TCP:1521
There are no listeners running from the Oracle home, so there is nothing to remove here.
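Had a local listener been running from the database home, it could have been stopped and removed with srvctl before the deinstall. A minimal sketch, using a hypothetical listener name LISTENER_DB that does not exist in this cluster:
# illustrative only: LISTENER_DB is an assumed listener name, not present in this configuration
[oracle@m-lrkdb1:oracle]$ srvctl stop listener -l LISTENER_DB -n m-lrkdb1
[oracle@m-lrkdb1:oracle]$ srvctl remove listener -l LISTENER_DB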
8.2.2 Update Oracle Inventory - (Node Being Removed)
[oracle@m-lrkdb1:bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={m-lrkdb1}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 10090 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Oracle home name.
[oracle@m-lrkdb1:bin]$ view /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="OraDb11g_home3" LOC="/u01/app/oracle/11.2.0/db_3" TYPE="O" IDX="16">
<NODE_LIST>
<NODE NAME="m-lrkdb1"/>
</NODE_LIST>
</HOME>
...
[oracle@m-lrkdb3:bin]$ view /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="OraDb11g_home3" LOC="/u01/app/oracle/11.2.0/db_3" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="m-lrkdb2"/>
<NODE NAME="m-lrkdb1"/>
<NODE NAME="m-lrkdb3"/>
</NODE_LIST>
</HOME>
...
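A quick way to check a home's node list without opening the file in an editor is to grep for the home name (a sketch; OraDb11g_home3 is the home name shown above, and -A 4 simply prints enough context lines to cover the node list):
[oracle@m-lrkdb1:bin]$ grep -A 4 'OraDb11g_home3' /u01/app/oraInventory/ContentsXML/inventory.xml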
8.3 De-install Oracle Home
Before attempting to de-install the Oracle Database software, review the /etc/oratab file on the node to be deleted and remove any entries that contain a database instance running out of the Oracle home being de-installed. Do not remove any +ASM entries.
+ASM3:/u01/app/11.2.0/grid_3:N
agent:/u01/app/oracle/agent12c/core/12.1.0.2.0:N
mktbupg2:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
mlrkupg2:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
mgirupg2:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
oagir2r:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
oaktbr2:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
oalrkr2:/u01/app/oracle/11.2.0/db_3:N # line added by Agent
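One way to strip the database entries in bulk is sketched below; this is not part of the official procedure and assumes the database home path shown in the listing above. Back up the file and review the result before continuing; afterwards only the +ASM3 and agent entries should remain.
[root@m-lrkdb1 ~]# cp /etc/oratab /etc/oratab.bak    # keep a backup copy
[root@m-lrkdb1 ~]# sed -i '\#/u01/app/oracle/11.2.0/db_3#d' /etc/oratab    # remove every entry pointing at the database home being de-installed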
When using a non-shared Oracle home (as is the case in this example guide), run deinstall as the Oracle Database software owner from the node to be removed in order to delete the Oracle Database software.
[oracle@m-lrkdb1]$ cd /u01/app/oracle/11.2.0/db_3/deinstall
[oracle@m-lrkdb1:deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
...
...
...
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check9995.log
Oracle Configuration Manager check END
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid_3
The cluster node(s) on which the Oracle home deinstallation will be performed are:m-lrkdb1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'm-lrkdb1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/11.2.0/db_3
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists and CCR is configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-07-23_10-51-30-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-07-23_10-51-30-AM.err'
######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2014-07-23_10-52-04-AM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-07-23_10-53-23-AM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-07-23_10-53-23-AM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean9995.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/11.2.0/db_3' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/11.2.0/db_3' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/oracle/11.2.0/db_1'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-07-23_10-50-32AM' on node 'm-lrkdb1'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
Cleaning the CCR configuration by executing its binaries
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_3' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/11.2.0/db_3' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
8.4 Update Oracle Inventory - (All Remaining Nodes)
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Oracle software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
[oracle@m-lrkdb2:bin]$ cd $ORACLE_HOME/oui/bin
[oracle@m-lrkdb2:bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={m-lrkdb2,m-lrkdb3}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 9710 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@m-lrkdb2:bin]$
Review the inventory.xml file on each remaining node in the cluster to verify that the node list for the Oracle home no longer includes the node being removed.
[oracle@m-lrkdb2:bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
[oracle@m-lrkdb3:bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="OraDb11g_home3" LOC="/u01/app/oracle/11.2.0/db_3" TYPE="O" IDX="16">
<NODE_LIST>
<NODE NAME="m-lrkdb2"/>
<NODE NAME="m-lrkdb3"/>
</NODE_LIST>
</HOME>
...
9 Remove Cluster Node
9.1 Unpin Node
Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
[grid@m-lrkdb2:grid]$ olsnodes -s -t
m-lrkdb2 Active Unpinned
m-lrkdb1 Active Unpinned
m-lrkdb3 Active Unpinned
If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
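All three nodes are reported as Unpinned here, so no action is needed. For reference, unpinning would look roughly like the sketch below, run as root from the Grid home on a node that stays in the cluster:
# illustrative only: not executed in this procedure because the node is already unpinned
[root@m-lrkdb2 ~]# /u01/app/11.2.0/grid_3/bin/crsctl unpin css -n m-lrkdb1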
9.2 Disable applications and daemons
Note: This step is required only if you are using Oracle Clusterware 11g release 2 (11.2.0.1) or 11g release 2 (11.2.0.2).
Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows.
Note: Before you run this command, you must stop the EMAGENT, as follows:
$ emctl stop dbconsole
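The hosts in this guide run an EM 12c management agent (see the agent entry in /etc/oratab) rather than Database Control, so in this environment stopping the agent would presumably be done from the agent installation instead; a sketch, where the exact emctl path depends on how the agent was installed:
# assumption: emctl is taken from the agent home listed in /etc/oratab; an agent_inst/bin/emctl may exist instead
[oracle@m-lrkdb1:oracle]$ /u01/app/oracle/agent12c/core/12.1.0.2.0/bin/emctl stop agent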
9.3 Run rootcrs.pl -deconfig
If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting.
[root@m-lrkdb1 ~]# cd /u01/app/11.2.0/grid_3
[root@m-lrkdb1 grid_3]# cd crs/install
[root@m-lrkdb1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/10.19.62.0/255.255.255.0/eth0, type static
VIP exists: /10.19.62.49/10.19.62.49/10.19.62.0/255.255.255.0/eth0, hosting node m-lrkdb1
VIP exists: /10.19.62.51/10.19.62.51/10.19.62.0/255.255.255.0/eth0, hosting node m-lrkdb2
VIP exists: /m-lrkdb3-vip/10.19.62.86/10.19.62.0/255.255.255.0/eth0, hosting node m-lrkdb3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'm-lrkdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_GIR.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_KTB.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_LRK.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_OA_GIRR.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_OA_KTBR.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_OA_LRKR.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_OCR.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.DATA_TEST.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.DATA_OA_GIRR.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_LRK.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_KTB.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_GIR.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_OA_LRKR.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_OA_KTBR.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.DATA_TEST.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'm-lrkdb1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'm-lrkdb3'
CRS-2676: Start of 'ora.oc4j' on 'm-lrkdb3' succeeded
CRS-2677: Stop of 'ora.DATA_OCR.dg' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.asm' on 'm-lrkdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'm-lrkdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'm-lrkdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.cssd' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'm-lrkdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.gipcd' on 'm-lrkdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'm-lrkdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'm-lrkdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'm-lrkdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
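As an optional sanity check (not part of the official procedure), confirm that no Clusterware daemons are left running on the deconfigured node:
# expect no output apart from the grep itself
[root@m-lrkdb1 ~]# ps -ef | grep -E 'ohasd|ocssd|crsd|evmd' | grep -v grep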
9.4 Delete Node from the Cluster
From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:
[root@m-lrkdb2 ~]# cd /u01/app/11.2.0/grid_3/bin
[root@m-lrkdb2 bin]# ./crsctl delete node -n m-lrkdb1
CRS-4661: Node m-lrkdb1 successfully deleted.
[root@m-lrkdb2 bin]# ./olsnodes -t -s
m-lrkdb2 Active Unpinned
m-lrkdb3 Active Unpinned
9.5 Update Inventory
On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory, where node_to_be_deleted is the name of the node that you are deleting:
[grid@m-lrkdb1:bin]$ cd /u01/app/11.2.0/grid_3/oui/bin
[grid@m-lrkdb1:bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid_3 "CLUSTER_NODES={m-lrkdb1}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 10236 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Grid home name.
[grid@m-lrkdb1:bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid_3" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="m-lrkdb1"/>
</NODE_LIST>
</HOME>
...
9.6 De-install Oracle Grid Infrastructure Software
The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the Oracle Grid Infrastructure software.
[grid@m-lrkdb3:grid]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid_3" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="m-lrkdb2"/>
<NODE NAME="m-lrkdb1"/>
<NODE NAME="m-lrkdb3"/>
</NODE_LIST>
</HOME>
...
The Deinstallation Tool (deinstall) stops Oracle software and removes Oracle software and configuration files from the operating system. It is available on the installation media before installation and in the Oracle home directory after installation, in the path $ORACLE_HOME/deinstall. The tool is also available for download from Oracle Technology Network (OTN).
When run with the -local flag, deinstall deconfigures and deinstalls the Oracle software on the local node (the node where deinstall is run) for non-shared home directories.
Oracle recommends that you run the deinstall command as the Oracle Grid Infrastructure installation owner or the grid user that owns Oracle Clusterware.
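If you want to preview what the tool would remove before deconfiguring anything, deinstall also supports a check-only run; a sketch, not executed in this guide:
# illustrative only: reports the configuration that would be removed without changing anything
[grid@m-lrkdb1:deinstall]$ ./deinstall -local -checkonly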
[grid@m-lrkdb1:bin]$ cd /u01/app/11.2.0/grid_3/deinstall/
[grid@m-lrkdb1:deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid_3
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: m-lrkdb1
Checking for sufficient temp space availability on node(s) : 'm-lrkdb1'
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "m-lrkdb1"[m-lrkdb1-vip]
> ENTER
The following information can be collected by running "/sbin/ifconfig -a" on node "m-lrkdb1"
Enter the IP netmask of Virtual IP "10.19.62.49" on node "m-lrkdb1"[255.255.255.0]
> ENTER
Enter the network interface name on which the virtual IP address "10.19.62.49" is active
> ENTER
Enter an address or the name of the virtual IP[]
> ENTER
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-07-23_11-34-57-AM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER
At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2014-07-23_11-35-23-AM.log
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:m-lrkdb1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'm-lrkdb1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid_3
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-07-23_11-33-15-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-07-23_11-33-15-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2014-07-23_11-35-36-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-07-23_11-35-36-AM.log
De-configuring RAC listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener on node "m-lrkdb1": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "m-lrkdb1".
/tmp/deinstall2014-07-23_11-32-31AM/perl/bin/perl -I/tmp/deinstall2014-07-23_11-32-31AM/perl/lib -I/tmp/deinstall2014-07-23_11-32-31AM/crs/install /tmp/deinstall2014-07-23_11-32-31AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-07-23_11-32-31AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
Run the command as root in another terminal:
[root@m-lrkdb1 ~]# /tmp/deinstall2014-07-23_11-32-31AM/perl/bin/perl -I/tmp/deinstall2014-07-23_11-32-31AM/perl/lib -I/tmp/deinstall2014-07-23_11-32-31AM/crs/install /tmp/deinstall2014-07-23_11-32-31AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-07-23_11-32-31AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2014-07-23_11-32-31AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Back in the deinstall session, press Enter to continue.
<----------------------------------------
Remove the directory: /tmp/deinstall2014-07-23_11-32-31AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid_3' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid_3' on the local node : Done
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-07-23_11-32-31AM' on node 'm-lrkdb1'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "m-lrkdb1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid_3' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid_3' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Manually delete the following files as root:
[root@m-lrkdb1 ~]# rm -rf /etc/oraInst.loc
[root@m-lrkdb1 ~]# rm -rf /opt/ORCLfmap
[root@m-lrkdb1 ~]# rm -rf /u01/app/11.2.0
[root@m-lrkdb1 ~]# rm -rf /u01/app/oracle
9.7 Update Oracle Inventory
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.
[grid@m-lrkdb2:bin]$ cd /u01/app/11.2.0/grid_3/oui/bin
[grid@m-lrkdb2:bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid_3 "CLUSTER_NODES={m-lrkdb2,m-lrkdb3}" CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 9711 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
9.8 Verify New Cluster Configuration
Run the following CVU command to verify that the specified node has been successfully deleted from the cluster:
[grid@m-lrkdb2:grid]$ cluvfy stage -post nodedel -n m-lrkdb1 -verbose
Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "m-lrkdb2"
The Oracle Clusterware is healthy on node "m-lrkdb3"
CRS integrity check passed
Result:
Node removal check passed
Post-check for node removal was successful.
[grid@m-lrkdb3:bin]$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_GIR.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_KTB.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_LRK.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_GIRR.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_KTBR.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OA_LRKR.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_OCR.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.DATA_TEST.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.FRA.dg
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.LISTENER.lsnr
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.asm
ONLINE ONLINE m-lrkdb2 Started
ONLINE ONLINE m-lrkdb3 Started
ora.gsd
OFFLINE OFFLINE m-lrkdb2
OFFLINE OFFLINE m-lrkdb3
ora.net1.network
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.ons
ONLINE ONLINE m-lrkdb2
ONLINE ONLINE m-lrkdb3
ora.registry.acfs
ONLINE ONLINE m-lrkdb2
OFFLINE OFFLINE m-lrkdb3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE m-lrkdb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE m-lrkdb3
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE m-lrkdb2
ora.cvu
1 ONLINE ONLINE m-lrkdb3
ora.m-lrkdb2.vip
1 ONLINE ONLINE m-lrkdb2
ora.m-lrkdb3.vip
1 ONLINE ONLINE m-lrkdb3
ora.mgirupg.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.mktbupg.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.mlrkupg.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oagirr.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oaktbr.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oalrkr.db
1 ONLINE ONLINE m-lrkdb2 Open
3 ONLINE ONLINE m-lrkdb3 Open
ora.oc4j
1 ONLINE ONLINE m-lrkdb3
ora.scan1.vip
1 ONLINE ONLINE m-lrkdb2
ora.scan2.vip
1 ONLINE ONLINE m-lrkdb3
ora.scan3.vip
1 ONLINE ONLINE m-lrkdb2
[root@m-lrkdb3 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@m-lrkdb3:bin]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3392
Available space (kbytes) : 258728
ID : 982593816
Device/File Name : +DATA_OCR
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
[grid@m-lrkdb3:bin]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 1048c830b3614fbcbf9f97e4d5bd0e07 (/u02/nfsdg/disk1) [DATA_OCR]
2. ONLINE 520718ade42e4ffcbfc0d6e00fd87f06 (/u02/nfsdg/disk2) [DATA_OCR]
3. ONLINE 53ab808324774fe0bf7a421d687fca5d (/u02/nfsdg/disk3) [DATA_OCR]
Located 3 voting disk(s).
10 Post Actions
10.1 Run DBCA to add instances
Run DBCA from an existing node to add new database instances to the new nodes.
See the DBCA screenshots in the document add_node_oel5-oel6_screenshots_add_instances.docx.
[oracle@m-lrkdb3]$ dbca
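As an alternative to the GUI, the instance can presumably also be added with DBCA in silent mode; a sketch with illustrative values only (the new node name is a placeholder, and the instance and global database names are assumptions based on this guide):
# hypothetical example: replace <new_node> and <password>; instance name assumed to be mgirupg2, domain lrk.org
[oracle@m-lrkdb3]$ dbca -silent -addInstance -nodeList <new_node> -gdbName mgirupg.lrk.org -instanceName mgirupg2 -sysDBAUserName sys -sysDBAPassword <password>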
10.2 Install EM Agent and discover cluster targets
In EMCC 12c we can easily deploy a new 12c management agent and discover all cluster targets again.