RAC Attack - Oracle Cluster Database at Home/RAC Attack 12c/Check Cluster Status After GI Install

From Wikibooks, open books for an open world


  1. Once your Grid Infrastructure installation is finished, you can get the status of the cluster components:
  2. [oracle@collabn1 ~]$ . oraenv
     ORACLE_SID = [oracle] ? +ASM1
     [oracle@collabn1 ~]$ crsctl stat res -t
     --------------------------------------------------------------------------------
     Name           Target  State        Server                   State details
     --------------------------------------------------------------------------------
     Local Resources
     --------------------------------------------------------------------------------
     ora.DATA.dg
                    ONLINE  ONLINE       collabn1                 STABLE
                    ONLINE  ONLINE       collabn2                 STABLE
     ora.LISTENER.lsnr
                    ONLINE  ONLINE       collabn1                 STABLE
                    ONLINE  ONLINE       collabn2                 STABLE
     ora.asm
                    ONLINE  ONLINE       collabn1                 Started,STABLE
                    ONLINE  ONLINE       collabn2                 Started,STABLE
     ora.net1.network
                    ONLINE  ONLINE       collabn1                 STABLE
                    ONLINE  ONLINE       collabn2                 STABLE
     ora.ons
                    ONLINE  ONLINE       collabn1                 STABLE
                    ONLINE  ONLINE       collabn2                 STABLE
     --------------------------------------------------------------------------------
     Cluster Resources
     --------------------------------------------------------------------------------
     ora.LISTENER_SCAN1.lsnr
           1        ONLINE  ONLINE       collabn2                 STABLE
     ora.LISTENER_SCAN2.lsnr
           1        ONLINE  ONLINE       collabn1                 STABLE
     ora.LISTENER_SCAN3.lsnr
           1        ONLINE  ONLINE       collabn1                 STABLE
     ora.collabn1.vip
           1        ONLINE  ONLINE       collabn1                 STABLE
     ora.collabn2.vip
           1        ONLINE  ONLINE       collabn2                 STABLE
     ora.cvu
           1        ONLINE  ONLINE       collabn1                 STABLE
     ora.oc4j
           1        OFFLINE OFFLINE                               STABLE
     ora.scan1.vip
           1        ONLINE  ONLINE       collabn2                 STABLE
     ora.scan2.vip
           1        ONLINE  ONLINE       collabn1                 STABLE
     ora.scan3.vip
           1        ONLINE  ONLINE       collabn1                 STABLE
     --------------------------------------------------------------------------------
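When scripting this check, a quick way to spot trouble is to count resources that report OFFLINE in the `crsctl stat res -t` output. A minimal sketch of that filtering, run here against a small sample of the output above so it can execute anywhere (on a real node you would pipe the live command instead):

```shell
# Sample lines taken from the "crsctl stat res -t" output above.
# On a cluster node you would instead run:
#   crsctl stat res -t | grep -c OFFLINE
sample_output='ora.cvu 1 ONLINE ONLINE collabn1 STABLE
ora.oc4j 1 OFFLINE OFFLINE STABLE
ora.scan1.vip 1 ONLINE ONLINE collabn2 STABLE'

# grep -c counts matching lines, so a line showing "OFFLINE OFFLINE"
# still counts once.
offline_count=$(printf '%s\n' "$sample_output" | grep -c OFFLINE)
echo "Resources reporting OFFLINE: $offline_count"   # → Resources reporting OFFLINE: 1
```

Note that `ora.oc4j` being OFFLINE is expected at this stage, so a nonzero count is not necessarily a problem; the filter just narrows down what to look at.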
  3. Optional step: to make your nodes more tolerant of the high latency of a VirtualBox environment, you can increase the CRS timeouts so that the cluster does not fence (restart) a node as quickly.
  4. You'll need to stop the second node while applying the configuration to the first node.

     [oracle@collabn1 ~]$ ssh collabn2
     [oracle@collabn2 ~]$ su -
     Password:
     [root@collabn2 ~]# . oraenv
     ORACLE_SID = [root] ? +ASM2
     The Oracle base has been set to /u01/app/oracle
     [root@collabn2 ~]# crsctl stop crs
     CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'collabn2'
     CRS-2673: Attempting to stop 'ora.crsd' on 'collabn2'
     CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'collabn2'
     CRS-2673: Attempting to stop 'ora.DATA.dg' on 'collabn2'
     ...
     CRS-2677: Stop of 'ora.gipcd' on 'collabn2' succeeded
     CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'collabn2' has completed
     CRS-4133: Oracle High Availability Services has been stopped.
     [root@collabn2 ~]# exit
     logout
     [oracle@collabn2 ~]$ exit
     logout
     Connection to collabn2 closed.
     [oracle@collabn1 ~]$ su -
     Password:
     [root@collabn1 ~]# . oraenv
     ORACLE_SID = [root] ? +ASM1
     The Oracle base has been set to /u01/app/oracle
     [root@collabn1 ~]# crsctl get css misscount
     CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
     [root@collabn1 ~]# crsctl set css misscount 90
     CRS-4684: Successful set of parameter misscount to 90 for Cluster Synchronization Services.
     [root@collabn1 ~]# crsctl get css disktimeout
     CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.
     [root@collabn1 ~]# crsctl set css disktimeout 600
     CRS-4684: Successful set of parameter disktimeout to 600 for Cluster Synchronization Services.
     [root@collabn1 ~]# ssh collabn2
     root@collabn2's password:
     Last login: Tue Aug 6 16:19:56 2013 from 192.168.78.51
     [root@collabn2 ~]# . oraenv
     ORACLE_SID = [root] ? +ASM2
     The Oracle base has been set to /u01/app/oracle
     [root@collabn2 ~]# crsctl start crs
     CRS-4123: Oracle High Availability Services has been started.

     The start command returns the prompt in a few seconds; however, it can take several minutes before the whole stack is fully started.
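Because the stack keeps starting in the background after `crsctl start crs` returns, a polling loop is a convenient way to wait for it. Below is a minimal sketch of that pattern; the `check_ready` function is a hypothetical stand-in (it succeeds on the third call so the loop can run anywhere), and on a real node you would replace it with `crsctl check crs`:

```shell
# Hypothetical stand-in for "crsctl check crs": reports not-ready
# twice, then ready, so the polling loop below terminates.
attempts=0
check_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

# Poll until the stack is up, giving up after 60 tries.
tries=0
until check_ready; do
  tries=$((tries + 1))
  if [ "$tries" -ge 60 ]; then
    echo "timed out waiting for cluster stack"
    exit 1
  fi
  sleep 1   # on a real node, poll every 10-30 seconds instead
done
echo "stack is up after $attempts checks"   # → stack is up after 3 checks
```

The same loop works with any readiness command whose exit status distinguishes "up" from "still starting".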