
The Four Scripts Run During RAC Installation and What They Do


Four scripts must be run during a RAC installation:

1) $ORACLE_BASE/oraInventory/orainstRoot.sh (run at the end of the clusterware installation)

2) $CRS_HOME/root.sh (run at the end of the clusterware installation)

3) $CRS_HOME/bin/vipca.sh (invoked automatically when $CRS_HOME/root.sh runs on the second node)

4) $ORACLE_HOME/root.sh (run after the database software is installed)

1. The orainstRoot.sh script

1.1 Running orainstRoot.sh

root@node2 #/oracle/oraInventory/orainstRoot.sh

Changing permissions of /oracle/oraInventory to 770.

Changing groupname of /oracle/oraInventory to oinstall.

The execution of the script is complete

1.2 Contents of orainstRoot.sh

root@node1 # more /oracle/oraInventory/orainstRoot.sh

#!/bin/sh

if [ ! -d "/var/opt/oracle" ]; then
  mkdir -p /var/opt/oracle;
fi
if [ -d "/var/opt/oracle" ]; then
  chmod 755 /var/opt/oracle;
fi
if [ -f "/oracle/oraInventory/oraInst.loc" ]; then
  cp /oracle/oraInventory/oraInst.loc /var/opt/oracle/oraInst.loc;
  chmod 644 /var/opt/oracle/oraInst.loc;
else
  INVPTR=/var/opt/oracle/oraInst.loc
  INVLOC=/oracle/oraInventory
  GRP=oinstall
  PTRDIR="`dirname $INVPTR`";
  # Create the software inventory location pointer file
  if [ ! -d "$PTRDIR" ]; then
    mkdir -p $PTRDIR;
  fi
  echo "Creating the Oracle inventory pointer file ($INVPTR)";
  echo inventory_loc=$INVLOC > $INVPTR
  echo inst_group=$GRP >> $INVPTR
  chmod 644 $INVPTR
  # Create the inventory directory if it doesn't exist
  if [ ! -d "$INVLOC" ]; then
    echo "Creating the Oracle inventory directory ($INVLOC)";
    mkdir -p $INVLOC;
  fi
fi
echo "Changing permissions of /oracle/oraInventory to 770.";
chmod -R 770 /oracle/oraInventory;
if [ $? != 0 ]; then
  echo "OUI-35086:WARNING: chmod of /oracle/oraInventory to 770 failed!";
fi
echo "Changing groupname of /oracle/oraInventory to oinstall.";
chgrp oinstall /oracle/oraInventory;
if [ $? != 0 ]; then
  echo "OUI-10057:WARNING: chgrp of /oracle/oraInventory to oinstall failed!";
fi
echo "The execution of the script is complete"

As the script shows, it mainly creates the /var/opt/oracle directory (if it does not already exist), writes the oraInst.loc file there (which records the location of oraInventory and its owning group), and then changes the permissions and group of oraInventory.
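The pointer-file format is simple enough to read back by hand. The sketch below is a minimal illustration, not part of the original script: it writes a scratch copy of an oraInst.loc-style file and extracts the two keys the way Oracle tools locate the inventory. A temporary file stands in for /var/opt/oracle/oraInst.loc so it runs outside an Oracle host.

```shell
#!/bin/sh
# Sketch: parse inventory_loc and inst_group out of an oraInst.loc-style
# pointer file. The temp file mirrors the contents shown in the article.
LOC_FILE=$(mktemp)
cat > "$LOC_FILE" <<'EOF'
inventory_loc=/oracle/oraInventory
inst_group=oinstall
EOF

# sed -n '...p' prints only the lines it rewrites, stripping the key name
INVLOC=$(sed -n 's/^inventory_loc=//p' "$LOC_FILE")
GRP=$(sed -n 's/^inst_group=//p' "$LOC_FILE")
echo "inventory at $INVLOC, owned by group $GRP"
rm -f "$LOC_FILE"
```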

root@node2 # ls -rlt /var/opt/oracle/

total 2

-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc

root@node2 # more oraInst.loc

inventory_loc=/oracle/oraInventory

inst_group=oinstall

Run the same script on the other node:

root@node1 #/oracle/oraInventory/orainstRoot.sh

Changing permissions of /oracle/oraInventory to 770.

Changing groupname of /oracle/oraInventory to oinstall.

The execution of the script is complete

2. The root.sh script

2.1 Running root.sh

root@node2 #/oracle/crs/root.sh

WARNING: directory '/oracle' is not owned by root

Checking to see if Oracle CRS stack is already configured

Checking to see if any 9i GSD is up

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/oracle' is not owned by root

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 0: node2 node2-priv node2

node 1: node1 node1-priv node1

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /oracle/ocrcfg1

Format of 1 voting devices complete.

Startup will be queued to init within 30 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

node2

CSS is inactive on these nodes.

node1

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

The output shows that this script performs the CRS configuration: it initializes the OCR, formats the voting disk, updates /etc/inittab, starts the CSS daemon, and creates the ocr.loc file plus the scls_scr and oprocd directories under /var/opt/oracle/.

2.2 Checking the CRS processes and /etc/inittab shows the changes on the node.

root@node2 # ps -ef|grep crs|grep -v grep

oracle 18212 18211 0 14:47:28 ? 0:00 /oracle/crs/bin/ocssd.bin

oracle 18191 18180 0 14:47:28 ? 0:00 /oracle/crs/bin/oclsmon.bin

oracle 17886 1 0 14:47:27 ? 0:00 /oracle/crs/bin/evmd.bin

oracle 18180 18092 0 14:47:28 ? 0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora

root 17889 1 0 14:47:27 ? 0:00 /oracle/crs/bin/crsd.bin reboot

oracle 18211 18093 0 14:47:28 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs

root@node2 # ls -rlt /var/opt/oracle/

total 8

-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc

drwxrwxr-x 5 root root 512 Apr 2 14:47 oprocd

drwxr-xr-x 3 root root 512 Apr 2 14:47 scls_scr

-rw-r--r-- 1 root oinstall 48 Apr 2 14:47 ocr.loc

Note: ocr.loc, scls_scr, and oprocd are newly created, but /var/opt/oracle/oratab is not created yet.

root@node1 # more inittab

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

#

# The /etc/inittab file controls the configuration of init(1M); for more

# information refer to init(1M) and inittab(4). It is no longer

# necessary to edit inittab(4) directly; administrators should use the

# Solaris Service Management Facility (SMF) to define services instead.

# Refer to smf(5) and the System Administration Guide for more

# information on SMF.

#

# For modifying parameters passed to ttymon, use svccfg(1m) to modify

# the SMF repository. For example:

#

# # svccfg

# svc:> select system/console-login

# svc:/system/console-login> setprop ttymon/terminal_type = "xterm"

# svc:/system/console-login> exit

#

#ident "@(#)inittab 1.41 04/12/14 SMI"

ap::sysinit:/sbin/autopush -f /etc/iu.ap

sp::sysinit:/sbin/soconfig -f /etc/sock2path

smf::sysinit:/lib/svc/bin/svc.startd >/dev/msglog 2<>/dev/msglog </dev/console

p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog

h1:3:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null

h2:3:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null

h3:3:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

root@node1 # ls -rlt /etc/inittab*

-rw-r--r-- 1 root root 1072 Nov 2 12:39 inittab.cssd

-rw-r--r-- 1 root root 1206 Mar 21 17:15 inittab.pre10203

-rw-r--r-- 1 root root 1006 Mar 21 17:15 inittab.nocrs10203

-rw-r--r-- 1 root root 1040 Apr 2 14:50 inittab.orig

-rw-r--r-- 1 root root 1040 Apr 2 14:50 inittab.no_crs

-rw-r--r-- 1 root root 1240 Apr 2 14:50 inittab

-rw-r--r-- 1 root root 1240 Apr 2 14:50 inittab.crs

The script copies the original inittab to inittab.no_crs, and saves a copy of the modified inittab as inittab.crs.
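That backup-and-append behavior can be sketched as follows. This runs against a scratch copy rather than the real /etc/inittab, and the single sysinit line stands in for the rest of the original file:

```shell
#!/bin/sh
# Sketch of the inittab handling: keep the pristine file as inittab.no_crs,
# append the three CRS respawn entries, save the result as inittab.crs.
WORK=$(mktemp -d)
printf 'ap::sysinit:/sbin/autopush -f /etc/iu.ap\n' > "$WORK/inittab"

cp "$WORK/inittab" "$WORK/inittab.no_crs"   # pre-CRS backup
cat >> "$WORK/inittab" <<'EOF'
h1:3:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF
cp "$WORK/inittab" "$WORK/inittab.crs"      # post-CRS copy

CRS_ENTRIES=$(grep -c 'respawn:/etc/init.d/init' "$WORK/inittab")
echo "CRS entries in inittab: $CRS_ENTRIES"
```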

2.3 Running $CRS_HOME/root.sh on the other node

root@node1 #/oracle/crs/root.sh

WARNING: directory '/oracle' is not owned by root

Checking to see if Oracle CRS stack is already configured

Checking to see if any 9i GSD is up

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/oracle' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 0: node2 node2-priv node2

node 1: node1 node1-priv node1

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 30 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

node2

node1

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...

Creating GSD application resource on (2) nodes...

Creating ONS application resource on (2) nodes...

Starting VIP application resource on (2) nodes...

Starting GSD application resource on (2) nodes...

Starting ONS application resource on (2) nodes...

Done.

3. root.sh performs one extra task on the second node

When run on the second node, root.sh additionally invokes $CRS_HOME/bin/vipca.sh.

vipca.sh configures the VIPs and starts the default CRS resources (six of them before any database is created), which adds three more background processes.

root@node1 # ps -ef|grep crs|grep -v grep

oracle 18347 17447 0 14:51:06 ? 0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/

oracle 17447 1 0 14:50:47 ? 0:00 /oracle/crs/bin/evmd.bin

oracle 17763 17756 0 14:50:48 ? 0:00 /oracle/crs/bin/ocssd.bin

oracle 17756 17643 0 14:50:48 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node1/cssd; /oracle/crs

oracle 21216 1 0 14:52:28 ? 0:00 /oracle/crs/opmn/bin/ons -d

oracle 21217 21216 0 14:52:28 ? 0:00 /oracle/crs/opmn/bin/ons -d

oracle 17771 17642 0 14:50:48 ? 0:00 /bin/sh -c cd /oracle/crs/log/node1/cssd/oclsmon; ulimit -c unlimited; /ora

oracle 17773 17771 0 14:50:48 ? 0:00 /oracle/crs/bin/oclsmon.bin

root 17449 1 0 14:50:47 ? 0:01 /oracle/crs/bin/crsd.bin reboot

root@node2 # ps -ef|grep crs|grep -v grep

oracle 18212 18211 0 14:47:28 ? 0:00 /oracle/crs/bin/ocssd.bin

oracle 27467 27466 0 14:52:25 ? 0:00 /oracle/crs/opmn/bin/ons -d

oracle 25252 17886 0 14:51:16 ? 0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/

oracle 27466 1 0 14:52:25 ? 0:00 /oracle/crs/opmn/bin/ons -d

oracle 18191 18180 0 14:47:28 ? 0:00 /oracle/crs/bin/oclsmon.bin

oracle 17886 1 0 14:47:27 ? 0:00 /oracle/crs/bin/evmd.bin

oracle 18180 18092 0 14:47:28 ? 0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora

root 17889 1 0 14:47:27 ? 0:00 /oracle/crs/bin/crsd.bin reboot

oracle 18211 18093 0 14:47:28 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs

The process list on node2 now shows the three additional background processes that appear once vipca.sh has completed.

root@node1 # crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora....c03.gsd application ONLINE ONLINE node1

ora....c03.ons application ONLINE ONLINE node1

ora....c03.vip application ONLINE ONLINE node1

ora....c04.gsd application ONLINE ONLINE node2

ora....c04.ons application ONLINE ONLINE node2

ora....c04.vip application ONLINE ONLINE node1
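Counting the ONLINE application resources in that listing confirms the six default nodeapps mentioned above. A small sketch, using a here-document copy of the output in place of a live crs_stat call so it runs without a cluster:

```shell
#!/bin/sh
# Sketch: count the nodeapps (vip/gsd/ons) that crs_stat -t reports as
# ONLINE. awk prints the rows whose Type is "application" and whose State
# field is ONLINE; wc -l counts them.
ONLINE_APPS=$(awk '$2 == "application" && $4 == "ONLINE"' <<'EOF' | wc -l
Name           Type        Target    State     Host
------------------------------------------------------------
ora....c03.gsd application ONLINE    ONLINE    node1
ora....c03.ons application ONLINE    ONLINE    node1
ora....c03.vip application ONLINE    ONLINE    node1
ora....c04.gsd application ONLINE    ONLINE    node2
ora....c04.ons application ONLINE    ONLINE    node2
ora....c04.vip application ONLINE    ONLINE    node1
EOF
)
echo "ONLINE application resources: $ONLINE_APPS"
```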

4. The last step of installing the database software (binaries): run $ORACLE_HOME/root.sh

root@node2 #$ORACLE_HOME/root.sh

Running Oracle10 root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /oracle/10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying coraenv to /usr/local/bin ...

Creating /var/opt/oracle/oratab file...

Entries will be added to the /var/opt/oracle/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

This script creates dbhome, oraenv, and coraenv in the chosen directory (/usr/local/bin by default) and creates the oratab file in /var/opt/oracle/.

root@node2# ls -rlt /usr/local/bin

total 18

-rwxr-xr-x 1 oracle root 2428 Apr 2 15:07 dbhome

-rwxr-xr-x 1 oracle root 2560 Apr 2 15:07 oraenv

-rwxr-xr-x 1 oracle root 2857 Apr 2 15:07 coraenv

root@node2 # ls -rlt /var/opt/oracle/

total 10

-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc

drwxrwxr-x 5 root root 512 Apr 2 14:47 oprocd

drwxr-xr-x 3 root root 512 Apr 2 14:47 scls_scr

-rw-r--r-- 1 root oinstall 48 Apr 2 14:47 ocr.loc

-rw-rw-r-- 1 oracle root 678 Apr 2 15:07 oratab

root@node1 # /oracle/10g/root.sh

Running Oracle10 root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /oracle/10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying coraenv to /usr/local/bin ...

Creating /var/opt/oracle/oratab file...

Entries will be added to the /var/opt/oracle/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.
