
Installing Oracle Clusterware 11g on Oracle VM Virtual Machines


Many users would like to get to know Oracle RAC but lack the hardware needed to install and explore it. Here we use Oracle VM to perform the installation on Xen virtual machines.

Oracle VM was officially released on November 12, 2007; the latest version at the time of writing is 2.1.1. It is virtualization software based on the open-source Xen hypervisor and supports both Oracle and non-Oracle applications. The related resources can be downloaded free of charge from OTN. In OVM, users can create virtual machines and virtual disks quickly in several ways.

1 Creating the Virtual Machines

Here we create two virtual machines to serve as the two nodes of the cluster.

· Create the virtual machines RAC1_13 and RAC2_13 from an Oracle Virtual Machine Template.

· Each virtual machine needs at least 1 GB of memory.

· Each machine should be created with two virtual network interfaces (one for the public network, one for the private interconnect).

· The OS release on the RAC node VMs should be identical; here both use Oracle Enterprise Linux Release 4 Update 5.

· When creation is complete, power on all of the nodes.

2 Preparing for the Clusterware Installation

2.1 Check the hardware environment (on all nodes)

The hardware should meet at least the following requirements (a combined check is sketched after the list):

· 1 GB of RAM

# grep MemTotal /proc/meminfo

· 1.5 GB of swap

# grep SwapTotal /proc/meminfo

· More than 400 MB free in /tmp

# df -k /tmp

· 650 MB of disk space for the Oracle Clusterware home

· 1 GB of disk space for the Oracle Clusterware files (OCR and voting disk)

If redundant copies are wanted, additional storage is needed.

· At least 4 GB of disk space for the Oracle Database home

· If a virtual machine runs short of disk space, this can be solved by adding another virtual disk (see section 2.9).
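For convenience, the individual checks above can be collected into a single script. A minimal sketch, using the thresholds from the list above (the installer's own checks may use slightly different limits):

#!/bin/sh
# Quick pre-installation resource check; thresholds follow the list above.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
tmp_kb=$(df -kP /tmp | tail -1 | awk '{print $4}')
[ "$mem_kb" -ge 1048576 ]  && echo "RAM OK"  || echo "RAM below 1 GB"
[ "$swap_kb" -ge 1572864 ] && echo "swap OK" || echo "swap below 1.5 GB"
[ "$tmp_kb" -ge 409600 ]   && echo "/tmp OK" || echo "less than 400 MB free in /tmp"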

2.2 Configure and check the software environment (on all nodes)

Check whether the following packages are installed:

binutils-2.15.92.0.2-18

elfutils-libelf-0.97-5

elfutils-libelf-devel-0.97.5

glibc-2.3.9.4-2.19

glibc-common-2.3.9.4-2.19

glibc-devel-2.3.9.4-2.19

gcc-3.4.5-2

gcc-c++-3.4.5-2

libaio-devel-0.3.105-2

libaio-0.3.105-2

libgcc-3.4.5

libstdc++-3.4.5-2

libstdc++-devel-3.4.5-2

make-3.80-5

A virtual machine created from a template may not have all of the required packages installed.

Before installing, check against the Oracle documentation that the required packages are present; a sample check is sketched below.
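All of the packages can be queried with a single rpm command; anything reported as "is not installed" can then be added from the OEL installation media with rpm -ivh. A minimal sketch (package names as listed above, without version numbers):

# rpm -q binutils elfutils-libelf elfutils-libelf-devel glibc glibc-common \
    glibc-devel gcc gcc-c++ libaio libaio-devel libgcc libstdc++ \
    libstdc++-devel make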

2.3 Configure and check the network (on all nodes)

The addresses planned for the two nodes are:

RAC1_13 eth0 10.182.108.86 eth1 192.168.0.11

RAC2_13 eth0 10.182.108.88 eth1 192.168.0.12

· Edit the /etc/hosts file on every node

127.0.0.1 localhost.localdomain localhost

10.182.108.86 rac1_13.cn.oracle.com rac1_13

10.182.108.87 rac1_13-vip.cn.oracle.com rac1_13-vip

192.168.0.11 rac1_13-priv.cn.oracle.com rac1_13-priv

192.168.0.12 rac2_13-priv.cn.oracle.com rac2_13-priv

10.182.108.88 rac2_13.cn.oracle.com rac2_13

10.182.108.89 rac2_13-vip.cn.oracle.com rac2_13-vip

· Change the hostname of each node

vi /etc/sysconfig/network

Set the hostnames of the two nodes to rac1_13 and rac2_13 respectively, matching the names used in /etc/hosts; an example of the file's contents follows.
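On rac1_13 the file would then contain roughly the following (rac2_13 uses HOSTNAME=rac2_13); the HOSTNAME line is the one that matters here:

NETWORKING=yes
HOSTNAME=rac1_13

The change takes effect at the next boot; to rename the running system immediately, also run hostname rac1_13 as root.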

2.4 Configure kernel parameters (on all nodes)

Edit /etc/sysctl.conf and add (or adjust) the following settings; how to apply them without rebooting is shown after the list.

kernel.core_uses_pid = 1

fs.file-max=327679

kernel.msgmni=2878

kernel.msgmax=8192

kernel.msgmnb=65536

kernel.sem=250 32000 100 142

kernel.shmmni=4096

kernel.shmall=3279547

kernel.sysrq=1

net.core.rmem_default=262144

net.core.rmem_max=2097152

net.core.wmem_default=262144

net.core.wmem_max=262144

fs.aio-max-nr=3145728

net.ipv4.ip_local_port_range=1024 65000

vm.lower_zone_protection=100

kernel.shmmax=536934400
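The settings in /etc/sysctl.conf are applied at boot time; to make them effective immediately without rebooting, run:

# /sbin/sysctl -p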

2.5 Create the user and groups for the Oracle installation (on all nodes)

First check whether the oinstall and dba groups and the oracle user already exist:

# id oracle

If they do not exist, create them:

# /usr/sbin/groupadd -g 501 oinstall

# /usr/sbin/groupadd -g 502 dba

# /usr/sbin/useradd -g oinstall -G dba oracle
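If the oracle user has just been created, also give it a password; it is needed later, for example when authorized_keys is copied between the nodes with scp:

# passwd oracle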

2.6 Configure the ssh/rsh protocol (on all nodes)

Both the ssh and the rsh configuration are described here. In a real installation only one of the two needs to be configured (SSH is recommended).

2.6.1 SSH

Create the .ssh directory and generate an RSA key on each node

1) Log in as the oracle user

2) Check whether a .ssh directory already exists under /home/oracle/

If there is no .ssh directory, create it:

mkdir ~/.ssh

After creating it, adjust its permissions:

[oracle@rac1_13 ~]$ chmod 700 ~/.ssh

3) Generate the RSA key

[oracle@rac1_13 ~]$ /usr/bin/ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

3f:d2:e4:a3:ee:a1:58:e5:73:92:39:0d:8e:3f:9b:11 oracle@rac1_13

4) Repeat the steps above on every node

Add all of the RSA keys to the authorized_keys file

1) On node rac1_13, add its RSA key to the authorized_keys file

[oracle@rac1_13 ~]$ cd .ssh

[oracle@rac1_13 .ssh]$ cat id_rsa.pub >> authorized_keys

[oracle@rac1_13 .ssh]$ ls

authorized_keys id_rsa id_rsa.pub

2) Copy authorized_keys from node rac1_13 to node rac2_13

[oracle@rac1_13 .ssh]$ scp authorized_keys rac2_13:/home/oracle/.ssh/

The authenticity of host 'rac2_13 (10.182.108.88)' can't be established.

RSA key fingerprint is e6:dc:07:c3:d5:2a:45:43:66:72:d3:44:17:4d:54:42.

Are you sure you want to continue connecting (yes/no) yes

Warning: Permanently added 'rac2_13,10.182.108.88' (RSA) to the list of known hosts.

oracle@rac2_13's password:

authorized_keys 100% 224 0.2KB/s 00:00

3) On node rac2_13, add that node's RSA key to authorized_keys as well

[oracle@rac2_13 .ssh]$ cat id_rsa.pub >> authorized_keys

4) Once every node's RSA key has been added, copy the authorized_keys file to every node, as sketched below
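In this two-node setup that means copying the file, which now contains both keys, back from rac2_13 to rac1_13 and keeping its permissions restrictive on both nodes; a minimal sketch:

[oracle@rac2_13 .ssh]$ scp authorized_keys rac1_13:/home/oracle/.ssh/
[oracle@rac2_13 .ssh]$ chmod 600 ~/.ssh/authorized_keys
[oracle@rac1_13 .ssh]$ chmod 600 ~/.ssh/authorized_keys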

Enable SSH on the nodes

1) Run ssh <hostname> date on each node

[oracle@rac1_13 .ssh]$ ssh rac1_13 date

The authenticity of host 'rac1_13 (10.182.108.86)' can't be established.

RSA key fingerprint is e6:dc:07:c3:d5:2a:45:43:66:72:d3:44:17:4d:54:42.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac1_13,10.182.108.86' (RSA) to the list of known hosts.

Enter passphrase for key '/home/oracle/.ssh/id_rsa':

Sun Apr 20 23:31:06 EDT 2008

[oracle@rac1_13 .ssh]$ ssh rac2_13 date

Repeat the above on node rac2_13

2) On each node, start the SSH agent and load the SSH keys into memory

[oracle@rac1_13 .ssh]$ exec /usr/bin/ssh-agent $SHELL

[oracle@rac1_13 .ssh]$ /usr/bin/ssh-add

[oracle@rac2_13 ~]$ exec /usr/bin/ssh-agent $SHELL

[oracle@rac2_13 ~]$ /usr/bin/ssh-add

· Verify the SSH configuration

[oracle@rac1_13 .ssh]$ ssh rac1_13 date

Sun Apr 20 23:40:04 EDT 2008

[oracle@rac1_13 .ssh]$ ssh rac2_13 date

Sun Apr 20 23:40:09 EDT 2008

[oracle@rac1_13 .ssh]$ ssh rac2_13-priv date

Sun Apr 20 23:41:20 EDT 2008

At this point the SSH user-equivalence configuration is complete.

2.6.2 RSH

· Check whether the packages required for rsh are installed

[root@rac1_13 rpm]# rpm -q rsh rsh-server

rsh-0.17-25.4

rsh-server-0.17-25.4

Make sure that SELinux is disabled

This can be checked with system-config-securitylevel

Edit the /etc/xinetd.d/rsh file and set the disable attribute to no

Run the following commands to enable rsh and rlogin and reload xinetd

[root@rac1_13 rpm]# chkconfig rsh on

[root@rac1_13 rpm]# chkconfig rlogin on

[root@rac1_13 rpm]# service xinetd reload

Reloading configuration: [ OK ]

Create the /etc/hosts.equiv file and add the trusted node entries to it

[root@rac1_13 rpm]# more /etc/hosts.equiv

+rac1_13 oracle

+rac1_13-priv oracle

+rac2_13 oracle

+rac2_13-priv oracle

Change the ownership and permissions of /etc/hosts.equiv

[root@rac1_13 rpm]# chown root:root /etc/hosts.equiv

[root@rac1_13 rpm]# chmod 775 /etc/hosts.equiv
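The rsh configuration in this section has to be repeated on every node; the finished /etc/hosts.equiv file can simply be copied to the other node, for example:

[root@rac1_13 rpm]# scp -p /etc/hosts.equiv rac2_13:/etc/hosts.equiv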

Adjust the rsh path so that /usr/bin/rsh is used instead of the Kerberos version

[root@rac1_13 rpm]# which rsh

/usr/kerberos/bin/rsh

[root@rac1_13 rpm]# cd /usr/kerberos/bin

[root@rac1_13 bin]# mv rsh rsh.original

[root@rac1_13 bin]# which rsh

/usr/bin/rsh

Verify rsh as the oracle user

[oracle@rac1_13 ~]$ rsh rac1_13 date

Wed Apr 16 22:13:32 EDT 2008

[oracle@rac1_13 ~]$ rsh rac1_13-priv date

Wed Apr 16 22:13:40 EDT 2008

[oracle@rac1_13 ~]$ rsh rac2_13 date

Wed Apr 16 22:13:48 EDT 2008

[oracle@rac1_13 ~]$ rsh rac2_13-priv date

Wed Apr 16 22:13:56 EDT 2008

[oracle@rac2_13 ~]$ rsh rac1_13 date

Wed Apr 16 22:14:33 EDT 2008

[oracle@rac2_13 ~]$ rsh rac1_13-priv date

Wed Apr 16 22:14:41 EDT 2008

[oracle@rac2_13 ~]$ rsh rac2_13 date

Wed Apr 16 22:14:47 EDT 2008

[oracle@rac2_13 ~]$ rsh rac2_13-priv date

Wed Apr 16 22:14:54 EDT 2008

2.7 Configure the user environment (on all nodes)

root user

Edit the /etc/bashrc file and add the following lines:

if [ -t 0 ]; then

stty intr ^C

fi

oracle user

Edit /etc/security/limits.conf and add the following lines:

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

Edit the /etc/pam.d/login file and add the following line:

session required pam_limits.so

Edit /etc/profile and add the following:

if [ $USER = "Oracle" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -u 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi
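After logging in again as the oracle user, the new limits can be verified with ulimit:

[oracle@rac1_13 ~]$ ulimit -u    # maximum user processes, expected 16384
[oracle@rac1_13 ~]$ ulimit -n    # maximum open files, expected 65536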

2.8 NFS setup

We plan to place the Clusterware and RAC database files in NFS directories.

NFS server-side setup

1) 10.182.108.27 acts as the NFS server

2) Create the shared directories on the NFS server's local disk

/crs_13

/racdb_13

3) Edit the /etc/exports file

/crs_13 10.182.108.0/255.255.255.0(rw,sync,no_root_squash)

/racdb_13 10.182.108.0/255.255.255.0(rw,sync,no_root_squash)
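If the NFS service is already running on the server, the new entries can be published without a full restart by re-exporting them (restarting the nfs service, as done later in this section, has the same effect):

# /usr/sbin/exportfs -ra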

Create the installation directories on the RAC nodes

[root@rac1_13 etc]# mkdir /crs_13

[root@rac1_13 etc]# chown -R root:oinstall /crs_13/

[root@rac1_13 etc]# chmod -R 775 /crs_13/

[root@rac1_13 etc]# mkdir /racdb_13

[root@rac1_13 etc]# chown -R oracle:dba /racdb_13/

[root@rac1_13 etc]# chmod -R 775 /racdb_13/

[root@rac2_13 ~]# mkdir /crs_13

[root@rac2_13 ~]# chown -R root:oinstall /crs_13/

[root@rac2_13 ~]# chmod -R 775 /crs_13/

[root@rac2_13 ~]# mkdir /racdb_13

[root@rac2_13 ~]# chown -R oracle:dba /racdb_13/

[root@rac2_13 ~]# chmod -R 775 /racdb_13/

Configure the NFS mounts on the RAC nodes

Edit the /etc/fstab file and add the NFS directories:

10.182.108.27:/crs_13 /crs_13 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

10.182.108.27:/racdb_13 /racdb_13 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

Restart the NFS service on the NFS server and on the clients

service nfs restart
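If the directories are not mounted automatically after the restart, they can be mounted by hand using the fstab entries just added (repeat on rac2_13):

[root@rac1_13 etc]# mount /crs_13
[root@rac1_13 etc]# mount /racdb_13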

Use df -h to check that the NFS directories are mounted

[root@rac1_13 etc]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

3.9G 1.6G 2.1G 43% /

/dev/hda1 99M 8.3M 86M 9% /boot

none 513M 0 513M 0% /dev/shm

10.182.108.27:/crs_13

127G 7.8G 113G 7% /crs_13

10.182.108.27:/racdb_13

127G 7.8G 113G 7% /racdb_13

[root@rac2_13 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

3.9G 1.6G 2.1G 43% /

/dev/hda1 99M 8.3M 86M 9% /boot

none 513M 0 513M 0% /dev/shm

10.182.108.27:/crs_13

127G 7.8G 113G 7% /crs_13

10.182.108.27:/racdb_13

127G 7.8G 113G 7% /racdb_13

2.9 Create the installation directory

Add a disk to each virtual machine

A virtual machine created from the template does not have enough disk space to install Clusterware and the database, so additional disk space is needed.

Through the OVM Manager Console we add to each node a disk named data with a size of 5000 MB.

Once the disk has been created, it can be seen on each node with fdisk -l.

[root@rac1_13 ~]# fdisk -l

Disk /dev/hda: 6442 MB, 6442450944 bytes

255 heads, 63 sectors/track, 783 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hda1 * 1 13 104391 83 Linux

/dev/hda2 14 783 6185025 8e Linux LVM

Disk /dev/hdb: 5242 MB, 5242880000 bytes

255 heads, 63 sectors/track, 637 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/hdb doesn't contain a valid partition table

The new disk shows up as /dev/hdb, but it does not yet contain a partition.

Create partition hdb1 on the new disk

# fdisk /dev/hdb
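fdisk is interactive; a typical dialogue for creating one primary partition that covers the whole disk is sketched below (prompts abbreviated):

Command (m for help): n            # new partition
Command action: p                  # primary partition
Partition number (1-4): 1
First cylinder: <Enter>            # accept the default
Last cylinder: <Enter>             # accept the default, use the whole disk
Command (m for help): w            # write the partition table and exit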

# fdisk -l /dev/hdb

Disk /dev/hdb: 5242 MB, 5242880000 bytes

255 heads, 63 sectors/track, 637 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hdb1 1 633 5084541 83 Linux

Format the /dev/hdb1 partition

# mkfs.ext3 -b 1024 -i 8192 /dev/hdb1

Create the mount point

mkdir /data

Mount the partition on the installation directory

mount /dev/hdb1 /data

Add the mount to /etc/fstab so that it is restored after a reboot

/dev/hdb1 /data ext3 defaults 0 0

Create the directories needed for the installation

mkdir -p /data/crs

chown -R oracle:oinstall /data/crs

chmod -R 775 /data/crs

2.10 Create the OCR and voting files

The OCR and voting files must be placed in the NFS directory. They only need to be created on one of the nodes.

[root@rac1_13 crs_13]# touch ocrfile

[root@rac1_13 crs_13]# chown root:oinstall ocrfile

[root@rac1_13 crs_13]# chmod 775 ocrfile

[root@rac1_13 crs_13]# touch votingfile

[root@rac1_13 crs_13]# chown oracle:dba votingfile

[root@rac1_13 crs_13]# chmod 775 votingfile

3 Installing Clusterware

3.1 Download and unpack the Oracle Clusterware installation software

# unzip Linux_11gR1_clusterware.zip -d /stage
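Before starting the installer it is worth running the Cluster Verification Utility that ships in the unpacked staging area; a minimal sketch, assuming the archive unpacked into /stage/clusterware, run as the oracle user:

$ /stage/clusterware/runcluvfy.sh stage -pre crsinst -n rac1_13,rac2_13 -verbose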

3.2 Install Clusterware

Step 1: Change to the directory where the software was unpacked and run ./runInstaller

Step 2: Specify the inventory directory and the installation group

Step 3: Specify the CRS home

Step 4: Enter the cluster node information; it must match the entries in /etc/hosts

Step 5: Specify which interfaces carry the public and the private network

Step 6: Specify the path of the OCR file

Step 7: Specify the location of the voting file

Step 8: Start the installation

Step 9: Run the scripts listed by the installer on all nodes in turn (note: all scripts must finish on one node before they are started on the next node)

[root@rac1_13 crs_13]# /data/crs/root.sh

Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory

Setting up Network socket directories

Oracle Cluster Registry configuration upgraded successfully

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node :

node 1: rac1_13 rac1_13-priv rac1_13

node 2: rac2_13 rac2_13-priv rac2_13

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /crs_13/votingfile

Format of 1 voting devices complete.

Startup will be queued to init within 30 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

Cluster Synchronization Services is active on these nodes.

rac1_13

Cluster Synchronization Services is inactive on these nodes.

rac2_13

Local node checking complete. Run root.sh on remaining nodes to start CRS daemons

[root@rac2_13 crs]# sh root.sh

Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory

Setting up Network socket directories

Oracle Cluster Registry configuration upgraded successfully

clscfg: EXISTING configuration version 4 detected.

clscfg: version 4 is 11 Release 1.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node :

node 1: rac1_13 rac1_13-priv rac1_13

node 2: rac2_13 rac2_13-priv rac2_13

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 30 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

Cluster Synchronization Services is active on these nodes.

rac1_13

rac2_13

Cluster Synchronization Services is active on all the nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...

Creating GSD application resource on (2) nodes...

Creating ONS application resource on (2) nodes...

Starting VIP application resource on (2) nodes...

Starting GSD application resource on (2) nodes...

Starting ONS application resource on (2) nodes...

Done.

After all of the scripts have completed, continue to the next step.

Step 10: Confirm the configuration

Step 11: Finish the installation

3.3 Check the CRS status

[root@rac1_13 bin]# ./crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora...._13.gsd application ONLINE ONLINE rac1_13

ora...._13.ons application ONLINE ONLINE rac1_13

ora...._13.vip application ONLINE ONLINE rac1_13

ora...._13.gsd application ONLINE ONLINE rac2_13

ora...._13.ons application ONLINE ONLINE rac2_13

ora...._13.vip application ONLINE ONLINE rac2_13

[root@rac2_13 bin]# ./crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora...._13.gsd application ONLINE ONLINE rac1_13

ora...._13.ons application ONLINE ONLINE rac1_13

ora...._13.vip application ONLINE ONLINE rac1_13

ora...._13.gsd application ONLINE ONLINE rac2_13

ora...._13.ons application ONLINE ONLINE rac2_13

ora...._13.vip application ONLINE ONLINE rac2_13

[root@rac1_13 bin]# ps -ef|grep d.bin

oracle 20999 20998 0 06:45 00:00:00 /data/crs/bin/evmd.bin

root 21105 20310 0 06:45 00:00:00 /data/crs/bin/crsd.bin reboot

oracle 21654 21176 0 06:45 00:00:00 /data/crs/bin/ocssd.bin

root 26087 5276 0 06:54 pts/0 00:00:00 grep d.bin
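Besides crs_stat and the process listing, the overall health of the stack can also be checked with crsctl, for example:

[root@rac1_13 bin]# ./crsctl check crs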

This completes the installation of Oracle Clusterware.

The installation of Oracle Real Application Clusters and the creation of the database are not covered here.
