
MySQL High Availability with MMM: Detailed Notes on Installation, Deployment, and Failover



 

1. Introduction to MMM

MMM (Master-Master replication manager for MySQL) is a set of scripts that supports failover and day-to-day management of a MySQL dual-master setup. MMM is written in Perl and is mainly used to monitor and manage MySQL master-master (dual-master) replication. Although it is called dual-master replication, the business only ever writes to one master at a time; the other, standby master serves part of the read traffic, which keeps it warm and speeds up the transition when the masters are switched. In short, MMM implements failover, and its bundled tool scripts can also load-balance reads across multiple slaves.

 

MMM can remove the virtual IP of a server whose replication lag is too high, either automatically or manually; it can also back up data and synchronize data between the two nodes. Because MMM cannot fully guarantee data consistency, it suits scenarios where consistency requirements are modest but business availability must be maximized. For businesses with strict data-consistency requirements, MMM is strongly discouraged as a high-availability architecture.

 

As shown in the figure below:

[figure: MMM dual-master architecture]

 

 

2. Prepare three MySQL servers

2.1 Server preparation

MySQL can run in virtual machines. Because MySQL was already installed on three servers for an earlier MHA experiment, the MMM software can be deployed directly on those three existing MySQL servers. The MMM packages are:

Package name          Description
mysql-mmm-agent       MySQL-MMM Agent
mysql-mmm-monitor     MySQL-MMM Monitor
mysql-mmm-tools       MySQL-MMM Tools

 

MySQL master-slave setup: http://www.Bkjia.com/database/201503/386265.html

 

2.2 Configure the databases

db1 (192.168.52.129): vim /etc/my.cnf

server-id=129
log_slave_updates = 1
auto-increment-increment = 2   # step auto-increment values by 2
auto-increment-offset = 1      # offset for auto-increment columns, i.e. db1 starts at 1

 

db2 (192.168.52.128): vim /etc/my.cnf

server-id=230
log_slave_updates = 1
auto-increment-increment = 2   # step auto-increment values by 2
auto-increment-offset = 2      # offset for auto-increment columns, i.e. db2 starts at 2

 

db3 (192.168.52.131): vim /etc/my.cnf

server-id=331
log_slave_updates = 1
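
With these staggered settings the two masters generate disjoint AUTO_INCREMENT sequences, so rows inserted on either side never collide on auto-generated primary keys. A quick illustration (hypothetical table; run the same statements on each master):

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));
INSERT INTO t (v) VALUES ('a'), ('b'), ('c');
-- db1 (offset 1, increment 2) generates ids 1, 3, 5
-- db2 (offset 2, increment 2) generates ids 2, 4, 6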

 

3. Install MMM

3.1 Download MMM

3.2 Prepare the Perl and library environment

yum install -y perl-*
yum install -y libart_lgpl.x86_64
yum install -y mysql-mmm.noarch
yum install -y rrdtool.x86_64
yum install -y rrdtool-perl.x86_64

 

Alternatively, you can simply run the install_mmm.sh script to do the installation.
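
A minimal sketch of what such a wrapper script might contain, assuming it simply batches the yum installs listed above (the actual install_mmm.sh may differ):

#!/bin/bash
# install_mmm.sh - hypothetical wrapper batching the package installs above
set -e
yum install -y perl-*
yum install -y libart_lgpl.x86_64
yum install -y mysql-mmm.noarch
yum install -y rrdtool.x86_64
yum install -y rrdtool-perl.x86_64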

3.3 Install MMM

According to the official standard configuration, MMM should be installed on a dedicated server. To save resources, this experiment deploys the MMM monitor on one of the slaves instead, at 192.168.52.131.

 

mv :mmm2:mysql-mmm-2.2.1.tar.gz mysql-mmm-2.2.1.tar.gz

tar -xvf mysql-mmm-2.2.1.tar.gz

cd mysql-mmm-2.2.1

make

make install

 

The directory layout after installing MMM is as follows:

Directory                              Description
/usr/lib64/perl5/vendor_perl/          Main Perl modules used by MMM
/usr/lib/mysql-mmm                     Main scripts used by MMM
/usr/sbin                              Path of MMM's main commands
/etc/init.d/                           Init scripts for the MMM agent and monitor services
/etc/mysql-mmm                         Path of the MMM configuration files; by default all configuration files live here
/var/log/mysql-mmm                     Default location of MMM's logs

 

 

This completes the basic MMM installation. Next come the concrete configuration files: mmm_common.conf and mmm_agent.conf are the agent-side configuration files, and mmm_mon.conf is the monitor-side configuration file.

 

mmm_common.conf will later be copied to db1 and db2 (db3 and the monitor share one machine, so no copy is needed there).

 

4. Configure MMM on the MySQL data nodes

4.1 Configure mmm_common.conf on db3

The agent-side configuration file must be set up on each of db1, db2, and db3. Since MMM is installed on db3 here, edit it directly on db3:

vim /etc/mysql-mmm/mmm_common.conf

[root@oraclem1 ~]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        repl
    replication_password    repl_1234
    agent_user              mmm_agent
    agent_password          mmm_agent_1234
</host>

<host db1>
    ip      192.168.52.129
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.52.128
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.52.131
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.52.120
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3
    ips     192.168.52.129, 192.168.52.128, 192.168.52.131
    mode    balanced
</role>

[root@oraclem1 ~]#

 

Here replication_user is the account used for replication, agent_user is the agent's account, and mode marks a host as master, standby master, or slave. In <role writer>, mode exclusive means the writer role is exclusive (only one master holds it at a time), hosts lists the real IPs or hostnames of the current master and the standby master, and ips is the virtual IP exposed to clients. In <role reader>, hosts lists the slaves' real IPs or hostnames, and ips lists their virtual IPs.

 

Since db1 and db2 also need the agent configuration, copy mmm_common.conf straight from db3 into /etc/mysql-mmm on those two hosts.

4.2 Copy mmm_common.conf to db1 and db2

scp /etc/mysql-mmm/mmm_common.conf data01:/etc/mysql-mmm/mmm_common.conf

scp /etc/mysql-mmm/mmm_common.conf data02:/etc/mysql-mmm/mmm_common.conf

 

4.3 Configure the mmm_agent.conf files

Configure the agent on db1, db2, and db3.

db1(192.168.52.129):

[root@data01 mysql-mmm]# cat /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this db1

[root@data01 mysql-mmm]#

 

db2(192.168.52.128):

[root@data02 mysql-mmm-2.2.1]# vim /etc/mysql-mmm/mmm_agent.conf

 

include mmm_common.conf

this db2

 

db3(192.168.52.131):

[root@oraclem1 vendor_perl]# cat /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this db3

[root@oraclem1 vendor_perl]#

 

4.4 Configure the monitor

[root@oraclem1 vendor_perl]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        192.168.52.129, 192.168.52.128, 192.168.52.131
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor_1234
</host>

debug 0
[root@oraclem1 vendor_perl]#

 

Compared with the stock file, the only changes are adding the IPs of every monitored host in the architecture to ping_ips in the <monitor> section, and configuring the monitoring user in <host default>.

 

4.5 Create the monitoring users

User           Description                                                       Privileges
Monitor user   Used by the MMM monitor to check the state of all MySQL servers  REPLICATION CLIENT
Agent user     Used by the MMM agent, mainly to change a master's read_only     SUPER, REPLICATION CLIENT, PROCESS
repl           The replication account                                          REPLICATION SLAVE

 

Grant the privileges on all three servers (db1, db2, db3). Because MySQL was already installed for the earlier MHA experiment, the replication account repl already exists; it was created with:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.52.%' IDENTIFIED BY 'repl_1234';

The monitor user:

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.52.%' IDENTIFIED BY 'mmm_monitor_1234';

The agent user:

GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.52.%' IDENTIFIED BY 'mmm_agent_1234';

As shown below:

 mysql> GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.52.%' IDENTIFIED BY 'mmm_monitor_1234';
Query OK, 0 rows affected (0.26 sec)
 
mysql> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.52.%'   IDENTIFIED BY 'mmm_agent_1234';
Query OK, 0 rows affected (0.02 sec)
 
mysql>

 

If instead you are building everything from scratch, also run the grant for the replication account repl (all three GRANT statements must be executed on each of the three servers): GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.52.%' IDENTIFIED BY 'repl_1234'; as shown below:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.52.%' IDENTIFIED BY 'repl_1234';

Query OK, 0 rows affected (0.01 sec)

 

mysql>
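
To double-check that the grants are in place, you can list them (a quick verification step, not part of the original procedure):

mysql> SHOW GRANTS FOR 'mmm_monitor'@'192.168.52.%';
mysql> SHOW GRANTS FOR 'mmm_agent'@'192.168.52.%';
mysql> SHOW GRANTS FOR 'repl'@'192.168.52.%';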

 

5. Build the replication links between db1, db2, and db3

MySQL is already installed on db1, db2, and db3; now only the replication topology between them needs to be set up:

db1(192.168.52.129) --> db2(192.168.52.128)

db1(192.168.52.129) --> db3(192.168.52.131)

db2(192.168.52.128) --> db1(192.168.52.129)

 

5.1 Establish replication from db1 to db2 and db3

First check the master status on db1 (192.168.52.129):

mysql> show master status;
+------------------+----------+--------------+--------------------------------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB                                 | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------------------------------+-------------------+
| mysql-bin.000208 | 1248     | user_db      | mysql,test,information_schema,performance_schema |                   |
+------------------+----------+--------------+--------------------------------------------------+-------------------+
1 row in set (0.03 sec)

 

mysql>

 

Then set up the replication link on db2 and db3, as follows:

STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_HOST='192.168.52.129', MASTER_USER='repl', MASTER_PASSWORD='repl_1234', MASTER_LOG_FILE='mysql-bin.000208', MASTER_LOG_POS=1248;
START SLAVE;
SHOW SLAVE STATUS\G

When both replication threads show Yes and the lag is 0, everything is fine:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
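
A convenient way to pull out just these three fields on each slave (a helper one-liner, not part of the original procedure; it assumes local login credentials such as a ~/.my.cnf are in place):

mysql -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'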

 

5.2 Establish replication from db2 to db1

Check the master status on db2 (192.168.52.128), then set up the replication link:

mysql> show master status;
+------------------+----------+--------------+--------------------------------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB                                 | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------------------------------+-------------------+
| mysql-bin.000066 | 1477     | user_db      | mysql,test,information_schema,performance_schema |                   |
+------------------+----------+--------------+--------------------------------------------------+-------------------+
1 row in set (0.04 sec)

 

mysql>

Then set up the db2 --> db1 replication link (run on db1):

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='192.168.52.128', MASTER_USER='repl', MASTER_PASSWORD='repl_1234', MASTER_LOG_FILE='mysql-bin.000066', MASTER_LOG_POS=1477;
START SLAVE;
SHOW SLAVE STATUS\G

When both replication threads show Yes and the lag is 0, everything is fine:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0

6. Start the agent service

Start the agent service on each of db1, db2, and db3:

[root@data01 mysql-mmm]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
[root@data01 mysql-mmm]#
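
If the agents should also survive a reboot, registering their init scripts works the same way as for the monitor in the next section (an optional step, assuming the sysvinit layout used here):

chkconfig --add /etc/init.d/mysql-mmm-agent
chkconfig mysql-mmm-agent on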

 

7. Start the monitor service

 

Start the monitor service on the monitor server, 192.168.52.131:

Register it as a system service: chkconfig --add /etc/rc.d/init.d/mysql-mmm-monitor

Start it: service mysql-mmm-monitor start

[root@oraclem1 ~]# service mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@oraclem1 ~]#
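
With the daemon running, mmm_control ping (one of the commands listed in section 8.4 below) can be used to confirm that the monitor answers:

mmm_control ping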

 

8. Some management operations

Check the status:

[root@oraclem1 mysql-mmm]# mmm_control show

db1(192.168.52.129) master/AWAITING_RECOVERY. Roles:

db2(192.168.52.128) master/AWAITING_RECOVERY. Roles:

db3(192.168.52.131) slave/AWAITING_RECOVERY. Roles:

 

[root@oraclem1 mysql-mmm]#

 

8.1 Set hosts online

[root@oraclem1 mysql-mmm]# mmm_control show

db1(192.168.52.129) master/AWAITING_RECOVERY. Roles:

db2(192.168.52.128) master/AWAITING_RECOVERY. Roles:

db3(192.168.52.131) slave/AWAITING_RECOVERY. Roles:

 

[root@oraclem1 mysql-mmm]# mmm_control set_online db1

OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

[root@oraclem1 mysql-mmm]# mmm_control set_online db2

OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!

[root@oraclem1 mysql-mmm]# mmm_control set_online db3

OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!

[root@oraclem1 mysql-mmm]#

 

8.2 Check all DBs

[root@oraclem1 mysql-mmm]# mmm_control checks

db2  ping         [last change: 2015/04/14 00:10:57]  OK
db2  mysql        [last change: 2015/04/14 00:10:57]  OK
db2  rep_threads  [last change: 2015/04/14 00:10:57]  OK
db2  rep_backlog  [last change: 2015/04/14 00:10:57]  OK: Backlog is null
db3  ping         [last change: 2015/04/14 00:10:57]  OK
db3  mysql        [last change: 2015/04/14 00:10:57]  OK
db3  rep_threads  [last change: 2015/04/14 00:10:57]  OK
db3  rep_backlog  [last change: 2015/04/14 00:10:57]  OK: Backlog is null
db1  ping         [last change: 2015/04/14 00:10:57]  OK
db1  mysql        [last change: 2015/04/14 00:10:57]  OK
db1  rep_threads  [last change: 2015/04/14 00:10:57]  OK
db1  rep_backlog  [last change: 2015/04/14 00:10:57]  OK: Backlog is null

 

[root@oraclem1 mysql-mmm]#

 

8.3 View the monitor log

[root@oraclem1 mysql-mmm]# tail -f /var/log/mysql-mmm/mmm_mond.log

2015/04/14 00:55:29 FATAL Admin changed state of 'db1' from AWAITING_RECOVERY to ONLINE
2015/04/14 00:55:29  INFO Orphaned role 'writer(192.168.52.120)' has been assigned to 'db1'
2015/04/14 00:55:29  INFO Orphaned role 'reader(192.168.52.131)' has been assigned to 'db1'
2015/04/14 00:55:29  INFO Orphaned role 'reader(192.168.52.129)' has been assigned to 'db1'
2015/04/14 00:55:29  INFO Orphaned role 'reader(192.168.52.128)' has been assigned to 'db1'
2015/04/14 00:58:15 FATAL Admin changed state of 'db2' from AWAITING_RECOVERY to ONLINE
2015/04/14 00:58:15  INFO Moving role 'reader(192.168.52.131)' from host 'db1' to host 'db2'
2015/04/14 00:58:15  INFO Moving role 'reader(192.168.52.129)' from host 'db1' to host 'db2'
2015/04/14 00:58:18 FATAL Admin changed state of 'db3' from AWAITING_RECOVERY to ONLINE
2015/04/14 00:58:18  INFO Moving role 'reader(192.168.52.131)' from host 'db2' to host 'db3'

 

8.4 mmm_control commands

[root@oraclem1 ~]# mmm_control --help
Invalid command '--help'

Valid commands are:
    help                              - show this message
    ping                              - ping monitor
    show                              - show status
    checks [<host>|all [<check>|all]] - show checks status
    set_online <host>                 - set host <host> online
    set_offline <host>                - set host <host> offline
    mode                              - print current mode.
    set_active                        - switch into active mode.
    set_manual                        - switch into manual mode.
    set_passive                       - switch into passive mode.
    move_role [--force] <role> <host> - move exclusive role <role> to host <host>
                                        (Only use --force if you know what you are doing!)
    set_ip <ip> <host>                - set role with ip <ip> to host <host>

[root@oraclem1 ~]#

 

 

9. Test the switchover

9.1 How the MMM switchover works

How a slave's agent switches masters after receiving the set_active_master message for the new master.

Source code: Agent\Helpers\Actions.pm

set_active_master($new_master)

Try to catch up with the old master as far as possible and change the master to the new host.

(Syncs to the master log if the old master is reachable. Otherwise syncs to the relay log.)

 

 

(1) Obtain the binlog file and position to sync to

First read Master_Log_File and Read_Master_Log_Pos from SHOW SLAVE STATUS on the new master:

wait_log = Master_Log_File, wait_pos = Read_Master_Log_Pos

If the old master is reachable, read File and Position from SHOW MASTER STATUS on the old master instead:

wait_log = File, wait_pos = Position  # overrides the coordinates taken from the new master; the old master is authoritative

 

(2) The slave catches up to the old master's position

SELECT MASTER_POS_WAIT('wait_log', wait_pos);

# stop replication

STOP SLAVE;

 

(3) Point the slave at the new master

# At this point the new master is already accepting writes.

# (In the main thread, the new master receives the activation message first; once the new master has finished its transition, including the VIP move, _distribute_roles pushes the master change out to the slaves.)

# After the new master's transition, running FLUSH LOGS there would make management easier.

# Read the new master's binlog coordinates:

SHOW MASTER STATUS;

# Read the replication account and password from /etc/mysql-mmm/mmm_common.conf:

replication_user, replication_password

# Set the new replication coordinates:

CHANGE MASTER TO MASTER_HOST='$new_peer_host', MASTER_PORT=$new_peer_port,
  MASTER_USER='$repl_user', MASTER_PASSWORD='$repl_password',
  MASTER_LOG_FILE='$master_log', MASTER_LOG_POS=$master_pos;

# restart replication

START SLAVE;
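
Putting steps (1)-(3) together, the sequence a slave runs during a writer switchover looks roughly like this (a sketch only; the coordinates are hypothetical examples borrowed from the outputs above and are filled in by the agents at failover time):

-- (1)+(2) wait until everything up to the old master's last known
--         position has been applied locally, then stop replication
SELECT MASTER_POS_WAIT('mysql-bin.000208', 1248);
STOP SLAVE;
-- (3) re-point replication at the new master and restart it
CHANGE MASTER TO MASTER_HOST='192.168.52.128', MASTER_PORT=3306,
  MASTER_USER='repl', MASTER_PASSWORD='repl_1234',
  MASTER_LOG_FILE='mysql-bin.000066', MASTER_LOG_POS=1477;
START SLAVE;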

 

9.2 Stop db1 and check that the writer role automatically moves to db2

(1) Run service mysql stop on db1:

[root@data01 ~]# service mysql stop;

Shutting down MySQL..... SUCCESS!

[root@data01 ~]#

 

(2) Check the status with mmm_control on the monitor:

[root@oraclem1 ~]# mmm_control show

db1(192.168.52.129) master/HARD_OFFLINE. Roles:

db2(192.168.52.128) master/ONLINE. Roles: reader(192.168.52.129),writer(192.168.52.120)

db3(192.168.52.131) slave/ONLINE. Roles: reader(192.168.52.128),reader(192.168.52.131)

 

[root@oraclem1 ~]#

The writer role has moved to db2.

 

(3) The monitor's log records the following:

[root@oraclem1 mysql-mmm]# tail -f /var/log/mysql-mmm/mmm_mond.log 
......
2015/04/14 01:34:11  WARN Check 'rep_backlog' on 'db1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.52.129:3306, user = mmm_monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2015/04/14 01:34:11  WARN Check 'rep_threads' on 'db1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.52.129:3306, user = mmm_monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2015/04/14 01:34:21 ERROR Check 'mysql' on 'db1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.52.129:3306, user = mmm_monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2015/04/14 01:34:23 FATAL State of host 'db1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2015/04/14 01:34:23  INFO Removing all roles from host 'db1':
2015/04/14 01:34:23  INFO     Removed role 'reader(192.168.52.128)' from host 'db1'
2015/04/14 01:34:23  INFO     Removed role 'writer(192.168.52.120)' from host 'db1'
2015/04/14 01:34:23  INFO Orphaned role 'writer(192.168.52.120)' has been assigned to 'db2'
2015/04/14 01:34:23  INFO Orphaned role 'reader(192.168.52.128)' has been assigned to 'db3'

 

 

9.3 Stop db2 and watch the writer switch back to db1

(1) Start db1 and set it online:

[root@data01 ~]# service mysql start

Starting MySQL.................. SUCCESS!

[root@data01 ~]#

Set db1 online from the monitor:

[root@oraclem1 ~]# mmm_control set_online db1

OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

[root@oraclem1 ~]#

Check the status on the monitor:

[root@oraclem1 ~]# mmm_control show

db1(192.168.52.129) master/ONLINE. Roles: reader(192.168.52.131)

db2(192.168.52.128) master/ONLINE. Roles: reader(192.168.52.129),writer(192.168.52.120)

db3(192.168.52.131) slave/ONLINE. Roles: reader(192.168.52.128)

 

[root@oraclem1 ~]#

Note that db1 must be started and set ONLINE first because, in this MMM configuration, the writer role can only move between db1 and db2. For an automatic switchover to succeed, the target master must be ONLINE; otherwise the switch fails because there is no online candidate.

 

(2) Stop db2:

[root@data02 ~]# service mysql stop

Shutting down MySQL.. SUCCESS!

[root@data02 ~]#

(3) On the monitor, check whether the writer automatically switched from db2 to db1:

[root@oraclem1 ~]# mmm_control show

db1(192.168.52.129) master/ONLINE. Roles: reader(192.168.52.131),writer(192.168.52.120)

db2(192.168.52.128) master/HARD_OFFLINE. Roles:

db3(192.168.52.131) slave/ONLINE. Roles: reader(192.168.52.129), reader(192.168.52.128)

 

[root@oraclem1 ~]#

The writer has automatically moved to db1 and db2 is HARD_OFFLINE; the automatic switchover succeeded.

 

(4) Check the monitor log:

[root@oraclem1 mysql-mmm]# tail -f /var/log/mysql-mmm/mmm_mond.log
......
2015/04/14 01:56:13 ERROR Check 'mysql' on 'db2' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.52.128:3306, user = mmm_monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2015/04/14 01:56:14 FATAL State of host 'db2' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2015/04/14 01:56:14  INFO Removing all roles from host 'db2':
2015/04/14 01:56:14  INFO     Removed role 'reader(192.168.52.129)' from host 'db2'
2015/04/14 01:56:14  INFO     Removed role 'writer(192.168.52.120)' from host 'db2'
2015/04/14 01:56:14  INFO Orphaned role 'writer(192.168.52.120)' has been assigned to 'db1'
2015/04/14 01:56:14  INFO Orphaned role 'reader(192.168.52.129)' has been assigned to 'db3'

 

9.4 Verify the VIPs

First find the writer VIP; it is currently on db2:

[root@oraclem1 ~]# mmm_control show

db1(192.168.52.129) master/ONLINE. Roles: reader(192.168.52.230)

db2(192.168.52.130) master/ONLINE. Roles: reader(192.168.52.231),writer(192.168.52.120)

db3(192.168.52.131) slave/ONLINE. Roles: reader(192.168.52.229)

 

[root@oraclem1 ~]#

 

Then go to db2 and check the bound IPs:

[root@data02 mysql-mmm-2.2.1]# ip add

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a7:26:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.52.130/24 brd 192.168.52.255 scope global eth0
    inet 192.168.52.231/32 scope global eth0
    inet 192.168.52.120/32 scope global eth0
    inet6 fe80::20c:29ff:fea7:26fc/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 46:27:c6:ef:2b:95 brd ff:ff:ff:ff:ff:ff

[root@data02 mysql-mmm-2.2.1]#

 

Ping the writer VIP; it responds:

[root@oraclem1 ~]# ping 192.168.52.120
PING 192.168.52.120 (192.168.52.120) 56(84) bytes of data.
64 bytes from 192.168.52.120: icmp_seq=1 ttl=64 time=2.37 ms
64 bytes from 192.168.52.120: icmp_seq=2 ttl=64 time=0.288 ms
64 bytes from 192.168.52.120: icmp_seq=3 ttl=64 time=0.380 ms
^C
--- 192.168.52.120 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2717ms
rtt min/avg/max/mdev = 0.288/1.015/2.377/0.963 ms

[root@oraclem1 ~]#
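
Beyond ping, you can confirm that MySQL itself answers through the writer VIP (a quick check reusing the monitor account created earlier; any account allowed from the 192.168.52.% network would do):

mysql -h 192.168.52.120 -u mmm_monitor -pmmm_monitor_1234 -e "SELECT @@hostname, @@read_only;"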

 

9.5 MMM extensions
