
MySQL High Availability: keepalived + Mutual Master-Slave Replication

Editor: MySQL Tutorials



Goal: set up a keepalived dual-machine hot-standby architecture and configure master-slave replication in both directions.

Plan:
master1 zlm177 192.168.17.177
master2 zlm188 192.168.17.188
vip 192.168.17.166

Environment: Red Hat Enterprise Linux 6.4
Percona Server 5.6.15

I. Software Installation

Download the latest keepalived from the official site http://www.keepalived.org/software/ — at the time of writing the latest version is 1.2.13:
keepalived-1.2.13.tar.gz

cp keepalived-1.2.13.tar.gz /usr/local/src
cd /usr/local/src
tar zxvf keepalived-1.2.13.tar.gz -C /opt/
cd /opt/keepalived-1.2.13
./configure --prefix=/usr/local/keepalived
make && make install

Note: the build will complain if the gcc and openssl packages are missing; install them with yum. (RHEL 6.4 can also be pointed at the CentOS yum repositories; the details are out of scope here.)
yum install gcc
yum install openssl openssl-devel
If configure reports other missing dependencies, install those the same way. The advantage of yum is convenience: it works out which packages are needed, then downloads and installs them automatically. Once the build completes, the installation is done.

II. Configuring keepalived (VRRP)

After installation, the default configuration file path is:
/usr/local/keepalived/etc/keepalived/keepalived.conf

On the primary node, open the file, clear its original contents, and add the following:

! Configuration File for keepalived

global_defs {                      # global settings
    notification_email {
        [email protected]          # mailbox that receives notifications
    }
    router_id aaron8219            # an identifier; letters or digits; need not match between nodes
}

vrrp_instance my_177 {
    state BACKUP                   # BACKUP mode (see the note on nopreempt below)
    interface eth0
    virtual_router_id 88           # default 51; an integer in 1-255; must be identical on both nodes of the same group
    priority 90
    advert_int 1                   # advertisement/check interval, default 1s
    nopreempt                      # non-preemptive mode
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {            # the VIP(s)
        192.168.17.166
    }
}

## Note on the VIP: the VIP is added or removed as the node's state changes — added when the node becomes MASTER, removed when it becomes BACKUP. Which node ends up holding it is decided mainly by priority, not by the configured state value (when both states are the same); whether the VIP sits on one node or the other also depends on nopreempt. More than one VIP address may be listed here.

## Note on nopreempt: it can only be set on a node whose state is BACKUP, and that node's priority must be higher than the other node's.

virtual_server 192.168.17.166 3306 {   # virtual server IP and port
    delay_loop 2
    lb_algo wrr                        # weighted round-robin
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.17.177 3306 {  # real server IP and port
        weight 3                       # weight of 3
        notify_down /opt/mysql/mysql.sh  # path of the "suicide" script run when the check fails
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

The standby node's configuration is much the same: change the IP addresses and the instance name, and give the two nodes different priorities (in this setup, 90 on one node and 100 on the other).
One thing to note: state is configured as BACKUP on both nodes, because nopreempt (non-preemptive mode) is used.

For example, suppose one node starts its mysql instance and keepalived first, and the other node then starts its own. With nopreempt, the VIP does not move to the second node, even if its priority (100) is higher than the first node's (90).
Without nopreempt, there are two cases:

1. Same state on both nodes (both MASTER or both BACKUP):
the higher-priority node takes the VIP, regardless of role.

2. Different states (MASTER vs. BACKUP):
again, the higher-priority node takes the VIP, regardless of role.

Different starting points, same outcome: priority decides — the VIP floats to whichever node has the higher priority.
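For concreteness, here is a minimal sketch of what the standby node's vrrp_instance could look like. The instance name my_178 is taken from the log excerpt later in the article; the priority of 100 and the placement of nopreempt follow the notes above — treat this as an illustration, not a drop-in file:

```
vrrp_instance my_178 {
    state BACKUP                 # both nodes use BACKUP because of nopreempt
    interface eth0
    virtual_router_id 88         # must be identical on both nodes
    priority 100                 # differs from the peer's 90
    advert_int 1
    nopreempt                    # per the note above: set on the higher-priority node
    authentication {
        auth_type PASS           # must match the peer
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.17.166           # the shared VIP
    }
}
```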

Create the "suicide" script that keepalived runs when the MySQL process check fails:
touch /opt/mysql/mysql.sh
chmod +x /opt/mysql/mysql.sh
Add the following content:
#!/bin/sh
pkill keepalived    # kill the keepalived process so the VIP moves to the peer
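The two-line script above is the article's approach. A slightly more defensive variant (a sketch; it assumes mysqladmin is on PATH with credentials in ~/.my.cnf in production) re-verifies that mysqld is really unreachable before giving up the VIP:

```shell
# write a defensive notify_down script to /tmp for inspection
# (in production it would live at /opt/mysql/mysql.sh)
cat <<'EOF' > /tmp/mysql.sh
#!/bin/sh
# keepalived runs this when the TCP check on 3306 fails; double-check that
# mysqld is really down so one transient glitch does not trigger a failover
if ! mysqladmin --connect-timeout=3 ping >/dev/null 2>&1; then
    pkill keepalived
fi
EOF
chmod +x /tmp/mysql.sh
sh -n /tmp/mysql.sh && echo "syntax OK"
```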

III. Running and Testing

/usr/local/keepalived/sbin/keepalived -f /usr/local/keepalived/etc/keepalived/keepalived.conf -D

-f specifies the keepalived configuration file
-D logs detailed records to the system log


To check whether the keepalived process started correctly, just watch the log in /var/log/messages:
tail -30f /var/log/messages
Note: if keepalived is started before mysqld, the script specified by notify_down will be executed — no MySQL process is found, so keepalived kills itself.


Jul 25 02:51:22 zlm188 Keepalived[3440]: Starting Keepalived v1.2.13 (07/22,2014)
Jul 25 02:51:22 zlm188 Keepalived[3441]: Starting Healthcheck child process, pid=3442
Jul 25 02:51:22 zlm188 Keepalived[3441]: Starting VRRP child process, pid=3443
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Netlink reflector reports IP 192.168.17.188 added
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Netlink reflector reports IP 192.168.17.188 added
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Netlink reflector reports IP fe80::a00:27ff:fe71:6b7b added
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Registering Kernel netlink reflector
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Netlink reflector reports IP fe80::a00:27ff:fe71:6b7b added
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Registering Kernel netlink command channel
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Registering Kernel netlink reflector
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Registering Kernel netlink command channel
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Registering gratuitous ARP shared channel
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Opening file '/usr/local/keepalived/etc/keepalived/keepalived.conf'.
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Opening file '/usr/local/keepalived/etc/keepalived/keepalived.conf'.
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Configuration is using : 11566 Bytes
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Configuration is using : 62964 Bytes
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: Using LinkWatch kernel netlink reflector...
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: VRRP_Instance(my_178) Entering BACKUP STATE
Jul 25 02:51:22 zlm188 Keepalived_vrrp[3443]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Using LinkWatch kernel netlink reflector...
Jul 25 02:51:22 zlm188 Keepalived_healthcheckers[3442]: Activating healthchecker for service [192.168.17.188]:3306
Jul 25 02:51:25 zlm188 Keepalived_healthcheckers[3442]: TCP connection to [192.168.17.188]:3306 failed !!!
Jul 25 02:51:25 zlm188 Keepalived_healthcheckers[3442]: Removing service [192.168.17.188]:3306 from VS [192.168.17.166]:3306
Jul 25 02:51:25 zlm188 Keepalived_healthcheckers[3442]: Executing [/opt/mysql/mysql.sh] for service [192.168.17.188]:3306 in VS [192.168.17.166]:3306
Jul 25 02:51:25 zlm188 Keepalived_healthcheckers[3442]: Lost quorum 1-0=1 > 0 for VS [192.168.17.166]:3306
Jul 25 02:51:25 zlm188 Keepalived[3441]: Stopping Keepalived v1.2.13 (07/22,2014)

Starting keepalived on the standby node and following the log, we can see it ran one pass and then stopped itself automatically (mysqld was not running). This shows the configuration is correct; otherwise, check it for mistakes — in particular, virtual_router_id must be identical on both nodes, router_id need not match, and the instance names should not clash.

1. Check where the VIP is
#ip address show   (or: ip a show)

2. Log in to MySQL through the VIP (with the mysqld and keepalived processes both running)
#mysql -h192.168.17.166 -uaaron8219 -pzlm

3. Bring down one node's NIC, then check where the VIP went and whether you can still log in through it
#ifdown eth0
#ip a show
#mysql -h192.168.17.166 -uaaron8219 -pzlm

4. Reboot one node (simulating a host failure), then check the VIP and log in through it
#init 6
#ip a show
#mysql -h192.168.17.166 -uaaron8219 -pzlm

5. Kill the keepalived process directly, then check the VIP and log in through it
#pkill keepalived
#ip a show
#mysql -h192.168.17.166 -uaaron8219 -pzlm
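While running these tests, a one-shot check like the following (a sketch; the interface name eth0 and the VIP are taken from this article's setup) reports whether the current node holds the VIP:

```shell
# report whether the article's VIP is currently bound to eth0 on this node
if ip a show eth0 2>/dev/null | grep -q '192\.168\.17\.166'; then
    echo "VIP is on this node"
else
    echo "VIP is not on this node"
fi
```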

IV. Configuring Replication

Installing keepalived only makes the MySQL service highly available at the connection level; to make the two nodes true mutual masters, MySQL replication must also be configured so the databases reach a consistent state.

1. Set the replication parameters on both nodes
Make sure the two nodes' server-id values differ. A common convention is the last octet(s) of the IP followed by the port:
for example, in my environment the master's IP is 192.168.17.177 and the port is 3306,
so server-id can be set to 1773306, and correspondingly 1883306 on the other node.
The following parameters all go in /etc/my.cnf:
server-id=1773306
log-bin=percona-bin            # enable the binary log
binlog-ignore-db=mysql         # do not log changes to the mysql schema, so grants etc. on the master are not replicated to the slave (the old set-variable= prefix is obsolete and rejected by MySQL 5.6)
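The server-id convention above (last IP octet + port) can be computed mechanically; a small sketch using only shell parameter expansion:

```shell
# derive a server-id from the last octet of the node's IP plus its port,
# following the article's convention (192.168.17.177:3306 -> 1773306)
ip=192.168.17.177
port=3306
last_octet=${ip##*.}             # strip everything up to the last dot
server_id="${last_octet}${port}"
echo "$server_id"                # prints 1773306
```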

2. Create the replication user on both nodes
mysql> grant replication slave on *.* to 'rep'@'192.168.17.%' identified by 'rep';

If the slave should also be able to run "LOAD TABLE FROM MASTER" or "LOAD DATA FROM MASTER" (statements removed in MySQL 5.5 and later), the account additionally needs the global FILE and SELECT privileges:
mysql> GRANT FILE,SELECT,REPLICATION SLAVE ON *.* TO 'rep'@'%' IDENTIFIED BY 'rep';

3. Set up the initial data
If both are brand-new, empty instances, simply reset the binlog on each side:
mysql> reset master;

(Otherwise, export the master's databases with mysqldump — or pack up the datadir directly — and copy the result to the slave host; roughly:

① Lock the tables:
mysql> flush tables with read lock;

② From the shell (not the mysql prompt), either dump and copy the SQL:
# mysqldump -uroot -p --all-databases -l -F > full_db.sql
# scp full_db.sql [email protected]:/data/mysql/percona_3306/data

or pack and copy the datadir:
# cd /data/mysql/percona_3306
# tar zcvf data.tar.gz ./data
# scp data.tar.gz [email protected]

③ Release the lock:
mysql> unlock tables;

④ On the slave, import the master's data, either from the shell:
# mysql -uroot -p < /data/mysql/percona_3306/data/full_db.sql
or from the mysql prompt:
mysql> source /data/mysql/percona_3306/data/full_db.sql
)

4. Check the binlog status on the master
mysql> show master status\G

5. On the slave, point replication at the master's binlog file and position

mysql> change master to master_host='192.168.17.177',master_user='rep',master_password='rep',
master_log_file='percona-bin.000001',master_log_pos=120;

6. Start the slave
mysql> start slave;

This failed with an error:
Slave I/O: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server UUIDs; these UUIDs must be different for replication to work. Error_code: 1593

The slave was created by cloning the master's virtual-machine image, so the two servers ended up with the same UUID. The UUID is stored in the file
/data/mysql/percona_3306/data/auto.cnf

Either delete that file (mysqld regenerates it on restart) or edit it so the UUID differs from the master's. Once that is fixed, the slave status should look like this:
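Instead of hand-editing auto.cnf, a fresh UUID can be generated and printed in the file's format (a sketch; the datadir path is the one used in this article, and /proc/sys/kernel/random/uuid is Linux-specific):

```shell
# generate a new server UUID and print it in auto.cnf's [auto] format;
# in practice, redirect this into /data/mysql/percona_3306/data/auto.cnf
# with mysqld stopped, then restart mysqld
uuid=$(cat /proc/sys/kernel/random/uuid)
printf '[auto]\nserver-uuid=%s\n' "$uuid"
```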

(testing)root@localhost [(none)]> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.17.177
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: percona-bin.000011
Read_Master_Log_Pos: 540
Relay_Log_File: node78-relay-bin.000018
Relay_Log_Pos: 285
Relay_Master_Log_File: percona-bin.000011
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 540
Relay_Log_Space: 459
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 773306
Master_UUID: 917ecbfc-10dc-11e4-b624-080027267b03
Master_Info_File: /data/mysql/percona_3306/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0

7. Test
On the master, insert a row into table tb_zlm of the test database zlm:

(testing)root@localhost [(none)]> select * from zlm.tb_zlm;
+------+-----------+
| id | name |
+------+-----------+
| 1 | aaron8219 |
| 2 | zlm |
| 3 | abc |
+------+-----------+
3 rows in set (0.00 sec)

(testing)root@localhost [(none)]> insert into zlm.tb_zlm values(4,'def');
Query OK, 1 row affected (0.03 sec)

Query the zlm.tb_zlm table on the slave:

(testing)root@localhost [(none)]> select * from zlm.tb_zlm;
+------+-----------+
| id | name |
+------+-----------+
| 1 | aaron8219 |
| 2 | zlm |
| 3 | abc |
| 4 | def |
+------+-----------+
4 rows in set (0.00 sec)

Apply the same configuration on the other node to complete the mutual master-slave setup: simply run CHANGE MASTER there, starting from the binlog position of its corresponding master.
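Concretely, the reverse direction on zlm177 could look like this sketch (the binlog file name and position below are illustrative placeholders — read the real values from SHOW MASTER STATUS on zlm188 first):

```sql
-- on zlm177, after SHOW MASTER STATUS on zlm188 reported its current
-- binlog file and position (values here are placeholders)
CHANGE MASTER TO
    MASTER_HOST='192.168.17.188',
    MASTER_USER='rep',
    MASTER_PASSWORD='rep',
    MASTER_LOG_FILE='percona-bin.000001',
    MASTER_LOG_POS=120;
START SLAVE;
```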

Conclusion: as long as at least one machine in the configured VRRP group has the mysqld and keepalived processes running, any database connection made through the VIP keeps working, so the loss of either node does not affect MySQL availability. This is about the simplest possible MySQL high-availability architecture. That completes the keepalived-based mutual master-slave hot-standby setup; going one step further and installing ipvsadm to configure real virtual servers would give the classic keepalived+LVS architecture.
