
MySQL High Availability with MHA: Installation, Deployment, and Failover in Detail

Filed under: MySQL tutorials



 

1. Introduction

1.1 About MHA

MHA (Master High Availability Manager and Tools for MySQL) is a script-based management tool written in Perl by a Japanese MySQL expert. It works only with standard MySQL replication (master/slave) topologies, and its goal is to keep the master highly available.

MHA is a software package for automatic master failover and slave promotion, built on standard MySQL replication (asynchronous or semi-synchronous).

MHA consists of two parts: MHA Manager (the management node) and MHA Node (the data node).

MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slaves. The Manager probes the nodes in the cluster; when it detects a master failure, it automatically promotes the slave with the most recent data to be the new master, then repoints all other slaves at it. The entire failover is transparent to the application.

MHA Node runs on every MySQL server (master, slave, or manager host). It speeds up failover by providing scripts that parse and purge binary and relay logs.

 

1.2 MHA features

1. Master failover in 10-30 s (9-12 s to detect the host failure, 7-10 s to power off the host and avoid split-brain, then a very short time to apply the differential logs).

2. Simple to deploy: no changes to the existing master-slave topology are required (at least three servers, so that a master-slave structure survives the switchover).

3. Supports manual online switchover (e.g. for host hardware maintenance) with very short downtime, roughly 0.5-2 s.

4. Guarantees data consistency across multiple slaves after failover.

5. Fully automated failover and fast recovery of the replication topology (one master, multiple slaves).

6. The recovery process includes: electing the new master, determining the relay-log differences between slaves, applying the necessary statements on the new master, bringing the other slaves up to date, and re-establishing replication.

 

2. How it works

Compared with other HA software, MHA focuses on keeping the replication master highly available. Its distinguishing feature is that it can reconcile the differential relay logs across all slaves so that every slave ends up with consistent data, then promote one of them to be the new master and repoint the rest at it. During a failover, MHA:

- Saves binary log events (binlog events) from the crashed master.

- Identifies the slave with the most recent updates.

- Applies the differential relay logs to the other slaves.

- Applies the binlog events saved from the master.

- Promotes one slave to be the new master.

- Makes the other slaves replicate from the new master.
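The "slave with the most recent updates" in the steps above is the one that has read furthest into the master's binlog. As a rough illustration only (this is not MHA's actual code, and the host/position values are made up), a shell sketch that picks the most advanced slave from `host  Master_Log_File  Read_Master_Log_Pos` triples, as reported by SHOW SLAVE STATUS on each slave:

```shell
#!/bin/sh
# Sort by binlog file name, then numerically by read position; the last
# line is the slave that has read the furthest. Sample data is invented.
cat <<'EOF' | sort -k2,2 -k3,3n | tail -n 1 | awk '{ print $1 }'
192.168.52.130 mysql-bin.000185 120
192.168.52.131 mysql-bin.000185 98
EOF
```

This prints 192.168.52.130, the host with the higher read position in the same binlog file.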

 

 

 

3. The MHA toolset

(1) Manager tools:

- masterha_check_ssh : check the MHA SSH configuration.

- masterha_check_repl : check MySQL replication.

- masterha_manager : start MHA.

- masterha_check_status : check the current MHA run status.

- masterha_master_monitor : monitor whether the master is down.

- masterha_master_switch : control failover (automatic or manual).

- masterha_conf_host : add or remove configured server entries.

(2) Node tools (normally invoked by MHA Manager scripts; no manual operation is needed):

- save_binary_logs : save and copy the master's binary logs.

- apply_diff_relay_logs : identify differential relay log events and apply them to the other slaves.

- filter_mysqlbinlog : strip unnecessary ROLLBACK events (MHA no longer uses this tool).

- purge_relay_logs : purge relay logs (without blocking the SQL thread).
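Because purge_relay_logs does not block the SQL thread, it is commonly run from cron on each slave rather than relying on the server's automatic relay-log purging. The entry below is a hedged example, not from the original article: the schedule, binary path, and log path are assumptions to adapt to your environment (the account matches the one created later in section 6).

```shell
# Hypothetical crontab -e entry on each slave: purge relay logs nightly at 04:00.
# Adjust the account, password handling, and paths to your environment.
0 4 * * * /usr/local/bin/purge_relay_logs --user=manager --password=manager_1234 --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1
```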

 

4. Host layout

IP address       MHA role   MySQL role
192.168.52.129   Node       Master
192.168.52.130   Node       Slave
192.168.52.131   Manager    Slave

 

5. Setting up passwordless SSH

5.1 First, generate a key on 192.168.52.129

ssh-keygen -t dsa -P '' -f id_dsa

id_dsa.pub is the public key and id_dsa the private key. Next, append the public key to the authorized_keys file; this step is required:

cat id_dsa.pub >> authorized_keys
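One practical caveat the article does not mention (an assumption about the target systems' sshd defaults): sshd usually refuses key files that are group- or world-accessible, so tighten permissions after building authorized_keys. The sketch below demonstrates the chmod on a scratch file; on the real servers apply the same to /root/.ssh and /root/.ssh/authorized_keys.

```shell
#!/bin/sh
# Demonstrated on a scratch file; on each server you would run:
#   chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
demo=/tmp/authorized_keys.demo
touch "$demo"
chmod 600 "$demo"
stat -c %a "$demo"   # prints the octal mode
```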

 

 

5.2 Generate a key on 192.168.52.130

ssh-keygen -t dsa -P '' -f id_dsa

cat id_dsa.pub >> authorized_keys

5.3 Generate a key on 192.168.52.131

ssh-keygen -t dsa -P '' -f id_dsa

cat id_dsa.pub >> authorized_keys

 

5.4 Build one shared authorized_keys for all three servers

On 192.168.52.129:

cd /root/.ssh    # copy the keys over from 130 and 131

scp 192.168.52.130:/root/.ssh/id_dsa.pub ./id_dsa.pub.130

scp 192.168.52.131:/root/.ssh/id_dsa.pub ./id_dsa.pub.131

cat id_dsa.pub.130 >> authorized_keys

cat id_dsa.pub.131 >> authorized_keys

 

Inspect the combined key file:

[root@data01 .ssh]# cat authorized_keys

ssh-dssAAAAB3NzaC1kc3MAAACBAKe9oTz+hQ3vAUE+x7s2EIT6RSrlPoD2VHuSsDo+ZmAyv6+DD6/eVhVXrCyqzjQPJa6UI0PYjlPMk2r2wqdvC/YqQaLhFuQmsBQwrVA2xNHwhB3ic+Om44GVoiZFM7+bGAtfhQ9DLK2+sjfaa/oQfuDvTJ2SI/f0oG3wDGmokgdLAAAAFQC/O4R1yX1FxW7+dCKHhV+LQHWWHQAAAIADR5yqZGZCx8VB8Q6kAYg3cuUCCo3gF/hA5bHY+2xO5CcGAD1mq/l5v55QPUlGAH7btdmfip1tiaQ+V3N+Ektf2psM0uap/LKvbV2jQYKc2UZrpfF6R0lG+x9rpkxWMce1TJ4yANGOasjNjUV6Lg0RVDxLiMT4Ja4+edQVduYt2AAAAIBPNfJlyglZ5CwsI+v753tD8WT4GaH8ssNLpIKrH9qJU6OuT9MmniKE1RqZr+e5MCvkUAAXHFPq0DhHQlPKWqhIpVlu0E8Zsn9a5tv728JpIYz1umB5VLo2J5fhpobefD3AhjEHSyaubJgQG4Gu+jdwsR0H21xLNx0VoP8YPbYkAQ==root@data01

ssh-dssAAAAB3NzaC1kc3MAAACBAPtU+mTL9an88U1wfSwxdlHBg9n8eB9l218sXGLHwJyxNJN0pq4iPCLMfRxhM6T30HxUnyhghxBF2XvkAkMEjZ+IuCoA0mwRi1CcYSMh72SXqfRdgE2GpRBZDRRhlCIE5feNswWZdC7fIDmgNHtK5CFmJLcl+9Bkb3kfGqu8JOxbAAAAFQDo2YRMd5ZsfBRvPZcCYWcOsuy2oQAAAIEA4pGH2w7luN9JhJ0V6sDUUySg4R488yloKR/k4yD33qPXTejCiDUKUbanMLs6obQqxpbVvlhRT6cyo/le7PO6H8IzRHjFy65EPL0omn7J06PmHBUMqCn4jXo27EGXlRLavnonUf3tFeaVo7GxXerj71NdBKkQX7e/bgzD4d5v0PMAAACBAIhx1X50hlYzqPEZEDXTJmntLRXsMB20DvngvUcQLlgLozwfaNdJAQuYSGqklXYTquSmsyaTNJsxj8EkKG4XbM/7hHtNbV8KuAMJGT4Dm7jEdiKClyle2tIvVtonYaL41KeZVdn6Lk4lRYIFxpDoQHKXXr+XEFhLjoniT8urPISlroot@data02

ssh-dss AAAAB3NzaC1kc3MAAACBAJtC3j4Gq+zR7adyKFco/1hELblR65Af+Cgi81mfL+sJAFuOqPcFvDANhymBQ9ltH1N2/eDq1CrD0U9tRMxSwBvgiWZW9brkMpl5ix6oJSCBHdyqL6iuREk7CZ3V/y7P2V+TDCc+am5onMWDG1Af9o6CeA7CP92CHaUoltcXd7L7AAAAFQCqpeVqjbEs/lC/J1utfWrpGDxt8QAAAIB1aeB6C3S9t0dU3bEZs06DaooO46foBFMOI7u0w7uEWvj48UQF7W5Y++vjsiARxr6clk2g2T70n0fJmZCtMMiHqD6vyy8OOy8PzdUcQVAUW2GZQ8hn7M1U2GOz2KPo6uUbPiKkXilEfh9YRsyZyxMdmC4raPjPea8sj6favK8RbgAAAIAima6hWfBFsFcE1oh02xk7ydavHGJbHAlxeH9g9glE0FPmzPgWMFkYQnLsyV2z+ouMPFmERUPYzg1k/8Dhoxjp9j4JB6bIcPNtKdlS660NcFLxRtFAhrnzvLOTzXYzeIuZOlE0WcjeQGNpx8JHAef/3WzpHnVmnhbmlkBrZ8X/OQ==root@oraclem1

[root@data01 .ssh]#

 

The authorized_keys file now contains three entries: the public keys for data01 (192.168.52.129), oraclem1 (192.168.52.131), and data02 (192.168.52.130). Copy this authorized_keys file to the same directory on oraclem1 (192.168.52.131) and data02 (192.168.52.130) with scp:

scp authorized_keys 192.168.52.130:/root/.ssh/

scp authorized_keys 192.168.52.131:/root/.ssh/

 

5.5 Verify, as follows:

[root@oraclem1 ~]# ssh 192.168.52.130

Last login: Tue Apr 7 02:40:40 2015 from data01

[root@data02 ~]# ssh 192.168.52.131

Last login: Tue Apr 7 02:40:56 2015 from 192.168.52.131

[root@oraclem1 ~]# ssh 192.168.52.130

Last login: Tue Apr 7 02:41:11 2015 from 192.168.52.131

[root@data02 ~]# ssh 192.168.52.129

Last login: Tue Apr 7 02:40:01 2015 from 192.168.52.131

[root@data01 ~]# ssh 192.168.52.131

Last login: Tue Apr 7 02:41:18 2015 from 192.168.52.130

[root@oraclem1 ~]# ssh 192.168.52.129

Last login: Tue Apr 7 02:41:26 2015 from data02

[root@data01 ~]# ssh 192.168.52.131

Last login: Tue Apr 7 02:41:31 2015 from 192.168.52.129

[root@oraclem1 ~]# ssh 192.168.52.130

Last login: Tue Apr 7 02:41:21 2015 from 192.168.52.131

[root@data02 ~]# ssh 192.168.52.129

Last login: Tue Apr 7 02:41:42 2015 from 192.168.52.131

[root@data01 ~]

OK, the three servers can now ssh into one another without being prompted for a password.

PS: if any pair of hosts cannot log in to each other without a password, the later steps are likely to fail.

 

5.6 Enable login by hostname

On all three servers, edit /etc/hosts, append the following lines, then save and exit:

192.168.52.129 data01

192.168.52.130 data02

192.168.52.131 oraclem1
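Appending the same lines on three servers is easy to get wrong by hand; a small idempotent sketch (our helper, demonstrated on a scratch copy; point `hosts` at /etc/hosts on the real servers) only adds an entry if it is not already present:

```shell
#!/bin/sh
# Append each hosts entry only if the exact line is missing (idempotent).
hosts=/tmp/hosts.demo
: > "$hosts"   # scratch file for the demo; use /etc/hosts on the servers
for entry in "192.168.52.129 data01" "192.168.52.130 data02" "192.168.52.131 oraclem1"; do
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
wc -l < "$hosts"   # three entries, no matter how often the script runs
```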

 

Verify hostname-based login:

[root@data02 ~]# ssh oraclem1

The authenticity of host 'oraclem1 (192.168.52.131)' can't be established.

RSA key fingerprint is e5:f0:ae:e3:14:35:2f:09:1f:88:dd:31:c3:1a:e1:73.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'oraclem1' (RSA) to the list of known hosts.

Last login: Tue Apr 7 02:51:59 2015 from data01

[root@oraclem1 ~]# ssh data02

Last login: Tue Apr 7 02:52:10 2015 from data01

[root@data02 ~]# ssh oraclem1

Last login: Tue Apr 7 02:52:18 2015 from data02

[root@oraclem1 ~]# vim /etc/hosts

[root@oraclem1 ~]# ssh data01

Last login: Tue Apr 7 02:52:06 2015 from data02

[root@data01 ~]# ssh data02

Last login: Tue Apr 7 02:52:21 2015 from oraclem1

[root@data02 ~]# ssh data01

Last login: Tue Apr 7 02:55:13 2015 from oraclem1

[root@data01 ~]#

OK, passwordless login by hostname now works.

 

6. Prepare the MySQL master-slave environment

For the detailed setup, see the earlier blog post: http://blog.csdn.net/mchdba/article/details/44734597

The architecture is one master with two slaves:

Master 192.168.52.129 → slave 192.168.52.130

Master 192.168.52.129 → slave 192.168.52.131

 

 

Create the account MHA will use for management; run this on every MySQL server:

GRANT SUPER, RELOAD, REPLICATION CLIENT, SELECT ON *.* TO manager@'192.168.52.%' IDENTIFIED BY 'manager_1234';

GRANT CREATE, INSERT, UPDATE, DELETE, DROP ON *.* TO manager@'192.168.52.%';

Create the replication account; run this on every MySQL server:

GRANT RELOAD, SUPER, REPLICATION SLAVE ON *.* TO 'repl'@'192.168.52.%' IDENTIFIED BY 'repl_1234';

 

7. Installing MHA

MHA consists of manager and data nodes. The data nodes are the hosts in the existing MySQL replication topology — at least three servers (one master, two slaves), so that a master-slave structure survives a master failover; they only need the node package. The manager server runs the monitoring scripts and is responsible for monitoring and auto-failover; it needs both the node package and the manager package.

To save machines, you can pick an otherwise idle slave in the existing replication topology as the manager server — for example a slave that serves no read traffic and acts only as a candidate master, or one dedicated to backups.

7.1 Install MHA on the data nodes

First install the Perl MySQL driver, then build the node package:

yum -y install perl-DBD-MySQL

tar -zxvpf mha4mysql-node-0.56.tar.gz

cd mha4mysql-node-0.56

perl Makefile.PL

make && make install

 

7.2 Install MHA on the management node

First install the Perl prerequisites:

yum install -y perl-DBD-MySQL

yum install -y perl-Config-Tiny

yum install -y perl-Log-Dispatch

yum install -y perl-Parallel-ForkManager

yum install -y perl-Config-IniFiles

If some of these fail to install, you can download the rpm packages directly:

wget ftp://ftp.muug.mb.ca/mirror/centos/5.10/os/x86_64/CentOS/perl-5.8.8-41.el5.x86_64.rpm

wget ftp://ftp.muug.mb.ca/mirror/centos/6.5/os/x86_64/Packages/compat-db43-4.3.29-15.el6.x86_64.rpm

wget http://downloads.naulinux.ru/pub/NauLinux/6x/i386/sites/School/RPMS/perl-Log-Dispatch-2.27-1.el6.noarch.rpm

wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm

wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Mail-Sender-0.8.16-3.el6.noarch.rpm

wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Mail-Sendmail-0.79-12.el6.noarch.rpm

wget http://mirror.centos.org/centos/6/os/x86_64/Packages/perl-Time-HiRes-1.9721-136.el6.x86_64.rpm

After downloading, install the rpm packages one by one.

If installation still fails after that, try CPAN:

perl -MCPAN -e shell

cpan[1]> install Log::Dispatch

 

Then run perl Makefile.PL to check the Perl environment for MHA, as shown below:

[root@oraclem1 mha4mysql-manager-0.56]#perl Makefile.PL

*** Module::AutoInstall version 1.03

*** Checking for Perl dependencies...

[Core Features]

- DBI ...loaded. (1.609)

- DBD::mysql ...loaded. (4.013)

- Time::HiRes ...loaded. (1.9726)

- Config::Tiny ...loaded. (2.12)

- Log::Dispatch ...loaded. (2.44)

- Parallel::ForkManager ...loaded. (0.7.9)

- MHA::NodeConst ...loaded. (0.56)

*** Module::AutoInstall configuration finished.

Generating a Unix-style Makefile

Writing Makefile for mha4mysql::manager

Writing MYMETA.yml and MYMETA.json

[root@oraclem1 mha4mysql-manager-0.56]#

 

Then unpack and install the manager package:

tar -xvf mha4mysql-manager-0.56.tar.gz

cd mha4mysql-manager-0.56

perl Makefile.PL

make && make install

 

7.3 Edit the manager configuration

On the management node, 192.168.52.129:

[root@data01 mha4mysql-manager-0.56]# vim /etc/masterha/app1.cnf

 

[server default]

manager_workdir=/var/log/masterha/app1

manager_log=/var/log/masterha/app1/manager.log

 

ssh_user=root # account used for passwordless SSH between the nodes
repl_user=repl # MySQL replication account, used to ship binary logs between master and slaves
repl_password=repl_1234
ping_interval=1 # interval, in seconds, between pings used to detect master failure

 

[server1]

hostname=192.168.52.129

candidate_master=1 # prefer this host as the new master when the current master fails

master_binlog_dir=/home/data/mysql/binlog/

 

[server2]

hostname=192.168.52.130

#candidate_master=1

master_binlog_dir=/home/data/mysql/binlog/

 

 

[server3]

hostname=192.168.52.131

#candidate_master=1

master_binlog_dir=/home/data/mysql/binlog/

 

 

#[server4]

#hostname=host4

#no_master=1
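As a quick sanity check on a config like the one above (a sketch of ours, not an MHA tool), the server hostnames can be pulled out with awk before running the checkers, to spot typos in the [serverN] blocks:

```shell
#!/bin/sh
# Extract hostname= values from an MHA-style ini file. The inline sample
# mirrors the app1.cnf layout above; commented-out lines are ignored.
cfg=/tmp/app1_sample.cnf
cat > "$cfg" <<'EOF'
[server default]
ssh_user=root

[server1]
hostname=192.168.52.129
candidate_master=1

[server2]
hostname=192.168.52.130

#[server4]
#hostname=host4
EOF
awk -F= '/^hostname=/ { print $2 }' "$cfg"
```

This lists 192.168.52.129 and 192.168.52.130; the commented #hostname=host4 line is skipped.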

 

7.4 Check SSH with the MHA tool

Check SSH connectivity with masterha_check_ssh --conf=/etc/masterha/app1.cnf, as shown below:

[root@data01 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Tue Apr  7 02:56:12 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Apr  7 02:56:12 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Apr  7 02:56:12 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Apr  7 02:56:12 2015 - [info] Starting SSH connection tests..
Tue Apr  7 02:56:15 2015 - [debug] 
Tue Apr  7 02:56:12 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.129:22) to [email protected](192.168.52.130:22)..
Warning: Permanently added '192.168.52.129' (RSA) to the list of known hosts.
Tue Apr  7 02:56:13 2015 - [debug]   ok.
Tue Apr  7 02:56:13 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.129:22) to [email protected](192.168.52.131:22)..
Tue Apr  7 02:56:15 2015 - [debug]   ok.
Tue Apr  7 02:56:15 2015 - [debug] 
Tue Apr  7 02:56:13 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.130:22) to [email protected](192.168.52.129:22)..
Tue Apr  7 02:56:14 2015 - [debug]   ok.
Tue Apr  7 02:56:14 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.130:22) to [email protected](192.168.52.131:22)..
Tue Apr  7 02:56:15 2015 - [debug]   ok.
Tue Apr  7 02:56:16 2015 - [debug] 
Tue Apr  7 02:56:13 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.131:22) to [email protected](192.168.52.129:22)..
Tue Apr  7 02:56:15 2015 - [debug]   ok.
Tue Apr  7 02:56:15 2015 - [debug]  Connecting via SSH from [email protected](192.168.52.131:22) to [email protected](192.168.52.130:22)..
Tue Apr  7 02:56:16 2015 - [debug]   ok.
Tue Apr  7 02:56:16 2015 - [info] All SSH connection tests passed successfully.
[root@data01 ~]#

 

 

7.5 Check the replication environment with the MHA tool

The check command is masterha_check_repl --conf=/etc/masterha/app1.cnf; the result looks like this:

 [root@oraclem1 ~]#  masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Apr 10 01:02:18 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Apr 10 01:02:18 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Apr 10 01:02:18 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Apr 10 01:02:18 2015 - [info] MHA::MasterMonitor version 0.56.
Fri Apr 10 01:02:18 2015 - [info] Multi-master configuration is detected. Current primary(writable) master is 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 01:02:18 2015 - [info] Master configurations are as below: 
Master 192.168.52.130(192.168.52.130:3306), replicating from 192.168.52.129(192.168.52.129:3306), read-only
Master 192.168.52.129(192.168.52.129:3306), replicating from 192.168.52.130(192.168.52.130:3306)
 
Fri Apr 10 01:02:18 2015 - [info] GTID failover mode = 0
Fri Apr 10 01:02:18 2015 - [info] Dead Servers:
Fri Apr 10 01:02:18 2015 - [info] Alive Servers:
Fri Apr 10 01:02:18 2015 - [info]   192.168.52.129(192.168.52.129:3306)
Fri Apr 10 01:02:18 2015 - [info]   192.168.52.130(192.168.52.130:3306)
Fri Apr 10 01:02:18 2015 - [info]   192.168.52.131(192.168.52.131:3306)
Fri Apr 10 01:02:18 2015 - [info] Alive Slaves:
Fri Apr 10 01:02:18 2015 - [info]   192.168.52.130(192.168.52.130:3306)  Version=5.6.12-log (oldest major version between slaves) log-bin:enabled
Fri Apr 10 01:02:18 2015 - [info]     Replicating from 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 01:02:18 2015 - [info]   192.168.52.131(192.168.52.131:3306)  Version=5.6.12-log (oldest major version between slaves) log-bin:enabled
Fri Apr 10 01:02:18 2015 - [info]     Replicating from 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 01:02:18 2015 - [info] Current Alive Master: 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 01:02:18 2015 - [info] Checking slave configurations..
Fri Apr 10 01:02:18 2015 - [info] Checking replication filtering settings..
Fri Apr 10 01:02:18 2015 - [info]  binlog_do_db= user_db, binlog_ignore_db= information_schema,mysql,performance_schema,test
Fri Apr 10 01:02:18 2015 - [info]  Replication filtering check ok.
Fri Apr 10 01:02:18 2015 - [info] GTID (with auto-pos) is not supported
Fri Apr 10 01:02:18 2015 - [info] Starting SSH connection tests..
Fri Apr 10 01:02:20 2015 - [info] All SSH connection tests passed successfully.
Fri Apr 10 01:02:20 2015 - [info] Checking MHA Node version..
Fri Apr 10 01:02:23 2015 - [info]  Version check ok.
Fri Apr 10 01:02:23 2015 - [info] Checking SSH publickey authentication settings on the current master..
Fri Apr 10 01:02:23 2015 - [info] HealthCheck: SSH to 192.168.52.129 is reachable.
Fri Apr 10 01:02:23 2015 - [info] Master MHA Node version is 0.56.
Fri Apr 10 01:02:23 2015 - [info] Checking recovery script configurations on 192.168.52.129(192.168.52.129:3306)..
Fri Apr 10 01:02:23 2015 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/home/data/mysql/binlog/ --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000183 
Fri Apr 10 01:02:23 2015 - [info]   Connecting to [email protected](192.168.52.129:22).. 
  Creating /var/tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /home/data/mysql/binlog/, up to mysql-bin.000183
Fri Apr 10 01:02:23 2015 - [info] Binlog setting check done.
Fri Apr 10 01:02:23 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Fri Apr 10 01:02:23 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.52.130 --slave_ip=192.168.52.130 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.12-log --manager_version=0.56 --relay_dir=/home/data/mysql/data --current_relay_log=mysqld-relay-bin.000011  --slave_pass=xxx
Fri Apr 10 01:02:23 2015 - [info]   Connecting to [email protected](192.168.52.130:22).. 
  Checking slave recovery environment settings..
    Relay log found at /home/data/mysql/data, up to mysqld-relay-bin.000013
    Temporary relay log file is /home/data/mysql/data/mysqld-relay-bin.000013
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Apr 10 01:02:24 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.52.131 --slave_ip=192.168.52.131 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.12-log --manager_version=0.56 --relay_log_info=/home/data/mysql/data/relay-log.info  --relay_dir=/home/data/mysql/data/  --slave_pass=xxx
Fri Apr 10 01:02:24 2015 - [info]   Connecting to [email protected](192.168.52.131:22).. 
  Checking slave recovery environment settings..
    Opening /home/data/mysql/data/relay-log.info ... ok.
    Relay log found at /home/data/mysql/data, up to mysql-relay-bin.000023
    Temporary relay log file is /home/data/mysql/data/mysql-relay-bin.000023
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri Apr 10 01:02:26 2015 - [info] Slaves settings check done.
Fri Apr 10 01:02:26 2015 - [info] 
192.168.52.129(192.168.52.129:3306) (current master)
 +--192.168.52.130(192.168.52.130:3306)
 +--192.168.52.131(192.168.52.131:3306)
 
Fri Apr 10 01:02:26 2015 - [info] Checking replication health on 192.168.52.130..
Fri Apr 10 01:02:26 2015 - [info]  ok.
Fri Apr 10 01:02:26 2015 - [info] Checking replication health on 192.168.52.131..
Fri Apr 10 01:02:26 2015 - [info]  ok.
Fri Apr 10 01:02:26 2015 - [warning] master_ip_failover_script is not defined.
Fri Apr 10 01:02:26 2015 - [warning] shutdown_script is not defined.
Fri Apr 10 01:02:26 2015 - [info] Got exit code 0 (Not master dead).
 
MySQL Replication Health is OK.
[root@oraclem1 ~]#

 

8. Operating MHA

8.1 Start the manager

nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /logs/mha/app1/manager.log 2>&1 &

Running it in the background looks like this:

 

[root@oraclem1 mha4mysql-manager-0.56]# nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /logs/mha/app1/manager.log 2>&1 &
[1] 8973
[root@oraclem1 mha4mysql-manager-0.56]# 
[root@oraclem1 mha4mysql-manager-0.56]# tail -f /logs/mha/app1/manager.log
Fri Apr 10 02:46:43 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Apr 10 02:46:43 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Apr 10 02:46:43 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..

 

8.2 Check status with masterha_check_status

[root@oraclem1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:8973) is running(0:PING_OK), master:192.168.52.129

[root@oraclem1 ~]#
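The status line lends itself to simple monitoring scripts. A minimal sketch (the function name and the second sample line are ours, not part of MHA) that classifies the output text; in practice you would capture the line with status=$(masterha_check_status --conf=/etc/masterha/app1.cnf):

```shell
#!/bin/sh
# Classify a masterha_check_status output line: healthy iff it reports PING_OK.
is_mha_healthy() {
  case "$1" in
    *PING_OK*) echo healthy ;;
    *)         echo unhealthy ;;
  esac
}
is_mha_healthy "app1 (pid:8973) is running(0:PING_OK), master:192.168.52.129"
is_mha_healthy "app1 is stopped(2:NOT_RUNNING)."
```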

 

Manual operations:

 

 

8.3 Stop the manager

Command: masterha_stop --conf=/etc/masterha/app1.cnf

[root@oraclem1 mha4mysql-manager-0.56]# masterha_stop --conf=/etc/masterha/app1.cnf

Stopped app1 successfully.

[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /logs/mha/app1/manager.log 2>&1

[root@oraclem1 mha4mysql-manager-0.56]#

 

8.4 Testing automatic failover on master crash

On the MySQL master, 52.129, force a kernel crash: echo c > /proc/sysrq-trigger

The result: the master becomes 52.130, and the slave 52.131 is repointed to replicate from the new master 52.130.

The manager completed the switchover automatically.

 

8.5 Manual master switchover

First stop the manager: masterha_stop --conf=/etc/masterha/app1.cnf

Add the crontab -e tasks on the candidate slave and the master.

Then switch the master manually with:

masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=dead --dead_master_host=192.168.52.129

masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.52.130

 

 

First try declaring the old master dead, as shown below:

 [root@oraclem1 ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=dead --dead_master_host=192.168.52.129
--dead_master_ip= is not set. Using 192.168.52.129.
--dead_master_port= is not set. Using 3306.
Fri Apr 10 04:19:36 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Apr 10 04:19:36 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Apr 10 04:19:36 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Apr 10 04:19:36 2015 - [info] MHA::MasterFailover version 0.56.
Fri Apr 10 04:19:36 2015 - [info] Starting master failover.
Fri Apr 10 04:19:36 2015 - [info] 
Fri Apr 10 04:19:36 2015 - [info] * Phase 1: Configuration Check Phase..
Fri Apr 10 04:19:36 2015 - [info] 
Fri Apr 10 04:19:36 2015 - [info] Multi-master configuration is detected. Current primary(writable) master is 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 04:19:36 2015 - [info] Master configurations are as below: 
Master 192.168.52.130(192.168.52.130:3306), replicating from 192.168.52.129(192.168.52.129:3306), read-only
Master 192.168.52.129(192.168.52.129:3306), replicating from 192.168.52.130(192.168.52.130:3306)
 
Fri Apr 10 04:19:36 2015 - [info] GTID failover mode = 0
Fri Apr 10 04:19:36 2015 - [info] Dead Servers:
Fri Apr 10 04:19:36 2015 - [error][/usr/local/share/perl5/MHA/MasterFailover.pm, ln187] None of server is dead. Stop failover.
Fri Apr 10 04:19:36 2015 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR:  at /usr/local/bin/masterha_master_switch line 53
[root@oraclem1 ~]#

 

 

Then switch with the master alive. During the switchover you confirm with YES a few times; at the end, the message "Switching master to 192.168.52.130(192.168.52.130:3306) completed successfully." indicates that the manual switchover succeeded, as shown below:

 [root@oraclem1 ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.52.130
Fri Apr 10 04:28:06 2015 - [info] MHA::MasterRotate version 0.56.
Fri Apr 10 04:28:06 2015 - [info] Starting online master switch..
Fri Apr 10 04:28:06 2015 - [info] 
Fri Apr 10 04:28:06 2015 - [info] * Phase 1: Configuration Check Phase..
Fri Apr 10 04:28:06 2015 - [info] 
Fri Apr 10 04:28:06 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Apr 10 04:28:06 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Apr 10 04:28:06 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Apr 10 04:28:06 2015 - [info] Multi-master configuration is detected. Current primary(writable) master is 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 04:28:06 2015 - [info] Master configurations are as below: 
Master 192.168.52.130(192.168.52.130:3306), replicating from 192.168.52.129(192.168.52.129:3306), read-only
Master 192.168.52.129(192.168.52.129:3306), replicating from 192.168.52.130(192.168.52.130:3306)
 
Fri Apr 10 04:28:06 2015 - [info] GTID failover mode = 0
Fri Apr 10 04:28:06 2015 - [info] Current Alive Master: 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 04:28:06 2015 - [info] Alive Slaves:
Fri Apr 10 04:28:06 2015 - [info]   192.168.52.130(192.168.52.130:3306)  Version=5.6.12-log (oldest major version between slaves) log-bin:enabled
Fri Apr 10 04:28:06 2015 - [info]     Replicating from 192.168.52.129(192.168.52.129:3306)
Fri Apr 10 04:28:06 2015 - [info]   192.168.52.131(192.168.52.131:3306)  Version=5.6.12-log (oldest major version between slaves) log-bin:enabled
Fri Apr 10 04:28:06 2015 - [info]     Replicating from 192.168.52.129(192.168.52.129:3306)
 
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on 192.168.52.129(192.168.52.129:3306)? (YES/no): YES
Fri Apr 10 04:28:09 2015 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Fri Apr 10 04:28:09 2015 - [info]  ok.
Fri Apr 10 04:28:09 2015 - [info] Checking MHA is not monitoring or doing failover..
Fri Apr 10 04:28:09 2015 - [info] Checking replication health on 192.168.52.130..
Fri Apr 10 04:28:09 2015 - [info]  ok.
Fri Apr 10 04:28:09 2015 - [info] Checking replication health on 192.168.52.131..
Fri Apr 10 04:28:09 2015 - [info]  ok.
Fri Apr 10 04:28:09 2015 - [info] 192.168.52.130 can be new master.
Fri Apr 10 04:28:09 2015 - [info] 
From:
192.168.52.129(192.168.52.129:3306) (current master)
 +--192.168.52.130(192.168.52.130:3306)
 +--192.168.52.131(192.168.52.131:3306)
 
To:
192.168.52.130(192.168.52.130:3306) (new master)
 +--192.168.52.131(192.168.52.131:3306)
 
Starting master switch from 192.168.52.129(192.168.52.129:3306) to 192.168.52.130(192.168.52.130:3306)? (yes/NO): yes
Fri Apr 10 04:28:13 2015 - [info] Checking whether 192.168.52.130(192.168.52.130:3306) is ok for the new master..
Fri Apr 10 04:28:13 2015 - [info]  ok.
Fri Apr 10 04:28:13 2015 - [info] ** Phase 1: Configuration Check Phase completed.
Fri Apr 10 04:28:13 2015 - [info] 
Fri Apr 10 04:28:13 2015 - [info] * Phase 2: Rejecting updates Phase..
Fri Apr 10 04:28:13 2015 - [info] 
master_ip_online_change_script is not defined. If you do not disable writes on the current master manually, applications keep writing on the current master. Is it ok to proceed? (yes/NO): yes
Fri Apr 10 04:28:18 2015 - [info] Locking all tables on the orig master to reject updates from everybody (including root):
Fri Apr 10 04:28:18 2015 - [info] Executing FLUSH TABLES WITH READ LOCK..
Fri Apr 10 04:28:18 2015 - [info]  ok.
Fri Apr 10 04:28:18 2015 - [info] Orig master binlog:pos is mysql-bin.000185:120.
Fri Apr 10 04:28:18 2015 - [info]  Waiting to execute all relay logs on 192.168.52.130(192.168.52.130:3306)..
Fri Apr 10 04:28:18 2015 - [info]  master_pos_wait(mysql-bin.000185:120) completed on 192.168.52.130(192.168.52.130:3306). Executed 0 events.
Fri Apr 10 04:28:18 2015 - [info]   done.
Fri Apr 10 04:28:18 2015 - [info] Getting new master's binlog name and position..
Fri Apr 10 04:28:18 2015 - [info]  mysql-bin.000058:578
Fri Apr 10 04:28:18 2015 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.52.130', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000058', MASTER_LOG_POS=578, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Fri Apr 10 04:28:18 2015 - [info] Setting read_only=0 on 192.168.52.130(192.168.52.130:3306)..
Fri Apr 10 04:28:18 2015 - [info]  ok.
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info] * Switching slaves in parallel..
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info] -- Slave switch on host 192.168.52.131(192.168.52.131:3306) started, pid: 14563
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info] Log messages from 192.168.52.131 ...
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info]  Waiting to execute all relay logs on 192.168.52.131(192.168.52.131:3306)..
Fri Apr 10 04:28:18 2015 - [info]  master_pos_wait(mysql-bin.000185:120) completed on 192.168.52.131(192.168.52.131:3306). Executed 0 events.
Fri Apr 10 04:28:18 2015 - [info]   done.
Fri Apr 10 04:28:18 2015 - [info]  Resetting slave 192.168.52.131(192.168.52.131:3306) and starting replication from the new master 192.168.52.130(192.168.52.130:3306)..
Fri Apr 10 04:28:18 2015 - [info]  Executed CHANGE MASTER.
Fri Apr 10 04:28:18 2015 - [info]  Slave started.
Fri Apr 10 04:28:18 2015 - [info] End of log messages from 192.168.52.131 ...
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info] -- Slave switch on host 192.168.52.131(192.168.52.131:3306) succeeded.
Fri Apr 10 04:28:18 2015 - [info] Unlocking all tables on the orig master:
Fri Apr 10 04:28:18 2015 - [info] Executing UNLOCK TABLES..
Fri Apr 10 04:28:18 2015 - [info]  ok.
Fri Apr 10 04:28:18 2015 - [info] All new slave servers switched successfully.
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info] * Phase 5: New master cleanup phase..
Fri Apr 10 04:28:18 2015 - [info] 
Fri Apr 10 04:28:18 2015 - [info]  192.168.52.130: Resetting slave info succeeded.
Fri Apr 10 04:28:18 2015 - [info] Switching master to 192.168.52.130(192.168.52.130:3306) completed successfully.
[root@oraclem1 ~]#

 

PS: after a manual switchover, masterha_check_repl can no longer use the original /etc/masterha/app1.cnf for checks; use a new app2.cnf instead, because app1.cnf still lists the old master and the check would report replication as broken. Generating app2.cnf is simple:

(1) Copy the original app1.cnf to a new app2.cnf:

cp /etc/masterha/app1.cnf /etc/masterha/app2.cnf

(2) Edit app2.cnf and swap the IPs of server1 and server2, i.e. the two hosts involved in the switchover, as shown below:

[server1]

hostname=192.168.52.130

candidate_master=1

master_binlog_dir=/home/data/mysql/binlog/

 

[server2]

hostname=192.168.52.129

#candidate_master=1

#master_binlog_dir=/home/data/mysql/binlog/

(3) Then run masterha_check_repl --conf=/etc/masterha/app2.cnf; the check now confirms that the post-switchover MHA master-slave setup is OK.

 

 

9. Troubleshooting notes

Error 1:

[root@data01 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

Tue Apr 7 22:31:06 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Tue Apr 7 22:31:07 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..

Tue Apr 7 22:31:07 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..

Tue Apr 7 22:31:07 2015 - [info] MHA::MasterMonitor version 0.56.

Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/Server.pm, ln303] Getting relay log directory or current relay logfile from replication table failed on 192.168.52.130(192.168.52.130:3306)!

Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/ServerManager.pm line 315

Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Tue Apr 7 22:31:07 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!

[root@data01 ~]#

 

Fix: on 192.168.52.130, edit the config (vim /etc/my.cnf) and add:

relay-log=/home/data/mysql/binlog/mysql-relay-bin

Then restart MySQL and re-establish the slave connection:

STOP SLAVE;

RESET SLAVE;

CHANGE MASTER TO MASTER_HOST='192.168.52.129', MASTER_USER='repl', MASTER_PASSWORD='repl_1234', MASTER_LOG_FILE='mysql-bin.000178', MASTER_LOG_POS=459;

START SLAVE;

OK, that fixed it.

 

Error 2:

 

[root@data01 perl]# masterha_check_repl --conf=/etc/masterha/app1.cnf

Thu Apr 9 00:54:32 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Thu Apr 9 00:54:32 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..

Thu Apr 9 00:54:32 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..

Thu Apr 9 00:54:32 2015 - [info] MHA::MasterMonitor version 0.56.

Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/Server.pm, ln306] Getting relay log directory or current relay logfile from replication table failed on 192.168.52.130(192.168.52.130:3306)!

Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/ServerManager.pm line 315

Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Thu Apr 9 00:54:32 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!

[root@data01 perl]#

 

Fix:

In /etc/masterha/app1.cnf, both the user and repl_user parameters refer to MySQL accounts that must already exist; here only repl_user had been created and the user account was missing:

user=manager

password=manager_1234

repl_user=repl

repl_password=repl_1234

On every MySQL node, create the 'manager' account that allows the manager to access the database (mainly for SHOW SLAVE STATUS and RESET SLAVE), i.e. run:

GRANT SUPER, RELOAD, REPLICATION CLIENT, SELECT ON *.* TO manager@'192.168.52.%' IDENTIFIED BY 'manager_1234';

 

Error 3:

 

[root@oraclem1 ~]# masterha_check_repl--conf=/etc/masterha/app1.cnf

Thu Apr 9 23:09:05 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Thu Apr 9 23:09:05 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..

Thu Apr 9 23:09:05 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..

Thu Apr 9 23:09:05 2015 - [info] MHA::MasterMonitor version 0.56.

Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln781] Multi-master configuration is detected, but two or more masters are either writable (read-only is not set) or dead! Check configurations for details. Master configurations are as below:

Master 192.168.52.130(192.168.52.130:3306), replicating from 192.168.52.129(192.168.52.129:3306)

Master 192.168.52.129(192.168.52.129:3306), replicating from 192.168.52.130(192.168.52.130:3306)

Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/MasterMonitor.pm line 326

Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Thu Apr 9 23:09:05 2015 - [info] Got exit code 1 (Not master dead).

 

MySQL Replication Health is NOT OK!

[root@oraclem1 ~]#

 

Fix: set the standby master (here 192.168.52.130) to read-only:

mysql> set global read_only=1;

Query OK, 0 rows affected (0.00 sec)

 

mysql>
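MHA accepts a dual-master pair only when exactly one side is writable. That rule can be sketched as a small check (pick_writable and the "host read_only" input format are illustrative, not MHA tooling):

```shell
#!/bin/sh
# Sketch of MHA's multi-master rule: exactly one master may be writable
# (read_only=0); every other master must have read_only=1.
pick_writable() {
  # Input lines: "<host> <read_only value>"
  writable=$(printf '%s\n' "$1" | awk '$2 == 0 {print $1}')
  count=$(printf '%s\n' "$writable" | grep -c .)
  if [ "$count" -eq 1 ]; then
    echo "OK: $writable is the single writable master"
  else
    echo "ERROR: $count writable masters"
  fi
}

# After "set global read_only=1" on 192.168.52.130:
pick_writable "192.168.52.129 0
192.168.52.130 1"
# -> OK: 192.168.52.129 is the single writable master
```

With both masters at read_only=0 the check reports two writable masters, which is exactly the configuration masterha_check_repl rejects above.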

 

Error record 4:

 

Thu Apr 9 23:54:32 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..

Thu Apr 9 23:54:32 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.52.130 --slave_ip=192.168.52.130 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.12-log --manager_version=0.56 --relay_dir=/home/data/mysql/data --current_relay_log=mysqld-relay-bin.000011 --slave_pass=xxx

Thu Apr 9 23:54:32 2015 - [info] Connecting to root@192.168.52.130(192.168.52.130:22)..

Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 106.

mysqlbinlog version command failed with rc 1:0, please verify PATH, LD_LIBRARY_PATH, and client options

at /usr/local/bin/apply_diff_relay_logs line 493

Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!

Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln413] Slave configuration failed.

Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48

Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Thu Apr 9 23:54:32 2015 - [info] Got exit code 1 (Not master dead).

 

MySQL Replication Health is NOT OK!

[root@oraclem1 ~]#

 

Fix: create a mysqlbinlog symlink in the default PATH on the MySQL node:

[root@data02 ~]# type mysqlbinlog

mysqlbinlog is /usr/local/mysql/bin/mysqlbinlog

[root@data02 ~]#

[root@data02 ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog

 

Error record 5:

Thu Apr 9 23:57:24 2015 - [info] Connecting to root@192.168.52.130(192.168.52.130:22)..

Checking slave recovery environment settings..

Relay log found at /home/data/mysql/data, up to mysqld-relay-bin.000013

Temporary relay log file is /home/data/mysql/data/mysqld-relay-bin.000013

Testing mysql connection and privileges..sh: mysql: command not found

mysql command failed with rc 127:0!

at /usr/local/bin/apply_diff_relay_logs line 375

main::check() called at /usr/local/bin/apply_diff_relay_logs line 497

eval {...} called at /usr/local/bin/apply_diff_relay_logs line 475

main::main() called at /usr/local/bin/apply_diff_relay_logs line 120

Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!

Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln413] Slave configuration failed.

Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48

Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Thu Apr 9 23:57:24 2015 - [info] Got exit code 1 (Not master dead).

 

MySQL Replication Health is NOT OK!

 

Fix: create a mysql client symlink in the default PATH as well:

ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
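Error records 4 and 5 share one root cause: the MHA node scripts run over a non-interactive SSH session and resolve mysql and mysqlbinlog through the default PATH. A quick pre-flight check can catch this on every MySQL host before masterha_check_repl does (check_tools is a hypothetical helper, not part of MHA):

```shell
#!/bin/sh
# Report whether each required client binary is resolvable via PATH,
# as the MHA node scripts expect when invoked over SSH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}

# On a MySQL node you would run: check_tools mysql mysqlbinlog
# (shown here with a binary that exists everywhere)
check_tools sh
# -> sh: found
```

Any "MISSING" line means the corresponding symlink from the fixes above still needs to be created on that host.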

Error record 6:

Fri Apr 10 00:58:36 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.52.130 --slave_ip=192.168.52.130 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.12-log --manager_version=0.56 --relay_dir=/home/data/mysql/data --current_relay_log=mysqld-relay-bin.000011 --slave_pass=xxx

Fri Apr 10 00:58:36 2015 - [info] Connecting to root@192.168.52.130(192.168.52.130:22)..

Checking slave recovery environment settings..

Relay log found at /home/data/mysql/data, up to mysqld-relay-bin.000013

Temporary relay log file is /home/data/mysql/data/mysqld-relay-bin.000013

Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.

ERROR 1142 (42000) at line 1: CREATE command denied to user 'manager'@'192.168.52.130' for table 'apply_diff_relay_logs_test'

mysql command failed with rc 1:0!

at /usr/local/bin/apply_diff_relay_logs line 375

main::check() called at /usr/local/bin/apply_diff_relay_logs line 497

eval {...} called at /usr/local/bin/apply_diff_relay_logs line 475

main::main() called at /usr/local/bin/apply_diff_relay_logs line 120

Fri Apr 10 00:58:37 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!

Fri Apr 10 00:58:37 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln413] Slave configuration failed.

Fri Apr 10 00:58:37 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48

Fri Apr 10 00:58:37 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.

Fri Apr 10 00:58:37 2015 - [info] Got exit code 1 (Not master dead).

 

MySQL Replication Health is NOT OK!

Fix:

Run the following GRANT statement:

GRANT CREATE, INSERT, UPDATE, DELETE, DROP ON *.* TO manager@'192.168.52.%';
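Taking error records 2 and 6 together, the manager account needs SUPER, RELOAD, REPLICATION CLIENT and SELECT plus CREATE, INSERT, UPDATE, DELETE and DROP. A small sketch that emits the combined grant set so it can be replayed on every MySQL node (the variable names and gen_grants helper are illustrative):

```shell
#!/bin/sh
# Emit the complete privilege set for the MHA manager account,
# combining the grants from error records 2 and 6.
MGR_USER="manager"
MGR_HOST="192.168.52.%"
MGR_PASS="manager_1234"

gen_grants() {
  cat <<SQL
GRANT SUPER, RELOAD, REPLICATION CLIENT, SELECT ON *.* TO '$MGR_USER'@'$MGR_HOST' IDENTIFIED BY '$MGR_PASS';
GRANT CREATE, INSERT, UPDATE, DELETE, DROP ON *.* TO '$MGR_USER'@'$MGR_HOST';
FLUSH PRIVILEGES;
SQL
}

# Pipe into mysql on each node, e.g.: gen_grants | mysql -uroot -p
gen_grants
```

Applying both grants up front on every node avoids hitting records 2 and 6 one at a time.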
