Note: This document covers only how LVS + keepalived implements load balancing, failed-node removal, backend real-server health checks, and email notification on master/backup failover. Firewalls, networking (routing/switching), backend data storage, and internal/external network separation are out of scope.

I. Environment preparation:

1. Operating system:

CentOS6.4-x86_64

2. Software versions:

ipvsadm-1.25-10.el6.x86_64
keepalived-1.2.7-3.el6.x86_64
httpd-2.2.15-26.el6.centos.x86_64

3. Lab topology:

[![1](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/1.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/1.png)
4. Time synchronization:
node1:
[root@node1 ~]# ntpdate 203.117.180.36
[root@node1 ~]# echo "*/10 * * * * root /usr/sbin/ntpdate 203.117.180.36" >> /etc/crontab
node2:
[root@node2 ~]# ntpdate 203.117.180.36
[root@node2 ~]# echo "*/10 * * * * root /usr/sbin/ntpdate 203.117.180.36" >> /etc/crontab
master:
[root@master ~]# ntpdate 203.117.180.36
[root@master ~]# echo "*/10 * * * * root /usr/sbin/ntpdate 203.117.180.36" >> /etc/crontab
slave:
[root@slave ~]# ntpdate 203.117.180.36
[root@slave ~]# echo "*/10 * * * * root /usr/sbin/ntpdate 203.117.180.36" >> /etc/crontab

5. Hostname resolution (each host resolves the others):

node1:
[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.254.201    node1.test.com    node1
192.168.254.202    node2.test.com    node2
node2:
[root@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.254.201    node1.test.com    node1
192.168.254.202    node2.test.com    node2
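As a quick sanity check that each host resolves the others, the lookup logic can be exercised against a hosts-format file. This is a minimal sketch using sample data matching the entries above; on a live host, `getent hosts node1` performs the same lookup against the real /etc/hosts.

```shell
#!/bin/sh
# resolve FILE NAME -> print the IP the hosts file maps NAME to (first match).
resolve() {
    awk -v n="$2" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; exit } }' "$1"
}

# Sample data matching the /etc/hosts entries above.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.254.201    node1.test.com    node1
192.168.254.202    node2.test.com    node2
EOF

resolve /tmp/hosts.sample node1            # -> 192.168.254.201
resolve /tmp/hosts.sample node2.test.com   # -> 192.168.254.202
```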

6. Install yum repositories (run the same two commands on the other three hosts as well; prerequisite: Internet access):

node1:
[root@node1 ~]# rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@node1 ~]# rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

II. **Web** node installation and configuration: install the web service and run realserver.sh, which binds the VIP 192.168.254.200 to lo:0 and suppresses ARP broadcasts; node1/node2 configuration:

[root@node1 ~]# yum install -y httpd
[root@node1 ~]# echo "<h1>Rs1.test.com</h1>" > /var/www/html/index.html
[root@node1 ~]# service httpd start
[root@node1 ~]# chkconfig httpd on
[root@node1 ~]# mkdir script
[root@node1 ~]# vim script/realserver.sh
#-> LVS real-server configuration script, realserver.sh:
#!/bin/bash
# Script to start LVS DR real server.
# description: LVS DR real server

. /etc/rc.d/init.d/functions

VIP=192.168.254.200
host=`/bin/hostname`

case "$1" in
start)
        # Start LVS-DR real server on this machine.
        /sbin/ifconfig lo down
        /sbin/ifconfig lo up
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev lo:0
;;
stop)
        # Stop LVS-DR real server loopback device(s).
        /sbin/ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
;;
status)
        # Status of LVS-DR real server.
        islothere=`/sbin/ifconfig lo:0 | grep $VIP`
        isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
        if [ ! "$islothere" -o ! "$isrothere" ]; then
            # Either the route or the lo:0 device was not found.
            echo "LVS-DR real server Stopped."
        else
            echo "LVS-DR real server Running."
        fi
;;
*)
        # Invalid entry.
        echo "$0: Usage: $0 {start|status|stop}"
        exit 1
;;
esac
[root@node1 ~]# chmod +x script/realserver.sh
[root@node1 ~]# ./script/realserver.sh start

#-> scp this script to node2 and run ./script/realserver.sh start there

#-> If the server reboots, the script would have to be run manually before the service works again, so add realserver.sh to the boot sequence:

[root@node1 ~]# vim /etc/rc.local
/bin/bash /root/script/realserver.sh start
[root@node1 ~]# scp /root/script/realserver.sh 192.168.254.46:/root/script
#-> Result:
[![2](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/2.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/2.png) [![3](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/3.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/3.png)
#-> Client access test:
[![4](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/4.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/4.png)

III. **LVS-DR master/slave** installation and configuration: 3.1. Install **keepalived** and **ipvsadm** on both master and slave:

master:
[root@master ~]# yum install -y ipvsadm keepalived
[root@master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak  #-> back up the config before editing it, in case something goes wrong later
slave:
[root@slave ~]# yum install -y ipvsadm keepalived
[root@slave ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

**3.2. Configuration on the LVS-DR master:**

[root@master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    2399447849@qq.com               #-> alert email address; multiple allowed, one per line
   }
   notification_email_from root@localhost.localdomain
   smtp_server 127.0.0.1            #-> address of the SMTP server
   smtp_connect_timeout 30          #-> timeout for connecting to the SMTP server
   router_id LVS_DEVEL              #-> identifier of this keepalived server, shown in the subject of alert emails
}

vrrp_instance VI_1 {
    state MASTER           #-> role of this keepalived node: MASTER for the primary, BACKUP for the standby
    interface eth0         #-> interface used for HA monitoring
    virtual_router_id 60   #-> virtual router ID, a number unique to each vrrp instance; MASTER and BACKUP of the same vrrp_instance must use the same value
    priority 101           #-> priority; higher numbers win. Within one vrrp_instance the MASTER's priority must be higher than the BACKUP's
    advert_int 1           #-> interval in seconds between MASTER/BACKUP synchronization checks
    authentication {       #-> authentication type and password
        auth_type PASS     #-> authentication type, mainly PASS or AH
        auth_pass 1111     #-> password; MASTER and BACKUP of the same vrrp_instance must share it to communicate
    }

    virtual_ipaddress {    #-> virtual IP addresses; multiple allowed, one per line
        192.168.254.200    #-> the address clients actually access
    }
}

virtual_server 192.168.254.200 80 {    #-> virtual server: VIP and service port, separated by a space
    delay_loop 6                       #-> health-check interval in seconds
    lb_algo rr                         #-> scheduling algorithm; rr is round robin
    lb_kind DR                         #-> LVS forwarding mode: NAT, TUN, or DR
    nat_mask 255.255.255.0
    #persistence_timeout 50            #-> session persistence time in seconds. Very useful for dynamic pages and a good
                                       #-> solution for session sharing in a cluster: a user's requests keep going to the
                                       #-> same node until the persistence time expires. Note this is a maximum idle
                                       #-> timeout: if the user performs no action within 50 seconds, the next request
                                       #-> may be scheduled to a different node.
    protocol TCP                       #-> forwarding protocol, TCP or UDP
    real_server 192.168.254.45 80 {    #-> real-server node 1: real IP and port, separated by a space
        weight 1                       #-> node weight; larger numbers get more traffic. Give higher weights to more
                                       #-> powerful servers and lower weights to weaker ones to use resources sensibly
        HTTP_GET {                     #-> real-server health-check settings (times in seconds)
            url {
              path /
              status_code 200          #-> expected status code
            }
            connect_timeout 3          #-> 3-second connect timeout
            nb_get_retry 3             #-> number of retries
            delay_before_retry 3       #-> delay between retries
        }
    }

    real_server 192.168.254.46 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

#-> The lab topology has two backend real servers, hence the two real_server blocks above
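With `lb_algo rr` and equal weights, the director simply alternates requests between the two real servers. The scheduling idea in miniature (a hypothetical sketch for illustration, not LVS code):

```shell
#!/bin/sh
# Round-robin over the real servers defined in the config above.
servers="192.168.254.45 192.168.254.46"
i=0
pick_rr() {
    set -- $servers            # real-server pool as positional params
    idx=$(( i % $# + 1 ))      # next slot, wrapping around
    i=$(( i + 1 ))
    eval "echo \${$idx}"
}

pick_rr   # -> 192.168.254.45
pick_rr   # -> 192.168.254.46
pick_rr   # -> 192.168.254.45
```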

**3.3. Configuration on the LVS-DR slave:**

[root@master ~]# scp /etc/keepalived/keepalived.conf 192.168.254.48:/etc/keepalived/  #-> copy the master's config to the slave, then adjust it slightly:
[root@slave ~]# vim /etc/keepalived/keepalived.conf

#-> The parameters were all explained above; only the ones that change are annotated here:

! Configuration File for keepalived

global_defs {
   notification_email {
    2399447849@qq.com
   }
   notification_email_from root
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP               #-> this node's keepalived role is BACKUP (standby)
    interface eth0
    virtual_router_id 60
    priority 100               #-> within the same vrrp_instance, the MASTER's priority must be higher than the BACKUP's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.254.200
    }
}

virtual_server 192.168.254.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.254.45 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.254.46 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

**3.4. Start keepalived (watch the logs while it starts):**

master:
[root@master ~]# service keepalived start && tail -f /var/log/messages
...........
...........
Oct 29 21:42:14 master Keepalived_healthcheckers[31358]: Opening file '/etc/keepalived/keepalived.conf'.
Oct 29 21:42:14 master Keepalived_healthcheckers[31358]: Configuration is using : 16384 Bytes
Oct 29 21:42:14 master Keepalived_healthcheckers[31358]: Using LinkWatch kernel netlink reflector...
Oct 29 21:42:14 master Keepalived_healthcheckers[31358]: Activating healthchecker for service [192.168.254.45]:80
Oct 29 21:42:14 master Keepalived_healthcheckers[31358]: Activating healthchecker for service [192.168.254.46]:80
Oct 29 21:42:15 master Keepalived_vrrp[31359]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 29 21:42:16 master Keepalived_vrrp[31359]: VRRP_Instance(VI_1) Entering MASTER STATE  #-> master state
Oct 29 21:42:16 master Keepalived_vrrp[31359]: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 29 21:42:16 master Keepalived_vrrp[31359]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.254.200
Oct 29 21:42:16 master Keepalived_healthcheckers[31358]: Netlink reflector reports IP 192.168.254.200 added
Oct 29 21:42:21 master Keepalived_vrrp[31359]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.254.200
...........
...........
[root@master ~]# ipvsadm -L -n       #-> LVS state
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.254.200:80 rr
  -> 192.168.254.45:80            Route   1      0          0
  -> 192.168.254.46:80            Route   1      0          0
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5d:7d:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.47/24 brd 192.168.254.255 scope global eth0
    inet 192.168.254.200/32 scope global eth0    #-> the VIP is on the master right now
    inet6 fe80::20c:29ff:fe5d:7d94/64 scope link
       valid_lft forever preferred_lft forever
[root@master ~]#
slave:
[root@slave ~]# service keepalived start && tail -f /var/log/messages
...........
...........
Oct 29 21:42:34 slave Keepalived_vrrp[31389]: Opening file '/etc/keepalived/keepalived.conf'.
Oct 29 21:42:34 slave Keepalived_vrrp[31389]: Configuration is using : 62845 Bytes
Oct 29 21:42:34 slave Keepalived_vrrp[31389]: Using LinkWatch kernel netlink reflector...
Oct 29 21:42:34 slave Keepalived_vrrp[31389]: VRRP_Instance(VI_1) Entering BACKUP STATE  #-> backup state
Oct 29 21:42:34 slave Keepalived_vrrp[31389]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Oct 29 21:42:34 slave Keepalived_healthcheckers[31388]: Opening file '/etc/keepalived/keepalived.conf'.
Oct 29 21:42:34 slave Keepalived_healthcheckers[31388]: Configuration is using : 16384 Bytes
Oct 29 21:42:34 slave Keepalived_healthcheckers[31388]: Using LinkWatch kernel netlink reflector...
Oct 29 21:42:34 slave Keepalived_healthcheckers[31388]: Activating healthchecker for service [192.168.254.45]:80
Oct 29 21:42:34 slave Keepalived_healthcheckers[31388]: Activating healthchecker for service [192.168.254.46]:80
...........
...........
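The `ipvsadm -L -n` listing shown above is easy to post-process, for example in a monitoring check. A small sketch that counts the real servers currently present in a captured listing (field layout as in the output above):

```shell
#!/bin/sh
# Count real-server entries ("->" lines) in an ipvsadm -L -n listing.
sample='IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.254.200:80 rr
  -> 192.168.254.45:80            Route   1      0          0
  -> 192.168.254.46:80            Route   1      0          0'

# Skip the header line by excluding the RemoteAddress:Port column title.
active=$(printf '%s\n' "$sample" |
    awk '$1 == "->" && $2 != "RemoteAddress:Port" { n++ } END { print n+0 }')
echo "$active"   # -> 2
```

On a live director the pipeline would read from `ipvsadm -L -n` instead of `$sample`.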

**3.5. Testing:** [![5](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/5.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/5.png) **3.6. Failure simulation:** (1) Stop the web service on node1:

[root@node1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node1 ~]#

(2) Check the alert email: [![6](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/6.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/6.png) (3) Then check the LVS state on the front-end director: [![7](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/7.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/7.png) The entry for the failed real server has clearly been removed. (4) Bring the web service on node1 back up:

[root@node1 ~]# service httpd start
Starting httpd:                                            [  OK  ]
[root@node1 ~]#

(5) Check the alert email: [![8](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/8.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/8.png) (6) Stop keepalived on the master:

[root@master ~]# service keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@master ~]#

(7) Check the slave's state:

[root@slave ~]# ip a
.......
.......
    inet 192.168.254.200/32 scope global eth0     #-> the VIP has moved to the slave, and client access still works!
.......
.......
[root@slave ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.254.200:80 rr
  -> 192.168.254.45:80            Route   1      0          0
  -> 192.168.254.46:80            Route   1      0          0
[root@slave ~]#

**The above demonstrates LVS high availability (HA of the front-end load-balancing director), monitoring of the backend real servers, and email alerts when a real server goes down. A few problems remain:**

1. **If all real servers go down, what should happen: let users stare at an unreachable page, or show a friendly notice?**
2. **How do we switch keepalived into maintenance mode?**
3. **How do we email the administrator when keepalived switches between master and backup?**

**IV. LVS + Keepalived follow-ups:** **4.1. What if all real servers go down?** If every real server in the cluster fails, clients see an error page, which is unfriendly. We should show a maintenance page telling users the servers are under maintenance and when to come back. There are two approaches: keep a spare real server that serves the maintenance page when everything else is down, which wastes a server; or serve the maintenance page from the load balancer itself, which is more practical and more common. Let's do the latter. (1) Install httpd on master and slave:

[root@master ~]# yum install -y httpd
[root@slave ~]# yum install -y httpd

(2) Provide the maintenance page:

[root@master ~]# echo "Oops ... the page you visited does not exist; the server may be under maintenance" > /var/www/html/index.html
[root@master ~]# service httpd start
Starting httpd:                                            [  OK  ]
[root@slave ~]# echo "Oops ... the page you visited does not exist; the server may be under maintenance" > /var/www/html/index.html
[root@slave ~]# service httpd start
Starting httpd:                                            [  OK  ]

(3) Test: [![9](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/9.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/9.png) (4) Modify the keepalived configuration on master/slave:

[root@master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    2399447849@qq.com
   }
   notification_email_from root
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 60
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.254.200
    }
}

virtual_server 192.168.254.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    #persistence_timeout 50
    protocol TCP

    real_server 192.168.254.45 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.254.46 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    sorry_server 127.0.0.1 80    #-> add this parameter; it must be added on the slave too, omitted here.
}
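The effect of `sorry_server`: when every real server fails its HTTP_GET check, keepalived removes them all from the virtual service and inserts the sorry server in their place; as soon as one real server recovers, the sorry server is dropped again. The selection logic, sketched in plain shell for illustration:

```shell
#!/bin/sh
# pick_targets SORRY HEALTHY_RS... -> healthy real servers, or SORRY if none remain.
pick_targets() {
    sorry=$1; shift
    if [ $# -gt 0 ]; then
        echo "$@"
    else
        echo "$sorry"
    fi
}

pick_targets 127.0.0.1:80 192.168.254.45:80 192.168.254.46:80  # both healthy
pick_targets 127.0.0.1:80 192.168.254.46:80                    # one node down
pick_targets 127.0.0.1:80                                      # all down -> 127.0.0.1:80
```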

(5) Stop the web service on all real servers and restart keepalived on master/slave:

node1:
[root@node1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node1 ~]#
node2:
[root@node2 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node2 ~]#
master:
[root@master ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@master ~]#
slave:
[root@slave ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@slave ~]#

(6) Check the LVS state: [![10](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/10.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/10.png) (7) Access test: [![11](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/11.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/11.png) **4.2. How do we switch keepalived into maintenance mode?** When testing failover we usually either stop the keepalived service or take the NIC down by hand; there is another way to switch for maintenance: the vrrp_script feature. (1) master/slave configuration **(note: the master's configuration is shown here; the added lines must also be added on the slave)**:

[root@master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    2399447849@qq.com
   }
   notification_email_from root
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_schedown {       #-> define a vrrp tracking script
   script "[ -e /etc/keepalived/down ] && exit 1 || exit 0"  #-> if the down file exists, enter maintenance mode
   interval 1         #-> check interval
   weight -5          #-> lower the priority (the priority parameter) on failure
   fall 2             #-> failures before the check is considered down
   rise 1             #-> successes before it is considered up again
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 60
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.254.200
    }

    track_script {             #-> track the script
        chk_schedown           #-> the vrrp script defined above
    }
}

virtual_server 192.168.254.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    #persistence_timeout 50
    protocol TCP

    real_server 192.168.254.45 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.254.46 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    sorry_server 127.0.0.1 80
}
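The failover that chk_schedown triggers is pure priority arithmetic: while the tracked script is failing, keepalived adds `weight` (here -5) to the instance priority, and the node with the higher effective priority holds the VIP. A sketch of that decision with the numbers from this setup:

```shell
#!/bin/sh
# effective_priority BASE WEIGHT FAILING(0|1) -> priority after the tracked script's effect.
effective_priority() {
    if [ "$3" -eq 1 ]; then
        echo $(( $1 + $2 ))    # weight is negative here, so failure lowers priority
    else
        echo "$1"
    fi
}

master=$(effective_priority 101 -5 1)   # down file present on the master
backup=$(effective_priority 100 -5 0)   # backup is healthy
echo "$master $backup"                  # -> 96 100
[ "$master" -lt "$backup" ] && echo "VIP moves to the backup"
```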

(2) Test:

master:
[root@master keepalived]# touch down  #-> create a down file to enter maintenance mode
[root@master keepalived]# ll
total 4
-rw-r--r--. 1 root root    0 Oct 30 00:16 down
-rw-r--r--. 1 root root 1513 Oct 30 00:08 keepalived.conf
[root@master keepalived]# tail -f /var/log/messages
.......
.......
Oct 30 00:16:43 node3 Keepalived_vrrp[31993]: VRRP_Script(chk_schedown) failed
Oct 30 00:16:44 node3 Keepalived_vrrp[31993]: VRRP_Instance(VI_1) Received higher prio advert
Oct 30 00:16:44 node3 Keepalived_vrrp[31993]: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 30 00:16:44 node3 Keepalived_vrrp[31993]: VRRP_Instance(VI_1) removing protocol VIPs.
Oct 30 00:16:44 node3 Keepalived_healthcheckers[31992]: Netlink reflector reports IP 192.168.254.200 removed  #-> the VIP has moved to the slave
.......
.......
[root@master keepalived]# ip a   #-> the VIP has moved to the slave
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5d:7d:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.47/24 brd 192.168.254.255 scope global eth0
    inet6 fe80::20c:29ff:fe5d:7d94/64 scope link
       valid_lft forever preferred_lft forever
[root@master keepalived]#
slave:
[root@slave keepalived]# ip a
.......
.......
    inet 192.168.254.200/32 scope global eth0   #-> the VIP has arrived here
.......
.......
[root@slave keepalived]#

**Our custom check script is done and maintenance-mode switching works. One last problem remains:** **4.3. How do we email the administrator when keepalived switches between master and backup?** (1) A fuller example of a keepalived notification script:

The script below accepts options:
-s, --service SERVICE,...: names of service scripts to start, restart, or stop automatically on a state change;
-a, --address VIP: the VIP of the virtual router concerned;
-m, --mode {mm|mb}: the virtual-router model, mm for master/master, mb for master/backup; this describes how the VIP works for the service in question;
-n, --notify {master|backup|fault}: the notification type, i.e. the target role of the vrrp transition;
-h, --help: show usage help.

#!/bin/bash
# Author: Tux
# description: An example of notify script
# Usage: notify.sh -m|--mode {mm|mb} -s|--service SERVICE1,... -a|--address VIP -n|--notify {master|backup|fault} -h|--help

helpflag=0
serviceflag=0
modeflag=0
addressflag=0
notifyflag=0

contact='2399447849@qq.com'   #-> recipients; several may be given, separated by ","

Usage() {
    echo "Usage: notify.sh [-m|--mode {mm|mb}] [-s|--service SERVICE1,...] <-a|--address VIP> <-n|--notify {master|backup|fault}>"
    echo "Usage: notify.sh -h|--help"
}
########################################################################################
ParseOptions() {
    local I=1
    if [ $# -gt 0 ]; then
        while [ $I -le $# ]; do
            case $1 in
            -s|--service)
                [ $# -lt 2 ] && return 3
                serviceflag=1
                services=(`echo $2 | awk -F"," '{for(i=1;i<=NF;i++) print $i}'`)
                shift 2
                ;;
            -h|--help)
                helpflag=1
                return 0
                ;;
            -a|--address)
                [ $# -lt 2 ] && return 3
                addressflag=1
                vip=$2
                shift 2
                ;;
            -m|--mode)
                [ $# -lt 2 ] && return 3
                mode=$2
                shift 2
                ;;
            -n|--notify)
                [ $# -lt 2 ] && return 3
                notifyflag=1
                notify=$2
                shift 2
                ;;
            *)
                echo "Wrong options..."
                Usage
                return 7
                ;;
            esac
        done
        return 0
    fi
}

#workspace=$(dirname $0)

RestartService() {
    if [ ${#@} -gt 0 ]; then
        for I in $@; do
            if [ -x /etc/rc.d/init.d/$I ]; then
                /etc/rc.d/init.d/$I restart
            else
                echo "$I is not a valid service..."
            fi
        done
    fi
}

StopService() {
    if [ ${#@} -gt 0 ]; then
        for I in $@; do
            if [ -x /etc/rc.d/init.d/$I ]; then
                /etc/rc.d/init.d/$I stop
            else
                echo "$I is not a valid service..."
            fi
        done
    fi
}

Notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`, vrrp transition, `hostname` changed to be $1."
    echo $mailbody | mail -s "$mailsubject" $contact
}

# Main Function
ParseOptions $@
[ $? -ne 0 ] && Usage && exit 5

[ $helpflag -eq 1 ] && Usage && exit 0

if [ $addressflag -ne 1 -o $notifyflag -ne 1 ]; then
    Usage
    exit 2
fi

mode=${mode:-mb}

case $notify in
'master')
    if [ $serviceflag -eq 1 ]; then
        RestartService ${services[*]}
    fi
    Notify master
    ;;
'backup')
    if [ $serviceflag -eq 1 ]; then
        if [ "$mode" == 'mb' ]; then
            StopService ${services[*]}
        else
            RestartService ${services[*]}
        fi
    fi
    Notify backup
    ;;
'fault')
    Notify fault
    ;;
*)
    Usage
    exit 4
    ;;
esac

(2) In keepalived.conf it is invoked as follows:

notify_master "/etc/keepalived/notify.sh -n master -a VIP_address"
notify_backup "/etc/keepalived/notify.sh -n backup -a VIP_address"
notify_fault "/etc/keepalived/notify.sh -n fault -a VIP_address"
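For reference, this is the mail format that Notify() in the script above produces on a transition. A standalone sketch: the hostname and timestamp are stubbed with made-up values so the format is reproducible.

```shell
#!/bin/sh
# Rebuild the subject/body exactly as notify.sh's Notify() does.
vip=192.168.254.200
host=master.test.com              # stand-in for `hostname`
now='2014-10-30 00:16:44'         # stand-in for `date '+%F %H:%M:%S'`
role=backup                       # the $1 that keepalived's notify_backup passes

mailsubject="$host to be $role: $vip floating"
mailbody="$now, vrrp transition, $host changed to be $role."

echo "$mailsubject"   # -> master.test.com to be backup: 192.168.254.200 floating
echo "$mailbody"
```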

(3) Modify the keepalived configuration on master/slave:

master:
[root@master ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
    2399447849@qq.com
   }
   notification_email_from root
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_schedown {
   script "[ -e /etc/keepalived/down ] && exit 1 || exit 0"
   interval 1
   weight -5
   fall 2
   rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 60
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.254.200
    }

    track_script {
        chk_schedown
    }

    #-> add the following three lines (note: add them on the slave as well, omitted here)
    notify_master "/etc/keepalived/notify.sh -n master -a 192.168.254.200"
    notify_backup "/etc/keepalived/notify.sh -n backup -a 192.168.254.200"
    notify_fault "/etc/keepalived/notify.sh -n fault -a 192.168.254.200"
}

virtual_server 192.168.254.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    #persistence_timeout 50
    protocol TCP

    real_server 192.168.254.45 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.254.46 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    sorry_server 127.0.0.1 80
}

(4) Add the script: place the script above in /etc/keepalived/ on both master and slave (mind the permissions):

[root@master ~]# ll /etc/keepalived
total 8
-rw-r--r--. 1 root root 1748 Oct 30 17:08 keepalived.conf
-rwxr-xr-x. 1 root root 2380 Oct 30 00:57 notify.sh
[root@master ~]#

#-> copy it to the slave
[root@master ~]# scp /etc/keepalived/notify.sh 192.168.254.48:/etc/keepalived/

(5) Test that the script works:

[root@slave keepalived]# ./notify.sh --help
Usage: notify.sh [-m|--mode {mm|mb}] [-s|--service SERVICE1,...] <-a|--address VIP> <-n|--notify {master|backup|fault}>
Usage: notify.sh -h|--help
[root@slave keepalived]# ./notify.sh -m mb -a 2.2.2.2 -n master
[root@slave keepalived]#

Check the mail: [![12](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/12.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/12.png) Restart keepalived before simulating the failure, so the earlier experiments don't interfere. Note: the mail now arrives, so the notification script works. (6) Failure simulation: <1> First restart keepalived on both master and backup:

master:
[root@master ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@master ~]#
slave:
[root@slave ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@slave ~]#

<2> Under normal conditions the VIP is now on the master:

master:
[root@master ~]# ip a
.......
.......
    inet 192.168.254.200/32 scope global eth0
.......
.......
[root@master ~]#

<3> touch a file named "down" in /etc/keepalived on the master:

master:
[root@master keepalived]# touch down

<4> Watch the VIP move:

slave:
[root@slave ~]# ip a
.......
.......
    inet 192.168.254.200/32 scope global eth0
.......
.......
[root@slave ~]#

<5> Check the result, the alert email: [![13](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/13.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/13.png) <6> Client access test: [![14](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/14.png)](https://qcloud.coding.net/u/guomaoqiu/p/guomaoqiu/git/raw/master/uploads/2014/10/14.png)
As the screenshots show, when keepalived switches between master and backup it not only sends mail, but service access also keeps working. This concludes the basic LVS + Keepalived demonstration!