
LVS Installation and Configuration: Load Balancing Setup, Upgrade, and Migration in Detail

1. LVS Overview and Environment Planning

LVS (Linux Virtual Server) is a high-performance, highly available load-balancing solution for server clusters, originally developed by Wensong Zhang. LVS runs in kernel space, which gives it very high throughput, and it supports multiple scheduling algorithms and forwarding modes.

1.1 LVS Version Notes

LVS is built into the Linux kernel, so no separate installation is required. This tutorial uses IPVS 1.2.1, the version shipped with the RHEL 8 kernel shown below.

# Check the kernel version
# uname -r
4.18.0-477.27.1.el8_8.x86_64

# List the available IPVS modules (modprobe -l no longer exists on RHEL 8)
# ls -1 /lib/modules/$(uname -r)/kernel/net/netfilter/ipvs/
ip_vs.ko.xz
ip_vs_dh.ko.xz
ip_vs_ftp.ko.xz
ip_vs_lblc.ko.xz
ip_vs_lblcr.ko.xz
ip_vs_lc.ko.xz
ip_vs_nq.ko.xz
ip_vs_pe_sip.ko.xz
ip_vs_rr.ko.xz
ip_vs_sed.ko.xz
ip_vs_sh.ko.xz
ip_vs_wlc.ko.xz
ip_vs_wrr.ko.xz

# Load the core LVS module
# modprobe ip_vs
# lsmod | grep ip_vs
ip_vs 172032 0
nf_conntrack 172032 1 ip_vs

1.2 Environment Planning

The environment for this installation is planned as follows:

LVS Director node 1:
Hostname: lvs01.fgedu.net.cn
IP address: 192.168.1.51
VIP: 192.168.1.100

LVS Director node 2:
Hostname: lvs02.fgedu.net.cn
IP address: 192.168.1.52
VIP: 192.168.1.100

Real Server node 1:
Hostname: web01.fgedu.net.cn
IP address: 192.168.1.53

Real Server node 2:
Hostname: web02.fgedu.net.cn
IP address: 192.168.1.54

Real Server node 3:
Hostname: web03.fgedu.net.cn
IP address: 192.168.1.55

Forwarding mode: DR
Scheduling algorithm: wrr (weighted round-robin)

1.3 Core LVS Features

Key features:
1. High performance: runs in kernel space with very high packet-processing capacity
2. High availability: failover via Keepalived
3. Multiple forwarding modes: NAT, DR, and TUN
4. Multiple scheduling algorithms: rr, wrr, lc, wlc, and more
5. Protocol support: TCP, UDP, SCTP
6. Persistence: persistent connections and session affinity
7. Health checks: implemented together with Keepalived

Forwarding mode comparison:
DR mode: highest performance; requires the Director and Real Servers on the same network segment
NAT mode: simplest to configure and works across subnets, but lower performance
TUN mode: works across subnets; requires IP tunneling support

2. Hardware Requirements and Checks

Before installing LVS, give the server hardware environment a thorough check.

2.1 Minimum Hardware Requirements

Minimum Director node configuration:
CPU: 2 cores
Memory: 2 GB
NIC: 1 Gbps

Recommended Director node configuration (production):
CPU: 4+ cores
Memory: 4 GB+
NIC: 10 Gbps

Real Server node configuration:
Size according to the workload
Should sit on the same network segment as the Director (DR mode)

2.2 System Environment Checks

# Check the OS version
# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.8 (Ootpa)

# Check the kernel version
# uname -r
4.18.0-477.27.1.el8_8.x86_64

# Check memory
# free -h
total used free shared buff/cache available
Mem: 15Gi 1.0Gi 13Gi 256Mi 1.0Gi 14Gi
Swap: 7Gi 0B 7Gi

# Check the network configuration
# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:01:51 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.51/24 brd 192.168.1.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fea3:151/64 scope link
valid_lft forever preferred_lft forever

2.3 Installing Dependencies

# Install the ipvsadm management tool
# yum install -y ipvsadm

# Sample output:
Last metadata expiration check: 0:00:00 ago on Sat Apr 4 10:00:00 2026.
Dependencies resolved.
Installed:
ipvsadm-1.31-1.el8.x86_64

Complete!

# Install Keepalived
# yum install -y keepalived

# Sample output:
Installed:
keepalived-2.2.8-1.el8.x86_64

Complete!

# Verify the installation
$ ipvsadm --version
ipvsadm v1.31 2019/12/24 (compiled with popt and IPVS v1.2.1)

$ keepalived -v
Keepalived v2.2.8 (04/04,2026)

3. LVS Installation Steps

This section walks through the LVS installation in detail.

3.1 Loading the Kernel Modules

# Load the ip_vs module
# modprobe ip_vs

# Load the scheduling-algorithm modules
# modprobe ip_vs_rr
# modprobe ip_vs_wrr
# modprobe ip_vs_wlc

# Verify that the modules are loaded
# lsmod | grep ip_vs
ip_vs_rr 16384 0
ip_vs_wrr 16384 0
ip_vs_wlc 16384 0
ip_vs 172032 6 ip_vs_wlc,ip_vs_rr,ip_vs_wrr
nf_conntrack 172032 1 ip_vs

# Load the modules automatically at boot
# vi /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_wlc

# Enable the module-loading service
# systemctl enable systemd-modules-load

3.2 Configuring System Parameters

# Configure kernel parameters
# vi /etc/sysctl.d/99-lvs.conf

# Enable IP forwarding
net.ipv4.ip_forward = 1

# Suppress ARP responses (required for DR mode)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

# Enlarge the connection-tracking table
net.netfilter.nf_conntrack_max = 1048576
net.nf_conntrack_max = 1048576

# Widen the local port range
net.ipv4.ip_local_port_range = 1024 65535

# Tune TCP parameters
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_max_syn_backlog = 65535
net.core.somaxconn = 65535

# Apply the configuration
# sysctl -p /etc/sysctl.d/99-lvs.conf

# Sample output:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.netfilter.nf_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_max_syn_backlog = 65535
net.core.somaxconn = 65535

3.3 Configuring ipvsadm

# Flush any existing rules
# ipvsadm -C

# List the current rules
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

# Create the virtual service (VIP:80)
# ipvsadm -A -t 192.168.1.100:80 -s wrr

# Add the Real Servers
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.53:80 -g -w 3
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.54:80 -g -w 2
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.55:80 -g -w 1

# Check the result
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.100:80 wrr
-> 192.168.1.53:80 Route 3 0 0
-> 192.168.1.54:80 Route 2 0 0
-> 192.168.1.55:80 Route 1 0 0

# Save the rules
# ipvsadm-save > /etc/sysconfig/ipvsadm

# Enable the ipvsadm service
# systemctl enable ipvsadm

Tip: manage LVS rules through Keepalived rather than hand-maintained ipvsadm entries; Keepalived keeps the rules in sync and provides the high-availability layer as well.

4. LVS Parameter Configuration

Parameter configuration is the key step in LVS performance tuning and directly affects system throughput.

4.1 Scheduling Algorithms

# Common scheduling algorithms explained

rr (round-robin):
Distributes requests to each Real Server in turn
Best for: servers with similar capacity

wrr (weighted round-robin):
Distributes requests in proportion to weight; higher-weight servers receive more requests
Best for: servers with different capacity

lc (least-connection):
Sends each request to the server with the fewest current connections
Best for: long-lived connections

wlc (weighted least-connection):
Combines weight and current connection count
Best for: the recommended default in production

sh (source hashing):
Picks the server by hashing the client's source IP
Best for: session affinity
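For the weighted algorithms above, each server's expected long-run share of traffic is simply its weight divided by the total weight. A quick shell sanity check (the `share` helper is my own illustration, not an ipvsadm feature):

```shell
# share WEIGHT TOTAL -> expected percentage of requests under wrr/wlc
share() {
    # shell arithmetic is integer-only, so use awk for the percentage
    awk -v w="$1" -v t="$2" 'BEGIN { printf "%.0f", 100 * w / t }'
}

total=$((3 + 2 + 1))
for w in 3 2 1; do
    echo "weight $w -> $(share "$w" "$total")% of requests"
done
```

With the 3:2:1 weights used throughout this guide, this prints 50%, 33% and 17%, which is the Conns split you should expect to see in the `--stats` output later on.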

# Change the scheduling algorithm
# ipvsadm -E -t 192.168.1.100:80 -s wlc

# Check the result
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.100:80 wlc
-> 192.168.1.53:80 Route 3 0 0
-> 192.168.1.54:80 Route 2 0 0
-> 192.168.1.55:80 Route 1 0 0

4.2 Persistent Connections

# Enable persistence (300 seconds)
# ipvsadm -E -t 192.168.1.100:80 -s wrr -p 300

# Set the persistence granularity netmask
# ipvsadm -E -t 192.168.1.100:80 -s wrr -p 300 -M 255.255.255.255

# Show persistence settings
# ipvsadm -Ln --persistent-conn

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.100:80 wrr persistent 300
-> 192.168.1.53:80 Route 3 0 0
-> 192.168.1.54:80 Route 2 0 0
-> 192.168.1.55:80 Route 1 0 0

# List connection entries (including persistence templates)
# ipvsadm -Ln -c

# Sample output:
IPVS connection entries
pro expire state source virtual destination
TCP 14:59 ESTABLISHED 192.168.1.100:54321 192.168.1.100:80 192.168.1.53:80

4.3 Timeout Parameters

# Show the current timeouts
# ipvsadm -Ln --timeout

# Sample output:
Timeout (tcp tcpfin udp): 900 120 300

# Set the timeouts
# ipvsadm --set 7200 120 300

# Parameter meanings:
tcp: TCP connection timeout (seconds)
tcpfin: timeout after a TCP FIN (seconds)
udp: UDP connection timeout (seconds)

# Verify the new values
# ipvsadm -Ln --timeout

# Sample output:
Timeout (tcp tcpfin udp): 7200 120 300

Production advice: wlc generally gives the best overall behaviour in production. Set the persistence timeout to match the application; 300 seconds is usually enough.

5. DR Mode Configuration

DR (Direct Routing) is the highest-performing LVS forwarding mode; this section walks through its configuration.

5.1 Director Node Configuration

# Configure the VIP on the Director node
# vi /etc/sysconfig/network-scripts/ifcfg-eth0:0

DEVICE=eth0:0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.100
NETMASK=255.255.255.255
BROADCAST=192.168.1.100

# Bring up the VIP
# ifup eth0:0

# Verify the VIP
# ip addr show eth0:0

# Sample output:
3: eth0:0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:01:51 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/32 brd 192.168.1.100 scope global eth0:0
valid_lft forever preferred_lft forever

# Configure the ipvs rules
# ipvsadm -C
# ipvsadm -A -t 192.168.1.100:80 -s wrr
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.53:80 -g -w 3
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.54:80 -g -w 2
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.55:80 -g -w 1

# Save the rules
# ipvsadm-save > /etc/sysconfig/ipvsadm

5.2 Real Server Node Configuration

# Run the following on every Real Server node

# Configure the ARP parameters
# vi /etc/sysctl.d/99-lvs-rs.conf

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

# Apply the configuration
# sysctl -p /etc/sysctl.d/99-lvs-rs.conf

# Bind the VIP to the lo interface
# vi /etc/sysconfig/network-scripts/ifcfg-lo:0

DEVICE=lo:0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.100
NETMASK=255.255.255.255
BROADCAST=192.168.1.100

# Bring up the VIP
# ifup lo:0

# Verify the configuration
# ip addr show lo:0

# Sample output:
1: lo:0: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 192.168.1.100/32 brd 192.168.1.100 scope global lo:0
valid_lft forever preferred_lft forever

# Add a host route for the VIP
# route add -host 192.168.1.100 dev lo:0

# Verify the route
# route -n | grep 192.168.1.100

# Sample output:
192.168.1.100 0.0.0.0 255.255.255.255 UH 0 0 0 lo
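Because the steps above have to be repeated on every Real Server, they are worth scripting. A minimal sketch bundling the ARP sysctls and the loopback VIP (the script name and the DRYRUN convention are my own; run it as root, or preview with DRYRUN=1):

```shell
#!/bin/bash
# lvs_rs_setup.sh - hypothetical helper bundling the DR-mode Real Server
# steps above: ARP sysctls plus binding the VIP to the loopback.
VIP="${VIP:-192.168.1.100}"

run() {
    # with DRYRUN=1, print each command instead of executing it
    if [ "${DRYRUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

rs_setup() {
    # suppress ARP replies for the VIP (DR-mode requirement)
    run sysctl -w net.ipv4.conf.all.arp_ignore=1
    run sysctl -w net.ipv4.conf.all.arp_announce=2
    run sysctl -w net.ipv4.conf.lo.arp_ignore=1
    run sysctl -w net.ipv4.conf.lo.arp_announce=2
    # bind the VIP to lo so this host accepts traffic addressed to it
    run ip addr add "$VIP/32" dev lo
    run ip route add "$VIP" dev lo
}

# apply for real (as root): rs_setup
# preview the commands:     DRYRUN=1 rs_setup
```

Note this configures the running system only; keep the sysctl file and ifcfg-lo:0 from the previous steps so the settings survive a reboot.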

5.3 Verifying DR Mode

# Check connection statistics on the Director node
# ipvsadm -Ln --stats

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 192.168.1.100:80 100 1000 0 50000 0
-> 192.168.1.53:80 50 500 0 25000 0
-> 192.168.1.54:80 33 333 0 16650 0
-> 192.168.1.55:80 17 167 0 8350 0

# Test access through the VIP
$ curl http://192.168.1.100/

# Sample output:
Welcome to fgedu web server on web01

# Access it again
$ curl http://192.168.1.100/

# Sample output:
Welcome to fgedu web server on web02

# Check how connections were distributed
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.100:80 wrr
-> 192.168.1.53:80 Route 3 1 0
-> 192.168.1.54:80 Route 2 1 0
-> 192.168.1.55:80 Route 1 0 0

Tip: DR mode requires the Director and Real Servers to be on the same network segment, and every Real Server must carry the VIP on its lo interface with ARP responses for it suppressed.

6. NAT Mode Configuration

NAT (Network Address Translation) mode is simple to configure and works across subnets; this section walks through its configuration.

6.1 Director Node Configuration

# Two NICs are required
# Internal NIC: eth0 (192.168.1.51)
# External NIC: eth1 (172.16.1.51)

# Configure the VIP (external side)
# vi /etc/sysconfig/network-scripts/ifcfg-eth1:0

DEVICE=eth1:0
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.1.100
NETMASK=255.255.255.0

# Bring up the VIP
# ifup eth1:0

# Enable IP forwarding
# echo 1 > /proc/sys/net/ipv4/ip_forward

# Configure the ipvs rules (NAT mode)
# ipvsadm -C
# ipvsadm -A -t 172.16.1.100:80 -s wrr
# ipvsadm -a -t 172.16.1.100:80 -r 192.168.1.53:80 -m -w 3
# ipvsadm -a -t 172.16.1.100:80 -r 192.168.1.54:80 -m -w 2
# ipvsadm -a -t 172.16.1.100:80 -r 192.168.1.55:80 -m -w 1

# Add the SNAT rule
# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE

# Save the iptables rules (requires the iptables-services package on RHEL 8)
# iptables-save > /etc/sysconfig/iptables

# Check the configuration
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.1.100:80 wrr
-> 192.168.1.53:80 Masq 3 0 0
-> 192.168.1.54:80 Masq 2 0 0
-> 192.168.1.55:80 Masq 1 0 0

6.2 Real Server Node Configuration

# NAT-mode Real Server configuration is simple:
# just point the default gateway at the Director's internal IP

# Set the default gateway
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.1.51

# Restart networking (legacy network-scripts; with NetworkManager use nmcli instead)
# systemctl restart network

# Verify the gateway
# route -n | grep default

# Sample output:
0.0.0.0 192.168.1.51 0.0.0.0 UG 100 0 0 eth0

# Verify network connectivity
$ ping 172.16.1.100

# Sample output:
PING 172.16.1.100 (172.16.1.100) 56(84) bytes of data.
64 bytes from 172.16.1.100: icmp_seq=1 ttl=63 time=0.521 ms

6.3 Verifying NAT Mode

# Test access from the external network
$ curl http://172.16.1.100/

# Sample output:
Welcome to fgedu web server on web01

# Check connection statistics on the Director node
# ipvsadm -Ln --stats

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 172.16.1.100:80 100 2000 1000 100000 50000
-> 192.168.1.53:80 50 1000 500 50000 25000
-> 192.168.1.54:80 33 666 333 33300 16650
-> 192.168.1.55:80 17 334 167 16700 8350

Production advice: NAT mode suits cross-subnet deployments but performs below DR mode. The Director needs two NICs, and every Real Server's default gateway must point at the Director.

7. High Availability with Keepalived

Keepalived adds high availability to LVS by failing the Director role over between nodes.

7.1 Master Node Configuration

# Install Keepalived
# yum install -y keepalived

# Configure Keepalived
# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id LVS_MASTER
    script_user root
    enable_script_security
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 300
    protocol TCP

    real_server 192.168.1.53 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.54 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.55 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

# Start Keepalived
# systemctl start keepalived
# systemctl enable keepalived

# Check the status
# systemctl status keepalived

# Sample output:
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2026-04-04 10:00:00 CST; 1s ago
Main PID: 12345 (keepalived)
Tasks: 3 (limit: 49134)
Memory: 2.1M
CGroup: /system.slice/keepalived.service
├─12345 /usr/sbin/keepalived -D
├─12346 /usr/sbin/keepalived -D
└─12347 /usr/sbin/keepalived -D

7.2 Backup Node Configuration

# Configure the Backup node
# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id LVS_BACKUP
    script_user root
    enable_script_security
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 300
    protocol TCP

    real_server 192.168.1.53 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.54 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.55 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

# Start Keepalived
# systemctl start keepalived
# systemctl enable keepalived

7.3 Verifying High Availability

# Check the VIP on the Master node
# ip addr show eth0 | grep 192.168.1.100

# Sample output:
inet 192.168.1.100/24 scope global secondary eth0

# Check the ipvs rules
# ipvsadm -Ln

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.100:80 wrr persistent 300
-> 192.168.1.53:80 Route 3 0 0
-> 192.168.1.54:80 Route 2 0 0
-> 192.168.1.55:80 Route 1 0 0

# Simulate a Master failure
# systemctl stop keepalived

# On the Backup node, check whether the VIP has moved over
# ip addr show eth0 | grep 192.168.1.100

# Sample output:
inet 192.168.1.100/24 scope global secondary eth0

# Recover the Master node
# systemctl start keepalived

# Watch the logs
# tail -f /var/log/messages | grep Keepalived

# Sample output:
Apr 4 10:00:00 lvs01 Keepalived[12345]: Starting Keepalived v2.2.8
Apr 4 10:00:00 lvs01 Keepalived[12345]: VRRP instance VI_1 entering MASTER STATE
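To watch the failover as it happens, a tiny probe run on both nodes can report which one currently holds the VIP (the script and its output format are my own sketch; VIP and interface are parameters):

```shell
# vip_holder.sh - report whether this node currently holds the VIP
VIP="${VIP:-192.168.1.100}"
IFACE="${IFACE:-eth0}"

holds_vip() {
    # true if the VIP is configured on the interface right now
    ip -4 addr show "$IFACE" 2>/dev/null | grep -q "inet ${VIP}/"
}

check_once() {
    if holds_vip; then
        echo "$(hostname): VIP ${VIP} present"
    else
        echo "$(hostname): VIP ${VIP} absent"
    fi
}

# run it in a loop on both nodes while stopping keepalived on the Master:
#   while true; do check_once; sleep 1; done
```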

Tip: in Keepalived, the higher priority value wins the election. When the Master fails, the Backup node takes over the VIP automatically.

8. Monitoring and Operations

LVS ships with solid monitoring and management facilities; this section covers the day-to-day operations.

8.1 Connection Monitoring

# List the current connections
# ipvsadm -Ln -c

# Sample output:
IPVS connection entries
pro expire state source virtual destination
TCP 14:59 ESTABLISHED 192.168.1.100:54321 192.168.1.100:80 192.168.1.53:80
TCP 14:58 ESTABLISHED 192.168.1.101:54322 192.168.1.100:80 192.168.1.54:80
TCP 14:57 ESTABLISHED 192.168.1.102:54323 192.168.1.100:80 192.168.1.55:80

# Show traffic statistics
# ipvsadm -Ln --stats

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 192.168.1.100:80 100 1000 0 50000 0
-> 192.168.1.53:80 50 500 0 25000 0
-> 192.168.1.54:80 33 333 0 16650 0
-> 192.168.1.55:80 17 167 0 8350 0

# Show rate statistics
# ipvsadm -Ln --rate

# Sample output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port CPS InPPS OutPPS InBPS OutBPS
-> RemoteAddress:Port
TCP 192.168.1.100:80 10 100 0 5000 0
-> 192.168.1.53:80 5 50 0 2500 0
-> 192.168.1.54:80 3 33 0 1665 0
-> 192.168.1.55:80 2 17 0 835 0

8.2 Real Server Management

# Add a Real Server
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.56:80 -g -w 1

# Change a Real Server's weight
# ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.53:80 -g -w 5

# Remove a Real Server
# ipvsadm -d -t 192.168.1.100:80 -r 192.168.1.56:80

# Put a Real Server into maintenance mode (weight 0: no new connections)
# ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.53:80 -g -w 0

# Bring it back
# ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.53:80 -g -w 3

# Flush all rules
# ipvsadm -C

# Save the rules
# ipvsadm-save > /etc/sysconfig/ipvsadm

# Restore the rules
# ipvsadm-restore < /etc/sysconfig/ipvsadm
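The maintenance-mode commands above can be combined into a graceful drain: drop the weight to 0 so no new connections are scheduled, wait for the active connections to finish, then remove the server. A sketch (the `drain_rs` name is mine; it assumes DR mode, hence `-g`, and parses the ActiveConn column of `ipvsadm -Ln`):

```shell
# drain_rs VIP:PORT RS:PORT - gracefully remove a real server (DR mode)
drain_rs() {
    vip="$1"; rs="$2"
    # weight 0: stop scheduling new connections to this server
    ipvsadm -e -t "$vip" -r "$rs" -g -w 0
    # wait until its ActiveConn count (field 5 of ipvsadm -Ln) reaches 0
    while :; do
        active=$(ipvsadm -Ln | awk -v rs="$rs" '$2 == rs { print $5 }')
        [ "${active:-0}" -eq 0 ] && break
        sleep 5
    done
    # now safe to remove it from the pool
    ipvsadm -d -t "$vip" -r "$rs"
}

# usage: drain_rs 192.168.1.100:80 192.168.1.53:80
```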

8.3 Log Management

# Follow the Keepalived log
# tail -f /var/log/messages | grep Keepalived

# Sample output:
Apr 4 10:00:00 lvs01 Keepalived[12345]: VRRP_Instance(VI_1) Entering MASTER STATE
Apr 4 10:00:00 lvs01 Keepalived[12345]: VRRP_Instance(VI_1) setting protocol VIPs.
Apr 4 10:00:00 lvs01 Keepalived[12345]: Sending gratuitous ARP on eth0 for 192.168.1.100

# Configure log rotation
# vi /etc/logrotate.d/keepalived

/var/log/keepalived.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 0640 root root
}

9. Upgrades and Migration

Upgrading and migrating LVS are important operations tasks that call for careful planning and execution.

9.1 Version Upgrade

# Check the current version
$ ipvsadm --version
ipvsadm v1.31 2019/12/24

# Back up the configuration
# cp /etc/sysconfig/ipvsadm /backup/ipvsadm_$(date +%Y%m%d)
# cp -r /etc/keepalived /backup/keepalived_$(date +%Y%m%d)

# Upgrade ipvsadm
# yum update ipvsadm

# Upgrade Keepalived
# yum update keepalived

# Verify the versions
$ ipvsadm --version
ipvsadm v1.31 2019/12/24

$ keepalived -v
Keepalived v2.2.8 (04/04,2026)

# Restart the service
# systemctl restart keepalived
# systemctl restart keepalived

9.2 Configuration Migration

# Export the configuration
# ipvsadm-save > /backup/lvs_rules.txt
# cp /etc/keepalived/keepalived.conf /backup/

# Copy it to the new server
# scp /backup/lvs_rules.txt root@newserver:/backup/
# scp /backup/keepalived.conf root@newserver:/etc/keepalived/

# Import the rules on the new server
# ipvsadm-restore < /backup/lvs_rules.txt

# Start Keepalived
# systemctl start keepalived

Production advice: take a full backup before any upgrade. LVS configuration is backward compatible, but verify the configuration before restarting the services.

10. A Production Case Study

This section walks through a complete production configuration to show how LVS is applied in practice.

10.1 Full Production Configuration

# Production Keepalived configuration (Master)
# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id LVS_MASTER
    script_user root
    enable_script_security
}

# Health-check script
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass fgedu2026
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0 label eth0:0
    }
    track_script {
        check_nginx
    }
}

# HTTP virtual service
virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 300
    protocol TCP

    real_server 192.168.1.53 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.54 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.55 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

# HTTPS virtual service
virtual_server 192.168.1.100 443 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    persistence_timeout 300
    protocol TCP

    real_server 192.168.1.53 443 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 443
        }
    }

    real_server 192.168.1.54 443 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 443
        }
    }

    real_server 192.168.1.55 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 443
        }
    }
}

10.2 Health-Check Script

# Create the health-check script
# vi /etc/keepalived/check_nginx.sh

#!/bin/bash
count=$(ps -C nginx --no-headers | wc -l)
if [ "$count" -eq 0 ]; then
    exit 1
fi
exit 0

# Make it executable
# chmod +x /etc/keepalived/check_nginx.sh

# Test the script
# /etc/keepalived/check_nginx.sh
# echo $?
0
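A process-count check misses a hung nginx whose worker processes are still alive. As an alternative, the check can probe HTTP directly (a sketch: the script name, function, and URL are assumptions; point it at a cheap health endpoint on the local server and uncomment the final call when installing it as the vrrp_script):

```shell
#!/bin/bash
# check_http.sh - alternative keepalived health check: succeed only if
# the local web server actually answers an HTTP request within 2 seconds
check_http() {
    url="${1:-http://127.0.0.1/}"
    # -f: fail on HTTP errors; -sS: quiet but keep error messages
    curl -fsS --max-time 2 -o /dev/null "$url"
}

# keepalived treats exit 0 as healthy, non-zero as failed:
# check_http "http://127.0.0.1/"
```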

10.3 Performance Tuning in Practice

# Operating-system tuning
# vi /etc/sysctl.d/99-lvs.conf

net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.netfilter.nf_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_max_syn_backlog = 65535
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535

# Apply the configuration
# sysctl -p /etc/sysctl.d/99-lvs.conf

# Load test with ApacheBench
$ ab -n 100000 -c 1000 http://192.168.1.100/

# Sample output:
Server Software: nginx/1.24.0
Server Hostname: 192.168.1.100
Server Port: 80

Concurrency Level: 1000
Time taken for tests: 10.000 seconds
Complete requests: 100000
Failed requests: 0
Requests per second: 10000.00 [#/sec] (mean)
Time per request: 100.000 [ms] (mean)

Tip: for best performance run LVS in DR mode; combined with Keepalived for high availability, this is the standard production setup.

This article was compiled and published by the fgedu tutorial team for learning and testing purposes only; when reposting, credit the source: http://www.fgedu.net.cn/10327.html
