Network load balancing is a key technique for improving service availability and performance. On Linux there are several ways to implement it. The main approaches are described below.
The simplest approach uses the iptables NAT table with the statistic match to spread new connections across backends:
# Assume two backend servers, 192.168.1.100 and 192.168.1.101
iptables -A PREROUTING -t nat -p tcp --dport 80 -m state --state NEW \
  -m statistic --mode random --probability 0.5 -j DNAT --to-destination 192.168.1.100:80
# The second rule needs no probability: it catches the remaining 50% of new connections
iptables -A PREROUTING -t nat -p tcp --dport 80 -m state --state NEW \
  -j DNAT --to-destination 192.168.1.101:80
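With more than two backends, the per-rule probabilities are not all equal: each rule only sees traffic the earlier rules did not match, so rule i (0-based) must claim 1/(N-i) of what remains. A minimal Python sketch of the calculation:

```python
def statistic_probabilities(n):
    """--probability values so that n chained iptables DNAT rules
    split new connections evenly: rule i only sees traffic not
    matched by rules 0..i-1, so it must claim 1/(n - i) of it."""
    return [1 / (n - i) for i in range(n)]

# Three backends -> the rules use 1/3, 1/2, and 1.0
# (the last rule can simply omit --probability).
print([round(p, 4) for p in statistic_probabilities(3)])
```

This is why the second rule in the two-server example above carries no `--probability` at all.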
IPVS (the core of LVS) is the L4 load balancer built into the Linux kernel, and it offers very high performance.
# Install the ipvsadm management tool
sudo apt-get install ipvsadm # Debian/Ubuntu
sudo yum install ipvsadm # CentOS/RHEL
# Add a virtual service
ipvsadm -A -t 192.168.1.1:80 -s rr # rr = round-robin scheduling
# Add the real servers
ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.100:80 -g # -g = direct routing (DR) mode
ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.101:80 -g
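The `rr` scheduler above simply hands out real servers in turn. A toy Python sketch of that behavior (server addresses mirror the ipvsadm commands):

```python
from itertools import cycle

def round_robin(servers):
    """Mimic the IPVS 'rr' scheduler: an endless iterator that
    hands out the real servers strictly in turn."""
    return cycle(servers)

scheduler = round_robin(["192.168.1.100", "192.168.1.101"])
picks = [next(scheduler) for _ in range(4)]
print(picks)  # alternates between the two real servers
```

IPVS also ships weighted variants (`wrr`, `lc`, `wlc`, etc.) when backends are not identical.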
Nginx is an excellent L7 load balancer. Example configuration:
http {
    upstream backend {
        # Optional balancing methods: least_conn, ip_hash, hash $request_uri, etc.
        least_conn;
        server 192.168.1.100:80 weight=5;  # weight 5
        server 192.168.1.101:80;
        server 192.168.1.102:80 backup;    # backup server
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
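The `weight=5` parameter skews nginx's default selection toward that server. A sketch of the smooth weighted round-robin idea nginx uses for its default upstream balancing (simplified; not nginx's actual source):

```python
def smooth_wrr(weights, n):
    """Smooth weighted round-robin sketch: on each pick every
    server's running score grows by its configured weight; the
    leader is chosen and its score docked by the total weight,
    which interleaves picks instead of bursting them."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    out = []
    for _ in range(n):
        for s, w in weights.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        out.append(best)
    return out

# Mirrors the upstream block above: weight 5 vs default weight 1
schedule = smooth_wrr({"192.168.1.100": 5, "192.168.1.101": 1}, 6)
print(schedule)
```

Over any 6 consecutive picks the weight-5 server is chosen 5 times, but never 5 times in an unbroken burst.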
HAProxy is a professional-grade load balancer that supports both TCP and HTTP:
frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin  # scheduling algorithms: roundrobin, leastconn, source, ...
    server server1 192.168.1.100:80 check
    server server2 192.168.1.101:80 check
    server server3 192.168.1.102:80 check backup

listen stats  # enable the statistics page
    bind *:8080
    stats enable
    stats uri /haproxy?stats
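Among the algorithms listed for `balance`, `source` gives session affinity: the client address is hashed onto the set of healthy servers, so one client keeps reaching the same backend. A toy Python sketch of that idea (the hash function here is illustrative, not HAProxy's actual one):

```python
import zlib

def balance_source(client_ip, servers):
    """Sketch of HAProxy's 'balance source': hash the client
    address and map it onto the list of healthy servers, giving
    a given client a stable backend without cookies."""
    h = zlib.crc32(client_ip.encode())
    return servers[h % len(servers)]

servers = ["server1", "server2", "server3"]
print(balance_source("10.0.0.7", servers))
```

Note the trade-off: when a server fails its `check` and is removed, the modulo changes and most clients get remapped, which is why consistent hashing (`hash-type consistent`) is often preferred.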
Keepalived can be combined with LVS, or with Nginx/HAProxy, to add high availability:
# Install
sudo apt-get install keepalived # Debian/Ubuntu
sudo yum install keepalived # CentOS/RHEL
Example configuration (/etc/keepalived/keepalived.conf):
vrrp_instance VI_1 {
    state MASTER          # set to BACKUP on the other node
    interface eth0
    virtual_router_id 51
    priority 100          # use a lower value, e.g. 90, on the BACKUP node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.1/24    # virtual IP
    }
}
virtual_server 192.168.1.1 80 {
    delay_loop 6
    lb_algo rr            # scheduling algorithm
    lb_kind DR            # direct routing mode
    protocol TCP
    real_server 192.168.1.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.1.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
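The `priority` values drive the VRRP election: the node advertising the highest priority becomes MASTER and holds the virtual IP, and the others stay BACKUP. A minimal Python sketch (real VRRP breaks priority ties by the higher primary IP address; that detail is omitted here):

```python
def vrrp_master(nodes):
    """Simplified VRRP election: the node with the highest
    priority wins the virtual IP; tie-breaking by IP address
    is left out for brevity."""
    return max(nodes, key=lambda n: n["priority"])

nodes = [
    {"name": "lb1", "priority": 100},  # the MASTER in the config above
    {"name": "lb2", "priority": 90},   # the BACKUP node
]
print(vrrp_master(nodes)["name"])
```

If lb1 stops sending advertisements for long enough, lb2 wins the next election and takes over 192.168.1.1.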
For extreme performance requirements, eBPF/XDP-based techniques can be used:
// XDP program fragment (must be compiled with clang and attached to a NIC).
// hash_func, BACKEND_COUNT, and the tx_port/backend_servers maps are
// placeholders that a complete program would have to define.
SEC("xdp_lb")
int xdp_load_balancer(struct xdp_md *ctx) {
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    // Every header access must be bounds-checked or the verifier rejects the program
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP)) return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end) return XDP_PASS;
    if (iph->protocol != IPPROTO_TCP) return XDP_PASS;

    struct tcphdr *tcph = (void *)iph + iph->ihl * 4;  // honor IP options
    if ((void *)(tcph + 1) > data_end) return XDP_PASS;

    // Simple hash-based load balancing on the source address
    __u32 backend_idx = hash_func(iph->saddr) % BACKEND_COUNT;
    return bpf_redirect_map(&tx_port, backend_servers[backend_idx], 0);
}
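The selection step at the end is just "hash the source address, take the modulo". A toy Python equivalent (the multiplicative hash stands in for the placeholder hash_func in the C fragment):

```python
def pick_backend(saddr, backend_count):
    """Mirror of the XDP snippet's selection step: a deterministic
    32-bit hash of the source address modulo the backend count.
    The multiplicative constant is Knuth's; the real hash_func is
    whatever the XDP program defines."""
    return ((saddr * 2654435761) & 0xFFFFFFFF) % backend_count

# A given source IP (as a 32-bit int) always lands on the same
# backend, so packets of one flow keep hitting one server.
print(pick_backend(0x0A000001, 2))  # 10.0.0.1 with two backends
```

Because the hash depends only on the source address, the mapping is stable per client but reshuffles whenever BACKEND_COUNT changes; production XDP balancers (e.g. with Maglev-style consistent hashing) address that.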
Each approach has its own sweet spot; for a real deployment, choose based on performance requirements, feature requirements, and the existing technology stack.