                     -------------------
                          HAProxy
                     Architecture Guide
                     -------------------
                        version 1.2.18
                        willy tarreau
                          2008/05/25

This document provides real world examples with working configurations.
Please note that, except where stated otherwise, global configuration
parameters such as logging, chrooting, limits and time-outs are not described
here.

===================================================
1. Simple HTTP load-balancing with cookie insertion
===================================================

A web application often saturates the front-end server with high CPU load,
due to the scripting language involved. It also relies on a back-end database
which is not much loaded. User contexts are stored on the server itself, and
not in the database, so simply adding another server with plain IP/TCP
load-balancing would not work.

              +-------+
              |clients|  clients and/or reverse-proxy
              +---+---+
                  |
                 -+-----+--------+----
                        |       _|_db
                     +--+--+   (___)
                     | web |   (___)
                     +-----+   (___)
                 192.168.1.1   192.168.1.2

Replacing the web server with a bigger SMP system would cost much more than
adding low-cost pizza boxes. The solution is to buy N cheap boxes and install
the application on them. Install haproxy on the old one which will spread the
load across the new boxes.

                  192.168.1.1    192.168.1.11-192.168.1.14   192.168.1.2
   -------+-----------+-----+-----+-----+--------+----
          |           |     |     |     |       _|_db
       +--+--+      +-+-+ +-+-+ +-+-+ +-+-+    (___)
       | LB1 |      | A | | B | | C | | D |    (___)
       +-----+      +---+ +---+ +---+ +---+    (___)
       haproxy      4 cheap web servers

Config on haproxy (LB1) :
-------------------------

    listen webfarm 192.168.1.1:80
       mode http
       balance roundrobin
       cookie SERVERID insert indirect
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check
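
As stated in the introduction, global parameters (logging, limits, timeouts)
are not detailed in these examples. As a reminder only, here is a minimal
sketch of what typically surrounds such a listen section; the log target and
all values below are purely illustrative and must be adapted to your site :

    global
        log 127.0.0.1 local0
        maxconn 4096
        daemon

    # and inside each listen section :
        contimeout  5000
        clitimeout 50000
        srvtimeout 50000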

Description :
-------------
 - LB1 will receive clients' requests.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, a cookie "SERVERID" will be inserted in the response, holding
   the server name (eg: "A").
 - when the client comes again with the cookie "SERVERID=A", LB1 will know
   that it must be forwarded to server A. The cookie will be removed so that
   the server does not see it.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.

Flows :
-------

(client)                        (haproxy)                       (server A)
  >-- GET /URI1 HTTP/1.0 ------------> |
          ( no cookie, haproxy forwards in load-balancing mode. )
                                       | >-- GET /URI1 HTTP/1.0 ---------->
                                       | <-- HTTP/1.0 200 OK -------------<
          ( the proxy now adds the server cookie in return )
  <-- HTTP/1.0 200 OK ---------------< |
      Set-Cookie: SERVERID=A           |
  >-- GET /URI2 HTTP/1.0 ------------> |
      Cookie: SERVERID=A               |
      ( the proxy sees the cookie. it forwards to server A and deletes it )
                                       | >-- GET /URI2 HTTP/1.0 ---------->
                                       | <-- HTTP/1.0 200 OK -------------<
      ( the proxy does not add the cookie in return because the client
        knows it )
  <-- HTTP/1.0 200 OK ---------------< |
  >-- GET /URI3 HTTP/1.0 ------------> |
      Cookie: SERVERID=A               |
                                    ( ... )

Limits :
--------
 - if clients use keep-alive (HTTP/1.1), only the first response will have
   a cookie inserted, and only the first request of each session will be
   analyzed. This does not cause trouble in insertion mode because the cookie
   is put immediately in the first response, and the session is maintained to
   the same server for all subsequent requests in the same session. However,
   the cookie will not be removed from the requests forwarded to the servers,
   so the server must not be sensitive to unknown cookies. If this causes
   trouble, you can disable keep-alive by adding the following option :

        option httpclose

 - if for some reason the clients cannot learn more than one cookie (eg: the
   clients are indeed some home-made applications or gateways), and the
   application already produces a cookie, you can use the "prefix" mode (see
   below).

 - LB1 becomes a critical single point of failure. If LB1 dies, nothing works
   anymore.
   => you can back it up using keepalived (see below)

 - if the application needs to log the original client's IP, use the
   "forwardfor" option which will add an "X-Forwarded-For" header with the
   original client's IP address. You must also use "httpclose" to ensure
   that every request gets rewritten, and not only the first request of each
   session :

        option httpclose
        option forwardfor

   The web server will have to be configured to use this header instead.
   For example, on apache, you can use LogFormat for this :

        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b " combined
        CustomLog /var/log/httpd/access_log combined

Hints :
-------
Sometimes on the internet, you will find a few percent of clients who disable
cookies in their browser. Obviously they have trouble everywhere on the web,
but you can still help them access your site by using the "source" balancing
algorithm instead of "roundrobin". It ensures that a given IP address always
reaches the same server as long as the number of servers remains unchanged.
Never use this behind a proxy or in a small network, because the distribution
will be unfair. However, in large internal networks, and on the internet, it
works quite well. Clients which have a dynamic address will not be affected
as long as they accept the cookie, because the cookie always has precedence
over load balancing :

    listen webfarm 192.168.1.1:80
       mode http
       balance source
       cookie SERVERID insert indirect
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check

==================================================================
2. HTTP load-balancing with cookie prefixing and high availability
==================================================================

Now you don't want to add more cookies, but rather use existing ones. The
application already generates a "JSESSIONID" cookie which is enough to track
sessions, so we'll prefix this cookie with the server name when we see it.
Since the load-balancer becomes critical, it will be backed up with a second
one in VRRP mode using keepalived under Linux.

Download the latest version of keepalived from this site and install it
on each load-balancer LB1 and LB2 :

http://www.keepalived.org/

You then have a shared IP between the two load-balancers (we will still use the
original IP). It is active only on one of them at any moment. To allow the
proxy to bind to the shared IP on Linux 2.4, you must enable it in /proc :

    # echo 1 >/proc/sys/net/ipv4/ip_nonlocal_bind
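
To make this setting persistent across reboots, it can also be applied via
sysctl. This is only a sketch; the exact file location depends on your
distribution :

    # echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
    # sysctl -p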

             shared IP=192.168.1.1
  192.168.1.3  192.168.1.4    192.168.1.11-192.168.1.14   192.168.1.2
  -------+------------+-----------+-----+-----+-----+--------+----
         |            |           |     |     |     |       _|_db
      +--+--+      +--+--+      +-+-+ +-+-+ +-+-+ +-+-+    (___)
      | LB1 |      | LB2 |      | A | | B | | C | | D |    (___)
      +-----+      +-----+      +---+ +---+ +---+ +---+    (___)
      haproxy      haproxy      4 cheap web servers
      keepalived   keepalived

Config on both proxies (LB1 and LB2) :
--------------------------------------

    listen webfarm 192.168.1.1:80
       mode http
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check

Notes: the proxy will modify EVERY cookie sent by the client and the server,
so it is important that it can access ALL cookies in ALL requests for each
session. This implies that there is no keep-alive (HTTP/1.1), hence the
"httpclose" option. You can only remove this option if you know for sure
that the client(s) will never use keep-alive (eg: Apache 1.3 in
reverse-proxy mode).

Configuration for keepalived on LB1/LB2 :
-----------------------------------------

    vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
        script "killall -0 haproxy"     # cheaper than pidof
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.1.1
        }
        track_script {
            chk_haproxy
        }
    }
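
The instance above is the master's (LB1). On the backup (LB2), only the state
and the priority normally differ; as a minimal sketch, keeping the same
virtual_router_id, virtual IP and tracked script :

    vrrp_instance VI_1 {
        interface eth0
        state BACKUP
        virtual_router_id 51
        priority 100                    # lower than the master
        virtual_ipaddress {
            192.168.1.1
        }
        track_script {
            chk_haproxy
        }
    }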

Description :
-------------
 - LB1 is VRRP master (keepalived), LB2 is backup. Both monitor the haproxy
   process, and lower their prio if it fails, leading to a failover to the
   other node.
 - LB1 will receive clients' requests on IP 192.168.1.1.
 - both load-balancers send their checks from their native IP.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, if a JSESSIONID cookie is seen, the server name will be
   prefixed into it, followed by a delimiter ('~')
 - when the client comes again with the cookie "JSESSIONID=A~xxx", LB1 will
   know that it must be forwarded to server A. The server name will then be
   extracted from the cookie before it is sent to the server.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.

Flows :
-------

(client)                        (haproxy)                       (server A)
  >-- GET /URI1 HTTP/1.0 ------------> |
          ( no cookie, haproxy forwards in load-balancing mode. )
                                       | >-- GET /URI1 HTTP/1.0 ---------->
                                       |     X-Forwarded-For: 10.1.2.3
                                       | <-- HTTP/1.0 200 OK -------------<
                     ( no cookie, nothing changed )
  <-- HTTP/1.0 200 OK ---------------< |
  >-- GET /URI2 HTTP/1.0 ------------> |
    ( no cookie, haproxy forwards in lb mode, possibly to another server. )
                                       | >-- GET /URI2 HTTP/1.0 ---------->
                                       |     X-Forwarded-For: 10.1.2.3
                                       | <-- HTTP/1.0 200 OK -------------<
                                       |     Set-Cookie: JSESSIONID=123
    ( the cookie is identified, it will be prefixed with the server name )
  <-- HTTP/1.0 200 OK ---------------< |
      Set-Cookie: JSESSIONID=A~123     |
  >-- GET /URI3 HTTP/1.0 ------------> |
      Cookie: JSESSIONID=A~123         |
      ( the proxy sees the cookie, removes the server name and forwards
        to server A which sees the same cookie as it previously sent )
                                       | >-- GET /URI3 HTTP/1.0 ---------->
                                       |     Cookie: JSESSIONID=123
                                       |     X-Forwarded-For: 10.1.2.3
                                       | <-- HTTP/1.0 200 OK -------------<
                     ( no cookie, nothing changed )
  <-- HTTP/1.0 200 OK ---------------< |
                                    ( ... )

Hints :
-------
Sometimes, there will be some powerful servers in the farm, and some smaller
ones. In this situation, it may be desirable to tell haproxy to respect the
difference in performance. Let's consider that WebA and WebB are two old
P3-1.2 GHz machines while WebC and WebD are shiny new Opteron-2.6 GHz ones.
If your application scales with CPU, you may assume a very rough 2.6/1.2
performance ratio between the servers. You can inform haproxy about this
using the "weight" keyword, with values between 1 and 256. It will then
spread the load as smoothly as possible while respecting those ratios :

       server webA 192.168.1.11:80 cookie A weight 12 check
       server webB 192.168.1.12:80 cookie B weight 12 check
       server webC 192.168.1.13:80 cookie C weight 26 check
       server webD 192.168.1.14:80 cookie D weight 26 check

========================================================
2.1 Variations involving external layer 4 load-balancers
========================================================

Instead of using a VRRP-based active/backup solution for the proxies, they
can also be load-balanced by a layer 4 load-balancer (eg: Alteon) which will
also check that the services run fine on both proxies :

              |  VIP=192.168.1.1
         +----+----+
         | Alteon  |
         +----+----+
              |
  192.168.1.3 |  192.168.1.4    192.168.1.11-192.168.1.14   192.168.1.2
  -------+----+-------+-----------+-----+-----+-----+--------+----
         |            |           |     |     |     |       _|_db
      +--+--+      +--+--+      +-+-+ +-+-+ +-+-+ +-+-+    (___)
      | LB1 |      | LB2 |      | A | | B | | C | | D |    (___)
      +-----+      +-----+      +---+ +---+ +---+ +---+    (___)
      haproxy      haproxy      4 cheap web servers

Config on both proxies (LB1 and LB2) :
--------------------------------------

    listen webfarm 0.0.0.0:80
       mode http
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httplog
       option dontlognull
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check

The "dontlognull" option is used to prevent the proxy from logging the health
checks from the Alteon. If a session exchanges no data, then it will not be
logged.

Config on the Alteon :
----------------------

    /c/slb/real 11
       ena
       name "LB1"
       rip 192.168.1.3
    /c/slb/real 12
       ena
       name "LB2"
       rip 192.168.1.4
    /c/slb/group 10
       name "LB1-2"
       metric roundrobin
       health tcp
       add 11
       add 12
    /c/slb/virt 10
       ena
       vip 192.168.1.1
    /c/slb/virt 10/service http
       group 10

Note: the health-check on the Alteon is set to "tcp" to prevent the proxy
from forwarding the connections. It can also be set to "http", but for this
the proxy must specify a "monitor-net" with the Alteons' addresses, so that
the Alteon can really check that the proxies can talk HTTP without forwarding
the connections to the end servers. Check the next section for an example of
how to use monitor-net.

============================================================
2.2 Generic TCP relaying and external layer 4 load-balancers
============================================================

Sometimes it's useful to be able to relay generic TCP protocols (SMTP, TSE,
VNC, etc...), for example to interconnect private networks. The problem comes
when you use external load-balancers which need to send periodic health-checks
to the proxies, because these health-checks get forwarded to the end servers.
The solution is to specify a network which will be dedicated to monitoring
systems and must not lead to a forwarding connection nor to any log, using the
"monitor-net" keyword. Note: this feature expects a version of haproxy greater
than or equal to 1.1.32 or 1.2.6.

                              |  VIP=172.16.1.1  |
                         +----+----+        +----+----+
                         | Alteon1 |        | Alteon2 |
                         +----+----+        +----+----+
          192.168.1.252  |  GW=192.168.1.254  |  192.168.1.253
                         |                    |
                   ------+---+------------+---+----------------->  TSE farm : 192.168.1.10
                 192.168.1.1 |            | 192.168.1.2
                          +--+--+      +--+--+
                          | LB1 |      | LB2 |
                          +-----+      +-----+
                          haproxy      haproxy

Config on both proxies (LB1 and LB2) :
--------------------------------------

    listen tse-proxy
       bind :3389,:1494,:5900  # TSE, ICA and VNC at once.
       mode tcp
       balance roundrobin
       server tse-farm 192.168.1.10
       monitor-net 192.168.1.252/31

The "monitor-net" option instructs the proxies that any connection coming from
192.168.1.252 or 192.168.1.253 will not be logged nor forwarded and will be
closed immediately. The Alteon load-balancers will then see the proxies alive
without disturbing the service.

Config on the Alteon :
----------------------

    /c/l3/if 1
       ena
       addr 192.168.1.252
       mask 255.255.255.0
    /c/slb/real 11
       ena
       name "LB1"
       rip 192.168.1.1
    /c/slb/real 12
       ena
       name "LB2"
       rip 192.168.1.2
    /c/slb/group 10
       name "LB1-2"
       metric roundrobin
       health tcp
       add 11
       add 12
    /c/slb/virt 10
       ena
       vip 172.16.1.1
    /c/slb/virt 10/service 1494
       group 10
    /c/slb/virt 10/service 3389
       group 10
    /c/slb/virt 10/service 5900
       group 10

Special handling of SSL :
-------------------------
Sometimes, you want to send health-checks to remote systems, even in TCP mode,
in order to be able to fail over to a backup server in case the first one is
dead. Of course, you can simply enable TCP health-checks, but it sometimes
happens that intermediate firewalls between the proxies and the remote servers
acknowledge the TCP connection themselves, showing an always-up server. Since
this is generally encountered on long-distance communications, which often
involve SSL, an SSL health-check has been implemented to work around this
issue. It sends SSL Hello messages to the remote server, which in turn replies
with SSL Hello messages. Setting it up is very easy :

    listen tcp-syslog-proxy
       bind :1514       # listen to TCP syslog traffic on this port (SSL)
       mode tcp
       balance roundrobin
       option ssl-hello-chk
       server syslog-prod-site 192.168.1.10 check
       server syslog-back-site 192.168.2.10 check backup

=========================================================
3. Simple HTTP/HTTPS load-balancing with cookie insertion
=========================================================

This is the same context as in example 1 above, but the web
server uses HTTPS.

              +-------+
              |clients|  clients
              +---+---+
                  |
                 -+-----+--------+----
                        |       _|_db
                     +--+--+   (___)
                     | SSL |   (___)
                     | web |   (___)
                     +-----+   (___)
                 192.168.1.1   192.168.1.2

Since haproxy does not handle SSL, this part will have to be extracted from the
servers (freeing even more resources) and installed on the load-balancer
itself. Install haproxy and apache+mod_ssl on the old box, which will spread
the load between the new boxes. Apache will work as an SSL reverse-proxy-cache.
If the application is correctly developed, it might even reduce its load.
However, since there now is a cache between the clients and haproxy, some
security measures must be taken to ensure that inserted cookies will not be
cached.

                  192.168.1.1    192.168.1.11-192.168.1.14   192.168.1.2
   -------+-----------+-----+-----+-----+--------+----
          |           |     |     |     |       _|_db
       +--+--+      +-+-+ +-+-+ +-+-+ +-+-+    (___)
       | LB1 |      | A | | B | | C | | D |    (___)
       +-----+      +---+ +---+ +---+ +---+    (___)
       apache       4 cheap web servers
       mod_ssl
       haproxy

Config on haproxy (LB1) :
-------------------------

    listen 127.0.0.1:8000
       mode http
       balance roundrobin
       cookie SERVERID insert indirect nocache
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check

Description :
-------------
 - apache on LB1 will receive clients' requests on port 443
 - it forwards them to haproxy bound to 127.0.0.1:8000
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, a cookie "SERVERID" will be inserted in the response holding the
   server name (eg: "A"), and a "Cache-control: private" header will be added
   so that apache does not cache any page containing such a cookie.
 - when the client comes again with the cookie "SERVERID=A", LB1 will know that
   it must be forwarded to server A. The cookie will be removed so that the
   server does not see it.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.

Notes :
-------
 - if the cookie works in "prefix" mode, there is no need to add the "nocache"
   option because it is an application cookie which will be modified, and the
   application flags will be preserved.
 - if apache 1.3 is used as a front-end before haproxy, it always disables
   HTTP keep-alive on the back-end, so there is no need for the "httpclose"
   option on haproxy.
 - configure apache to set the X-Forwarded-For header itself, and do not do
   it on haproxy if you need the application to know about the client's IP
   (a minimal front-end sketch follows this list).
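
As an illustration only, a minimal apache front-end for this setup might look
like the sketch below. It assumes apache 2.x with mod_ssl and mod_proxy (which
adds the X-Forwarded-For header itself); the certificate paths are
placeholders, and caching would additionally require mod_cache :

    <VirtualHost 192.168.1.1:443>
        SSLEngine on
        # placeholder certificate paths :
        SSLCertificateFile    /etc/apache/server.crt
        SSLCertificateKeyFile /etc/apache/server.key
        ProxyPass        / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/
    </VirtualHost>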

Flows :
-------

(apache)                        (haproxy)                       (server A)
  >-- GET /URI1 HTTP/1.0 ------------> |
          ( no cookie, haproxy forwards in load-balancing mode. )
                                       | >-- GET /URI1 HTTP/1.0 ---------->
                                       | <-- HTTP/1.0 200 OK -------------<
          ( the proxy now adds the server cookie in return )
  <-- HTTP/1.0 200 OK ---------------< |
      Set-Cookie: SERVERID=A           |
      Cache-Control: private           |
  >-- GET /URI2 HTTP/1.0 ------------> |
      Cookie: SERVERID=A               |
      ( the proxy sees the cookie. it forwards to server A and deletes it )
                                       | >-- GET /URI2 HTTP/1.0 ---------->
                                       | <-- HTTP/1.0 200 OK -------------<
      ( the proxy does not add the cookie in return because the client
        knows it )
  <-- HTTP/1.0 200 OK ---------------< |
  >-- GET /URI3 HTTP/1.0 ------------> |
      Cookie: SERVERID=A               |
                                    ( ... )

========================================
3.1. Alternate solution using Stunnel
========================================

When only SSL is required and cache is not needed, stunnel is a cheaper
solution than Apache+mod_ssl. By default, stunnel does not process HTTP and
does not add any X-Forwarded-For header, but there is a patch on the official
haproxy site to provide this feature to recent stunnel versions.

This time, stunnel will only process HTTPS and not HTTP. This means that
haproxy will get all HTTP traffic, so haproxy will have to add the
X-Forwarded-For header for HTTP traffic, but not for HTTPS traffic since
stunnel will already have done it. We will use the "except" keyword to tell
haproxy that connections from the local host already have a valid header.

                  192.168.1.1    192.168.1.11-192.168.1.14   192.168.1.2
   -------+-----------+-----+-----+-----+--------+----
          |           |     |     |     |       _|_db
       +--+--+      +-+-+ +-+-+ +-+-+ +-+-+    (___)
       | LB1 |      | A | | B | | C | | D |    (___)
       +-----+      +---+ +---+ +---+ +---+    (___)
       stunnel      4 cheap web servers
       haproxy

Config on stunnel (LB1) :
-------------------------

    cert=/etc/stunnel/stunnel.pem
    setuid=stunnel
    setgid=proxy

    socket=l:TCP_NODELAY=1
    socket=r:TCP_NODELAY=1

    [https]
    accept=192.168.1.1:443
    connect=192.168.1.1:80
    xforwardedfor=yes

Config on haproxy (LB1) :
-------------------------

    listen 192.168.1.1:80
       mode http
       balance roundrobin
       option forwardfor except 192.168.1.1
       cookie SERVERID insert indirect nocache
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check

Description :
-------------
 - stunnel on LB1 will receive clients' requests on port 443
 - it forwards them to haproxy bound to port 80
 - haproxy will receive HTTP client requests on port 80 and decrypted SSL
   requests from stunnel on the same port.
 - stunnel will add the X-Forwarded-For header
 - haproxy will add the X-Forwarded-For header for everyone except the local
   address (stunnel).

========================================
4. Soft-stop for application maintenance
========================================

When an application is spread across several servers, the time needed to
update all instances increases, so the application appears degraded for a
longer period.

HAProxy offers several solutions for this. Although it cannot be reconfigured
without being stopped, nor does it offer any external command, there are other
working solutions.

=========================================
4.1 Soft-stop using a file on the servers
=========================================

This trick is quite common and very simple: put a file on the server which will
be checked by the proxy. When you want to stop the server, first remove this
file. The proxy will see the server as failed, and will not send it any new
sessions, only the old ones if the "persist" option is used. Wait a bit, then
stop the server when it no longer receives any connections.

    listen 192.168.1.1:80
       mode http
       balance roundrobin
       cookie SERVERID insert indirect
       option httpchk HEAD /running HTTP/1.0
       server webA 192.168.1.11:80 cookie A check inter 2000 rise 2 fall 2
       server webB 192.168.1.12:80 cookie B check inter 2000 rise 2 fall 2
       server webC 192.168.1.13:80 cookie C check inter 2000 rise 2 fall 2
       server webD 192.168.1.14:80 cookie D check inter 2000 rise 2 fall 2
       option persist
       redispatch
       contimeout 5000

Description :
-------------
 - every 2 seconds, haproxy will try to access the file "/running" on the
   servers, and declare the server as down after 2 failed attempts (4 seconds).
 - only the servers which respond with a 200 or 3XX response will be used.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - if a request contains a cookie for a failed server, haproxy will insist
   on trying to reach the server anyway, to let the user finish what he was
   doing. ("persist" option)
 - if the server is totally stopped, the connection will fail and the proxy
   will rebalance the client to another server ("redispatch")

Usage on the web servers :
--------------------------
 - to start the server :
     # /etc/init.d/httpd start
     # touch /home/httpd/www/running

 - to soft-stop the server :
     # rm -f /home/httpd/www/running

 - to completely stop the server :
     # /etc/init.d/httpd stop

Limits
------
If the server is totally powered down, the proxy will still try to reach it
for those clients who still have a cookie referencing it, and the connection
attempt will expire after 5 seconds ("contimeout"); only after that will the
client be redispatched to another server. So this mode is only useful for
software updates where the server will suddenly refuse the connection because
the process is stopped. The problem is the same if the server suddenly
crashes: all of its users will be noticeably disturbed.

==================================
4.2 Soft-stop using backup servers
==================================

A better solution which covers every situation is to use backup servers.
Version 1.1.30 fixed a bug which prevented a backup server from sharing
the same cookie as a standard server.

    listen 192.168.1.1:80
       mode http
       balance roundrobin
       redispatch
       cookie SERVERID insert indirect
       option httpchk HEAD / HTTP/1.0
       server webA 192.168.1.11:80 cookie A check port 81 inter 2000
       server webB 192.168.1.12:80 cookie B check port 81 inter 2000
       server webC 192.168.1.13:80 cookie C check port 81 inter 2000
       server webD 192.168.1.14:80 cookie D check port 81 inter 2000

       server bkpA 192.168.1.11:80 cookie A check port 80 inter 2000 backup
       server bkpB 192.168.1.12:80 cookie B check port 80 inter 2000 backup
       server bkpC 192.168.1.13:80 cookie C check port 80 inter 2000 backup
       server bkpD 192.168.1.14:80 cookie D check port 80 inter 2000 backup

Description
-----------
Four servers webA..D are checked on their port 81 every 2 seconds. The same
servers, named bkpA..D, are checked on port 80 and share the exact same
cookies. Those servers will only be used when no other server is available
for the same cookie.

When the web servers are started, only the backup servers are seen as
available. On the web servers, you need to redirect port 81 to local
port 80, either with a local proxy (eg: a simple haproxy tcp instance),
or with iptables (linux) or pf (openbsd). This is because we want the
real web server to reply on this port, and not a fake one. Eg, with
iptables :

     # /etc/init.d/httpd start
     # iptables -t nat -A PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80
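
For pf on OpenBSD, a roughly equivalent redirection might look like the
following sketch (historical "rdr" syntax; recent pf releases use "rdr-to"
instead, and the interface name is a placeholder) :

     # pf.conf excerpt : redirect health-check port 81 to the real server
     rdr on em0 proto tcp from any to any port 81 -> 127.0.0.1 port 80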

A few seconds later, the standard server is seen up and haproxy starts to send
it new requests on its real port 80 (only new users with no cookie, of course).

If a server completely crashes (even if it does not respond at the IP level),
both the standard and backup servers will fail, so clients associated to this
server will be redispatched to other live servers and will lose their sessions.

Now if you want to enter a server into maintenance, simply stop it from
responding on port 81 so that its standard instance will be seen as failed,
but the backup will still work. Users will not notice anything since the
service is still operational :

     # iptables -t nat -D PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80

The health checks on port 81 for this server will quickly fail, and the
standard server will be seen as failed. No new session will be sent to this
server, and existing clients with a valid cookie will still reach it because
the backup server will still be up.

Now wait as long as you want for the old users to stop using the service, and
once you see that the server does not receive any traffic, simply stop it :

     # /etc/init.d/httpd stop

The associated backup server will in turn fail, and if any client still tries
to access this particular server, he will be redispatched to any other valid
server because of the "redispatch" option.

This method has an advantage : you never touch the proxy when doing server
maintenance. The people managing the servers can make them disappear smoothly.

4.2.1 Variations for operating systems without any firewall software
---------------------------------------------------------------------

The downside is that you need a redirection solution on the server just for
the health-checks. If the server OS does not support any firewall software,
this redirection can also be handled by a simple haproxy in tcp mode :

    global
        daemon
        quiet
        pidfile /var/run/haproxy-checks.pid

    listen 0.0.0.0:81
        mode tcp
        dispatch 127.0.0.1:80
        contimeout 1000
        clitimeout 10000
        srvtimeout 10000

To start the web service :

     # /etc/init.d/httpd start
     # haproxy -f /etc/haproxy/haproxy-checks.cfg

To soft-stop the service :

     # kill $(cat /var/run/haproxy-checks.pid)
