
ATOL: Piranha & load-balancing

29.4.2008 21:31 | Read: 1196× | Linux | Featured blog

Author: Vladimír Beneš (AToL - PV208, FI.MUNI)

Piranha is part of the Red Hat Cluster Suite and is responsible for IP Load Balancing (IPLB). IPLB provides the ability to load-balance incoming IP network requests across a farm of servers.

IP Load Balancing is based on open source Linux Virtual Server (LVS) technology, with significant Red Hat enhancements. The IPLB cluster appears as one server, but in reality a user from the Web is accessing a group of servers behind a pair of redundant IPLB routers. An IPLB cluster consists of at least two layers. The first layer is composed of a pair of similarly configured Red Hat Enterprise Linux AS or ES systems with Red Hat Cluster Suite installed. One of these nodes acts as the active IPLB router (the other acts as a backup), directing requests from the Internet to the second layer: a pool of servers called real servers. The real servers provide the critical services to the end user while the LVS router balances the load across these servers.

Piranha's Design:

The IPVS code provides the controlling intelligence. It matches incoming network traffic against defined virtual servers and redirects each request to a real server, based on an adaptive scheduling algorithm. The scheduler supports two classes of scheduling, each with a weighted and a non-weighted version. The basic Round Robin scheduler simply rotates between all active real servers. The more complex scheduler, Least Connections, keeps a history of open connections to all real servers and sends new requests to the real server with the fewest open connections.
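To make the two scheduler classes concrete, here is a minimal user-space sketch in Python. It is an illustration of the idea only, not the IPVS kernel code; the class and function names are invented for the example, and the weighted variant follows the usual rule of ranking servers by open connections divided by weight.

    from itertools import cycle

    class RealServer:
        def __init__(self, name, weight=1):
            self.name = name
            self.weight = weight
            self.open_connections = 0  # the director tracks this per server

    def round_robin(servers):
        """Basic Round Robin: simply rotate between all active real servers."""
        return cycle(servers)

    def weighted_least_connections(servers):
        """Weighted Least Connections: choose the server with the fewest
        open connections relative to its configured weight."""
        return min(servers, key=lambda s: s.open_connections / s.weight)

    servers = [RealServer("web1", weight=2), RealServer("web2", weight=1)]
    rr = round_robin(servers)
    print(next(rr).name, next(rr).name)              # web1 web2
    print(weighted_least_connections(servers).name)  # web1 (a tie; first listed wins)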

IPVS supports three types of network configuration: Network Address Translation (NAT), tunneling, and direct routing. NAT requires a public address for the virtual server(s) and a private subnet for the real servers; it then uses IP masquerading for the real servers. Tunneling uses IP encapsulation and reroutes packets to the real servers; this method requires that the real servers support a tunnel device to decapsulate the packets. Direct routing rewrites the IP header information and then resends the packet directly to the real server.
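As a hedged sketch of what the NAT model means for packet addresses, the following Python models the director's rewriting on the inbound and reply paths. The addresses are made-up documentation examples and the packet is reduced to a plain dict, so this is a conceptual illustration, not what the kernel actually does.

    VIRTUAL_IP = "203.0.113.10"              # example public address of the virtual server
    REAL_SERVERS = ["10.0.0.1", "10.0.0.2"]  # example private subnet of real servers

    def nat_inbound(packet, real_server):
        """The director rewrites the destination from the public virtual
        IP to the chosen real server's private address."""
        packet["dst"] = real_server
        return packet

    def nat_outbound(packet):
        """On the reply path the source is rewritten back to the virtual
        IP, so the client only ever sees one server."""
        packet["src"] = VIRTUAL_IP
        return packet

    request = {"src": "198.51.100.7", "dst": VIRTUAL_IP}
    request = nat_inbound(request, REAL_SERVERS[0])   # dst -> 10.0.0.1
    reply = {"src": REAL_SERVERS[0], "dst": "198.51.100.7"}
    reply = nat_outbound(reply)                       # src -> 203.0.113.10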

Each service running on a real server that is routed to as part of a virtual server is monitored by a nanny process running on the active IPVS router. These service monitors follow a two-step process. First, the hardware/network connectivity is checked to ensure that the real server is responding on the network. Second, a connection is made to the port of the real server on which the monitored service is running. Once connected, nanny sends a short header request string and checks that it receives a banner string back. This process is repeated every two seconds. If a configurable amount of time elapses with no successful connects, the real server is assumed dead and is removed from the IPVS routing table. Nanny continues to monitor the real server, and when the service has returned and remained alive for a specified amount of time, the server's place in the IPVS routing table is restored.
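The two-step nanny probe maps naturally onto a small socket loop. The sketch below only illustrates the behaviour described above; it is not Piranha's nanny source, and the request string, expected banner, and timing constants are assumptions standing in for the configurable values.

    import socket
    import time

    def probe(host, port, send_str=b"GET / HTTP/1.0\r\n\r\n", expect=b"HTTP"):
        """One monitoring pass: connect to the service port, send a short
        header request string, and check for the expected banner."""
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(send_str)
                return expect in s.recv(256)
        except OSError:
            return False  # no network response or no service on the port

    def monitor(host, port, interval=2, deadtime=10):
        """Repeat the probe every `interval` seconds; after `deadtime`
        seconds with no successful connect, the real server would be
        removed from the IPVS routing table."""
        last_success = time.monotonic()
        while True:
            if probe(host, port):
                last_success = time.monotonic()
            elif time.monotonic() - last_success > deadtime:
                print(f"{host}:{port} assumed dead; removing from routing table")
            time.sleep(interval)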

The IPVS router is a single point of failure (SPOF), so a hot standby node is supported. When configured with a standby, the inactive machine maintains a current copy of the cluster's configuration file (/etc/lvs.cf) and exchanges heartbeats across the public network with the active IPVS router node. If, after a specified amount of time, the active router fails to respond to heartbeats, the inactive node executes a failover. The failover process consists of recreating the last known IPVS routing table and stealing the virtual IP(s) that the cluster is responsible for. Should the failed node return to life, it will announce its return in the form of heartbeats and will become the new inactive hot standby IPVS router.
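A minimal sketch of the standby node's side of this protocol, assuming UDP heartbeats and two placeholder functions for the failover steps; the port number, timing, and function names are invented for the example, not taken from Piranha:

    import socket
    import time

    HEARTBEAT_PORT = 539   # assumed example port
    DEADTIME = 18          # assumed seconds without heartbeats before failover

    def restore_routing_table():
        """Placeholder: recreate the last known IPVS routing table from
        the local copy of /etc/lvs.cf."""

    def take_over_virtual_ips():
        """Placeholder: bring up the cluster's virtual IP(s) on this node."""

    def standby_loop():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", HEARTBEAT_PORT))
        sock.settimeout(1.0)
        last_beat = time.monotonic()
        while True:
            try:
                sock.recv(64)                 # heartbeat from the active router
                last_beat = time.monotonic()
            except socket.timeout:
                if time.monotonic() - last_beat > DEADTIME:
                    restore_routing_table()   # recreate the routing table
                    take_over_virtual_ips()   # steal the virtual IP(s)
                    return                    # this node is now active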

Currently, Piranha supports only the NAT networking model of the IPVS code. The other significant limitation of Piranha concerns filesystems: at present, the content on the real servers must be static. Any dynamic content must come from other shared filesystems or backend databases. For most web sites this is acceptable, as page content is mostly static, with CGI activities and dynamic content being pulled from databases.

The primary supported services within a Piranha environment are Web and FTP servers. With the NAT model, the real servers can run any operating system on any hardware platform. The dynamic weight adjustment capabilities, however, are not usable on operating systems that do not support rsh (or similar) logins for acquiring CPU loads.
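To illustrate what such a weight adjustment could look like, here is a sketch that pulls a real server's load average over rsh and scales its weight accordingly. The uptime command, the parsing, and the weight formula are all assumptions made for the example, not Piranha's actual mechanism.

    import subprocess

    def load_average(host):
        """Fetch the 1-minute load average by running uptime over rsh;
        uptime output ends in e.g. 'load average: 0.15, 0.10, 0.05'."""
        out = subprocess.run(["rsh", host, "uptime"],
                             capture_output=True, text=True, check=True).stdout
        return float(out.rsplit("load average:", 1)[1].split(",")[0])

    def adjusted_weight(base_weight, load):
        """Hypothetical formula: scale the configured weight down as the
        load rises, never dropping below 1."""
        return max(1, round(base_weight / (1.0 + load)))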

The number of real servers that can be supported is theoretically limitless; however, network limitations can be reached with eight to twelve servers, depending on the network connectivity and the server data types. Static FTP content reaches network limitations sooner than dynamic CGI web content.


Poll

How do you rate this article?
 (83 %)
 (17 %)
 (0 %)
6 votes in total



Comments



29.4.2008 23:36 petr_p | score: 59 | blog: pb
Re: ATOL: Piranha & load-balancing
As I said, I couldn't vote for the Exhaustive level before, so I'm voting Exhaustive now.

However, I have two hints:

This report seems broken somewhere around the definition of the two-level design (the ">>" sequence).

And polling CPU load via rsh? WTF? We have SNMP, which is well supported on most platforms.
