Load Balancing

The goal of the load balancing algorithm is to detect and correct overload situations in the network. We expect such situations to be rare in an environment with a dense deployment of access points and numerous available orthogonal channels (e.g., 12 in 802.11a). However, it is important to watch for and correct overload situations if and when they occur.

For example, an overload situation might occur if many clients congregate in a conference room, and the network conditions are such that the algorithm described in Section 4.2 assigns several of them to a single DAP. In such a situation, all clients simultaneously transmitting or receiving data can cause an overload at the DAP.

The load balancing algorithm works as follows. Once every minute, the DC checks all DAPs to see if any are severely overloaded. Recall from Section 4.1.1 that the busy air time (load) calculation incorporates the impact of traffic/interference near the DAP and the downlink traffic generated by the DAP. We consider a DAP to be overloaded if it has at least one client associated with it and reports free air time of less than 20%; in other words, the channel is more than 80% busy in the vicinity of this DAP. The DC considers the DAPs in decreasing order of load. If an overloaded DAP ($A$) is found, the DC considers the clients of $A$ as potential candidates to move to another DAP. Recall that the DAPs send periodic summaries of client traffic to the DC. These summaries include, for each client, a smoothed average of the sum of uplink and downlink traffic load generated by the client during the previous interval. The load is reported in terms of the air time consumed by this client's traffic and the average transmission rate of that traffic.
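To make the overload check concrete, the following Python sketch shows one way the DC's periodic check could be structured. The data structures and field names (ClientReport, DAPReport, free_air_time, and so on) are hypothetical illustrations rather than the paper's implementation; only the 20% free-air-time threshold, the at-least-one-client condition, and the decreasing-load ordering come from the text.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ClientReport:
        client_id: str
        air_time: float      # fraction of air time consumed by this client's traffic
        avg_tx_rate: float   # average transmission rate of the client's traffic (Mbps)

    @dataclass
    class DAPReport:
        dap_id: str
        free_air_time: float                            # reported free air time, 0.0-1.0
        clients: List[ClientReport] = field(default_factory=list)

    FREE_AIR_TIME_THRESHOLD = 0.20   # a DAP is overloaded below 20% free air time

    def find_overloaded_daps(reports: List[DAPReport]) -> List[DAPReport]:
        """Return overloaded DAPs in decreasing order of load
        (i.e., in increasing order of free air time)."""
        overloaded = [r for r in reports
                      if r.clients and r.free_air_time < FREE_AIR_TIME_THRESHOLD]
        return sorted(overloaded, key=lambda r: r.free_air_time)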

For each client $M \in A$, the DC attempts to find a DAP $B$ such that the expected rate $M$ will get at $B$ is no less than the average transmission rate of the client at $A$, and the free air time at $B$ is at least 25% more than the air time consumed by $M$ at $A$. If such a DAP is found, $M$ is moved to $B$ using the process described in Section 3.2. Note that if $B$ had no clients associated with it, the DC also assigns it a channel (the one on which $B$ reported the most free air time), just as it would when associating a new client.
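Continuing the sketch above (and reusing its ClientReport and DAPReport types), the selection of a target DAP for a single client might look as follows. The expected-rate estimate of Section 4.2 is not reproduced here and is passed in as a callable; the 1.25 headroom factor encodes the 25% free-air-time requirement.

    from typing import Callable, Optional

    HEADROOM = 1.25   # B must have at least 25% more free air time than M consumes at A

    def find_target_dap(client: ClientReport,
                        current: DAPReport,
                        candidates: List[DAPReport],
                        expected_rate: Callable[[ClientReport, DAPReport], float]
                        ) -> Optional[DAPReport]:
        """Find a DAP B at which the client is expected to do no worse than at its
        current DAP and which has enough spare air time to absorb the client."""
        for b in candidates:
            if b.dap_id == current.dap_id:
                continue
            if (expected_rate(client, b) >= client.avg_tx_rate and
                    b.free_air_time >= HEADROOM * client.air_time):
                # If B previously had no clients, the DC would also assign B a
                # channel (the one with the most free air time) at this point.
                return b
        return None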

The load balancing algorithm moves at most one client that satisfies the above criteria during each iteration. Furthermore, once a client $M$ has been handed off from $A$ to $B$, it is considered ineligible to participate in the next round of load balancing. These hysteresis techniques are intended to prevent oscillations.
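The one-move-per-iteration rule and the hysteresis on recently handed-off clients could then be layered on top of the two helpers above. The bookkeeping below (a set holding the client moved in the previous round) is again only an illustrative sketch of the described behavior, not the actual controller code.

    from typing import Optional, Set, Tuple

    def balance_round(reports: List[DAPReport],
                      expected_rate: Callable[[ClientReport, DAPReport], float],
                      moved_last_round: Set[str]) -> Optional[Tuple[str, str, str]]:
        """One load-balancing iteration: hand off at most one eligible client and
        record it so that it sits out the next round."""
        for dap in find_overloaded_daps(reports):
            for client in dap.clients:
                if client.client_id in moved_last_round:
                    continue   # handed off in the previous round; ineligible now
                target = find_target_dap(client, dap, reports, expected_rate)
                if target is not None:
                    moved_last_round.clear()
                    moved_last_round.add(client.client_id)
                    return (client.client_id, dap.dap_id, target.dap_id)
        moved_last_round.clear()   # nothing moved; previous movers become eligible again
        return None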

We note a few things about the load balancing algorithm. (i) Our algorithm is conservative. Moving clients from one AP to another is a potentially disruptive event, and we try to minimize how often we force such reassociations to occur. (ii) The load balancing algorithm improves overall system throughput in two ways. First, the client that is moved to the less-loaded AP can ramp up and consume more bandwidth. Second, the clients that stayed with the previously overloaded AP now have one less client to contend with, and they can also increase their throughput. (iii) It is sometimes possible to do load balancing by changing the channel of the overloaded DAP. This technique is useful only if the background traffic/interference (potentially from other DAPs) on the channel is significantly higher than the traffic sent/received by the overloaded DAP itself. However, the drawback of this technique is that all clients associated with the DAP have to re-associate. Since we consider client re-associations to be disruptive events, we do not use this technique.
