While the tests demonstrate that the load balancer can scale to an incredible level when run on very powerful servers with significant network bandwidth, it can also run on much more modest hardware while still achieving excellent results.
Wanting to see how HAProxy would scale in a high-performance environment, Willy Tarreau, Community Project Lead & CTO at HAProxy Technologies, accessed some of AWS's new Arm Neoverse-based Graviton2-powered instances, which provide up to 64 cores.
Each core has its own L2 cache, and a single L3 cache is shared by all cores. In addition, Tarreau switched to the new Armv8.1-A Large System Extensions (LSE) atomic instructions, which unlocked the full performance of these machines. This environment gave him the opportunity to push the HAProxy benchmarks to the next level.
Results: HAProxy's performance peaked at 2.04 to 2.05 million requests per second. Direct communication from the client to the server shows a 160-microsecond response time, while adding HAProxy increases it to 560 microseconds. So HAProxy adds at most 400 microseconds on average, less than half a millisecond.
At that point, however, the machine is totally saturated, so HAProxy is measuring the effect of queuing at every stage in the chain (network buffers and HAProxy itself).
Preliminary tests on the forthcoming HAProxy 2.4 release show very promising results with even higher rates and lower traversal times, indicating that 2.4 will be an excellent release.