[httperf] httperf doesn't generate desired request rate
Wed, 3 Dec 2003 07:05:08 -0800 (PST)
> I encounter a problem with httperf: It does not generate the request
> rate I want it to do.
> I'm testing a site involving CGI and generate (just for the example) 50
> requests at an increasing rate. No timeouts are involved.
just so you know, if you do not specify a timeout you will be operating in
a closed-loop rather than an open loop manner.
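for reference, an open-loop run just needs a timeout on each call so that
slow replies can't throttle the generator. something like this (server and
URI are placeholders, not taken from your setup):

```shell
# give up on each reply after 5 s so queued responses cannot
# slow down the request generator (open-loop behaviour)
httperf --server www.example.com --uri /cgi-bin/test.pl \
        --rate 3 --num-conns 50 --timeout 5
```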
> With no rate, httperf reports to issue 2.7 req/s. I take this as a
> measure of what the webserver can handle easily.
in this case 2.7 req/s is ALL the server can handle.
> The second test runs with --rate=2. According to the value above, this
> should be no problem. httperf reports 2.0 req/s and nearly the same
> connection time values, as expected.
> Now I increase the load very slightly: --rate=3. But now httperf still
> reports it ran at a request rate of 2.7 req/s.
this is because the server is overloaded. have a look at the connection
times:
test 1 (sequential - issue the next request when the previous request is
completed): median is 362.5ms
test 2 (2 req/second): median is 361.5 ms
test 3 (3 req/second): median is 2,289.5 ms
test 4 (5 req/second): median is 8,358.5 ms
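you can reproduce this blow-up with a toy model: a single server with a
fixed ~0.37 s service time (assumed from your sequential median) fed at
each of your rates. a quick Python sketch:

```python
# Toy single-server FIFO queue with a fixed service time. Once the
# arrival rate exceeds the ~2.7 req/s the server can sustain, the
# queue grows and the median response time explodes.
from statistics import median

def median_response(rate, n=50, service=0.37):
    free_at = 0.0          # time at which the server is next idle
    times = []
    for i in range(n):
        arrival = i / rate
        start = max(arrival, free_at)   # wait if the server is busy
        free_at = start + service
        times.append(free_at - arrival)  # response time seen by client
    return median(times)

for rate in (2, 3, 5):
    print(f"{rate} req/s -> median {median_response(rate) * 1000:.0f} ms")
```

at 2 req/s the server keeps up and the median stays at the bare service
time; at 3 and 5 req/s it climbs steeply, just like your measurements.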
> Last test: --rate=5. Still got 2.7 req/s.
> You can find the test reports at
> As I understand it, httperf should _send_out_ requests at the given
> rate, even if the server cannot _handle_ them. But it obviously does
> not, preventing me from running useful tests with some dozens of
> requests a second and some thousand requests in total. Any ideas why?
I expect it is sending out the requests at the given rate, even though the
server cannot handle them. the problem is that your test is very short,
and the responses to the queued requests take so long that the averages
are skewed by the extended test duration.
for example, in the 5 request/second test, generating 50 requests should
take 10 seconds. however, because the responses are so slow, the test
duration is increased to 18.656 seconds. 50/18.656 = 2.7 reqs/second.
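the arithmetic in miniature (numbers taken from your rate=5 report):

```python
# Reported rate = requests / wall-clock duration, so a tail of slow
# responses drags the average below the configured --rate.
requests = 50
intended_rate = 5                        # req/s requested with --rate
send_window = requests / intended_rate   # 10 s to issue all requests
actual_duration = 18.656                 # test ran until the last reply
reported_rate = requests / actual_duration
print(f"sent in {send_window:.1f} s, reported {reported_rate:.1f} req/s")
```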
you can verify this by using tcpdump to capture all of the packets. you
should see that all of the requests are issued in the first 10 seconds
(any requests after that could be TCP retransmissions).
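something along these lines on the client (interface and host names are
placeholders): capture the run, then list the connection-opening SYNs with
timestamps to see when each request actually went out:

```shell
# capture everything to/from the server during the test
tcpdump -i eth0 -w httperf.pcap host server.example.com and port 80

# replay the capture, printing epoch timestamps for bare SYNs
# (SYN set, ACK clear = a new connection being opened)
tcpdump -r httperf.pcap -tt \
    'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'
```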
so to fix your problem, you either need longer test durations so that the
average request rate is not significantly affected by the time required
for the server to respond to all of the queued requests; or you can modify
httperf to keep a separate timer for when it has finished sending all of
the requests, and use that duration for calculating the average request
rate.
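the separate-timer idea, sketched in Python rather than httperf's C (the
function is mine, not part of httperf):

```python
def send_rate(send_times):
    """Average request rate measured over the sending window only,
    ignoring however long the server takes to drain queued replies."""
    window = send_times[-1] - send_times[0]
    return (len(send_times) - 1) / window if window > 0 else float("inf")

# 50 requests issued at exactly 5 req/s: the send-window rate is 5.0
# regardless of when the responses eventually come back
stamps = [i / 5 for i in range(50)]
print(send_rate(stamps))
```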
you should also keep an eye on the network bandwidth; it's not difficult
to generate 10Mb/s (although not with the current workload).
> My setup:
> Server: Celeron 300 MHz, 98 MB RAM, Linux 2.2.16, Apache 1.3, mod_perl,
> Realtek 8029AS 10 MBit NIC
> Client: Pentium 166 MHz, 64 MB RAM, Linux 2.2.16, httperf 0.8, Realtek
> 8029AS 10 MBit NIC
> Connected via 10baseT, Twisted Pair crossover cable
> Any help would be greatly appreciated.
> Florian Berger, Leipzig, Germany
> httperf mailing list