[httperf] Httperf concurrent/rate implementation.
arlitt at granite.hpl.hp.com
Mon Feb 23 07:00:54 PST 2004
> I would like to understand the concurrency and rate models implemented in the httperf tool. In some experiments I need to define a specific --rate to work at a constant concurrency level.
the --rate option does not define the concurrency; it specifies the fixed rate at which new connections (or sessions) are created.
> For example, I have an Apache web server configured to support up to 200 concurrent connections (MaxClients 200). I want to generate a load based on 200 persistent concurrent connections, where each connection will make 1000 requests (e.g. GET). The rate of the 200,000 GET requests is 50 reqs/s.
> Q1: Is it possible?
if your question is "Can I have 200 persistent connections open in
parallel?", the answer is yes. However, the summary statistics that
httperf provides may not be that useful in such a case. in other words,
you could use httperf to generate a workload that maintains 200 persistent
connections and generates 50 requests per second, but you may need other
tools to measure the impact of that workload on the server.
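For the simple "200 persistent connections" case, the invocation might look roughly like the following sketch (the server name and URI are placeholders, not from the original question). Note that --rate here controls how fast new *connections* are opened, not the aggregate request rate, which is why this alone does not pin the workload to 50 reqs/s:

```shell
# Sketch only; www.example.com and /index.html are placeholder values.
# Open 200 persistent connections, each issuing 1000 GET requests.
# --rate is the connection creation rate (conns/sec); --hog lets httperf
# use as many local ports as it needs; --timeout bounds each request.
httperf --hog --server www.example.com --uri /index.html \
        --num-conns 200 --num-calls 1000 --rate 10 --timeout 5
```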
> Trying to do that, I experimented with httperf's arguments and found that setting --rate=400 I get the following message: "... <=200 concurrent connections".
> Q2: What exactly does the symbol (<=) mean?
'<=' means less than or equal to. for the example you provided, there
were at most 200 TCP connections open at one time during that particular run.
> Anyway, using the --rate value to reach a specific concurrency level isn't adequate in this case because, as I said, I need to use a different rate based on 50 reqs/s.
I think you will need to use a more complex httperf setup to achieve
this. I would do it using the --wsesslog option. httperf will use a
separate TCP connection for each session. there are a number of factors
to deal with here:
-the number of concurrent user sessions (e.g., 200, one for each Apache
client slot)
-the number of requests per session (you don't need to use a large number,
although you can if you'd like)
-the session duration
-the (average) request rate
-the session arrival rate
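Putting rough numbers on those factors for the 200-session, 50 reqs/s target from the question (the per-session request count is a free choice, and the session file name and server below are placeholders):

```shell
# Back-of-the-envelope derivation of the --wsesslog parameters.
CONCURRENT=200      # target concurrent user sessions
AGG_RATE=50         # aggregate request rate (reqs/s)
CALLS=100           # requests per session (free choice)

THINK=$((CONCURRENT / AGG_RATE))   # per-session think time: 200/50 = 4 s
DURATION=$((CALLS * THINK))        # session duration: 100 * 4 = 400 s
# to sustain 200 concurrent sessions, new sessions must arrive at
# CONCURRENT/DURATION = 200/400 = 0.5 sessions/s
echo "think=${THINK}s duration=${DURATION}s"

# the run itself would then look roughly like:
#   httperf --hog --server www.example.com \
#           --wsesslog=2000,$THINK,sess.log --rate 0.5 --timeout 5
# where sess.log lists the URIs of one session, one per line, and the
# first --wsesslog argument (2000 here, a free choice) is the total
# number of sessions to run. if fractional --rate values are not
# accepted by your httperf build, --period=d2 (one session every 2 s,
# deterministically) expresses the same arrival rate.
```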
there may be other ways of doing this; this is just the way I've done it
in the past. note that if you do things in this way you will have an
initial ramp-up period where there are fewer than X concurrent user
sessions, as well as a ramp-down period at the end. because of this the
summary statistics may not be that useful, which is why I mentioned the
need for alternative ways for monitoring the performance of the server.