A web system consists of a web server, a number of clients, and a network that connects the clients to the server. The protocol used to communicate between client and server is HTTP. To measure server performance in such a system, it is necessary to run a tool on the clients that generates a specific HTTP workload. Currently, web server measurements are conducted using benchmarks such as SPECweb or WebStone [6, 8], which simulate a fixed number of clients. Given that the potential user base of an Internet-based server is on the order of hundreds of millions of users, it is clear that simulating a fixed and relatively small number of clients is often insufficient. For this reason, Banga and Druschel recently argued the case for measuring web servers with tools that can generate and sustain overload, which is effectively equivalent to simulating an infinite user base. They also presented a tool called ``s-clients'' that is capable of generating such loads. The s-clients approach is similar to httperf in that both are capable of sustaining overload, but the two differ significantly in design and implementation. For example, httperf completely separates the issue of how to perform HTTP calls from issues such as what kind of workload and measurements should be used. Consequently, httperf can readily be used to perform various kinds of web-server-related measurements, including SPECweb/WebStone-like measurements, s-client-like measurements, or new kinds of measurements such as the session-based measurements discussed briefly in Section 4.
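At its core, such a workload generator opens connections to the server, issues HTTP requests, and records per-request latency. The following sketch (hypothetical; not httperf's actual implementation, which is written in C) illustrates the idea in Python by timing a single HTTP/1.0 GET. It starts a throwaway local server so the example is self-contained:

```python
import socket
import threading
import time
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Throwaway local server standing in for the system under test.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def timed_get(path="/"):
    """Issue one HTTP/1.0 GET and return (status_line, elapsed_seconds)."""
    start = time.time()
    with socket.create_connection((host, port)) as sock:
        # HTTP/1.0 semantics: one request per connection; the server
        # signals the end of the response by closing the connection.
        sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    elapsed = time.time() - start
    return reply.split(b"\r\n", 1)[0].decode(), elapsed

status, latency = timed_get()
print(status, f"{latency * 1000:.1f} ms")
server.shutdown()
```

A real benchmarking tool repeats this across many concurrent connections at a controlled rate and aggregates the latencies; the one-connection-per-request pattern shown here is precisely the HTTP/1.0 behavior that stresses TCP in ways discussed below.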
Creating httperf turned out to be a surprisingly difficult task due to factors inherent in the problem and shortcomings of current operating systems (OSes). The first challenge is that a web system is a distributed system and as such inherently more difficult to evaluate than a centralized system that has little or no concurrency and a synchronized clock.
Second, HTTP in general and HTTP/1.0 in particular cause connection usage patterns that TCP was not designed for. Some of these problems have been fixed in response to the experience gained from running web servers. However, since tools such as httperf run on the client side, they exercise the system in a manner quite different from the way servers do. As a consequence, there are a number of additional issues that such a test tool needs to guard against.
A third challenge is that the world-wide web is a highly dynamic system. Almost every part of it---server and client software, network infrastructure, web content, and user access patterns---is subject to relatively frequent and fundamental changes. For a test tool to remain useful over time, its design must make it relatively easy to extend and modify as the need arises.
The rest of this paper is organized as follows: the next section gives a brief introduction to how httperf is used. Section 3 describes the overall design of the tool and presents the rationale for the most important design choices. Section 4 discusses the current state of httperf and some of the more subtle implementation issues discovered so far. Finally, Section 5 presents some concluding remarks.