[httperf] httperf-0.9.0 segmentation fault on Mac 10.6.7

Michael Irwin mdirwin78 at att.net
Tue May 3 08:19:46 PDT 2011


Thanks for this, but Mac OS X doesn't use the /proc filesystem, so much of that page doesn't apply here AFAIK. All I know I can do is set 'ulimit -n', but setting it above 1024 doesn't seem to work, and as I said before, setting it to 1024 doesn't make a difference: I still get the segmentation faults.
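
From what I can tell, the per-process limit on 10.6 is imposed by launchd, so 'ulimit -n' alone may not be enough. The sequence below is a sketch pieced together from the launchctl and sysctl man pages (the numbers are just examples, and I can't promise it changes anything for httperf):

  # Show the open-file limits launchd currently imposes (soft and hard)
  launchctl limit maxfiles

  # Raise them for processes started after this, then raise the
  # shell's own soft limit to match
  sudo launchctl limit maxfiles 1024 2048
  ulimit -n 1024

  # Kernel-wide ceilings, for reference
  sysctl kern.maxfiles kern.maxfilesperproc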

Thanks!

Mike

On May 2, 2011, at 3:19 PM, Arlitt, Martin wrote:

> Hi Mike
> 
> I don't know if this will be of any help for a Mac environment, but it is what I would verify next on Linux:
> 
> http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html
> 
> It is just a more complete discussion of setting/checking the number of available file descriptors.
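> 
> From memory, the checks on that page amount to roughly the following
> on Linux (a sketch, so see the page itself for the details):
> 
>   cat /proc/sys/fs/file-max   # system-wide limit on open descriptors
>   ulimit -Sn                  # per-process soft limit
>   ulimit -Hn                  # per-process hard limit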
> 
> Thanks
> Martin
> 
> 
>> -----Original Message-----
>> From: Michael Irwin [mailto:mdirwin78 at att.net]
>> Sent: Monday, May 02, 2011 8:38 AM
>> To: Arlitt, Martin
>> Cc: httperf at linux.hpl.hp.com
>> Subject: Re: [httperf] httperf-0.9.0 segmentation fault on Mac 10.6.7
>> 
>> Thanks for the response. Setting ulimit -n to 1024 doesn't seem to make a
>> difference.
>> 
>> On May 2, 2011, at 11:32 AM, Arlitt, Martin wrote:
>> 
>>> Hi Mike
>>> 
>>> I do not have a Mac environment to explore this on, but the fact that
>>> 'ulimit -n' is 256 and httperf thinks it can open 1,024 file descriptors
>>> looks like a problem.
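>>> 
>>> One check that might narrow it down: confirm what FD_SETSIZE your
>>> compiler actually sees, and what limits the shell reports. This is just
>>> a guess at useful diagnostics, not a known fix:
>>> 
>>>   printf '#include <sys/select.h>\nFD_SETSIZE\n' | cc -E -x c - | tail -1
>>>   ulimit -Sn; ulimit -Hn
>>> 
>>> If the first command prints 1024 while the soft limit is 256, httperf is
>>> sizing its descriptor tables for more files than the kernel will actually
>>> hand out, which matches the warning in your output.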
>>> 
>>> Thanks
>>> Martin
>>> 
>>>> -----Original Message-----
>>>> From: httperf-bounces at linux.hpl.hp.com [mailto:httperf-
>>>> bounces at linux.hpl.hp.com] On Behalf Of Michael Irwin
>>>> Sent: Monday, May 02, 2011 7:33 AM
>>>> To: httperf at linux.hpl.hp.com
>>>> Subject: [httperf] httperf-0.9.0 segmentation fault on Mac 10.6.7
>>>> 
>>>> Running httperf 0.9.0 on Mac 10.6.7 I get segmentation faults with
>>>> certain command line options. It seems to be based on what --rate
>>>> and/or --num-conn is set to. For example, the following shows the
>>>> problem:
>>>> 
>>>> httperf --server clubland.heroku.com --uri "/" --num-conn 100 --num-call 1 --timeout 5 --port 80 --verbose --rate 30
>>>> 
>>>> And any rate >= 30 shows the same problem.
>>>> 
>>>> These do not seg fault (usually):
>>>> httperf --server clubland.heroku.com --uri "/" --num-conn 100 --num-call 1 --timeout 5 --port 80 --verbose --rate 10
>>>> httperf --server clubland.heroku.com --uri "/" --num-conn 100 --num-call 1 --timeout 5 --port 80 --verbose --rate 20
>>>> httperf --server clubland.heroku.com --uri "/" --num-conn 50 --num-call 1 --timeout 5 --port 80 --verbose --rate 30
>>>> 
>>>> When I get the segmentation fault, the output is this:
>>>> 
>>>> httperf --server clubland.heroku.com --uri "/" --num-conn 100 --num-call 1 --timeout 5 --port 80 --verbose --rate 30
>>>> httperf --verbose --timeout=5 --client=0/1 --server=clubland.heroku.com --port=80 --uri=/ --rate=30 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=1
>>>> httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
>>>> httperf: maximum number of open descriptors = 1024
>>>> Segmentation fault
>>>> 
>>>> Not sure if it matters but `ulimit -n` is 256.
>>>> 
>>>> How can I solve this issue? Thanks!
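>>>> 
>>>> If a backtrace would help, I can run it under gdb (the one that ships
>>>> with Xcode), along these lines:
>>>> 
>>>>   gdb --args httperf --server clubland.heroku.com --uri "/" \
>>>>       --num-conn 100 --num-call 1 --timeout 5 --port 80 --verbose --rate 30
>>>>   (gdb) run
>>>>   (gdb) bt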
>>>> 
>>>> Mike Irwin
>>>> 
>>>> 
>>>> _______________________________________________
>>>> httperf mailing list
>>>> httperf at linux.hpl.hp.com
>>>> http://www.hpl.hp.com/hosted/linux/mail-archives/httperf/
>>> 
>>> _______________________________________________
>>> httperf mailing list
>>> httperf at linux.hpl.hp.com
>>> http://www.hpl.hp.com/hosted/linux/mail-archives/httperf/
> 



