> <..snip.. max_requests>
>
> > o min_alive: to avoid short-lived helpers, they should stay alive for
> > at least this many seconds. This should help against
> > fork-query-kill-per-request behaviour if the request rate increases
> > drastically.
>
> Not sure about this one. If you have problems with a helper leaking so
> much memory that it needs to be restarted frequently, then the helper
> should be fixed. I think it is confusing to have a safeguard in Squid
> that prevents it from restarting a helper frequently when the same
> configuration says the helper must be restarted after only a few requests.
>
> There are very rarely time-related issues with the helpers. Nearly
> always the only thing garbage/leaks in a helper relate to is the number
> of requests processed. So even if the helper functions so poorly that it
> needs to be restarted after only a few requests, you'd probably still
> want to do that even when the request rate is suddenly higher than you
> planned for.
I proposed min_alive to limit the helper recycle rate, not to increase it.
max_requests imposes a relation between the request rate and the helper
recycle rate. If the request rate increases drastically (not a difficult
thing to make happen), helpers may start recycling too fast.
That's why min_alive is measured in seconds, not in requests.
Perhaps 'min_age' is a better name.
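To make the idea concrete, here is a rough sketch (not actual Squid code;
helperShouldRecycle, the helper_t fields and the way the two options are
combined are all made up for illustration) of how a min_alive / min_age
check could cap the recycle rate that max_requests alone would imply:

    /* Hypothetical sketch, not taken from Squid. */
    #include <time.h>

    typedef struct {
        int    requests_served;   /* requests handled since this helper was forked */
        time_t started_at;        /* fork time of this helper process */
    } helper_t;

    static int
    helperShouldRecycle(const helper_t *h, int max_requests, int min_alive)
    {
        /* max_requests alone ties the recycle rate to the request rate... */
        if (h->requests_served < max_requests)
            return 0;

        /* ...so even once the request budget is spent, keep the helper
         * alive until it has existed for at least min_alive seconds,
         * preventing a fork-query-kill cycle under a sudden load spike. */
        if (difftime(time(NULL), h->started_at) < (double)min_alive)
            return 0;

        return 1;
    }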
> And obviously the best action is to get the helper fixed...
Agreed :D. I am not having trouble with my helpers, but process
rotation is a good policy (to some extent).
Helper rotation would help not only as a dirty way of coping with
resource leakage, but also as a way of using shorter idle timeouts
for database connections.
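For illustration only, this is the kind of helper I have in mind: it exits
on its own after a fixed number of requests and relies on the parent to
respawn it, so the database connection never outlives a short-lived
process. The db_* calls are placeholders, MAX_REQUESTS is an assumed
rotation threshold, and none of this is code from Squid or from this thread:

    #include <stdio.h>
    #include <string.h>

    #define MAX_REQUESTS 1000   /* assumed rotation threshold */

    int
    main(void)
    {
        char line[8192];
        int handled = 0;

        /* db_connect(); -- placeholder: open the (short-lived) DB connection */

        while (handled < MAX_REQUESTS && fgets(line, sizeof(line), stdin)) {
            line[strcspn(line, "\n")] = '\0';

            /* placeholder lookup; a real helper would query the database here */
            printf("OK\n");
            fflush(stdout);    /* the parent reads replies line by line */
            ++handled;
        }

        /* db_disconnect(); -- the connection lived only as long as this process */
        return 0;
    }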
Regards,
-- Gonzalo A. Arana