Thanks, H. Some replies inline.
On 23/10/2008, at 5:49 AM, Henrik Nordstrom wrote:
> (15.47.03) mnot: hno: my only remaining concern is the deep ctx's -
> unfortunately I'm having a real problem reproducing them (although
> they're all too common)
>
> would help a bit if you could make it coredump with a binary having
> debug symbols...
I'll do what I can.
> I suspect it's related to the LRU problem in the sense that they are
> both triggered by the same family of inter-object dependencies
> (collapsed forwarding, async refresh, etc.).
>
> The higher the load the deeper such dependency chains will become
> until things time out or resolve in some other manner.
>
> collapsed_forwarding is most likely the bigger culprit in creating
> these long chains.
I've been thinking for a while that it would be useful to make the
collapsed_forwarding timeout configurable; right now it's hard-coded
at 30 seconds. Would reducing this time help manage the depth (and
perhaps the load implications)?
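Something along these lines, say (a sketch only; this directive
doesn't exist today, and the name is made up):

# Hypothetical directive - the timeout is currently hard-coded at 30s
collapsed_forwarding_timeout 10 seconds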
> (15.49.47) mnot: the other issue I see is TCP_MEM_HITs taking a few
> hundred milliseconds, even on a lightly loaded box, with responses
> smaller than the write buffer. (and no, hno, they're not collapsed ;)
>
> If there is Vary+ETag involved then those MAY be partial cache misses.
> There is a slight grey zone where an If-None-Match query for finding
> which object to respond with results in TCP_(MEM_)HIT if the 304
> indicated object is a hit.
Don't think so; there is some varying going on, but the resources I've
looked at in connection with this don't have ETags.
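For reference, my reading of the grey zone hno describes, with
made-up paths and ETags:

Squid -> origin:  GET /page HTTP/1.1
                  If-None-Match: "v1", "v2"
origin -> Squid:  HTTP/1.1 304 Not Modified
                  ETag: "v2"

i.e. Squid revalidates to learn which cached variant applies, then
serves "v2" from memory, so it's logged as a TCP_(MEM_)HIT despite
the origin round-trip.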
> Could also be delays due to acl lookups or url rewriters.
No rewriting, and the ACLs are pretty manageable; there's one
external, but it's averaging 13ms and is very cacheable.
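For completeness, it's wired up roughly like this (helper name, path,
and TTLs here are illustrative); the ttl/negative_ttl result caching
is what keeps the average down:

external_acl_type my_check ttl=300 negative_ttl=60 children=5 %SRC /usr/local/bin/check_helper
acl ext_ok external my_check
http_access allow ext_ok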
> (15.54.48) mnot: hno: is running a proxy and accelerator on different
> ports in the same squid process no longer supported? I forget where
> that
> ended up
> (15.54.58) mnot: yeah, that's definitely a limitation
>
> It is. The reason is that we can't tell for sure if the request is
> accelerated or proxied when following RFC2616. We can guess based on
> whether we receive a URL-path or an absolute URL, but HTTP/1.1
> requires servers to accept requests with an absolute URL.
>
> Now in most setups this is no problem, but many are used to being
> able to use the same port for both proxy and WPAD PAC servicing...
Right, but I want to use *separate* ports; i.e., one accel port, and
one proxy port. E.g.,
http_port 80 vhost
cache_peer origin.example.com parent 80 0 originserver no-query no-digest
http_port 127.0.0.1:3128 name=local_proxy http11
acl local_proxy myportname local_proxy
always_direct allow local_proxy
http_access allow local_proxy localhost
http_access deny local_proxy
log_access deny local_proxy
Right now when I do this, requests on the proxy port still go to the
cache_peer that's configured for the accelerator port...
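For the record, here's what I'd expect to scope the peer to just the
accel traffic (assuming cache_peer_access is consulted for these
requests; the name=accel label is mine):

http_port 80 name=accel vhost
acl accel_port myportname accel
cache_peer_access origin.example.com allow accel_port
cache_peer_access origin.example.com deny all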
Cheers,
--
Mark Nottingham
mnot_at_yahoo-inc.com