Updated patch attached that addresses most of these issues.
On 21/05/2013 9:47 a.m., Alex Rousskov wrote:
> On 05/16/2013 08:51 AM, Amos Jeffries wrote:
>
>> Attached is an update which goes straight to the end and makes the
>> pipeline_prefetch directive accept an integer count of pipeline length.
>
>> + // if client_persistent_connections OFF, then we cannot service any pipeline >1
> Please replace "we cannot service any pipeline >1" with "we do not
> expect any pipelined requests" to avoid confusing "pipeline>1" math:
> Does pipeline==2 mean two concurrent requests or three, since we are not
> counting the already read request? Your math is correct, but this
> phrasing is confusing IMO.
Used "we cannot service any more of the pipeline" instead, to make it
clear that some of the pipeline has been serviced, but no more will be,
regardless of how far down the pipeline we are.
>
>> + const bool result = (conn->getConcurrentRequestCount() < 1);
>> + if (!result)
>> + debugs(33, 3, conn->clientConnection << " persistence disabled. Concurrent requests discarded.");
> We do not know whether we discarded something or not. Drop the second
> sentence.
We do know that.
result is only false when one request has already been parsed on the
connection and Connection:close will be sent in its reply
(client_persistent_connections off).
This check is called when a second request is being parsed, so at least
that second one *will* be dropped by the TCP close.
But we are already missing a lot of cases like that, and I don't think
it is critical that we catch all of them. Just the ones which are easily
identified.
However...
> If this code stays, please compute getConcurrentRequestCount() once,
> before all the if-statements, and then use the computed number as needed.
Okay. Done this anyway.
>
>> + if (!Config.onoff.client_pconns) {
>> + const bool result = (conn->getConcurrentRequestCount() < 1);
>> + if (!result)
>> + debugs(33, 3, conn->clientConnection << " persistence disabled. Concurrent requests discarded.");
>> + return result;
>> + }
> The above addition to connOkToAddRequest() seems odd: If the caller does
> not check for persistency, then the above check is not sufficient --
> there are other reasons the connection may not be persistent (and the
> old code was broken since it lacked any checks at all!). If the caller
> checks for persistency, the above check is not needed.
Hmm. Thinking about this a bit more, I've decided to move it up into the
config consistency checks next to the pipeline vs auth checks.
It makes a lot more sense to check only once, and to provide -k parse
warnings about the config option clash. It will also catch the three-way
edge-case races between TCP packet delivery, response writing, and
close-handler events - when a second request comes in *after* the first
has already been responded to and a close scheduled for it.
> Consider the case where Config.onoff.client_pconns is "on" (default) but
> the client sent "Connection:close" or Squid is already responding with
> "Connection:close".
>
> If the caller does not check, I do not think you have to be responsible
> for fixing this, but if you add the above code without adding other (far
> more relevant in default cases) checks, then please at least add a XXX
> documenting that other checks are missing here.
Indeed.
I've added a loop to scan for Connection:close (and
Proxy-Connection:close), and the CONNECT method, as blockers on further
pipeline reads. There may be other conditions we find later.
It's not the most efficient way to do it, but before we can implement
the better way we have to verify that request processing reaches the
header-interpretation functionality before ConnStateData performs the
next read() operation, to prevent a race condition.
>
>> +LOC: Config.pipeline_max_prefetch
> How about "pipeline_prefetch_max" in case we need to add more
> pipeline_prefetch options later.
Then IMHO they would use different names like pipeline_something_else,
or better, the single option split into a structure with all of the
feature's config options inside it:

struct pipeline_ {
    int maxPrefetch;  // max queue length
    int delayTimeout; // e.g. if found waiting more than N seconds in
                      // queue, do something special
} pipeline;
>
>> To boost the performance of pipelined requests to closer
>> match that of a non-proxied environment Squid can try to fetch
>> - up to two requests in parallel from a pipeline.
>> + several requests in parallel from a pipeline. This sets the
>> + maximum number of requests to be fetched in advance.
> I think this needs to be clarified to better explain the confusion
> between the pipeline "depth" and the number of concurrent requests:
>
> HTTP clients may send a pipeline of 1+N requests to Squid using a
> single connection, without waiting for Squid to respond to the first
> of those requests. This option limits the number of concurrent
> requests Squid will try to handle in parallel. If set to N, Squid
> will try to receive and process up to 1+N requests on the same
> connection concurrently.
Great. Thank you. Done.
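To spell out the 1+N semantics, the resulting usage would look roughly
like this (a squid.conf sketch based on the wording above):

```
# Allow Squid to receive and process up to 1+2 = 3 requests
# concurrently on one client connection:
pipeline_prefetch 2

# Equivalent of the deprecated "pipeline_prefetch on" (one extra request):
# pipeline_prefetch 1
```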
>
>> + debugs(0, DBG_PARSE_NOTE(DBG_IMPORTANT), "WARNING: 'pipeline_prefetch on' is deprecated. Please update to use a number.");
> Perhaps we should indicate what the replacement is, like we do in
> pipeline_prefetch off case? We could say "Please update to use 1 (or a
> higher number)."
Done.
PS. awaiting your parser update patch before committing this.
Thank you
Amos
This archive was generated by hypermail 2.2.0 : Tue May 21 2013 - 12:00:17 MDT