On Sat 2010-05-01 at 08:55 -0600, Alex Rousskov wrote:
> Chunked requests: The current buffer-and-forward code is probably
> compliant. It is also inefficient and limits the size of the request,
> but I doubt that violates the standard.
Buffer & forward with Content-Length does not in itself violate the
standard, but it interacts badly with the 100 Continue requirements.
Also, what happens when the buffer is full (request size over limit)?
And does the code deal properly with consuming any pending request data
when Squid responds with an error? If it does not, leftover body bytes
on a persistent connection get parsed as the start of the next request.
And that dechunking code is in itself a DoS vector, because it buffers
before access controls are applied. The default limit of 64 KB per
connection keeps this somewhat manageable, but it is also quite small.
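For illustration, the knob involved is (if I remember the directive
name right) chunked_request_body_max_size, so a setup wanting a bigger
buffer would add something like this to squid.conf:

    # sketch only; directive name and default (64 KB) from memory
    chunked_request_body_max_size 256 KB

but raising it of course only widens the DoS window described above.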
And right, that's the other blocker for announcing 1.1 towards clients:
passing of 100 Continue messages and handling of 1xx messages in
general.
Announcing 1.1 without supporting 100 Continue may cause some clients
to wait indefinitely when trying to send request entities. The timer
suggestion only applies when the client does not know whether the next
hop is 1.1, but the major browsers probably implement the timer
regardless (not tested).
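To illustrate what such a client expects (rough sketch of the exchange;
host, path and sizes made up):

    POST /upload HTTP/1.1
    Host: upload.example.com
    Expect: 100-continue
    Content-Length: 1048576

    <server/proxy>  HTTP/1.1 100 Continue

    <client now sends the 1048576-byte body>

    <server/proxy>  HTTP/1.1 201 Created

If we announce 1.1 but never forward or generate the 100 Continue, a
client without the timer sits at the second step forever.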
And always sending 417 Expectation Failed in response to Expect:
100-continue is known to break other clients...
Looking at the code: OK, the default is to not ignore Expect:
100-continue and to respond with 417, which takes care of the timer
worry above, unless one configures Squid to ignore 100-continue. And as
seen in squid-users discussions there is a significant population of
clients broken with respect to 100-continue & 417 handling, so ignoring
100-continue can be expected to be configured in many setups.
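That configuration is (assuming the directive name hasn't changed) the
ignore_expect_100 knob:

    # sketch: make Squid drop the Expect: 100-continue header instead
    # of replying with 417 (directive from memory; the default is off)
    ignore_expect_100 on

and once that is on, the 417 no longer saves us from the waiting-client
problem above.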
Regards
Henrik