On 01/23/2014 06:23 AM, Amos Jeffries wrote:
>>> +    if (size < 0) {
>>> +        if (!ignoreErrno(xerrno)) {
>>> +            debugs(5, 2, tcp << " read failure: " << xstrerr(xerrno));
>>> +            return true;
>>> +        } else if (!inBuf.hasContent()) {
>>> +            debugs(5, 2, tcp << ": no data to process (" << xstrerr(xerrno) << ")");
>>> +        }
>>> +    }
>>> +
>>> +    return false;
>>
>>
>> The ignoreErrno() == true case above does not work well for the caller:
>> If there was a minor error, the caller should keep reading, not
>> re-processing the old inBuf content that has not changed. More on that
>> further below (**).
>
> Added calls to try growing the buffer and reading again without processing.
Why do we want to grow the buffer when nothing important has happened?
If we are ignoring the error, we should do nothing but update the stats
and read again IMO. Otherwise, we are not ignoring the error but
treating it specially.
Ideally, this ignore-and-reread code should be in Comm but there are
probably reasons it cannot be there today.
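To make that concrete, the ignorable-error branch could be as simple as
this rough sketch (tryReadMore() is an illustrative stand-in for whatever
schedules the next read; this is not patch code):

    if (size < 0) {
        if (!ignoreErrno(xerrno)) {
            debugs(5, 2, tcp << " read failure: " << xstrerr(xerrno));
            return true; // fatal: give up on this connection
        }
        // ignorable error: leave inBuf alone and do not grow anything;
        // update the read stats and simply try again
        tryReadMore(); // illustrative next-read scheduler
        return false;
    }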
> For now I have also chosen to treat the full-buffer case the same as a
> 0-sized read, with a call to maybeFinishedWithTcp() and setting up the
> half-closed monitor to watch for full closure.
That also does not make sense to me. The case of a full buffer should be
treated after we are done parsing, not after we read something. It is
very different from the case of reading zero bytes, both in terms of
semantics of what has happened and in terms of the required reaction to
what has happened.
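Roughly, I would expect the order of operations to be (a sketch only;
bufferIsFull() and handleFullBuffer() are illustrative names, not patch
code):

    // the read has already appended whatever fit into inBuf
    const bool mayReadMore = processReadBuffer(inBuf); // parse first
    // only after parsing does a full buffer mean anything: nothing was
    // consumed and there is no room left, so grow the buffer or give up
    if (mayReadMore && bufferIsFull())
        handleFullBuffer();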
Perhaps it will all become clear once you post the patch that
corresponds to your comments (not sure why you post comments discussing
invisible changes at all -- there is nothing I can do about most of
them!), but these specific comments in isolation just appear to raise
more red flags.
>>> +    if (io.flag == COMM_OK) {
>> ...
>>> +    } else if (io.size == 0) {
>>> +        debugs(5, 5, io.conn << " closed?");
>>> +        stopReceiving("zero sized read(2) result");
>> ...
>>> +            /* It might be half-closed, we can't tell */
>>> +            fd_table[io.conn->fd].flags.socket_eof = true;
>>> +            commMarkHalfClosed(io.conn->fd);
>>> +            fd_note(io.conn->fd, "half-closed");
>>> +        }
>>> +    }
>>> +
>>> +    bool mayReadMore = true;
>>> +    // pass handling on to child instance code
>>> +    if (inBuf.hasContent())
>>> +        mayReadMore = processReadBuffer(inBuf);
>>
>>
>> There seems to be a couple of logic bugs here:
>>
>
> FWIW this was cut-n-paste code for the most part. So these logic errors
> are what exist in trunk right now.
You have two options then: fix the bugs (since you are rewriting a lot
of code anyway) or add XXXs documenting the old bugs (so that reviewers
at least do not blame you for them). I sure hope that these major design
changes meant to eradicate such bugs would automatically fix most of the
old ones, but that remains to be seen.
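For example, something as small as this would do (wording illustrative):

    // XXX: inherited from trunk: a zero-byte read does not trigger a
    //      final parse, so a complete buffered message may go unparsed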
>> * If we read zero bytes, we should call the parser again (this time with
>> an eof flag), even if the TCP connection is closed now. Parse-after-eof
>> is necessary because a zero-read may indicate the end of the message,
>> etc. The above code does not do that. This may not be very kosher for a
>> server and is wrong for a client.
>
> The 0-size read case calls maybeFinishedWithTcp(), where the child class
> is responsible for handling any such incompletely buffered message. I
> have adjusted the documentation to indicate that.
That approach does not work well because when maybeFinishedWithTcp()
returns nil (i.e. tells the caller to keep going), we may call the
parser twice with the same buffered data.
We want to call the parser once per successful read(), with the correct
"eof" flag.
>>> +void
>>> +Comm::TcpReceiver::releaseTcpConnection(const char *reason)
>>> +{
>>> +    // Used by server-side to release the connection before
>>> +    // storing it in Pconn pool
>>> +    comm_remove_close_handler(tcp->fd, closed_);
>>> +    closed_->cancel(reason);
>>> +    stopReading();
>>> +}
>>
>> * What about the HTTP Server (client_side*.*) agent? It has to maintain
>> a connection pool as well, right? If the code is really specific to
>> clients/, why place it here?
>
> No, the HTTP server has exactly one connection. I think you are possibly
> mistaking the pconn (FD pool) for the pipeline list of active transactions,
> or for the ICAP pconn pool, which "side" I've not gone near yet.
I do not think I am mistaking a pconn pool for something else. Yes, each
agent has exactly one connection. However, all agents need to place that
single connection into their pconn pool (under certain conditions).
Thus, the code to place the connection into the pconn pool is not
specific to clients/ or servers/. The source code comment above claims
otherwise.
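For example, the method itself needs nothing more than an agent-neutral
comment (illustrative wording only):

    void
    Comm::TcpReceiver::releaseTcpConnection(const char *reason)
    {
        // stop owning the connection so that the calling agent (client-
        // or server-side) may store it in its persistent connection pool
        comm_remove_close_handler(tcp->fd, closed_);
        closed_->cancel(reason);
        stopReading();
    }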
HTH,
Alex.