On 25/07/2013 12:46 p.m., Alex Rousskov wrote:
> On 07/24/2013 05:55 PM, Henrik Nordström wrote:
>> On Wed, 2013-07-24 at 10:01 -0600, Alex Rousskov wrote:
>>
>>> That is what Squid does today, bugs notwithstanding. If the found store
>>> entry is the special "Vary" entry, then Squid does another lookup, with
>>> the appropriate header values added to the store key hash.
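(For reference, that two-stage lookup amounts to roughly the sketch
below. The types, the key format and the function names are invented for
illustration; this is not the real store API.)

    // Illustrative sketch of the two-stage Vary lookup; not real store code.
    #include <map>
    #include <optional>
    #include <sstream>
    #include <string>
    #include <vector>

    struct StoreEntry {
        bool isVaryMarker = false;   // true for the special "Vary" entry
        std::string varyHeaders;     // e.g. "Accept-Encoding,User-Agent"
        std::string body;
    };

    using HeaderMap = std::map<std::string, std::string>;  // request headers
    using StoreIndex = std::map<std::string, StoreEntry>;  // key -> entry

    static std::vector<std::string> splitCsv(const std::string &s) {
        std::vector<std::string> out;
        std::istringstream in(s);
        for (std::string item; std::getline(in, item, ','); )
            out.push_back(item);
        return out;
    }

    std::optional<StoreEntry> lookupWithVary(const StoreIndex &index,
                                             const std::string &method,
                                             const std::string &url,
                                             const HeaderMap &reqHeaders)
    {
        // First lookup: key derived from the method and URL only.
        const std::string baseKey = method + " " + url;
        const auto base = index.find(baseKey);
        if (base == index.end())
            return std::nullopt;                 // plain miss

        if (!base->second.isVaryMarker)
            return base->second;                 // non-Vary hit; done

        // Second lookup: mix the named request header values into the key.
        std::string variantKey = baseKey;
        for (const auto &name : splitCsv(base->second.varyHeaders)) {
            const auto h = reqHeaders.find(name);
            variantKey += "\n" + name + ":" +
                          (h == reqHeaders.end() ? "" : h->second);
        }

        const auto variant = index.find(variantKey);
        if (variant == index.end())
            return std::nullopt;                 // variant miss
        return variant->second;
    }
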
>> Yes, but that is only part of what is needed for correct Vary support.
>>
>> The full logic (still missing from squid-3) involves an n-to-m map of
>> requests to response variants.
> I thought Squid already tries to support that. Or are you talking about
> responses with varying Vary header values (discussed further below)?
>
>
>>> Finally, as we are migrating to per-cache store indexes, more store
>>> lookups should be avoided when possible because the number of mandatory
>>> lookups has to be multiplied by (the number of cache_dirs plus one for
>>> the memory cache index) to check all the indexes.
>> There is a design decision that needs to be made here: should it be
>> possible for different responses to the same Vary:ing URL to be stored
>> in different stores, or should they all need to go into the same store?
> I think it should be possible for them to be in different stores -- the
> store selection decision should be done at a higher level. Multiple
> stores are usually a side effect of hardware layout or some size-related
> optimizations. They usually live on a lower level than "HTTP caching"
> and we should not add "all variants have to be in one area of the cache"
> restrictions unless there is a very good reason for them.
>
>
>> This has a direct impact on where Vary logic can be performed.
> Can you elaborate on this point? I think there are ways to support full
> Vary logic with and without "all variants are in one store" restrictions.
>
>
>> Full Vary support requires the following store operations:
>>
>> * Which responses (ETag values) are known for a given URL? This list is
>> needed to construct an If-None-Match validation request, as needed to
>> ask upstream which variant is the correct response.
> This can be done using the special Vary object updated whenever a new
> variant is cached or an old variant is purged. I do not know whether
> there is already code to support such updates.
If you mean the x-vary-marker object linking to the variants, like I keep
going on about: no, there is nothing stored in it at present. It just
contains the last Vary header seen in its header set, so the recipient
code can get it out and generate a new lookup key.
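If the marker grows to keep that list, it would have to look something
like the sketch below - just an illustration of the idea, all field and
function names invented; nothing like this exists in the store today:

    // Sketch only: a richer x-vary-marker payload that would let us both
    // rebuild variant lookup keys and emit If-None-Match on revalidation.
    #include <string>
    #include <vector>

    struct KnownVariant {
        std::string etag;        // ETag of the cached variant
        std::string variantKey;  // store key of that variant
    };

    struct VaryMarker {
        std::string lastVaryHeader;          // what the marker holds today
        std::vector<KnownVariant> variants;  // what full Vary support needs
    };

    // Updated whenever a variant is cached or purged.
    void addVariant(VaryMarker &m, const std::string &etag,
                    const std::string &key)
    {
        m.variants.push_back({etag, key});
    }

    // Build the If-None-Match value for a validation request from the
    // ETags of every variant we still hold for this URL.
    std::string ifNoneMatchValue(const VaryMarker &marker)
    {
        std::string value;
        for (const auto &v : marker.variants) {
            if (!value.empty())
                value += ", ";
            value += v.etag;                 // ETags already include quotes
        }
        return value;
    }
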
>
>> * Add a mapping of request headers to a specific response variant
>> (identified by ETag).
> AFAIK, this is already supported by adjusting the store key.
Yes, but the code doing that readjustment needs to be moved and
radically redesigned in line with any changes to the x-vary-marker
object, for better referencing.
>
>> * Look up the response matching this specific request.
> This is already supported by looking up the computed key in the store index.
>
>
>> To complicate matters further, different responses MAY have different
>> Vary header values, but that is not very likely.
> True. To support that, we would need to add a list of Vary header values
> to the special Vary object and iterate it, looking up a cache object for
> every listed Vary header value.
We think alike; see my other reply. However, a lookup per match is not
required. HTTP only requires that we identify the whole set of options
and pull the most appropriate one - with some requirements around
defining "appropriate".
>
>> Or better yet, allow the same response body to be referenced by
>> multiple responses (which simplifies validation logic and aging
>> considerably). The response headers are then the original response
>> headers updated with header data from the 304 response to
>> If-None-Match.
> It feels like this is kind of a separate optimization. It can be
> restricted to Vary transactions, but does not have to be.
I agree. It is not clear which cases of Vary handling this would be
appropriate for. We do, however, need to fix bug #7 once and for all.
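For the record, the header-refresh half of that is conceptually simple.
A rough sketch, using a plain name/value map instead of the real
HttpReply machinery and glossing over hop-by-hop and similar exclusion
rules:

    // Refresh stored response headers from a 304 reply (the bug #7 fix,
    // in spirit). Header names are assumed to be already canonicalised.
    #include <map>
    #include <string>

    using Headers = std::map<std::string, std::string>;

    void updateStoredHeadersOn304(Headers &stored, const Headers &reply304)
    {
        for (const auto &kv : reply304) {
            if (kv.first == "Content-Length" || kv.first == "Content-Range")
                continue;                    // body metadata stays as stored
            stored[kv.first] = kv.second;    // replace or add everything else
        }
    }
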
Amos