On Fri, 29 Oct 1999, Michael Sparks wrote:
> Questions:
> * Squid keeps in memory 16-byte keys for URLs, and appears to utilise
> these in order to build the digest - yes?
Yes. You can consult the Cache Digest specs on the Squid Web site for
details.
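Roughly speaking, each 16-byte key is an MD5 hash computed from the
request URL (and method). A minimal sketch, using OpenSSL's MD5 purely
for illustration; the names below are not Squid's actual code:

    /* Sketch only: reduce a URL to a 16-byte store key.
     * Real Squid hashes the request method together with the URL
     * using its own MD5 routines; this is just an illustration. */
    #include <string.h>
    #include <openssl/md5.h>

    #define STORE_KEY_SIZE 16

    static void url_to_key(const char *url,
                           unsigned char key[STORE_KEY_SIZE])
    {
        MD5((const unsigned char *) url, strlen(url), key);
    }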
> * It appears that cacheDigestAdd is effectively called only at digest
> build time, like this (excuse the perlism):
> foreach $key (@keys) {
>     cacheDigestAdd($digest, $key);
> }
> At least in terms of logic, is this right?
Yes, it is.
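In case it helps, here is a rough C sketch of what the build loop and
cacheDigestAdd boil down to: the digest is a Bloom-filter-style bit
mask, and each 16-byte key sets a handful of bits derived from it. The
struct name, the number of bits set per key, and the hashing details
are illustrative only, not Squid's actual code:

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        unsigned char *mask;   /* the digest bit array */
        size_t mask_size;      /* size of the bit array, in bytes */
    } CacheDigestSketch;

    /* set a few bits derived from the 16-byte key */
    static void cache_digest_add(CacheDigestSketch *cd,
                                 const unsigned char key[16])
    {
        int i;
        for (i = 0; i < 4; i++) {
            uint32_t h;
            size_t bit;
            memcpy(&h, key + 4 * i, 4);   /* one 32-bit slice per hash */
            bit = h % (cd->mask_size * 8);
            cd->mask[bit >> 3] |= (unsigned char) (1 << (bit & 7));
        }
    }

    /* the build loop, as per the perlism above */
    static void build_digest(CacheDigestSketch *cd,
                             const unsigned char keys[][16], size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            cache_digest_add(cd, keys[i]);
    }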
> If so then _one_ way (not necessarily the "best" :-) of building a cluster
> digest would be:
> * modify cacheDigestAdd to dump the keys to a file as it builds its
> normal digest.
> * allow an auxiliary machine to come along and pick up these files from
> all the caches.
> * The auxiliary machine would then be able to make a digest very simply
> from the keys.
>
> Whilst not necessarily a "reasonable" way of doing it - is this
> essentially correct?
As far as I can see, you will get a "correct" global digest this way.
Your Squid performance may be very sluggish while you are dumping the
keys because of the large number of disk I/Os.
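For what it's worth, a rough sketch of that dump/rebuild step, reusing
the illustrative CacheDigestSketch and cache_digest_add names from the
sketch above; the file format (raw concatenated 16-byte keys) is just
an assumption:

    #include <stdio.h>

    /* at digest build time: append each key to a flat dump file */
    static void dump_key(FILE *fp, const unsigned char key[16])
    {
        fwrite(key, 1, 16, fp);
    }

    /* on the auxiliary machine: rebuild a digest from a dump file */
    static void build_from_dump(CacheDigestSketch *cd, FILE *fp)
    {
        unsigned char key[16];
        while (fread(key, 1, 16, fp) == 16)
            cache_digest_add(cd, key);
    }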
> If so then I'll probably knock up the small code
> required to do this since it would mesh very well with some other work on
> clustering I'm doing at the moment.
A much better (and possibly simpler!) approach would be to
1. Make sure all digests are of the same (excessively big) size
   (fiddle with the constants that control the heuristics for
   sizing a digest).
2. Request digests from individual Squids and merge them
   (essentially, a bitwise OR of the digest bodies), as sketched
   below.
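With the same illustrative CacheDigestSketch as above, the merge boils
down to something like this (step 1 is what guarantees that the sizes
match):

    /* OR the peer digest into the global one; both must use identical
     * sizing parameters, otherwise the bit positions do not line up. */
    static int cache_digest_merge(CacheDigestSketch *global,
                                  const CacheDigestSketch *peer)
    {
        size_t i;
        if (global->mask_size != peer->mask_size)
            return -1;
        for (i = 0; i < global->mask_size; i++)
            global->mask[i] |= peer->mask[i];
        return 0;
    }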
$0.02,
Alex.