On Wed, Feb 02, 2000, Eric Stern wrote:
> On Wed, 2 Feb 2000, Henrik Nordstrom wrote:
>
> > A bit. However, to turn Squid into a really high-performance proxy,
> > more changes than just the object store are required. There are also
> > very obvious bottlenecks in the networking and data-forwarding code.
>
> Could you be a bit more specific? From a commercial point of view I'm
> interested in making squid perform better. For now I'm working on the
> filesystem, but after that I'll go after other areas. I was going to do
> some profiling to find the problem areas, but if you already know what
> they are it'll save me a bunch of time. :)
After I (finally) finish the squidfs mods that I've been trying to find time
to do over the last couple of weeks, I'm going to hack more at Squid's
networking code. I think what you've done with NT is a step in the right
direction; I just have to make UNIX present a similar interface for I/O .. :)
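
A minimal sketch of what such a completion-callback interface might look
like on UNIX (all names here are hypothetical, and the synchronous pread()
body is just a stand-in for a real thread-pool or kernel-AIO backend):

    #include <stddef.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical NT-overlapped-style read: the caller queues a
     * request and gets a completion callback instead of blocking. */
    typedef void (*io_done_t)(int fd, void *buf, ssize_t len, void *cbdata);

    static int aio_style_read(int fd, void *buf, size_t len, off_t offset,
                              io_done_t done, void *cbdata)
    {
        /* Degenerate synchronous stand-in; a real implementation
         * would hand this to a worker and fire 'done' later. */
        ssize_t n = pread(fd, buf, len, offset);
        done(fd, buf, n, cbdata);
        return n < 0 ? -1 : 0;
    }
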
> > I have a large can of ideas on how to make a cache object store. The
> > cyclic one is only one of the designs. In its purest circular form it
> > would require a lot of disk to maintain a good hit ratio; however, with
> > some minor modifications and by wasting some disk space you can still
> > preserve most of the properties of LRU, which should make it more
> > economical in terms of disk. The disk storage is cyclic, but object
> > lifetime is more like LRU.
>
> I think I know what you are getting at here. In a pure cyclic filesystem,
> a well-used object that is staying in the cache will get overwritten as
> the writing cycle comes around. The obvious solution is to move the
> object back to the beginning of the store every time it is hit, thus
> keeping the whole store in LRU order. The downside is all the extra
> writes involved for each hit.
>
> I've combated this in COSS in two ways. First of all, writes are combined.
> COSS keeps large memory buffers (~1MB). As new objects are added to the
> store (or hits are moved to the front), they are appended to a membuffer.
> When the membuffer fills, it is written to disk and a new membuffer is
> created. Since about 100 average-sized objects fit in one buffer, every
> 100 hits only result in 1 extra disk write, which isn't too bad. (This is
> already implemented.) We can combat this further by deciding that a hit
> object will only be moved to the front of the store if it has fallen out
> of the "top" 50% of the store. We assume that if it is within the top 50%,
> it's not in immediate danger of being overwritten and we can safely leave
> it there for a while. (This part hasn't been done yet.)
I like this idea. Another idea: if you have multiple stores, you can
look at migrating popular objects across to a new filesystem. I mean,
you're writing to a new disk anyway, and if the object is popular it should
already be in memory..
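
To make the membuffer idea concrete, here's a rough sketch of the
write-combining and the "relocate only if it has fallen out of the top
half" heuristic described above (the names, sizes, and flat pwrite()
layout are illustrative assumptions, not the actual COSS code):

    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define MEMBUF_SIZE (1024 * 1024)  /* ~1MB, holds ~100 10k objects */

    static char   membuf[MEMBUF_SIZE];
    static size_t membuf_used;
    static int    store_fd;
    static off_t  store_head;   /* current write offset in the store */
    static off_t  store_size;   /* total size of the cyclic store */

    /* Flush the buffer as one large sequential write and start over
     * (wraparound of a partial buffer at the end is ignored here). */
    static void membuf_flush(void)
    {
        pwrite(store_fd, membuf, membuf_used, store_head);
        store_head = (store_head + (off_t)membuf_used) % store_size;
        membuf_used = 0;
    }

    /* Append an object: one disk write per ~100 objects, not per
     * object. Objects larger than the buffer are ignored for brevity. */
    static off_t store_append(const void *obj, size_t len)
    {
        if (membuf_used + len > MEMBUF_SIZE)
            membuf_flush();
        memcpy(membuf + membuf_used, obj, len);
        membuf_used += len;
        return store_head + (off_t)(membuf_used - len);
    }

    /* On a hit, re-append the object only if it has aged out of the
     * newest half of the cycle and is at risk of being overwritten. */
    static void on_hit(const void *obj, size_t len, off_t at)
    {
        off_t age = (store_head - at + store_size) % store_size;
        if (age > store_size / 2)
            store_append(obj, len);
    }

The effect is that popular objects ride near the head of the cycle almost
for free, while cold objects fall off the tail.
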
> Honestly, I can't imagine anything that could be more efficient than COSS.
Async COSS? :-) In fact, it'll be nice to play with the concept of multiple
FIFOs once the code is going and is threaded..
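
One way the multiple-FIFO idea could look (purely a sketch; the round-robin
store selection and the names are my assumptions):

    #include <sys/types.h>

    #define N_STORES 4

    struct fifo_store {
        int   fd;      /* file/device backing this cyclic store */
        off_t head;    /* current write offset, wraps at 'size' */
        off_t size;
    };

    static struct fifo_store stores[N_STORES];
    static int next_store;

    /* Spread new objects round-robin so each disk sees its own
     * mostly-sequential write stream; threaded code could then give
     * each store its own writer thread. */
    static struct fifo_store *fifo_pick(void)
    {
        struct fifo_store *s = &stores[next_store];
        next_store = (next_store + 1) % N_STORES;
        return s;
    }
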
> (The figure of 1 extra write per 100 hits is based on a membuffer
> size of 1MB and 10k average object size.)
>
> I did a little math today comparing UFS to COSS. This is based on 10k avg.
> obj size, 40% hit rate and 10000 requests.
>
> UFS:
> - number of seeks: 25000
> - number of reads: 12000
> - number of writes: 13000
>
> COSS:
> - number of seeks: 4600
> - number of reads: 4000
> - number of writes: 600
>
> I did that from memory, I think it's right.
>
> Yes, COSS does use more memory. It's a tradeoff, and I am more interested
> in obtaining performance than saving memory. It's not that bad anyway; in
> some informal tests with Polygraph comparing UFS vs. COSS, the squid
> process hovered near 15MB with UFS vs. 22MB with COSS.
Memory is easier to get than faster disks.
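
The hit/miss split behind those figures is easy to check (assuming, as
stated above, 10000 requests at a 40% hit rate; the comment about COSS
reads is an inference from the table, not measured data):

    #include <stdio.h>

    int main(void)
    {
        const int    requests = 10000;
        const double hit_rate = 0.40;

        int hits   = (int)(requests * hit_rate);  /* 4000 */
        int misses = requests - hits;             /* 6000 */

        /* One read per hit matches the 4000 COSS reads quoted above;
         * misses only cost buffered (combined) writes. */
        printf("hits=%d misses=%d\n", hits, misses);
        return 0;
    }
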
Anyway, work is progressing. I'm about to get a whole lot more time to work
on this code again. It'll all appear in a SourceForge project near you..
Adrian