Alex Rousskov wrote:
>On Mon, 24 Aug 1998, Stephen R. van den Berg wrote:
>> Since, in all practical uses the mem-usage of squid tends to grow and
>> then stabilise, I've changed the behaviour of mempool a bit to allocate
>> in larger chunks as the usage grows. This effectively reduces overhead
>> and fragmentation.
>The patch intends to replace the memory pools' "stacks of individually allocated
>objects" with never-free, rapidly growing blocks of memory.
>The extra memory overhead for keeping mempools stacks is usually a few KB
>only, negligible for any reasonable configuration. I do not have any hard
>numbers for fragmentation, but my guess is that it is not a big problem with a good
>malloc library.
I would guess it to be more of a problem than you'd suspect, especially
on long-running squids (say, >4 months without a restart).
But, I guess, the only way to know for certain is to test it.
I'll see if I can improve on the ifdefs and run some tests in both the
old mode and the new one.
>Preallocation of memory could be easily done within the current scheme. If
>anybody thinks there will be a noticeable win from that, please let me know.
It reduces allocation overhead (both size and time) noticeably, especially
for StoreEntry structs. For most other pools the effect is probably
negligible (StoreEntry structs account for most of the allocations).
>For those developers who want to apply the patch, please note that not all
>new code is in #ifdefs. Thus, you cannot switch to the old behaviour once the
>patch is applied.
I thought that all the important parts had been ifdeffed, but I admit
that I never tried it without the define.
> Also note that your pools will never shrink after applying
>the patch. Actually, from what I can tell, there will be no way to enforce
>the idle memory limit at all.
That's true. It simply never frees anything, so there is no idle limit
to enforce. Not that this makes a difference in any regular squid setup.
>> + obj = xmalloc(step*pool->obj_size);
>> + memMeterAdd(pool->meter.alloc, step);
>> + memMeterAdd(TheMeter.alloc, step*pool->obj_size);
>> + while(--step) {
>> + memPoolFree(pool, obj);
>> + obj=(char*)obj+pool->obj_size;
>Not sure, but looks like you are freeing the memory that just got allocated:
>"obj = xmalloc(...); ... memPoolFree(pool, obj);" ?
That's an illusion :-). What I do is grab a large block from malloc,
then chop it up into obj_size-sized chunks, which are entered into the
free list for this pool by handing each one to memPoolFree().
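A minimal sketch of that grow path, assuming a simplified pool with just
obj_size and pstack fields (the real squid structures carry more fields and
the accounting meters omitted here):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the patched pool; field names follow the patch. */
typedef struct {
    size_t obj_size;   /* size of one pooled object */
    void  *pstack;     /* head of the intrusive free list */
} MemPool;

/* Enter one chunk on the free list: the first word of the idle chunk
 * is reused to hold the link to the next free chunk. */
static void memPoolFree(MemPool *pool, void *obj)
{
    assert(pool->obj_size >= sizeof pool->pstack);
    *(void **) obj = pool->pstack;
    pool->pstack = obj;
}

/* Grab one large block from malloc and chop it into `step` chunks of
 * obj_size bytes each, handing every chunk to memPoolFree(). */
static void memPoolGrow(MemPool *pool, size_t step)
{
    char *obj = malloc(step * pool->obj_size);
    assert(obj != NULL);
    while (step--) {
        memPoolFree(pool, obj);
        obj += pool->obj_size;
    }
}
```

So the memPoolFree() right after the xmalloc() is not freeing the fresh
block back to malloc; it is threading the chunks onto the pool's own
free list.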
>> memPoolFree(MemPool * pool, void *obj)
>> {
>> assert(pool && obj);
>> + assert(pool->obj_size >= sizeof pool->pstack);
>"Sizeof(ptr)" does not return the size of the object pointed by "ptr". It
>returns the size of "ptr". This added assertion disallows the pooling of
>objects smaller than "sizeof(void*)" bytes, as far as I can tell.
Which is exactly what we want to enforce: pool objects smaller than
that would cause problems when building the free lists, because we
store a void pointer at the start of each object when we enter it on
the free list.
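To illustrate the point: `sizeof pool->pstack` is the size of the pstack
pointer itself, so the assertion enforces obj_size >= sizeof(void*). This
sketch is hypothetical; only the names pstack and obj_size come from the
patch:

```c
#include <assert.h>
#include <stddef.h>

/* An idle object's first bytes are overwritten with the free-list link,
 * so anything smaller than a pointer would have the link spill past the
 * object's end and corrupt its neighbour. */
struct MiniPool {
    size_t obj_size;
    void  *pstack;
};

static int obj_size_ok(const struct MiniPool *pool)
{
    return pool->obj_size >= sizeof pool->pstack;  /* == sizeof(void *) */
}
```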
>>- memset(obj, 0, pool->obj_size);
>>- stackPush(&pool->pstack, obj);
>>+ *(void**)obj = pool->pstack;
>>+ pool->pstack = obj;
>Sorry, but I failed to understand what this is supposed to do.
This enters the block on the singly-linked list of free blocks.
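In other words, those two lines are the push half of an intrusive
singly-linked free list: an idle object's first word holds the link to
the next idle object, so no separate stack array is needed. A sketch,
with the pop side as the assumed counterpart (only the push mirrors the
patched lines):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the pool; only pstack's role follows the patch. */
typedef struct {
    void *pstack;   /* head of the free list */
} PoolStack;

/* The push, as in the patch: *(void**)obj = pool->pstack; pool->pstack = obj; */
static void poolPush(PoolStack *pool, void *obj)
{
    *(void **) obj = pool->pstack;
    pool->pstack = obj;
}

/* The matching pop used when allocating from the pool. */
static void *poolPop(PoolStack *pool)
{
    void *obj = pool->pstack;
    if (obj != NULL)
        pool->pstack = *(void **) obj;
    return obj;
}
```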
>> +#if 0 /* bogus assertion? srb */
>> assert(pool->meter.idle.level <= pool->meter.alloc.level);
>> +#endif
>The assertion is not bogus. It reads "idle memory must be at most the size of
>allocated memory". In other words, something that was not allocated cannot
>become idle. In fact, just a few days ago we detected a bug thanks to this
>assertion. If it fires after applying the patch, the accounting part of the
>pools probably got broken by the patch.
You are probably right. I had a bit of trouble understanding the
accounting; I got it right as far as everything displayed in the
cachemgr output is concerned, but the idle.level and alloc.level
variables apparently weren't essential to that.
--
Sincerely, srb@cuci.nl Stephen R. van den Berg (AKA BuGless).
This signature third word omitted, yet is comprehensible.
Received on Tue Jul 29 2003 - 13:15:52 MDT