On 28 Mar 2001, at 9:34, Henrik Nordstrom <hno@hem.passagen.se> wrote:
> Alex Rousskov wrote:
>
> > I do not think the conflict is a big deal though. When chunk
> > allocation is enabled, memory_pools_limit can be ignored (in favor of
> > a different option) or interpreted differently.
>
> Don't see the need for a new option, but I am wondering how you plan
> on implementing an effective limit in chunked allocation mode?
I was looking into exactly that when I played around with chunked
MemPools. Below is what I ended up with.
> Ah, well, it might not be that big of a problem if chunked allocations
> are sorted per chunk (i.e. use entries from the most filled chunk
> first).
I decided to sort chunks in RAM by pointer value, so that the memory in
use tends to compact toward the start of the heap over time: we always
pick the chunk closest to the heap start. Idle chunks are released if
they are not touched for some time, or if we are over
memory_pools_limit. Chunk size varies depending on item size.
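To illustrate the sorted-chunk idea, here is a minimal sketch (simplified, not the actual MemPools code; the struct and function names are mine): chunks are kept in a list ordered by address, and allocation always comes from the lowest-addressed chunk with free items.

```c
#include <stddef.h>

/* Illustrative chunk descriptor; real MemPools chunks carry more state. */
typedef struct Chunk {
    struct Chunk *next;
    int free_items;     /* items still available in this chunk */
} Chunk;

/* Insert so that chunks with smaller pointer values sit nearer the head.
 * (Comparing addresses of separately allocated chunks is technically
 * implementation-defined in C, but works on common platforms.) */
Chunk *chunk_insert_sorted(Chunk *head, Chunk *c)
{
    Chunk **p = &head;
    while (*p && *p < c)        /* walk until we pass c's address */
        p = &(*p)->next;
    c->next = *p;
    *p = c;
    return head;
}

/* Always allocate from the chunk closest to the heap start that has room,
 * so chunks in higher memory drain and become idle/freeable. */
Chunk *chunk_pick(Chunk *head)
{
    Chunk *c;
    for (c = head; c; c = c->next)
        if (c->free_items > 0)
            return c;
    return NULL;
}
```

The point of the address ordering is that the list head becomes a hotspot: most requests are serviced from low memory, and high-memory chunks slowly empty out.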
/*
Old way:
xmalloc each item separately; upon free, stack it into the idle pool
array. Each item is individually malloc()ed from the system, imposing
libmalloc overhead, and we additionally add our own overhead of one
pointer per item, since we keep a list of pointers to free items.
Chunking:
xmalloc a Chunk that fits at least MEM_MIN_FREE (32) items in an array,
but limit Chunk size to MEM_MAX_SIZE (256K). Chunk size is rounded up to
MEM_PAGE_SIZE (4K), trying to keep chunks in multiples of the VM page
size. Minimum Chunk size is MEM_CHUNK_SIZE (16K).
A number of items fits into a single chunk, depending on item size.
Maximum number of items per chunk is limited to MEM_MAX_FREE (65535).
We populate the Chunk with a linked list, each node stored in the first
word of an item and pointing at the next free item. Chunk->FreeList
points at the first free node. Thus we stuff the free-list housekeeping
into the Chunk itself and avoid the per-item pointer overhead.
Chunks are created on demand, and new chunks are inserted into the
linked list of chunks so that Chunks with smaller pointer values are
placed closer to the list head. The head is a hotspot, servicing most
requests, so a slow sorting occurs, and Chunks in highest memory tend
to become idle and freeable.
An event is registered that runs every 15 secs and checks the reference
time of each idle chunk. If a chunk has not been referenced for 15 secs,
it is released.
If mem_idle_limit is exceeded with pools, every chunk that becomes
idle is immediately considered for release, unless it is the only
chunk with free items in it.
In cachemgr output there are new columns for chunking. A special item,
Fragm, is shown to give a rough estimate of the fragmentation of
chunked pools. Fragmentation is calculated by taking the number of
items in use, calculating the number of chunks needed to fit them all,
and comparing that to the actual number of chunks in use. The Fragm
number, in percent, shows by how much the chunk count exceeds the
minimum needed; 100% means that twice the needed number of chunks are
in use.
*/
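The in-chunk free list described above can be sketched roughly as follows (a simplified illustration, not the actual code; names like chunk_create are mine). The trick is that the first word of each free item stores the pointer to the next free item, so free items double as their own bookkeeping and no side array of pointers is needed:

```c
#include <stdlib.h>
#include <stddef.h>

/* Illustrative chunk header; item_size must be at least sizeof(void *)
 * so a next-pointer fits inside a free item. */
typedef struct {
    void *FreeList;   /* first free item, or NULL when chunk is full */
    void *mem;        /* the raw item array */
    int inuse;        /* items currently handed out */
} MemChunk;

MemChunk *chunk_create(size_t item_size, int items)
{
    MemChunk *c = malloc(sizeof(*c));
    char *p;
    int i;
    c->mem = malloc(item_size * items);
    c->inuse = 0;
    c->FreeList = c->mem;
    /* Thread a linked list through the items themselves: each free
     * item's first word points at the next free item. */
    p = c->mem;
    for (i = 0; i < items - 1; i++, p += item_size)
        *(void **)p = p + item_size;
    *(void **)p = NULL;           /* last item terminates the list */
    return c;
}

void *chunk_get(MemChunk *c)
{
    void *item = c->FreeList;
    if (item) {
        c->FreeList = *(void **)item;  /* pop head of free list */
        c->inuse++;
    }
    return item;
}

void chunk_put(MemChunk *c, void *item)
{
    *(void **)item = c->FreeList;      /* push back onto free list */
    c->FreeList = item;
    c->inuse--;
}
```

Note that freed items are reused LIFO, which also helps keep the working set hot in cache.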
Generally it showed very few fragmentation problems, and even returned
to normal after sudden spikes of mallocs, but it is still prone to
leaving several chunks around that hold only a few, or even a single,
used item. I guess that is unavoidable with any kind of chunked
allocation.
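For reference, the Fragm number mentioned above could be computed roughly like this (my reconstruction of the described calculation, not the actual cachemgr code):

```c
/* Fragmentation in percent: how many chunks are in use beyond the
 * minimum needed to hold all in-use items. 100 means twice the needed
 * number of chunks are allocated. */
int fragm_percent(int items_inuse, int items_per_chunk, int chunks_inuse)
{
    /* minimum chunks needed, rounding up */
    int needed = (items_inuse + items_per_chunk - 1) / items_per_chunk;
    if (needed == 0)
        return 0;
    return 100 * (chunks_inuse - needed) / needed;
}
```

So a pool with 100 items in use, 100 items per chunk, and 2 chunks allocated reports 100% fragmentation.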
The main source of chunked fragmentation was the StoreEntry pool and
its close relatives. It seemed desirable to limit chunk size for
multi-million-item pools; on the other hand, other pools benefited from
large chunks. So I was wondering about adding a hint during mempool
initialisation.
------------------------------------
Andres Kroonmaa <andre@online.ee>
CTO, Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia
Received on Thu Mar 29 2001 - 14:23:21 MST