>
> >3584 + 63 * 8192 + 64 * (8192 / 4) * 8192 = 1.0005 GB (big enough for any
> >conceivable caching purposes for the next 10 years I'm guessing).
>
> This I find dangerously low. Think everyone having 34Mb/s connections
> and everyone pulling in MPEG movies.
I guess the question is, are you going to want to dedicate more than 1GB of
your on-disk space to caching a single item?
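For what it's worth, that quoted figure falls straight out of the layout being
discussed: roughly 3.5KB of data inline in the inode, 63 direct pointers and 64
single-indirect pointers over 8KB blocks with 4-byte block numbers. A quick
back-of-the-envelope in C (the names and assumptions here are mine, not part of
the proposal):

    #include <stdio.h>

    int main(void) {
        const unsigned long block    = 8192;  /* 8KB blocks, as in the quoted sum */
        const unsigned long ptr_size = 4;     /* 4-byte block numbers */
        const unsigned long inline_d = 3584;  /* data packed into the inode itself */
        const unsigned long direct   = 63;    /* direct block pointers */
        const unsigned long indirect = 64;    /* single-indirect pointers */

        /* each indirect block holds block/ptr_size further block numbers */
        unsigned long max_bytes = inline_d
                                + direct * block
                                + indirect * (block / ptr_size) * block;

        printf("max object size: %lu bytes (~%.4f GB)\n",
               max_bytes, max_bytes / (double)(1UL << 30));   /* ~1.0005 GB */
        return 0;
    }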
>
> On another note, what is to be gained by using direct and indirect
> block pointers?
Heh - that was one of my questions :) If you have direct links, you don't
have to chain through the file block by block on deletion to find the blocks
that need freeing up - you just grab the block numbers from the inode.
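To make that difference concrete, here's a sketch of the two deletion paths.
The structure and function names are mine and the disk I/O is left as stubs;
this is only meant to show where the extra reads come from:

    #include <stdint.h>
    #include <stddef.h>

    typedef uint32_t blockno_t;

    struct chunk_header { blockno_t next; };     /* lives on disk, must be read back */
    struct inode { size_t n_blocks; blockno_t blocks[63]; };

    extern void free_block(blockno_t b);                        /* mark a block free */
    extern struct chunk_header read_chunk_header(blockno_t b);  /* one disk read per chunk */

    /* direct pointers: one pass over an in-memory array, no extra disk I/O */
    void delete_direct(const struct inode *ino) {
        for (size_t i = 0; i < ino->n_blocks; i++)
            free_block(ino->blocks[i]);
    }

    /* chained chunks: every step costs a disk read just to learn where the next block is */
    void delete_chained(blockno_t first) {
        for (blockno_t b = first; b != 0; ) {
            blockno_t next = read_chunk_header(b).next;
            free_block(b);
            b = next;
        }
    }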
> How about a much simpler scheme, a linked list like:
>
> Say, 1KB block sizes, chunks consist of one or more blocks,
> whereas every chunk starts with:
>
> [ 1-byte: misc flags (reserved) ]
> [ 3-bytes: length of this chunk in bytes ]
> [ 4-bytes: blocknumber of the next chunk in the chain ]
> [ 1016-bytes: data for this chunk ]
> [ 1024-bytes: optional data to extend this chunk (no header) ]
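Rendered as a C struct, that layout would look something like the following.
This is a sketch only - the field names are mine, and it just mirrors the 1KB
block and header sizes quoted above:

    #include <stdint.h>

    #define BLOCK_SIZE  1024
    #define CHUNK_HDR   8                   /* 1 flags + 3 length + 4 next */
    #define CHUNK_DATA  (BLOCK_SIZE - CHUNK_HDR)

    /* first block of a chunk; extension blocks are raw data with no header */
    struct chunk {
        uint8_t  flags;                     /* misc flags (reserved) */
        uint8_t  length[3];                 /* chunk length in bytes, 24-bit (byte order unspecified) */
        uint32_t next;                      /* block number of the next chunk, 0 = end of chain */
        uint8_t  data[CHUNK_DATA];          /* 1016 bytes of payload */
    };

    /* no padding expected: 1 + 3 + 4 + 1016 = 1024 */
    _Static_assert(sizeof(struct chunk) == BLOCK_SIZE, "chunk must be exactly one block");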
One loss with this is the 50-odd% of items that fit within that first read. If
you can manage to cram close to 4K of data into the inode, you cut most of your
off-disk serves down to a single disk access.
The other question that arose was fragmentation - I _think_ we decided
that was better handled with bigger blocks, but Stew probably has more of
that documentation in front of him :)
The basic aim was to serve files in as few disk accesses as possible.
Removing the filename->inode lookup is a big part of that, and cramming
data in with each inode (if you can get ~4K in) is another big win. The
next one was to put in direct links for as much of the file as possible,
to speed up file deletion.
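Pulling those numbers together, the sort of inode being aimed at might look
roughly like this. It's a sketch only, assuming a 4KB on-disk inode and
glossing over whatever else (object key, replacement data) would have to live
in it; the 4-byte size field is just my guess at what fills the remaining
space, since 4 + 63*4 + 64*4 + 3584 comes out to exactly 4096:

    #include <stdint.h>

    #define N_DIRECT     63
    #define N_INDIRECT   64
    #define INLINE_DATA  3584

    struct cache_inode {
        uint32_t size;                 /* object size in bytes */
        uint32_t direct[N_DIRECT];     /* block numbers, usable without any chaining */
        uint32_t indirect[N_INDIRECT]; /* blocks that hold further block numbers */
        uint8_t  data[INLINE_DATA];    /* small objects are served from this single read */
    };

    _Static_assert(sizeof(struct cache_inode) == 4096, "inode should fill one 4KB slot");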
KevinL