Adrian Chadd wrote:
> On Sun, Feb 24, 2002, Joe Cooper wrote:
>
>
>>Stability is the primary purpose of this testing, and on that count it
>>is definitely very solid. I forgot that recent Polygraph versions don't
>>play nice with the datacomm-1 workload and never shut down--and you may
>>recall that the working set in datacomm-1 grows without bound throughout
>>the test. So at 11.92 hours in, the box is still holding
>>together very nicely--14.28GB is the current fill size (and we've got a
>>single 4GB cache_dir). Hits at this point are about what one would
>>expect from a too-small cache_dir for this test (roughly 35%). The error
>>count is a lovely '2'. Yep, two errors in twelve hours of too much load.
>>
>
> I wonder what those two errors were.
Good question! I didn't see them happen, and I wasn't running any logs
for the test, so I can't do a report on it. I usually ignore error counts
below about 20 in a full run--out of 4.5 million transactions, 2 is
probably among the better results I've seen. I just wanted to see if it
would hold up under load, but I'll start doing some 'real' tests tonight
with logging and reports and other fun stuff. I might even see about
getting one of the old test boxes running (450MHz, two disks, 256MB RAM,
as in all of the tests I ran in 2000 and 2001), so the results can be
directly compared.
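(For the curious: the 4GB spool is a single cache_dir line in squid.conf,
along these lines--the aufs type and the 16/256 L1/L2 values are just my
usual defaults, nothing tuned for this test:

    cache_dir aufs /cache0 4096 16 256

Back-of-envelope, 4GB against a 14.28GB fill covers only ~28% of the
working set, so 35% hits--a bit better than proportional, since the
popular objects tend to stay resident--is in the right neighborhood.)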
For fun, and with a little sadistic glee, I've started another run at
120 req/sec. I know this box can't hold together for four hours at 120
under any version of Squid I've ever tested. I doubt this one will
either, but it's always fun to see how far they get. The box does have a
patched 2.4.9 kernel that doesn't exhibit the awful vm/swap behavior we
saw at the cacheoff, so it should behave similarly to tests run under
2.2.x kernels.
>>I guess I should be poking at range-request-tickling things to really
>>know if something got broken, correct? (I know range handling has gone
>>away temporarily, correct? So do we always get the whole object, or do
>>we not cache range requests at all and pass them through?)
>>
>
> If it's a range request then the request is marked uncacheable and passed through.
So we can be pretty confident that they are handled correctly, I presume?
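(For anyone who wants to poke at it by hand: any client that can set a
Range header will do. With curl, something like

    curl -v -x localhost:3128 -r 0-99 http://www.example.com/somefile

through the proxy should come back as a 206 straight from the origin,
and Squid's X-Cache header should say MISS on every repetition if ranges
really are being passed through uncached. The host and port here are
just placeholders for whatever your test setup uses.)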
>>I'm going to see about turning on the checksums in Polygraph to ensure
>>that data is coming through uncorrupted. My measly three or four
>>browsing requests per minute won't necessarily turn up any bugs on that
>>count, but maybe a few thousand every hour will.
>>
>
> Yup.. that will be an interesting test. Can you let me know how it goes,
> and what you've done to the config to test that one out?
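Will do. If I'm reading the PGL docs right, it should just be a matter
of setting the checksum field on the Content definitions in the
workload--something like this (the exact field name is from memory, so
don't quote me on it):

    Content cntImage = {
        kind = "image";
        mime = { type = undef(); extensions = [ ".gif", ".jpeg" ]; };
        size = exp(4.5KB);
        checksum = 100%;  // embed an MD5 in every object body
    };

With that in place, polyclt should verify the embedded MD5 on each
response and count any mismatch as an error. I'll post a report once a
logged run completes.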
>
> Anyone else? Bueller? Bueller?
He's at home sick today, I think.
> Don't worry about the speed difference being in your head - you'll
> see the speed difference eventually, I'm just not sure if it'll be
> under Linux to begin with. :)
I guess it's only fair for FreeBSD to get some attention, since Linux
has been faster for Squid for so long. ;-)
Regardless of performance, if it makes Squid easier to code for, and
more importantly easier to debug, I'm all for it. Performance is always
good to have, though.
--
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support