I will try the deadline scheduler tonight and see if that makes a difference.<div><br></div><div>I was mistaken the first time I posted: the server is running Ubuntu 8.04 LTS with a 2.6.24 kernel.</div><div><br></div><div>I haven't found an easy way of installing an updated kernel (without compiling), so I am thinking upgrading to a newer Ubuntu release may be easier...</div>
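For anyone following along, here is a rough sketch of how to check and switch the scheduler at runtime via sysfs on a 2.6 kernel, no reboot or recompile needed. The device name sda is an assumption; repeat for each physical drive in the array, and note the change does not persist across reboots (add elevator=deadline to the kernel boot line for that).

```shell
#!/bin/bash
# Sketch: inspect and switch the I/O scheduler at runtime.
# 'sda' is a placeholder device name -- substitute your own drives.

# The active scheduler is shown in brackets, e.g.:
#   $ cat /sys/block/sda/queue/scheduler
#   noop anticipatory [cfq] deadline
# Switch to deadline (as root):
#   echo deadline > /sys/block/sda/queue/scheduler

# Small helper: pull the bracketed (active) scheduler out of that line.
active_scheduler() {
    printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

active_scheduler "noop anticipatory [cfq] deadline"   # prints: cfq
```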
<div><br></div><div><br></div><div>David</div><div><br></div><div><br><br><div class="gmail_quote">On Wed, Mar 24, 2010 at 2:01 PM, Daniel Pittman <span dir="ltr"><<a href="mailto:daniel@rimspace.net">daniel@rimspace.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="im">David Harrison <<a href="mailto:david.harrison@stress-free.co.nz">david.harrison@stress-free.co.nz</a>> writes:<br>
<br>
> I ran the smartctl tests (both short and long) on all three physical drives<br>
> overnight. It showed all drives were working 100% correctly.<br>
><br>
> Overnight I also ran a number of read/write tests and monitored the i/o<br>
> status in vmstat and iostat.<br>
><br>
> It seems like performance falls through the floor as soon as the physical<br>
> memory on the server is exhausted.<br>
><br>
> The issue I am experiencing seems to be very similar to the issue which is<br>
> documented here:<br>
> <a href="http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html" target="_blank">http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html</a><br>
<br>
</div>If I recall correctly, and I may not, there was a known issue on some older<br>
kernels where the I/O scheduler introduced long stalls. It was a bug in the<br>
CFQ scheduler code, IIRC, which is why tuning the write periods, changing to<br>
another scheduler like AS or deadline, or using a newer kernel would resolve<br>
it.<br>
<div class="im"><br>
> I've checked the kernel parameters that are mentioned in this article<br>
> (dirty_ratio and dirty_background_ratio) and they are the values that are<br>
> recommended.<br>
<br>
</div>You might try another I/O scheduler and see if it helps. A newer kernel, if<br>
your distribution has one, is another possible path.<br>
<div class="im"><br>
<br>
> Putting more RAM in the machine will certainly forestall the issue, but<br>
> beyond that it may be a case of trying RAID1 instead of RAID5.<br>
<br>
</div>FWIW, I don't see this sort of behaviour on machines with MD RAID5 or RAID6.<br>
<br>
They are otherwise quite different to (my understanding of) your system<br>
configuration, so this just adds the data point that it isn't universal to all<br>
uses of those tools.<br>
<br>
Daniel<br>
<font color="#888888">--<br>
✣ Daniel Pittman ✉ <a href="mailto:daniel@rimspace.net">daniel@rimspace.net</a> ☎ +61 401 155 707<br>
♽ made with 100 percent post-consumer electrons<br>
</font><div><div></div><div class="h5"><br>
<br>
--<br>
Wellington Linux Users Group Mailing List: <a href="mailto:wellylug@lists.wellylug.org.nz">wellylug@lists.wellylug.org.nz</a><br>
To Leave: <a href="http://lists.wellylug.org.nz/mailman/listinfo/wellylug" target="_blank">http://lists.wellylug.org.nz/mailman/listinfo/wellylug</a><br>
</div></div></blockquote></div><br></div>