I thought it was I/O bound too, as it is running a software RAID 5 array.
I would like it to be better, but the client can't afford hardware
upgrades right now.

Does 6% I/O wait time (from vmstat) constitute really bad disk performance?
I've got systems with much higher wait times that have far lower loads.

Here's the header output from top (the two active processes are
registering 1% CPU load each):

top - 17:05:54 up 2 days, 20:25,  1 user,  load average: 2.74, 1.17, 0.74
Tasks:  71 total,   2 running,  69 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 14.3%id, 85.4%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   2074112k total,  2020768k used,    53344k free,    13128k buffers
Swap:  3903608k total,      828k used,  3902780k free,  1770188k cached
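
Worth noting for anyone chasing the same symptom: Linux's load average
counts processes in uninterruptible sleep (state D, usually blocked on
disk I/O) as well as runnable ones, which is how load can sit near 3
while nothing shows any CPU usage. A generic way to catch the blocked
tasks, assuming a procps-style ps:

# Tasks in uninterruptible sleep (state D) count toward the load
# average even though they use no CPU; wchan hints at what each
# one is blocked on.
ps -eo state,pid,user,wchan:30,cmd | awk '$1 == "D"'

Running that a few times while the load is high should name the culprits.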

Here's the vmstat output:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0    828  53416  13228 1770304    0    0    12    22    0   24  0  4 90  6
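
One caveat with that snapshot: the first line vmstat prints is an
average over the time since boot, which is why wa shows only 6 here
while top is reporting 85.4%wa right now. Sampling over intervals gives
the live picture, e.g.:

# The first report is the since-boot average; the ones after it
# are fresh 1-second samples, so watch the wa column in those.
vmstat 1 5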

And finally iostat:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.28    0.00    4.47    5.73    0.00   89.52

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               4.12        22.55        56.06    5558094   13814570
sdb               3.79        22.91        56.63    5646538   13955010
sdc               4.06        23.30        56.80    5742544   13997936
md0               0.00         0.02         0.00       3756         10
md1               0.00         0.01         0.01       1592       2176
md2               1.26         2.60         9.28     640370    2287456
md3               9.40        46.87        58.95   11551194   14527312
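
Those plain iostat figures don't show latency or how busy the disks
actually are. If the installed sysstat supports it, extended mode is
more telling, since await (average I/O wait in ms) and %util (device
saturation) are reported per disk:

# Extended per-device stats, 2-second intervals, 5 reports;
# skip the first report, it's the since-boot average.
iostat -x 2 5

If %util on the sd* members sits near 100%, the write penalty of
software RAID 5 is the likely bottleneck.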

On Mon, Mar 22, 2010 at 5:02 PM, Daniel Reurich <daniel@centurion.net.nz> wrote:

> On Mon, 2010-03-22 at 16:49 +1300, David Harrison wrote:
> > Hi,
> > Has anyone experienced high load averages but not been able to see
> > the processes causing them?
> >
> > I've got an Ubuntu Server 9.10 instance whose load average ranges
> > between 1.0 and 3.0 for most of the day, yet tools like top and iostat
> > don't reveal any issues.
> > i.e. The load average can be up around 1.5 whilst the busiest process
> > viewed in top is sitting at 5% of the CPU.
> >
> > Anyone know of any other good tools for identifying the cause of
> > server load if the obvious ones fail?
> >
>
> What's the wait state like (in top it's the %wa value)?
>
> Chances are that you have some serious I/O blocking going on, which could
> be a slow or failing hard disk or something like that.
>
> --
> Daniel Reurich.
>
> Centurion Computer Technology (2005) Ltd
> Mobile 021 797 722