Reduce I/O Usage

Hi guys,
I'm having a really strange problem with source I/O usage: my RAID 1 can't keep up and the server load goes insane. The CPU is more than enough and so is the RAM, and the HDDs are brand new, but from time to time the array just can't handle the ices-cc encoders. Is there some way to reduce that I/O usage? I'm out of ideas. That's why I also opened a separate topic about editing the configuration templates, for example to disable logging.

 load average: 26.13, 31.58, 40.05

[root@meg ~]# ps afux | grep -ir source | wc
     96    1440   13104

[root@meg ~]# iostat -x 1
Linux 2.6.32-358.2.1.el6.x86_64 (*********)       03/31/2013      _x86_64_        (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.49    0.00    5.64   22.22    0.00   62.65

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0.69   341.14    6.23  389.62  2799.45  5908.71    22.00     4.54   11.42   1.47  58.24
sda               0.78   329.48    6.29  401.49  2884.21  5908.74    21.56     3.97    9.72   1.29  52.57
md1               0.00     0.00    7.78  737.73  4901.73  5901.77    14.49     0.00    0.00   0.00   0.00

Wow! I'm running hundreds of Ices instances, and yeah, they hammer the I/O *a lot*, but I'm dirty-caching and delaying writes as much as Linux lets me... that saves me for a while.
What do you mean by dirty-caching and delayed writes? I wonder if I should symlink the source logs to /dev/null, or if there's some way to reduce the I/O rate through the encoding options.
I found that every ices-cc instance is writing to vhosts/username/var/log/ices.cue like crazy, so I symlinked it to /dev/null. I hope this won't be a problem. No issues so far, and as for the load, well, you can see the difference:

load average: 0.93, 7.35, 8.13
How exactly did you manage to symlink it to /dev/null?

I was talking about adding noatime and commit=600, for example, to /etc/fstab for your active partition -- roughly along the lines of the sketch below.
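
To put some flesh on both of those: the fstab side might look like the line below, and the "delay writes" part from my earlier post is the kernel's dirty-page writeback tuning via sysctl. All values here are purely illustrative, the device, mount point and filesystem type are placeholders for whatever your install actually uses, and keep in mind that the longer you let dirty data sit in RAM, the more you can lose on a crash:

# /etc/fstab -- example entry only, adjust device, mount point and fs type
# noatime    = don't write an access-time update on every file read
# commit=600 = commit the journal every 600s instead of the default 5s
/dev/md1   /   ext4   defaults,noatime,commit=600   0 1

# /etc/sysctl.conf additions -- apply with: sysctl -p
vm.dirty_ratio = 40                  # up to 40% of RAM may hold dirty (unwritten) pages
vm.dirty_background_ratio = 10       # background flushing kicks in at 10% dirty
vm.dirty_expire_centisecs = 6000     # dirty pages can sit up to 60s before being flushed
vm.dirty_writeback_centisecs = 1500  # flusher threads wake every 15s instead of every 5s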
Wow.  Are you 100% sure it's the cue file that made this difference?

I've never really investigated how ices manages ices.cue (we don't read it at all as part of Centova Cast), but I had assumed it was some kind of FIFO or some such -- something in memory rather than an actual file being constantly written to disk.  If it's really a plain file being rewritten constantly, that is an absolutely braindead feature -- this will wreak havoc not just on RAID arrays, but also on NFS and other networked filesystems.  And there seems to be no way to turn the damned thing off.

If it's just constantly rewriting it as a normal disk file, I might be interested in patching ices-cc to completely remove the cue file as that's a horrible waste of disk bandwidth.
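
A quick way for anyone to check what it actually is on their install (using the path from the earlier post):

stat vhosts/username/var/log/ices.cue

stat will report "regular file" for an ordinary on-disk file and "fifo" for a named pipe.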

I'm 100% sure: after pointing the cue file location (in the ices-cc config file) to /dev/null, the load decreased *a lot*.

Without that workaround my server sat at a load average of about 7-8 (dual Xeon E3-1245 with RAID); now it's below 1, and the /etc/fstab mods decreased it further to about 0.15. The CPU load is the same, but there are far fewer I/O writes.
I've done some testing on my v2 install, and redirecting ices.cue logging to /dev/null does indeed save ~400 kb/s of writes per instance -- not a huge overhead on its own, but it soon adds up.
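
If you want to reproduce that measurement on your own box, pidstat from the sysstat package (assuming it's installed) shows per-process write rates; the kB_wr/s column for each ices process is the number to watch:

pidstat -d 1 | grep -i ices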

To apply this change, stop the stream, cd to the vhost's var/log/ directory, run the command below, and restart it.

ln -s /dev/null ices.cue
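
One gotcha: if ices.cue already exists, ln -s will refuse to overwrite it, so remove it first. Spelled out in full (the path is just an example -- use wherever your install actually keeps the vhost's logs):

cd /path/to/vhosts/username/var/log   # example path only
rm -f ices.cue                        # remove the existing cue file
ln -s /dev/null ices.cue              # cue updates now go nowhere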
Or set it up to point to /dev/null via the ices config file.

It's not an issue for 1, 5, 10, or even 20 ices instances. But with 300+, even on a *very* fast NAS it's a huge problem -- especially the latency.
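Quick back-of-the-envelope, taking the ~400 kb/s per-instance figure above at face value: 300 instances x 400 kb/s is on the order of 120 MB/s of constant small cue rewrites (or around 15 MB/s if that figure was kilobits), before any actual audio or playlist I/O touches the disks. Either way it's a lot of needless seeking, which is exactly what kills latency.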
I'm not sure why, but since I applied this change I've seen a big jump in write iowait time...