VPS

I have now set up my new VPS, with the following streams running:

Shoutcast v1
Shoutcast v2
Ices-cc
Icecast v2

Nope, it runs on our VPS with a load of:
0.07

But it depends on what your VPS configuration is. Ours is 4 vCores & 4 GB of memory -- probably a lot larger than yours.

With the correct set-up, it shouldn't take very many resources.

No problems here either. SC's right -- it depends on the resources allocated to the VPS, and it should not affect the main server load.

I've had it running since last night, and even during compiling it barely registers. There were a couple of spikes for a couple of minutes, so I'm looking for the process now to post, but overall, even with 100+ connects, there's no load with ices.


Yes, the load for me is also fine -- however, as soon as I start running the ices-cc autoDJ with either SHOUTcast v1 or Icecast, the load jumps a lot higher. It's not overloaded, but if I run any more it will be. With only SHOUTcast v2 and sc_trans, the load just hangs around the zero mark.
Steve, is there a way you can run SHOUTcast v1 with sc_trans? I don't think the playlist scheduler will work, but could it still be an option?

I actually think Steve's tests are very helpful, as no one has generally been able to provide a like-for-like comparison between sc_trans and something else (I know it's not fully scientific, but doing it on the same host is about as good as it's going to get anyway). And as things differ from one machine to another, the fact it's all done on the same machine is good (and actually gives me something to target down the road).


The main thing that comes out of this which I do agree with is that sc_trans shouldn't transcode input media which is already at the correct bitrate -- something I noted down a while back (but it requires some hefty changes to get it to not transcode unless needed).

As for the higher memory and CPU usage: I cannot explain the memory, other than it's likely from sc_trans using a lot more for buffers, etc. For the CPU usage, I know of a few things which can lead to it being higher (which I've tweaked internally), but since the two use different encoding libraries to my knowledge, that would explain some of it (I know the AAC library now used takes more CPU but gives better AAC output than the previous library, so the same could be in effect with the MP3 encoding).

I'm not saying what it does at the moment is right (I would love for it to be a lot more VM-friendly), but as it's a beta and hasn't been loved as much as it should have been since being started over from scratch, it's not as bad as I thought it was going to be, going by the numbers that have been provided.


I don't get the whole keeping-on-v1 thing. Yes, I know it seems to freak out and/or confuse people that you have to get an authhash (with too many people referring to old sites rather than the official docs and then bitching at me that there's nowhere to sign up) and that it's now supposedly so much more difficult to run. Yes, the bugs in the current public build can cause issues, but that should not be the case once you have a build which doesn't have the bugs and can be used as a straight drop-in replacement for the v1 DNAS.

On the authhash side of things, I've made changes so it is going to be automated for the next release (unless explicitly disabled), which makes it just like running a v1 DNAS but without the issues a v1-based listing has (a changing station ID and the possibility of being incorrectly merged with another station -- the authhash resolves that, as there is finally a fixed piece of information, so the issues I've mentioned do not happen). So connect your source and, as long as it's been set up correctly (how hard is it to specify two passwords? since that's all that's needed at its most basic), it will just run and list itself without issue.
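
For reference, at its most basic the sc_serv side really is just two passwords and a port -- something along these lines (option names as in the standard DNAS v2 example config; check the official docs if in doubt):

Code: [Select]
; minimal sc_serv v2 config -- the two passwords plus the base port
password=source_password_here
adminpassword=admin_password_here
portbase=8000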

So it's good to know someone out there sees the sense in not supporting the v1 DNAS over the v2 DNAS (even though I'm somewhat behind on a new public release), as officially v1 is not supported anymore (it's 4+ years now since it last saw an update).

-daz

sc_trans & Centova v2 running on the same server and it's the same way and has been forever -- ices-cc always runs at a higher load
Well, at least in this screenshot, sc_trans v1 is showing a CPU utilization of 4.2-4.8%, which is at least in a range I'd consider reasonable even if it *is* lower than ices. (And FTR, even if sc_trans v1's CPU usage is lower than ices-cc's, it's still a no-brainer -- sc_trans v1 is the buggiest, crashiest, memory-leakiest piece of unfortunate code... ugh. :) )

I just don't understand how you're getting away with 0.8-1.2% CPU utilization in sc_trans2... especially when, on the exact same make and model of CPU, I'm getting totally different results.  Doesn't make sense.



Well, if you liked that screenshot you'll love this one. I have the "other autoDJ" that uses sc_trans & Centova v2 running on the same server, and it's the same way and has been forever -- ices-cc always runs at a higher load:

http://myautodj.com/load_adv_2.JPG

Please see my screenshot -- ices-cc shows 3 times the load vs sc_trans
I am completely and totally stumped by this.  On a whim I took a peek at one of our third-party VPSes that we use for redundancy and it happens to use the *exact* same CPU that your VPS is using:

Code: [Select]
xxxx:/# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz
stepping        : 2
cpu MHz         : 2393.998
cache size      : 12288 KB
...

So I copied sc_trans2, sc_serv2, and ices to that machine, and eliminated ALL of the variables I could by configuring both ices and sc_trans2 to play the same single MP3 in a loop, with both of them connecting to sc_serv2 servers (rather than ices connecting to icecast).
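
(For anyone wanting to reproduce this, the "single MP3 in a loop" part is just a one-track playlist pointed at the same file for both autoDJs -- the path below is purely an example:)

Code: [Select]
# one-track playlist fed to both ices and sc_trans2 (example path only)
echo /path/to/test.mp3 > /tmp/single-track.m3u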

Then I fired them up and checked their resource utilization:

Code: [Select]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10261 root      20   0 25652 2592 1688 S    6  0.5   0:14.56 ices
10290 root      20   0  292m 8056 3048 S    5  1.6   0:11.32 sc_trans

So ices is indeed using slightly more CPU time than sc_trans on this Intel machine -- roughly the same as it's using on your machine -- but sc_trans is only using 1% less.  The stark difference we're seeing on your machine is not present here.  And on a machine with these specs, I'd expect to see about 5% CPU utilization per autoDJ... I really can't see why you're only seeing 1% utilization with sc_trans on your machine.

Weird.


I think if there aren't any listeners on sc_trans there is little load
Unless I'm unaware of some fundamental element of how sc_trans v2 works, I don't think that's the case.  The source shouldn't have any knowledge of how many listeners are connected to the server -- it just performs its job of sourcing the server regardless of whether anyone's connected or not.

That seems to be confirmed by simply tuning in to one of the streams while watching the CPU utilization of its sc_trans process -- it doesn't change.
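
(If you want to check this on your own server, something along these lines will show it -- just tune a player in and out while it runs:)

Code: [Select]
# refresh sc_trans' CPU/memory every 5 seconds while connecting/disconnecting a listener
watch -n 5 "ps -C sc_trans -o pid,%cpu,%mem,cmd"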

Please see my screenshot -- ices-cc shows 3 times the load vs sc_trans
Oddly enough that does in fact seem to be true, and your configurations do seem to be identical.  But I haven't a clue why you would be seeing the polar opposite of the resource utilization I'm seeing.

I can't help but wonder if maybe there are some CPU optimizations or somesuch that ices is not taking advantage of... it hasn't been updated in quite some time, after all, and transcoding is a very math-heavy procedure.  All of the machines I've been testing on use AMD processors whereas you're using an Intel CPU -- I should run some tests on Intel machines and see if I can reproduce what you're seeing.


The load on this node at present is a lot lower:

load average: 0.48, 0.29, 0.22
George
Honestly, load average tells you nothing useful about the performance of the autoDJ software.  It's basically a measure of how many processes are waiting for CPU time, and you can't assume it's caused by a specific process unless you see a correspondingly high resource utilization (eg: via utilities such as "top" and "iotop") in that particular process.
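
For example, a quick way to see which processes are actually busy (rather than guessing from the load average) is something like:

Code: [Select]
# per-process view sorted by CPU -- this is what matters, not the load average itself
ps -eo pid,user,%cpu,%mem,cmd --sort=-%cpu | head -15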


As soon as I start running the ices-cc autoDJ with either SHOUTcast v1 or Icecast, the load jumps a lot higher. It's not overloaded, but if I run any more it will be. With only SHOUTcast v2 and sc_trans, the load just hangs around the zero mark.

I find this really, really hard to believe, assuming you're doing an apples-to-apples comparison.  You have to make sure that:
  • both tests are performed simultaneously (so there aren't any external factors affecting either test)
  • sc_trans2 is transcoding local MP3 files and sourcing the server normally -- NOT REBROADCASTING A LIVE SOURCE CONNECTION!
  • sc_trans2 is generating an MP3 stream, not AAC+
  • both sc_trans2 and ices are sourcing just ONE mount point (remember that if you have it sourcing two mount points, it'll use double the CPU time)
  • both sc_trans2 and ices are encoding at the same bit rate

Just to make sure I wasn't talking out of my ass, I decided to look into it. I ran a comparison between ices-cc and sc_trans2 on two different servers and got the results below. This is with all autoDJs set to 128kbps 44kHz MP3, one encoder (one mount point), with "top" set to a 5-second interval. Obviously this isn't a controlled, scientific test, but I can say that it's representative of my experience with sc_trans2 and ices.
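
(If you want to reproduce this, the figures below are just the relevant rows from top; a batch-mode run along these lines, with the process names adjusted to your install, gives the same columns:)

Code: [Select]
# two batch-mode samples at a 5-second interval, filtered to the autoDJ processes
top -b -d 5 -n 2 | grep -E 'sc_trans|ices'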

Server 1:
Code: [Select]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
14265 ccuser    20   0  232m  10m 3116 S 19.3  2.6   1:15.29 sc_trans
29381 ccuser    20   0 29160 6100 1336 R 13.4  1.5 517:35.34 ices
29375 ccuser    20   0 26136 2984 1304 S  0.3  0.7 474:09.30 ices

Server 2:
Code: [Select]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29684 ccuser    20   0  171m  13m 3104 R 10.4  2.6   1:15.99 sc_trans
28679 ccuser    20   0 26064 2832 1640 S 10.2  0.5   4:22.43 ices
16529 ccuser    20   0 25584 2412 1628 S  0.2  0.5 113:17.08 ices

As you can see, at best I recorded a 0.2% CPU utilization difference between sc_trans and ices, and at worst I recorded a nearly 6% difference between the two (in ices' favor).

sc_trans also uses *much* more memory than ices -- on one server there is nearly a ten-fold difference in virtual memory allocation (and more than double the resident memory usage) between sc_trans and ices.

I also compared disk utilization between sc_trans2 and ices using "iotop", but the results were so boring as to be irrelevant... as you'd expect, you get a periodic block read every now and again (to read MP3 content from disk) and other than that, everything remains idle at 0 KB/s.
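
(For reference, that disk check was nothing fancier than iotop in batch mode; -o hides processes that aren't doing any I/O:)

Code: [Select]
# a few 5-second samples of per-process disk I/O; only processes actually doing I/O are shown
iotop -b -d 5 -n 3 -o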

You'll also note that there is a third process in each of the above -- a second ices instance -- with dramatically lower CPU utilization. That's because ices is smart enough to autodetect when an MP3 file is already encoded at the correct bitrate/samplerate, and in that case it does NOT decode and then re-encode it -- it just uses it verbatim. So on streams where users have pre-encoded their media with the correct parameters, ices' CPU utilization can actually approach 0% while doing the *exact same job* as sc_trans.


Now, back on point, the reason I take issue with My Auto DJ's comments is that I have NEVER seen a situation where sc_trans2 was actually *doing* something, yet showed near-zero CPU utilization. As far as I'm aware that simply does not happen, which leads me to believe there is a flaw in his measurement technique. It's possible that My Auto DJ performed his measurements while sc_trans was rebroadcasting a live source, which is NOT an accurate comparison -- the moment you connect to sc_trans with a live source, its CPU utilization drops dramatically.

Having said all that, I'm not hating on sc_trans2 -- sc_trans2 offers FAR more features than ices, so it may be a worthwhile tradeoff... but make your choice based on functionality, not performance, as ices' performance ranges from "on par with" to "better than" sc_trans2 in every situation I've encountered.


Steve, is there a way you can run SHOUTcast v1 with sc_trans? I don't think the playlist scheduler will work, but could it still be an option?
There's no technical reason we couldn't do that, but IMHO it's pointless -- SHOUTcast DNAS v2 is a much better product than DNAS v1 (and is more actively maintained), so it makes sense to just use DNAS v2, which is already supported in Centova Cast for use with sc_trans v2.   Unless of course there is a specific reason why our client base would PREFER to use v1 over v2...?


My server load details:

Code: [Select]
   
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
24161 centovac  15   0 13096 6792 2664 S  0.7  0.1   0:01.24 cc-appserver
21608 centovac  16   0 19808  13m 2988 S  0.0  0.2   0:06.59 cc-appserver
21878 centovac  15   0 18508  12m 2968 S  0.0  0.2   0:05.39 cc-appserver
28565 centovac  18   0 14840 1744  808 S  0.0  0.0   0:01.02 cc-web
28566 centovac  18   0 14408 1048  480 S  0.0  0.0   0:00.00 cc-web


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22066 ccuser    17   0  186m 8168 2768 R  4.0  0.1   2:00.61 sc_trans
22026 ccuser    17   0 70664 6196 2504 R  0.0  0.1   0:04.35 sc_serv
28070 ccuser    18   0 35808 2232  800 S  0.0  0.0   1:34.59 sc_serv
28074 ccuser    18   0  5404 1944 1340 S  0.0  0.0   0:12.69 ices
28567 ccuser    15   0  3448 1120  624 S  0.0  0.0   0:00.09 cc-control
30379 ccuser    15   0 11692 3252 2440 S  0.0  0.0   0:00.00 icecast
30387 ccuser    18   0  5224 1836 1332 S  0.0  0.0   0:00.00 ices

Time:                    Wed Nov  7 05:20:46 2012 +0300
1 Min Load Avg:          6.42
5 Min Load Avg:          6.00
15 Min Load Avg:        4.81
Running/Total Processes: 6/104

 

CPU details:

Code: [Select]
  root@server2 [~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 30
model name      : Intel(R) Xeon(R) CPU           X3440  @ 2.53GHz
stepping        : 5
cpu MHz         : 2533.409
cache size      : 8192 KB
 

PS:

If you have a problem with the Icecast build, i.e.:

Code: [Select]
/usr/lib/libxslt.so: undefined reference to `xmlXPathContextSetCache'
collect2: ld returned 1 exit status
make[3]: *** [icecast] Error 1
make[3]: Leaving directory `/usr/local/src/icecast-2.3.3/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/icecast-2.3.3/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/icecast-2.3.3'
make: *** [all] Error 2
icecast make failed, aborting
Installer exited with error, aborting

after running the update:

Code: [Select]
/usr/local/centovacast/sbin/update
or

Code: [Select]
/usr/local/centovacast/sbin/update --add --icecast-fromsrc
then run the following:

Code: [Select]
cd /usr/lib
rm libxml2.so
rm libxml2.so.2
ln -s /usr/lib/libxml2.so.2.6.26 /usr/lib/libxml2.so
ln -s /usr/lib/libxml2.so.2.6.26 /usr/lib/libxml2.so.2 
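
You can confirm the links before continuing -- both should now point at the installed 2.6.26 library (adjust the version string if your box has a different libxml2):

Code: [Select]
ls -l /usr/lib/libxml2.so /usr/lib/libxml2.so.2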

Then run the update again:

Code: [Select]
  /usr/local/centovacast/sbin/update 
If all went well, run the following:

Code: [Select]
  tail -n 15 /usr/local/centovacast/etc/cc-control.conf 
You should then end up with something like the below:

Code: [Select]
root@server2 [~]# tail -n 15 /usr/local/centovacast/etc/cc-control.conf

# specify the full pathnames to your application binaries below; note that
# this is normally handled automatically by Centova Cast's application
# installer, i.e., to install IceCast, you'd just use:
#    /usr/local/centovacast/sbin/update --add icecast
# And the ICECAST_BIN option below would be configured automatically.
ICES_BIN=
ICES2_BIN=
EZSTREAM_BIN=
SCTRANS_BIN=
SHOUTCAST2_BIN=/usr/local/centovacast/shoutcast2/sc_serv
SCTRANS2_BIN=/usr/local/centovacast/sctrans2/sc_trans
ICESCC_BIN=/usr/local/centovacast/ices//bin/ices
SHOUTCAST_BIN=/usr/local/centovacast/shoutcast1/sc_serv
ICECAST_BIN=/usr/bin/icecast
root@server2 [~]#
 

Let me know if you have any comments.

George

Yeah, my head hurts from reading all of that.

What's your point?
All I was trying to say was that I have now set up a fresh VPS & re-installed Centova Cast 3.0, and it appears some of the problems I had on my previous VPS are no longer present on the new one, so I wanted to check that I have no more load issues.

Server details:

Code: [Select]
     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22066 ccuser    18   0  186m 8460 2776 R  4.0  0.1  81:24.07 sc_trans
22026 ccuser    18   0 70664 6196 2504 R  0.0  0.1   0:37.22 sc_serv
28070 ccuser    18   0 35808 2232  800 S  0.0  0.0   2:39.65 sc_serv
28074 ccuser    15   0  5404 1944 1340 S  0.0  0.0   0:27.10 ices
28596 ccuser    15   0  3380 1168  724 S  0.0  0.0   0:00.01 cc-control
30379 ccuser    15   0 11824 3324 2440 S  0.0  0.0   0:12.71 icecast
30387 ccuser    18   0  5524 2100 1332 S  0.0  0.0   0:14.66 ices


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17413 centovac  16   0 13756 6664 2400 S  0.0  0.1   0:01.31 cc-appserver
17435 centovac  18   0 12760 5988 2400 S  0.0  0.1   0:01.28 cc-appserver
28594 centovac  15   0 14540 1372  688 S  0.0  0.0   0:00.13 cc-web
28595 centovac  18   0 14408 1044  464 S  0.0  0.0   0:00.03 cc-web

But in the last few days I have been getting a load-related email generated by my server:

Code: [Select]
  [statscheck] Stats/Server Overload on server2

IMPORTANT: Do not ignore this email.
  This is cPanel stats runner on server2.xxxxxxxxxxx.co.uk!
  While processing the log files for user shoutcas, the cpu has been
maxed out for more than a 6 hour period.  The current load/uptime line on the server at the time of
this email is
  15:13:13 up 3 days, 23:37,  0 users,  load average: 6.14, 6.50, 6.67
  You should check the server to see why the load is so high and take
steps to lower the load.  If you want stats to continue to run even with a high load; Edit
/var/cpanel/cpanel.config and change extracpus to a number larger then 0 (run
/usr/local/cpanel/startup afterwards to pickup the changes).

Any thoughts on this issue??

George
What are the specs of the new VPS?
Hi Dennis,

Here are the VPS specs:

Memory    4.44 GB
Burst    7.91 GB
Virtualization Type    (OpenVZ)
Operating System    CentOS 5 32bit
Disk Space    45 GB
Bandwidth    800 GB

Processor
======

vendor_id       : GenuineIntel
cpu family      : 6
model           : 30
model name      : Intel(R) Xeon(R) CPU           X3440  @ 2.53GHz
stepping        : 5
cpu MHz         : 2533.409
cache size      : 8192 KB

George
That should be OK ... I assume you have one vCore.

You should install htop and monitor what is chewing up the load when it goes high. Perhaps it's not Centova Cast but cPanel or something else.
Hi Dennis,

Here's my output of top -c:

Code: [Select]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17773 ccuser    15   0  186m 7928 2776 R  3.7  0.1  32:46.64 /usr/local/centovacast/sctrans2/sc_trans ../demo3/etc/source.con
17510 ccuser    15   0 35964 2136  728 S  0.0  0.0   0:25.16 /usr/local/centovacast/shoutcast1/sc_serv etc/server.conf demo1
17514 ccuser    15   0  5384 1940 1340 S  0.0  0.0   0:05.81 /usr/local/centovacast/ices//bin ices -v -c ../demo1/etc/source.
17703 ccuser    18   0 11544 3132 2408 S  0.0  0.0   0:05.31 /usr/bin/icecast -c ../demo2/etc/server.conf
17711 ccuser    18   0  5572 2100 1332 S  0.0  0.0   0:05.74 /usr/local/centovacast/ices//bin ices -v -c ../demo2/etc/source.
17766 ccuser    20   0 70576 5832 2384 R  0.0  0.1   0:13.31 /usr/local/centovacast/shoutcast2/sc_serv ../demo3/etc/server.co
32234 ccuser    15   0  3380 1160  724 S  0.0  0.0   0:00.01 cc-control [rpc]


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22209 centovac  16   0 11344 2564  756 S  0.0  0.0   0:00.00 cc-appserver: pool webintf
22238 centovac  15   0 11344 2564  756 S  0.0  0.0   0:00.00 cc-appserver: pool webintf
32232 centovac  15   0 14572 1384  688 S  0.0  0.0   0:00.06 cc-web: worker process
32233 centovac  15   0 14408 1056  464 S  0.0  0.0   0:00.01 cc-web: cache manager process

My cPanel/WHM says the server load is 7.18 (4 CPUs) (SHOUTcast v1, SHOUTcast v2, Icecast).


Server load without CCv3 running:

root@server2 [~]# uptime
 23:08:42 up 5 days,  7:33,  1 user,  load average: 0.24, 0.34, 0.53

Server load with CCv3 and SHOUTcast v1 running:

root@server2 [~]# uptime
 23:12:58 up 5 days,  7:37,  1 user,  load average: 2.01, 1.05, 0.75

 
I have installed htop -- see the attached image.

George
See also

Code: [Select]
  root@server2 [~]# top -n1 -b
top - 22:40:43 up 6 days,  7:05,  1 user,  load average: 6.32, 6.18, 6.32
Tasks:  87 total,   5 running,  78 sleeping,   1 stopped,   3 zombie
Cpu(s):  2.0%us,  0.3%sy,  0.0%ni, 95.9%id,  1.7%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8147636k total,   688684k used,  7458952k free,        0k buffers
Swap:        0k total,        0k used,        0k free,        0k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17773 ccuser    18   0  186m 7948 2776 R  3.9  0.1  47:17.46 sc_trans
 

Most of the load seems to be SHOUTcast v2 / sc_trans.

With sc_trans killed off, the load drops to 3.91.

George
I have installed htop -- see the attached image
According to htop it's not Centova Cast, it's sc_trans and (to a lesser extent) ices.  Just a reminder (to everyone) -- please don't confuse the two, especially when posting in the beta forums, as sustained high CPU utilization in Centova Cast itself would be a bug, whereas high CPU utilization in sc_trans/ices is standard behavior that has nothing to do with Centova Cast.

Unfortunately you just have to face the reality that transcoding media is a CPU-intensive process.  It's the exact same process as ripping audio from a CD and encoding it into MP3 files.  It takes a lot of CPU time.  There are several articles in our KB about this, and have been since the days of CCv1.x.

The only real indicator of whether your VPS is fast enough to handle the load you're throwing at it is whether or not your load average remains reasonable while all of those transcoder processes are running.  Your load is climbing, which tells you that if you want to run that many transcoder processes, you need a faster machine, preferably with multiple cores.

On most machines you can only put a few transcoders on each core.  There's no hard-and-fast rule, but for experimentation I'd start with maybe 3-4 on each core and see how the machine copes... if the load stays reasonable, add more one at a time until it starts to struggle.
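
As a rough sanity check, you can compare the number of cores the box reports against the number of transcoder processes you're actually running, for example:

Code: [Select]
# cores available vs. transcoders running (adjust the process names to your autoDJs)
grep -c ^processor /proc/cpuinfo
ps -C sc_trans --no-headers | wc -l
ps -C ices --no-headers | wc -l
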
Cheers, thanks Steve,

That has answered all my questions.

Thanks again

George