On another note... The Sam BC request feature does create the list and display it on the site with barely any bandwidth usage, even with 50+ people looking at the list...
Well, that's interesting. OK, let's look at this logically then.
I did a quick check of my own media library and saw an average track title length of 16 characters, an average album name of 20 characters, and an average artist name of 13 characters. That's 49 characters of raw data per track, on average.
Add in another 50 characters or so of JSON overhead per track for a structure like
{ "artist": "", "album": "", "track": "" }
Now you're at 100 characters per track.
It's not at all unreasonable for a station to have well in excess of 10,000 tracks. In my own personal library alone, I have over 18,000, and I've seen production users with tens, even hundreds, of thousands of tracks.
So taking a modest scenario, multiply 100 characters by 10,000 tracks, and you get 1,000,000 characters, or 1MB of data. That's one megabyte of data to feed the track list to the browser ONCE.
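If you want to sanity-check that arithmetic, here's a quick sketch in TypeScript. The average field lengths are the ones I measured above; the padding strings are just made-up stand-ins for real names:

```typescript
// Rough sanity check of the per-track and total size estimates.
const sampleTrack = {
  artist: "x".repeat(13), // avg artist name length: 13 chars
  album:  "x".repeat(20), // avg album name length: 20 chars
  track:  "x".repeat(16), // avg track title length: 16 chars
};

// ~84 chars with compact JSON; closer to 100 once you add whitespace
// and the per-track array punctuation.
const charsPerTrack = JSON.stringify(sampleTrack).length;

console.log(`${charsPerTrack} chars/track`);
console.log(`${charsPerTrack * 10_000} chars for a 10,000-track library`);
// => on the order of 1,000,000 characters, i.e. ~1MB uncompressed
```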
Now, assuming all browsers can even cope with that much data in a single selectbox (hint: not all of them can), you also have to consider that this list is loaded every time someone hits the start page. Many clients use the start page as the de facto go-to page for their stream, instead of treating it as the example it's intended to be, so you may end up with dozens of hits per minute, each downloading that 1MB of data.
Obviously that will be mitigated somewhat by gzip compression, but you're still looking at 200KB or so per request, plus the cost of the browser parsing 1MB of data and stuffing it into a selectbox on every page load.
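To put a rough number on that, here's a measurement sketch using Node's zlib against a synthetic library. All track data here is fabricated, and uniform synthetic strings compress better than a real library would, so treat the gzipped figure as optimistic:

```typescript
import { gzipSync } from "node:zlib";

// Build a synthetic 10,000-track playlist with roughly the average
// field lengths measured above (all values made up for illustration).
const tracks = Array.from({ length: 10_000 }, (_, i) => ({
  artist: `Artist ${i}`.padEnd(13, "x"),
  album:  `Album ${i}`.padEnd(20, "x"),
  track:  `Track ${i}`.padEnd(16, "x"),
}));

const payload = JSON.stringify(tracks);
const compressed = gzipSync(Buffer.from(payload));

console.log(`raw:     ${(payload.length / 1024).toFixed(0)}KB`);
console.log(`gzipped: ${(compressed.length / 1024).toFixed(0)}KB`);
// The raw payload lands near the 1MB mark either way; real-world data
// with varied artist/album names compresses less predictably than this.
```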
That, in a nutshell, is why I don't see this as viable for the bulk of our clients unless it's implemented via an autocomplete solution or some such, which lets us whittle down the result set before feeding the data to the browser.
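For example, a minimal server-side filter along these lines (the names searchTracks and MAX_RESULTS are hypothetical, not an existing API) would keep each response down to a few KB instead of 1MB:

```typescript
interface Track {
  artist: string;
  album: string;
  track: string;
}

const MAX_RESULTS = 20;

// Filter the library server-side so the browser only ever receives
// a handful of matches for the characters typed so far.
function searchTracks(library: Track[], query: string): Track[] {
  const q = query.trim().toLowerCase();
  if (q.length < 2) return []; // don't bother searching on 0-1 chars
  return library
    .filter(
      (t) =>
        t.artist.toLowerCase().includes(q) ||
        t.album.toLowerCase().includes(q) ||
        t.track.toLowerCase().includes(q)
    )
    .slice(0, MAX_RESULTS);
}
```

The key design point is that the whittling happens before the data ever leaves the server, so neither the bandwidth bill nor the browser has to cope with the full track list.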