On Tue, Mar 11, 2003 at 11:56:31PM -0600, David Phillips wrote:
> Duncan Shannon writes:
> > is this not really that much bandwidth or IO?  I guess if 200 units
> > over 1 hr download a 4 meg file, it's 800 megs over an hour, that's
> > not all *that* much.
>
> With those numbers, 2 Mbit should be enough.  But be aware that if they all
> try to download at the same time, the downloads will be slow (approximately
> 1 KB/sec).  It might be a good idea to have them randomly stagger the
> requests.
>
> > I need to plan hardware/bandwidth wise to make sure this process works
> > smoothly.  Should I be looking at a dedicated box to run this?
> > Currently it's going on our main server, which has other things like
> > qmail, apache, and jabber servers running on it.
>
> Apache is not a great web server if you have a high number of connections.
> One of the main issues with Apache is that the processes tend to get very
> big when you add stuff like PHP to them.  Building an application server
> into a web server is poor design.  A single-threaded web server such as
> Zeus, Boa or thttpd can handle much more traffic.
>
> With only a few hundred total clients (hopefully not all hitting it at the
> same second), you don't need to worry about PHP performance.  What you do
> need to worry about is having several hundred Apache processes running
> while all the clients download the file at once.  An alternative would be
> to use Apache for PHP and Boa or thttpd for the file downloads.
>
> I wrote a patch for thttpd that lets it run .php scripts natively.  It runs
> them using CGI, so it has to fork off a process for each PHP request.  The
> PHP performance is slow compared to Apache, but thttpd is much faster for
> static files.  If you are interested, grab the last patch from here:

So you say it's bloated to build PHP into the web server as a module, but you
nevertheless implemented something for thttpd to run PHP as CGI?  Nice.

> http://titan.hpcs.com/thttpd/
>
> If you want to use one web server for everything and feel confident that
> you have a rock-solid web hosting platform, then get Zeus.  It will handle
> whatever you can throw at it and more.  Zeus is by far the best web server
> available.
>
> You don't necessarily need a separate server, but it wouldn't hurt.  Memory
> is going to be your main issue if you are using Apache for everything.
> Your requirements are pretty easy.  If you have enough bandwidth so that
> downloads are as fast as possible and have your clients randomly stagger
> the connections, then Apache should work fine.

[much biased] I believe apache.org is quite a testimony to Apache being able
to handle whatever load Duncan can throw at it [1].  What Duncan needs to
make sure of is that he has a pipe that can serve it and, as you point out,
enough memory to keep enough children running.

[1] http://www.apache.org/server-status
    3788 GB over the last 20 days, which is about 8 GB/hour, or about 2 MB/s
    (bytes, not bits).  And that is without a new release within those 20
    days, as far as I recall.
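To put some rough numbers on the above, here is a quick back-of-the-envelope
sketch in Python.  The figures come straight from the thread; the 15 MB size
of an Apache-with-PHP child is just an assumed round number, not a
measurement.

  #!/usr/bin/env python
  # Back-of-the-envelope numbers for the scenario discussed above.
  # ASSUMPTION: 15 MB per Apache+PHP child; everything else is from the thread.

  clients  = 200        # units downloading
  file_mb  = 4.0        # MB per download
  window_s = 3600.0     # requests spread over one hour

  total_mb   = clients * file_mb            # 800 MB in the hour
  avg_mbit_s = total_mb * 8 / window_s      # ~1.8 Mbit/s average, so 2 Mbit suffices

  pipe_mbit_s     = 2.0
  per_client_kb_s = pipe_mbit_s * 1000 / 8 / clients       # ~1.25 KB/s if all 200 hit at once
  worst_case_min  = file_mb * 1024 / per_client_kb_s / 60  # ~55 minutes per download

  apache_child_mb = 15                          # assumed size of one Apache+PHP process
  memory_mb       = clients * apache_child_mb   # ~3000 MB if Apache serves them all at once

  apache_org_mb_s = 3788 * 1024 / (20 * 24 * 3600.0)       # footnote [1] figure: ~2.2 MB/s

  print("average load over the hour:   %.1f Mbit/s" % avg_mbit_s)
  print("per-client rate, all at once: %.2f KB/s (~%.0f min/download)"
        % (per_client_kb_s, worst_case_min))
  print("Apache memory, all at once:   ~%d MB" % memory_mb)
  print("apache.org footnote figure:   ~%.1f MB/s" % apache_org_mb_s)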
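And a minimal sketch of the client-side stagger David suggests, assuming each
unit fetches the file with a small script it can run from cron or similar.
The URL, filename and one-hour window are placeholders for illustration only.

  #!/usr/bin/env python
  # Minimal client-side stagger: each unit sleeps a random amount within the
  # hour before fetching, so all 200 don't hit the server in the same second.
  # The URL and filename are placeholders, not anything from the thread.

  import random
  import time
  import urllib.request

  time.sleep(random.uniform(0, 3600))   # pick a random moment inside the hour
  urllib.request.urlretrieve("http://example.com/update.bin", "update.bin")

Spreading the requests like that keeps the server around the ~2 Mbit/s
average instead of 200 simultaneous connections crawling along at ~1 KB/s
each, and it keeps the number of Apache children down at the same time.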