On Thu, Mar 13, 2003 at 01:38:52AM -0600, David Phillips wrote:
> Thomas Eibner writes:
> > But most people that do use php, likely use it all over their website.
> 
> That doesn't matter.  Don't fall into the trap of thinking that just because
> the page uses PHP that every request is PHP.  On a typical site, you will
> have at least 10-20 images for every page request.  For many sites, the
> number is much higher.  As an example, loading the front page of Slashdot
> required an extra 54 image requests.

Funny you should mention Slashdot. They actually use a separate image
server. Their main server still has mod_perl installed even though they
say they mostly serve the front page as static HTML.
http://slashdot.org/faq/tech.shtml#te050 :
* 3 load balanced Web servers dedicated to images 
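
If you wanted to copy that on a smaller scale, the image box is basically
just a stripped-down Apache with none of the heavy modules loaded in.
Something like this (hostname and paths made up, purely a sketch):

  # httpd.conf for an image-only server - no mod_php/mod_perl at all
  ServerName   images.example.com
  DocumentRoot /www/images
  KeepAlive    On
  MaxClients   256

and then you point your <img src> URLs at that hostname, so the fat
dynamic children never see an image request.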

> > Furthermore, when you load php you might see your memory usage
> > increase, but if it follows suit with other apache modules I've
> > worked with (mod_perl) the majority of it is shared memory and thus
> > doesn't have as large an impact as you would think.
> 
> Unfortunately, that's not the case.  I have a box running Apache with
> mod_php4 and mod_ssl and the processes take at least 15-30mb.  That is real
> memory, not shared (the RES line from top on FreeBSD).

I have a mildly used FreeBSD box with mod_perl, mod_php and mod_gzip.
Both the perl part (which is a huge memory hog) and the php part get
exercised once in a while, but none of my children has a RES value
anywhere above 6MB (82 children, ~3-4MB RES each). Granted, I kill off
my children after they've served 250 requests, for the same reason.
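
For reference, that's just MaxRequestsPerChild in httpd.conf, something
along these lines (the 250 is what I actually use, the other number is
only an example):

  MaxClients           128
  MaxRequestsPerChild  250

and you can eyeball the per-child RES/RSS with a plain ps aux | grep httpd.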

> > Even if it did
> > make a difference, it should not have an impact on the number of
> > requests that you can serve from Apache when it's static images
> > unless you goofed up and made php handle the mime-types for the
> > images.
> 
> The impact is that each process contains PHP, therefore requiring a lot of
> memory, thus limiting the total number of processes you can have running.
> If I can only run twenty Apache processes before running out of memory, then
> that limits me to serving twenty clients at once.  The number of actual
> users can be significantly less, since some browsers will use multiple
> connections.

In my experience most browsers actually use keep-alive, and that boosts
throughput more than opening extra connections does, as long as the server
isn't slow to begin with.
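
For what it's worth, the keep-alive knobs in httpd.conf are just the
following (these happen to be the stock defaults, shown only as an example):

  KeepAlive            On
  MaxKeepAliveRequests 100
  KeepAliveTimeout     15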

> > And if you're in a high traffic/hit environment chances are
> > you have a separate webserver for serving pictures.
> 
> If you use a well designed web server like Zeus, you don't need two servers
> to work around a bad design.  A possible alternative might be to use PHP
> under FastCGI with Apache.  I don't think many people do that, but it could
> have significant performance advantages.

Just because one machine can handle it doesn't mean it's a good idea to run
it maxed out. Why would you want to run PHP under FastCGI? You might as well
use the proxy solution that Troy suggested instead.
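
I don't know the details of Troy's setup, but the usual shape of it is a
slim front-end Apache that serves the static files itself and only hands
dynamic requests back to the fat mod_php/mod_perl box via mod_proxy,
roughly like this (backend host and path are made up):

  ProxyPass        /dyn/ http://backend.example.com:8080/dyn/
  ProxyPassReverse /dyn/ http://backend.example.com:8080/dyn/

That way the front end's children stay tiny and you only keep a handful
of the big backend children around.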

> > As to pushing 80mbit with Apache, I didn't have a problem doing just
> > that at home right now.
> > Serving a 10240 byte file over 100mbit network: (no keep-alive)
> > Transfer rate:          11294.81 [Kbytes/sec] received
> 
> There is a huge difference between serving over a local network and serving
> real traffic.  A good number of clients, perhaps a majority, will be modem
> users.  This means connections stay open for much longer and have a lot of
> latency.  Even with broadband, there is still a significant amount of
> latency involved.  Try a concurrency of at least 500-1000 if you want to get
> anywhere close to real world usage.

I just did; it cost about 200kbit/s of throughput. It's still a worthless
test, but it's worth noting that my machine stayed very responsive and the
load only spiked because of the sheer number of connections being set up
and torn down.
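
(That was just ApacheBench again with the concurrency cranked up, along the
lines of the following - the URL is obviously local to my network, so take
the numbers for what they are:

  ab -n 100000 -c 1000 http://192.168.0.10/10k.html
)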

> > Serving a 10240 byte file over localhost:
> > Transfer rate:          22876.10 [Kbytes/sec] received
> 
> Any tests over localhost are basically worthless for a number of reasons.
> 
> > These "tests" are about as trustworthy as any other test that is
> > put up on the web
> 
> Correct.  It is difficult to adequately simulate web traffic.
> 
> > Of course this is not on a live site, nor is it from real clients
> > whose behaviour would be much different, but noone in their right
> > mind would try to serve all this from one machine anyway.
> 
> Wrong.  It is easily possible to serve this much traffic from one box.
> People were doing this at least three years ago.

Aside from the occasional /.'ing, or cdrom.com, I don't see many sites
sustaining that kind of page load from a single machine for any length
of time.

-- 
  Thomas Eibner <http://thomas.eibner.dk/> DnsZone <http://dnszone.org/>
  mod_pointer <http://stderr.net/mod_pointer> <http://photos.eibner.dk/>
  !(C)<http://copywrong.dk/>                  <http://apachegallery.dk/>
          Putting the HEST in .COM <http://www.hestdesign.com/>

_______________________________________________
Twin Cities Linux Users Group Mailing List - Minneapolis/St. Paul, Minnesota
http://www.mn-linux.org tclug-list at mn-linux.org
https://mailman.real-time.com/mailman/listinfo/tclug-list