On Wed, Mar 12, 2003 at 11:28:10AM -0600, David Phillips wrote:
> Thomas Eibner writes:
> > So you say that it's bloated to put in php as a module, but you
> > nevertheless implemented something for thttpd to run php as CGI? Nice.
> 
> No.  As I said, it runs as a CGI, not part of the web server.  My patch adds
> handler support so that .php scripts are executed using the PHP CGI binary.
> Without this type of support, you would need to make your PHP scripts CGIs
> (i.e. make them executable and include the #!/usr/local/bin/php line).
> 
> The advantage here is that you only pay for PHP when you are using it, not
> on every request.  If you compile PHP statically, the performance is decent.
> On most sites, the majority of files are static (images), so this works out
> pretty well.  I have been running this patch for a few months on my
> development box.

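For reference, the "make them executable" route looks roughly like this
(a minimal sketch; /usr/local/bin/php is just the usual install path for
the PHP CGI binary and may differ on your system):

    #!/usr/local/bin/php
    <?php
    // an ordinary PHP script, except that the shebang line above
    // plus a chmod +x lets the server execute it directly as a CGI
    // (the PHP CGI binary strips the leading #! line itself)
    echo "Hello from PHP running as a CGI\n";
    ?>

With David's handler patch the shebang and the chmod become unnecessary,
since thttpd hands .php files to the PHP CGI binary itself.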
But most people who do use php likely use it all over their website.
Furthermore, when you load php you might see your memory usage increase,
but if it follows suit with other apache modules I've worked with
(mod_perl), the majority of that memory is shared and thus has a much
smaller impact than you would think. Even if it did make a difference,
it should not affect the number of requests you can serve from Apache
when it's static images, unless you goofed up and made php handle the
mime-types for the images (see the snippet below). And if you're in a
high-traffic environment, chances are you have a separate webserver for
serving pictures.
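To be concrete, keeping the images static is just a matter of mapping
only .php through mod_php, which is the stock setup anyway; with Apache
1.3 and PHP 4 it's something along these lines:

    # route only .php files through mod_php; everything else,
    # images included, goes through Apache's static-file path
    AddType application/x-httpd-php .php

The "goofed up" case would be adding an image extension like .gif to
that line.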
 
> > I believe apache.org is much of a testimony to Apache being able to
> > handle the load for whatever Duncan can throw at it 1).
> > What Duncan needs to make sure is that he has the pipe that can serve
> > it and as you point out, enough memory to have enough children running.
> >
> > 1) http://www.apache.org/server-status
> >    3788 GB over the last 20 days, which is about 8 GB/hour or about
> >    2 MB/s (bytes, not bits). And this is without a new release within
> >    those 20 days, AFAIR.
> 
> That's only 38 req/sec, which is not much.  My guess is that the Apache
> server on apache.org is not running mod_php or many other modules, causing
> the processes to be a lot smaller and making it irrelevant to his needs.
> 16mbit is nothing when it's large files (like the Apache source).  Get
> Apache (especially with mod_php) to push 80mbit when it's 10k images, then
> I'll be impressed.

Just because 38 req/sec is not much doesn't mean it isn't worth anything.
(The footnote arithmetic holds up, by the way: 3788 GB over 20 days works
out to about 2.2 MB/s sustained, i.e. roughly 18 Mbit/s, which is
presumably where the "16mbit" figure comes from.) In Duncan's application
I don't think he would see the kind of hits apache.org gets when there
are new releases of their software.

As to pushing 80mbit with Apache, I had no problem doing just that at
home just now.

Serving a 10240-byte file over a 100mbit network (no keep-alive):
Transfer rate:          11294.81 [Kbytes/sec] received

Serving a 10240-byte file over localhost:
Transfer rate:          22876.10 [Kbytes/sec] received

(11294.81 Kbytes/sec works out to roughly 90 Mbit/s on the wire,
comfortably past the 80mbit mark.)

These "tests" are about as trustworthy as any other test that is
put up on the web
Of course this is not on a live site, nor is it from real clients
whose behaviour would be much different, but noone in their right
mind would try to serve all this from one machine anyway.

Server: Apache/1.3.27 (Unix) PHP/4.3.1
Vanilla compile on hardware turning two years old in July. No excessive
memory use from my 200 children with PHP loaded.
(Number of servers modified to fit the benchmark, of course.)
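
If anyone wants to reproduce this: the "Transfer rate" lines above are
in ApacheBench's output format, so a run along these lines should do it
(hostname, path, and the request/concurrency counts here are only
placeholder values):

    # 10000 requests, 50 concurrent; keep-alive is off by default,
    # pass -k if you want to measure with keep-alive instead
    ab -n 10000 -c 50 http://yourserver/10k.bin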

> There are some good benchmarks here:
> 
> http://www.zeus.com/products/zws/capacity/scalability.html

On the product website, I would expect nothing but benchmarks showing
how well Zeus performs.



_______________________________________________
Twin Cities Linux Users Group Mailing List - Minneapolis/St. Paul, Minnesota
http://www.mn-linux.org tclug-list at mn-linux.org
https://mailman.real-time.com/mailman/listinfo/tclug-list