hey matt-

misc. comments inline

On Jul 8, 2005, at 1:21 AM, Matthew S. Hallacy wrote:

> On Fri, 2005-07-08 at 00:58 -0500, steve ulrich wrote:
>
>
>> candidly, i'm a bit incredulous about these numbers given the
>> typically poor instrumentation available at the application layer
>> for measuring this type of stuff.  if you're really interested in
>> the number of bits you're moving i wouldn't look to the
>> instrumentation available from ncftp.  i take my instrumentation
>> right from the interfaces, but then that's just me.
>>
>> when using TCP-based protocols for file transfer i haven't seen the
>> 1.2x10^n Mbyte numbers that mr. hallacy quotes.  i've seen numbers
>> better than the ones you initially quoted, but nothing close to his.
>>
>
> I assume you agree with everything but the GE numbers, and I can see
> why. In most applications (Internet-based) you'll have a hard time
> ever saturating a single GE link due to MTU issues. On the local
> network (where I'm coming from) we're using 9k MTUs because the
> servers in question never need to talk to the 'net. This leads to
> much higher performance (the most I've ever squeezed out of a
> 1500-byte MTU over GE is around 450 Mbit/s). This is also UDP (NFS)
> pulling data off a striped dual 12-disk 3ware array. Data gets off
> the disk a lot faster than it will ever go over the wire (at least,
> in our application).

you're correct, the only thing i really take issue with is the GE
numbers.  most stacks i've interacted with haven't been up to the
task of saturating a GE link in an intelligent manner, at least not
without a fair amount of tweaking.  the higher MTU is a big win for
throughput, though in my experience single-flow performance on a GE
from a PC server is more in the 700-800 Mbit/s range.  a lot of this
comes down to how the OS moves data between the constituent IO
elements, or in some cases how the application developer does.
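
fwiw, the sort of tweaking i mean is mostly socket buffer sizing so
the TCP window can actually open up.  a rough sketch in python of the
kind of thing i mean -- the function names, the 4MB buffer, and the
chunk size are made up for illustration, not a recipe:

  import socket

  BUF = 4 * 1024 * 1024   # illustrative buffer size, not a tuned value

  def open_bulk_sender(host, port):
      # request large send/receive buffers before connecting so the
      # stack can grow the TCP window past its (usually small) defaults
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
      s.connect((host, port))
      return s

  def push_file(sock, path, chunk=256 * 1024):
      # read and send in large chunks so per-syscall overhead doesn't
      # become the bottleneck before the wire does
      with open(path, 'rb') as f:
          while True:
              data = f.read(chunk)
              if not data:
                  break
              sock.sendall(data)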


>>   in fact, there's quite a body of
>> interesting work taking place in the research community that points
>> to further optimization of the L4 protocols to improve performance.
>> most of these enhancements focus on improving the windowing
>> mechanisms in TCP.  for the most part TCP implementations haven't
>> kept pace with the improvements in network capacity and the ability
>> to clock data into larger payloads more efficiently.  TCP has a
>> nagging thing about "fairness".
>>
>
> Yes, but that's only per-stream. I'm talking about many connections.
>

many streams will definitely drive up b/w use, though my comments
about performance improvements and research were meant in a more
general sense.  i believe the initial discussion was about backups,
which i interpreted to be single-flow applications of relatively
long duration.  kind of tangential at this point in the discussion.
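
that said, to make the many-streams point concrete, this is roughly
what i picture -- a toy python sketch that fans a bulk transfer out
over N parallel TCP connections.  the host, port, stream count, and
byte counts are all made up for illustration:

  import socket
  import threading

  TARGET = ('10.0.0.2', 9000)   # made-up destination, illustration only
  STREAMS = 8                   # aggregate grows with flows until the link fills
  CHUNK = b'\0' * (256 * 1024)
  PER_STREAM_BYTES = 1 << 30    # 1 GB per flow in this toy example

  def one_stream():
      # each flow gets its own connection and congestion window, which
      # is why the aggregate can fill a pipe a single flow can't
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.connect(TARGET)
      sent = 0
      while sent < PER_STREAM_BYTES:
          s.sendall(CHUNK)
          sent += len(CHUNK)
      s.close()

  threads = [threading.Thread(target=one_stream) for _ in range(STREAMS)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()

a single long-running backup flow, by contrast, lives or dies by the
window and buffer tuning above.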

-- 
steve ulrich                       sulrich at botwerks.org
PGP: 8D0B 0EE9 E700 A6CF ABA7  AE5F 4FD4 07C9 133B FAFC