Matthew, (et al)

I stand corrected.

I should restate my understanding and relate it to the framework in which I
originally commented.

With your MTU jacked up high, and acknowledging that you won't be speaking
to the Internet or to machines without matching MTU settings, I would expect
to see approximately 70-80 MB per second on a SINGLE stream in a GbE
environment.  I don't see why you would NOT be able to fully saturate the
bus and get VERY close to wire speed on ANY interface where you've made
those setting changes.

When I was originally posting to the list-serve to respond to Randy, my
point of view was (and probably still is) skewed by the fact that I work
with customers every day in heterogeneous environments looking to increase
backup and restore speeds.  The restrictions under which many of them labor
include the inability to add interfaces; the need to have the backup
interface also do all the general connectivity duty (thus, NO MTU tuning);
and, often, an operating system and/or CPU unable to wring out more
performance immediately.

Thus my comments on TCP/IP Offload Engine cards, which free the host CPU
from doing all the protocol work.

I fully concur with your statement that, with the MTU set to 9K and many
connections in use, you can get at or very close to wire speed on a given
interface.

Ted

-----Original Message-----
From: tclug-list-bounces at mn-linux.org
[mailto:tclug-list-bounces at mn-linux.org] On Behalf Of Matthew S. Hallacy
Sent: Friday, July 08, 2005 1:21 AM
To: tclug-list at mn-linux.org; steve ulrich
Subject: Re: [tclug-list] Data Transfer Speeds - LAN


On Fri, 2005-07-08 at 00:58 -0500, steve ulrich wrote:

> candidly, i have a bit of incredulity associated with these numbers
> given the typically poor instrumentation available at the
> application layer for measuring this type of stuff.  if you're really
> interested in the number of bits you're moving i wouldn't look to
> the instrumentation available from ncftp.  i take my instrumentation
> right from the interfaces, but then that's just me.
> 
> when using TCP based protocols for file transfer i haven't seen the
> 1.2x10^n Mbyte numbers that mr. hallacy quotes.  i've seen numbers  
> better than the numbers you've initially quoted, but i haven't seen  
> the numbers mr. hallacy quotes.

I assume you agree with everything but the GE numbers, and I can see why. In
most applications (Internet based) you'll have a hard time ever saturating a
single GE link due to MTU issues. On the local network (where I'm coming
from) we're using 9k MTUs because the servers in question never need to
talk to the 'net. This leads to much higher performance (the most I've ever
squeezed out of a 1500 byte MTU over GE is around 450mbit/s). This is also
UDP (NFS) pulling data off a striped dual 12-disk 3ware array. Data gets off
the disk a lot faster than it will ever go over the wire (at least, in our
application).


>   in fact, there's quite a body of
> interesting work taking place in the research community that points  
> to further optimization in the L4 protocols to improve performance.   
> most of these enhancements focus on improving the windowing  
> mechanisms on TCP.  for the most part TCP implementations haven't  
> kept pace with the improvements in network capacity and the ability  
> to clock data into larger payloads more efficiently.  TCP has a  
> nagging thing about "fairness".

Yes, but that's only per-stream. I'm talking about many connections.
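The per-stream limit is easy to see with a bandwidth-delay-product sketch
(the 1 ms RTT and 64 KB default window below are assumed illustrative
numbers, not measurements from any network in this thread):

```python
# Why one TCP stream can stall below wire speed: throughput is capped
# at window / RTT, regardless of link capacity.
WINDOW = 64 * 1024      # bytes; a common default socket buffer (assumed)
RTT = 0.001             # seconds; a plausible LAN round trip (assumed)
LINK = 1_000_000_000    # link rate: 1 Gbit/s

bdp = (LINK / 8) * RTT                 # bytes in flight needed to fill the pipe
cap_mbit = WINDOW / RTT * 8 / 1e6      # single-stream ceiling in Mbit/s

print("BDP: %.0f bytes, single-stream cap: ~%.0f Mbit/s" % (bdp, cap_mbit))
# Multiple connections share the link, so N streams can fill it even
# when each individual stream is window-limited.
```

With these assumed numbers a 64 KB window caps one stream around 500 Mbit/s
on GbE, which is why many connections (or window scaling) are needed to reach
wire speed.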



_______________________________________________
TCLUG Mailing List - Minneapolis/St. Paul, Minnesota tclug-list at mn-linux.org
http://mailman.mn-linux.org/mailman/listinfo/tclug-list