ted -

i routinely see numbers approaching these, from a slightly different
perspective.  i do a lot of work in the service provider domain and
of late have been working a lot with video servers that will
routinely light up NxGE links.  this is MPEGoUDP, so we can
obviate discussions about tcp efficiency and fairness here.  i
won't speak to the data transfer rates this translates into when
using uninteresting protocols.  (uninteresting to me, that is.)  i'll
take this opportunity to point out that protracted saturation of NxGE
links from a PC-hardware-based server requires a fair amount of
tuning, and it's typically fed straight from memory to the NIC.  it
makes for some very interesting server designs.
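
to make the "fed from memory" point concrete, here's a minimal sketch of
the shape of such a sender: a UDP socket with an enlarged send buffer,
blasting 1316-byte datagrams (7 x 188-byte MPEG-TS cells, the common
MPEGoUDP framing) from a memory buffer.  the loopback destination and
buffer sizes are illustrative only; a real streamer paces its sends to
the stream's bitrate rather than blasting flat out.

```python
import socket
import time

PAYLOAD = bytes(1316)  # 7 MPEG-TS cells of 188 bytes each, typical MPEGoUDP framing

# a sink socket so the loopback datagrams have somewhere to land
sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sink.bind(("127.0.0.1", 0))
addr = sink.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# enlarge the send buffer so the app isn't stalling on a small default queue
tx.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)

start = time.time()
sent = 0
for _ in range(10_000):
    sent += tx.sendto(PAYLOAD, addr)  # payload comes straight from memory
elapsed = time.time() - start
print(f"pushed {sent} bytes in {elapsed:.3f}s "
      f"({sent * 8 / elapsed / 1e6:.0f} Mbit/s to loopback)")
```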

candidly, i'm a bit incredulous about these numbers given the
typically poor instrumentation available at the application layer for
measuring this type of stuff.  if you're really interested in the
number of bits you're moving, i wouldn't look to the instrumentation
available from ncftp.  i take my instrumentation right from the
interfaces, but then that's just me.
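
for the curious - "taking it from the interfaces" on linux just means
differencing the byte counters in /proc/net/dev across an interval.
the snapshots below are canned sample lines (real numbers come from
reading the file twice), but the arithmetic is the whole trick:

```python
def iface_bytes(snapshot: str, iface: str):
    """return (rx_bytes, tx_bytes) for iface from a /proc/net/dev-style line.

    in /proc/net/dev the first field after the colon is rx bytes and the
    ninth is tx bytes.
    """
    for line in snapshot.splitlines():
        if ":" in line and line.split(":")[0].strip() == iface:
            fields = line.split(":", 1)[1].split()
            return int(fields[0]), int(fields[8])
    raise ValueError(f"{iface} not found")

# two sample snapshots, nominally one second apart (illustrative values)
T0 = " eth0: 1000000 800 0 0 0 0 0 0 5000000 4000 0 0 0 0 0 0"
T1 = " eth0: 2200000 1700 0 0 0 0 0 0 122500000 104000 0 0 0 0 0 0"
interval = 1.0  # seconds between snapshots

rx0, tx0 = iface_bytes(T0, "eth0")
rx1, tx1 = iface_bytes(T1, "eth0")
tx_mbit = (tx1 - tx0) * 8 / interval / 1e6
print(f"eth0 tx: {tx_mbit:.0f} Mbit/s")  # near gig-e line rate in this sample
```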

when using TCP-based protocols for file transfer i haven't seen the
1.2x10^n Mbyte numbers that mr. hallacy quotes.  i've seen numbers
better than the ones you initially quoted, but i haven't seen the
numbers mr. hallacy quotes.  in fact, there's quite a body of
interesting work taking place in the research community that points
to further optimization of the L4 protocols to improve performance.
most of these enhancements focus on improving TCP's windowing
mechanisms.  for the most part, TCP implementations haven't kept pace
with the improvements in network capacity and the ability to clock
data into larger payloads more efficiently.  TCP has a nagging thing
about "fairness".
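
the windowing point is simple arithmetic: with one window in flight,
TCP tops out at window / RTT, so an unscaled 64 KB window can't fill a
gig-e link even at LAN latencies.  the RTT below is illustrative:

```python
def tcp_throughput_cap_mbit(window_bytes: int, rtt_s: float) -> float:
    """max throughput with one TCP window in flight: window / RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# a classic 64 KB window (no window scaling) over a 1 ms LAN RTT
cap = tcp_throughput_cap_mbit(65535, 0.001)
print(f"{cap:.0f} Mbit/s")  # well short of 1000 Mbit/s

# the window needed to fill gig-e at 1 ms RTT (the bandwidth-delay product)
bdp = 1000e6 * 0.001 / 8
print(f"{bdp:.0f} bytes")
```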

methinks both perspectives could stand to move a little in towards  
the center. ;-)

iirc - the padding of the ssh traffic was done to address various
timing and data analysis attacks.  the actual overhead on the wire is
somewhat variable, though you do have that nasty crypto processing,
which, when not done in hardware, makes it hard to run at line rate.
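
to put rough numbers on "somewhat variable" - the ssh2 binary packet
pads (length + padding-length + payload + padding) out to the cipher
block size with at least 4 pad bytes, then appends a MAC.  this sketch
assumes a 16-byte cipher block and a 20-byte hmac-sha1 MAC and ignores
compression; the takeaway is that the size overhead shrinks to noise
as payloads grow, which is why the real cost is cpu, not bits:

```python
def ssh_wire_bytes(payload: int, block: int = 16, mac: int = 20) -> int:
    """approximate ssh2 binary-packet size for a given payload size."""
    body = 4 + 1 + payload  # 4-byte length field + 1-byte padding-length + payload
    pad = block - (body % block)  # pad body out to the cipher block size
    if pad < 4:                   # rfc 4253 requires at least 4 padding bytes
        pad += block
    return body + pad + mac

for size in (64, 1024, 32768):
    wire = ssh_wire_bytes(size)
    print(size, wire, f"{(wire - size) / size:.1%} overhead")
```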

On Jul 7, 2005, at 1:24 PM, Ted Letofsky wrote:


> Whoa!
>
> Good afternoon!
>
> I realize the "wire speed" of those interfaces makes that  
> theorhetically
> possible.... HOWEVER!
>
> When you say you see those numbers EVERY DAY on a real network, I  
> have to
> ask. No, Really.
>
> HOW do you see those numbers?
> What kind of file transport are you doing?
> How large are your transports.
>
> What protocol are you using?
>
> If you're really pushing 120 MegaBYTES per second across glass gigabit
> ethernet, there's very little reason in the industry to have Fibre  
> Channel
> SANs.
>
> And, if you are, I'll happily study at the feet of the master.
>
> Doing all kinds of file transfers, I normally see the numbers I  
> priorly
> quoted...EVERY DAY.
>
> I often use iometer to benchtest my numbers (runs on Linux AND  
> Windows and
> SPARC) so I can standardize my testing model.
> <shrug>
>
> My understand of encryption was that it ends up using padded bits  
> and ate up
> a fair amount of overhead AND bandwidth.
> I'll happily read anything you put before me to correct my  
> understandings.
>
> Ted Letofsky
> Linux newbie, and apparent clueless network user <grin>
>
> -----Original Message-----
> From: Matthew S. Hallacy [mailto:poptix at poptix.net]
> Sent: Wednesday, July 06, 2005 8:52 PM
> To: Ted S. Letofsky
> Cc: 'Randy Clarksean'; tclug-list at mn-linux.org
> Subject: RE: [tclug-list] Data Transfer Speeds - LAN
>
>
> On Wed, 2005-07-06 at 10:59 -0500, Ted S. Letofsky wrote:
>
>
>> Hi Randy
>>
>> The WIRE speed of a 10bT NIC is approximately 1 MB / Second The WIRE
>> speed of a 100bT NIC is approximately 10 MB / Second The WIRE  
>> speed of
>> a 1000bT NIC is approximately 100 MB / Second
>>
>> In reality, you can get all of 1MB / Second in 10bT
>> You can get, (IF YOU PUSH HARD) about 6.5MB /Second in 100bT You can
>> get, (IF YOU PUSH REALLY HARD) about 37 MB Second in 1000bT.
>>
>>
>>
>
> What kind of CRACK are you smoking? 10mbit ethernet will move 1.2MB/s,
> 100mbit ethernet will move 12MB/s, and 1000mbit ethernet/fiber will  
> move
> 120MB/s. There is no 'push really hard'.
>
> These are real numbers that I see *every day* on a real network,  
> there is no
> 'i get a better signal with this monster cable gold plated ethernet  
> so my
> network goes faster' when it comes to ethernet, it's either there  
> (full
> speed) or it isn't (framing errors and collisions aside).
>
> [snip mostly correct jumbo frames info]
>
>
>> Also, as is obvious, you're likely going to get WAY better  
>> performance
>> across NFS or (god help you) SAMBA, than you will over SSH, due to
>> encryption taking up lots of bandwidth.
>>
>>
>>
>
> The overhead with SSH is CPU, the actual encryption data isn't much  
> larger
> than the original unencrypted data.
>
>

{ snipped - misc. signatures }

-- 
steve ulrich                       sulrich at botwerks.org
PGP: 8D0B 0EE9 E700 A6CF ABA7  AE5F 4FD4 07C9 133B FAFC