On Thu, 2005-07-07 at 13:24 -0500, Ted Letofsky wrote:

> When you say you see those numbers EVERY DAY on a real network, I have to
> ask.
> No,
> Really.
> 
> HOW do you see those numbers?

Byte counters on the Cisco and Foundry switches, gathered via SNMP and
graphed with rrdtool.
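
Roughly, the poller looks like the sketch below. This is a simplified
stand-in, not our production script; the hostname, community string and
ifIndex are placeholders, and it just shells out to net-snmp's snmpget
and to rrdtool:

  #!/usr/bin/env python
  # Simplified poller sketch: read the 64-bit octet counters for one
  # switch port over SNMP and feed them to an RRD every 5 minutes.
  # The RRD would be created beforehand with something like:
  #   rrdtool create traffic.rrd --step 300 \
  #     DS:rx:COUNTER:600:0:U DS:tx:COUNTER:600:0:U RRA:AVERAGE:0.5:1:8640
  import subprocess, time

  SWITCH    = "switch1.example.net"   # placeholder hostname
  COMMUNITY = "public"                # placeholder community string
  IFINDEX   = "1"                     # placeholder interface index
  RRD       = "traffic.rrd"

  def counter(oid):
      # -Oqv makes snmpget print just the value
      out = subprocess.check_output(
          ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv",
           SWITCH, "%s.%s" % (oid, IFINDEX)])
      return out.decode().strip()

  while True:
      rx = counter("IF-MIB::ifHCInOctets")
      tx = counter("IF-MIB::ifHCOutOctets")
      # rrdtool derives bytes/sec from the raw COUNTER values at graph time
      subprocess.check_call(["rrdtool", "update", RRD, "N:%s:%s" % (rx, tx)])
      time.sleep(300)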

> What kind of file transport are you doing?

Some of the systems serve HTTP requests; others are FTP servers, NFS
servers, or rsync targets for backups...

> How large are your transports.

We have systems connected via gigabit and 100mbit ethernet; some of the
100mbit systems are forced to 10mbit on each end to keep them from
exceeding their bandwidth quotas. Gigabit links use a 9kbyte MTU when
possible.
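
A 9k MTU also puts the theoretical TCP payload ceiling on gigabit very
close to line rate. Back of the envelope, assuming plain IPv4/TCP
headers with no options:

  # Theoretical TCP payload ceiling on gigabit ethernet with a 9000-byte
  # MTU; header sizes assume plain IPv4 + TCP with no options.
  line_rate  = 1e9                       # bits/sec on the wire
  mtu        = 9000                      # bytes of IP packet per frame
  wire_frame = 8 + 14 + mtu + 4 + 12     # preamble + eth hdr + MTU + FCS + gap
  payload    = mtu - 20 - 20             # minus IPv4 and TCP headers
  frames     = line_rate / (wire_frame * 8)
  print(frames * payload / 1e6)          # ~123.9 Mbyte/sec of payload

which is why sustained rates around 110-120 Mbyte/sec on these graphs
aren't surprising.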

A 100mbit port forced to 10mbit on both sides:

http://poptix.net/10mbit.png

Here is an example of the first loaded 100mbit server I could find:

http://poptix.net/100mbit.png

A network NFS server (the tx/rx directions are wrong in the gig-e interface polling):
http://poptix.net/1000mbit.png

Unfortunately our most bandwidth-intensive gigabit links run directly
between systems, since it's a waste of backplane capacity (and expensive
gigabit ports) to run them through a switch.

> What protocol are you using?

See above.

> 
> If you're really pushing 120 MegaBYTES per second across glass gigabit
> ethernet, there's very little reason in the industry to have Fibre Channel
> SANs.

Sorry? Just because two systems can talk to each other at gigabit speeds
doesn't mean they can access each other's storage at those speeds. SANs
were the answer to a problem that would not have been fixed by ethernet
protocols. (You may as well ask why USB or Firewire exists.)

> And, if you are, I'll happily study at the feet of the master.
> 
> Doing all kinds of file transfers, I normally see the numbers I priorly
> quoted...EVERY DAY.

Just because your systems are unable to make use of the available
bandwidth doesn't mean others can't. You have a bottleneck somewhere --
it's not the technology, so perhaps it's the implementation.