Here's how I understand it (I'm open to corrections):

At 10:24 AM 9/28/2005 -0500, Erik Anderson wrote:
>On 9/28/05, Mike Olson <molson4 at operamail.com> wrote:
>> Could someone please settle an argument between my friend,
>> and I?  Is it possible to increase the transfer rate
>> between two computers by putting two network interface
>> cards (NICs) in each computer, and putting two Ethernet
>> connections on each computer, and connecting the two
>> computers with two Ethernet cables?

Yes, this is possible, but it isn't done very often.  I've read about it
being done and seen descriptions of how to do it, but it seems to be
uncommon except where people *really* need more bandwidth than a single
NIC can provide.

>> I said that it would
>> not, and may even slow transfer rates because the
>> processor is switching between two NICs.

Whether the CPU is the bottleneck will depend on the type of Ethernet card
and on how powerful the CPU or CPUs are.  High-end NICs tend to offload
packet processing from the main CPU, while cheaper NICs do their work in
the drivers (much like hardware modems vs. controllerless modems vs.
host-signal-processing (HSP) modems).

>> Also, since each
>> computer can have only one IP address since each MAC
>> address is unique, and that computer will process the
>> packets of information it receives one at a time.

It is possible to assign multiple IP addresses to a single NIC, and also
multiple IP addresses to a single PC.  Doing the former usually requires
digging into .conf files (Linux) or playing with the Registry (Windows),
but it can be done.  The latter is trivial...just configure each NIC
separately.  The end result is a PC that is dual-homed or multi-homed (a
member of multiple networks).  Most gateway firewalls (e.g. IPCop) use
multiple NICs, one with a public IP and one with a private IP.
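
To see the multi-homing in action, here's a rough Python sketch (the port
number is arbitrary): a server bound to 0.0.0.0 accepts connections on every
interface, and each accepted socket can report which of the PC's addresses
the client actually used to reach it.

import socket

# Listen on every address this PC has; the port number (9000) is arbitrary.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(5)

while True:
    conn, peer = srv.accept()
    # getsockname() on the accepted socket reveals which of this PC's
    # addresses (and therefore which NIC/network) the client connected to.
    print("client %s came in via local address %s" % (peer, conn.getsockname()))
    conn.close()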

>> He
>> thinks that the NICs have buffers in them that allow the
>> packets of information to be stored until the CPU
>> processes them.

Yes, NICs have receive buffers, and some also do packet processing to
offload work from the main CPU.

>>  So according to him, you can send a chunk
>> of data faster by splitting it in half, sending the halves
>> over two cables, and receive the halves with the other
>> computer and NICs, and put the chunk of data back together
>> again.

Yes, in theory.  But you would need something (most likely software) to
coordinate the splitting and recombining of the data.  See my notes below
on bonding the NICs in software or configuring multiple routes.
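
Just to make the idea concrete, here is a rough Python sketch of what that
coordination software might do.  It assumes the receiving PC has one address
per NIC, say 192.168.1.2 and 10.0.0.2 (made-up addresses): the sender splits
a buffer in half and pushes each half over its own connection, and the
receiver glues the halves back together in the same order.

import socket
import threading

# One (address, port) per path; these addresses are made up for the example.
PATHS = [("192.168.1.2", 9000),   # receiver's address on NIC/cable #1
         ("10.0.0.2",    9000)]   # receiver's address on NIC/cable #2

def send_striped(data):
    """Sender: split 'data' in half and send each half over its own path."""
    halves = [data[:len(data) // 2], data[len(data) // 2:]]
    workers = []
    for half, addr in zip(halves, PATHS):
        t = threading.Thread(target=_send_half, args=(half, addr))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()

def _send_half(chunk, addr):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(addr)
    s.sendall(chunk)
    s.close()

def recv_striped():
    """Receiver: accept one connection per path, then reassemble the halves."""
    halves = [None] * len(PATHS)
    def _recv_half(i, addr):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(addr)          # bind to this NIC's own address
        srv.listen(1)
        conn, _ = srv.accept()
        chunks = []
        while True:
            buf = conn.recv(65536)
            if not buf:
                break
            chunks.append(buf)
        conn.close()
        srv.close()
        halves[i] = b"".join(chunks)
    workers = [threading.Thread(target=_recv_half, args=(i, addr))
               for i, addr in enumerate(PATHS)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return b"".join(halves)     # order matters: half 0 then half 1

In practice the two streams won't finish at exactly the same time, so real
implementations (the kernel bonding driver, MP_Lite) stripe in smaller
chunks and interleave them, as described in the quote at the end.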

>> He thinks that transfer rates would increase if
>> you increased the CPU speed, since each CPU could split
>> the info and put it back together faster, and faster.

Increasing the CPU speed will only increase transfer rates if the CPU is
the performance bottleneck (and it most likely is NOT the bottleneck).
Watch 'top' (Linux) or Task Manager (Windows) while transferring files and
see what happens to CPU usage.  If you have a recent PC, I think it is
unlikely that your CPU will be maxed out by the network data transfer.
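
If you'd rather have a number than eyeball 'top', here is a small
Linux-only Python sketch that samples /proc/stat to print overall CPU
utilization once a second; run it in one terminal while the transfer runs
in another.

import time

def cpu_busy_fraction(interval=1.0):
    """Sample /proc/stat twice and return the non-idle fraction of CPU time."""
    def snapshot():
        with open("/proc/stat") as f:
            fields = [float(x) for x in f.readline().split()[1:]]
        return fields[3], sum(fields)   # fields[3] is the 'idle' counter
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    return 1.0 - (idle2 - idle1) / (total2 - total1)

# Run this while the file transfer is going in another window.
while True:
    print("CPU busy: %5.1f%%" % (100 * cpu_busy_fraction()))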

>>  I
>> told him that transfer rates are dependent upon the rate
>> of your NIC and your transfer medium (ex. Ethernet,
>> optical, wireless) and cannot be affected by simply adding
>> more NICs and transfer mediums between two computers.  I
>> think he's confusing processing rates with transfer rates.
>> Whose right?

The transfer rate of any single NIC is dependent upon that NIC's quality
and specifications, and also upon the transfer medium's characteristics
such as length, quality, and environment.  Transfer rate is also affected
by packet size: larger packets have less overhead relative to the data
they carry, and so tend to be a more efficient use of bandwidth.  By
itself, adding multiple NICs won't help, unless you figure out how to get
them to share the data transfer load, either by bonding them together, by
using multiple routes, or by some other means of load balancing.
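
To put a rough number on the packet-size effect: every Ethernet frame
carries a fixed per-frame cost on the wire (preamble/SFD, MAC header, FCS,
inter-frame gap, about 38 bytes in total), so the fraction of the wire that
carries your data grows with the payload size.  A quick back-of-the-envelope
in Python:

# Per-frame cost on the wire: preamble+SFD (8) + MAC header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes, regardless of payload size.
PER_FRAME_OVERHEAD = 8 + 14 + 4 + 12

for payload in (46, 512, 1500):   # 46 = minimum payload, 1500 = typical MTU
    efficiency = payload / float(payload + PER_FRAME_OVERHEAD)
    print("%4d-byte payload: %4.1f%% of the wire carries data"
          % (payload, 100 * efficiency))
# Roughly 55% at 46 bytes, 93% at 512 bytes, 97.5% at 1500 bytes
# (IP and TCP headers take a further bite out of the payload itself).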

>If I remember right, some of the high-end Intel network cards have a
>"bonding" driver that will allow you to do just this.  I've never done
>it, though.

If you bond the NICs together at the interface level, you should be able to
increase throughput for one or more data transfers.
See http://www.devco.net/archives/2004/11/26/linux_ethernet_bonding.php

If you use multiple IP addresses and multiple routes, you should be able to
increase aggregate throughput for several data transfers, although any
single data transfer will be limited to the max throughput of a single NIC.
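
A rough sketch of the multiple-routes approach, again in Python with
made-up addresses: each transfer binds to a different local address before
connecting, so (with the routing table set up accordingly) each one leaves
through a different NIC, and two transfers run in parallel at up to one
NIC's speed each.

import socket
import threading

# (local address to bind, remote address to connect to); addresses are made up.
TRANSFERS = [
    ("192.168.1.10", ("192.168.1.2", 9000)),   # goes out via NIC #1
    ("10.0.0.10",    ("10.0.0.2",    9000)),   # goes out via NIC #2
]

def push(local_ip, peer, data):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))      # pin the source address (and hence the route/NIC)
    s.connect(peer)
    s.sendall(data)
    s.close()

def push_parallel(chunks):
    """Send one chunk per configured path, all at the same time."""
    workers = []
    for (local_ip, peer), chunk in zip(TRANSFERS, chunks):
        t = threading.Thread(target=push, args=(local_ip, peer, chunk))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()

Note that a single file copy still runs at one NIC's speed this way; the
win only shows up when several transfers run at once.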

-Haudy Kazemi

Also:
http://www.tummy.com/journals/entries/jafo_20050223_002900

Google search terms: 'ethernet bonding', 'bonded ethernet'

http://www.scl.ameslab.gov/Projects/MP_Lite/dox_channel_bonding.html
From http://www.scl.ameslab.gov/Projects/MP_Lite/perf_cb_fe.html (there are
graphs on the page):
"Channel-bonding is the striping of data from each message over multiple
interfaces. While channel-bonding of multiple Fast Ethernet cards in a
cluster can increase the throughput dramatically, the low cost and high
performance for Gigabit Ethernet makes this work useless at this point.
I'll still present the somewhat dated information though to illustrate the
beneficial role that reducing the overhead can play, as was accomplished by
using M-VIA in this case.

The Linux kernel has the ability to do low-level channel bonding. This
works alright at Fast Ethernet speeds, where a doubling of the throughput
can be achieved using 2 cards. It is not tuned for Gigabit speeds yet.

MP_Lite can do channel bonding at a higher level by striping data from a
single message across multiple sockets set up between each pair of
computers. The algorithm also tries to hide latency effects by increasing
the amount of data being striped exponentially, starting with small chunks
to get each interface primed, then doubling the size each time to hide the
latency. This is a flexible approach, working for any Unix system, but will
always suffer from a loss of potential performance due to the higher
latency involved. A nearly ideal doubling of the throughput has been
achieved using 2 Fast Ethernet cards, but little benefit was produced from
using a 3rd Fast Ethernet card.

M-VIA is an OS-bypass technique for Ethernet hardware. Using the MP_Lite
via.c module running on M-VIA to reduce overhead costs, a nearly ideal
tripling of the throughput can be achieved using 3 Fast Ethernet cards,
while 4 cards produces a 3.5 times speedup. This illustrates the benefits
of channel bonding at a low level, providing encouragement for tuning the
Linux kernel bonding.c module."