You can aggregate 100Mbit cards fairly easily.  How much data would you
actually be pumping across the link that you'd need 1000Mbit?

We have around 100 webservers pulling data from one of our databases, and two
100Mbit cards using Fast EtherChannel are plenty fast for us, at least for
now.  I think your limit is 4 cards though, so for anything above 400Mbit
you'd want to go with gig connections.  If you're only connecting those 2
boxes together, you don't need a switch.  Gig switches are expensive, but you
could always get yourself a Cisco 2924 with the dual GBIC ports and do
fiber.  I think those run around $2k or so, but I'm not sure what fiber
Ethernet cards cost.
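
Just to put some numbers on it, here's a quick back-of-the-envelope in
Python.  The figures below are only an example (roughly our setup) -- plug
in your own:

    # Rough check: how much of a 2 x 100Mbit EtherChannel does each
    # webserver get if they all hit the database at once?
    # Purely illustrative numbers -- substitute your own.
    webservers = 100
    link_mbit = 2 * 100                       # two bonded 100Mbit NICs
    per_server_kbit = link_mbit * 1000.0 / webservers
    print("%.0f kbit/s per webserver at full saturation" % per_server_kbit)
    # -> 2000 kbit/s, i.e. roughly 250 KB/s of query results per webserver

Whether that per-client share is enough obviously depends on the kind of
queries you're running, but for us it's plenty.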

Jay

-----Original Message-----
From: Michael Burns [mailto:sextus at visi.com]
Sent: Tuesday, October 30, 2001 6:43 PM
To: tclug-list at mn-linux.org
Subject: Re: [TCLUG] High Speed Network Connection


On Tue, Oct 30, 2001 at 03:27:58PM -0600, Paul Rech wrote:
> I need a high speed network connection between 2 database servers on
> IBM Netfinity servers.
> What are my options and what hardware do I need?
> I was looking at gigabit ethernet first, as I figured it would be the
> cheapest.

Estimate the maximum throughput between the database servers before getting
the gigabit hardware. I don't know what the specs are on your servers, but
I'd think you'd be hard pressed to saturate a full duplex 100Mb link. 
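
If these are Linux boxes, one crude way to get a real number is to sample
the byte counters in /proc/net/dev on the existing link while the databases
are busy.  A minimal sketch, assuming the link is eth0 (adjust the interface
name and the sample window for your setup):

    #!/usr/bin/env python
    # Crude throughput check: read /proc/net/dev twice and report the
    # observed rate on one interface.  IFACE and INTERVAL are assumptions.
    import time

    IFACE = "eth0"       # hypothetical interface name -- change to yours
    INTERVAL = 10        # seconds between samples

    def read_bytes(iface):
        """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
        for line in open("/proc/net/dev"):
            if ":" not in line:
                continue                      # skip the two header lines
            name, data = line.split(":", 1)
            if name.strip() == iface:
                fields = data.split()
                return int(fields[0]), int(fields[8])
        raise ValueError("interface %s not found" % iface)

    rx1, tx1 = read_bytes(IFACE)
    time.sleep(INTERVAL)
    rx2, tx2 = read_bytes(IFACE)

    rx_mbit = (rx2 - rx1) * 8 / (INTERVAL * 1e6)
    tx_mbit = (tx2 - tx1) * 8 / (INTERVAL * 1e6)
    print("%s: %.1f Mbit/s in, %.1f Mbit/s out over %d seconds"
          % (IFACE, rx_mbit, tx_mbit, INTERVAL))

If the peaks come in well under 100Mbit each way, you have your answer
before spending anything.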

If you do need more bandwidth, you can, depending on the O/S and hardware,
aggregate 2 or more 100baseT NICs or use 1000baseT cards with a crossover
cable.
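
Once you have a peak number, the choice between bonding and gig is mostly
arithmetic.  A tiny sketch, assuming a practical limit of 4 bonded 100baseT
NICs (an assumption -- check what your driver actually supports):

    import math

    # Hypothetical peak requirement between the two database servers.
    peak_mbit = 180.0
    nics_needed = int(math.ceil(peak_mbit / 100.0))
    if nics_needed <= 4:        # assumed bonding ceiling -- check your driver
        print("bond %d x 100baseT NICs" % nics_needed)
    else:
        print("go gigabit (or rethink what you're shipping over the wire)")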

-- 
Michael
_______________________________________________
Twin Cities Linux Users Group Mailing List - Minneapolis/St. Paul, Minnesota
http://www.mn-linux.org
tclug-list at mn-linux.org
https://mailman.mn-linux.org/mailman/listinfo/tclug-list