On Fri, Mar 12, 2010 at 08:29:18AM -0800, Wayne Johnson wrote:
> I was wondering if there was a way to create a common file system between both sites (here in MN and TX).  GFS sounds like it might work, but I have not found anyone on Google who claims to have done this.  I remember there used to be AFS, which worked in a similar fashion.
> 
> Guess I'm hoping to set something up where files will exist on both networks.  When a file is opened, the network compares the local and remote file systems and the newest version is used.  If the remote copy is newer, it's transferred to the local machine as it is used, so the local cache is updated and the next use will be entirely local.  Am I dreaming?

Another thing to try is FS-Cache (cachefilesd) as a front-end to NFS.
A developer at Red Hat did most of the work and it finally landed in
the mainline kernel a few releases ago (2.6.30), but it should already
be in CentOS/RHEL, since Red Hat was saying its customers wanted it.

So, you dedicate a large partition at the remote site as the cache,
point cachefilesd at it, and mount the NFS export with the fsc option.
The first read of a file still crosses the WAN, but repeat reads come
out of the local cache, which is more or less what you described.  I'm
not sure what kind of latency you'll get on the misses, but the speed
of light is what it is.
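
A rough sketch of the client-side setup (untested here, and the
hostname, export path and cache directory are only placeholders),
assuming a CentOS/RHEL client:

    # install the cache daemon
    yum install cachefilesd

    # in /etc/cachefilesd.conf, point the cache at the dedicated
    # partition, e.g. one mounted at /var/cache/fscache:
    #   dir /var/cache/fscache

    # start it now and at boot
    service cachefilesd start
    chkconfig cachefilesd on

    # mount the NFS export with the fsc option so reads get cached
    # on local disk
    mount -t nfs -o fsc txserver.example.com:/export /mnt/tx

The same thing works from /etc/fstab by adding fsc to the mount
options for that export.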

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
      http://geekz.co.uk/schneierfacts/fact/163