On Thu, Jan 22, 2009 at 07:46:00AM -0600, Isaac Atilano wrote:
> 
> On Thu, 22 Jan 2009 07:17:58 -0600, "Raymond Norton" <admin@lctn.org>
> said:
> > I'm running into the "Arguments too long" error when attempting to scp 
> > 16000+ flash files to a server.
> 
> Using xargs with string replacement should work:
> 
> find /location -name "*.fla" -print | xargs -IXXX scp XXX id@host:/path

Don't forget about -print0 on find and -0 on xargs.  Together they
safely handle special characters, spaces, and newlines in filenames.

find /location -name "*.fla" -print0 | xargs -0 -IXXX scp XXX id@host:/path
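
One thing to note: -IXXX runs a separate scp (and a separate ssh
connection) for every file.  If that turns out to be slow, you can
batch files per invocation; a rough sketch, with an arbitrary batch
size of 100:

find /location -name "*.fla" -print0 | xargs -0 -n 100 sh -c 'scp "$@" id@host:/path' sh

The trailing sh fills in $0 so that "$@" picks up all the filenames
xargs appends.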

Additionally, there are two methods I've used in the past, depending on
needs.  One problem with the scp method is that it may take a long
time, even with batching.  If time is an issue, consider using tar
inline like this:

find . -name "*.fla" -print0 | tar cvzf - --null -T - | ssh id@host "cd /location && tar xvzf -"

This bundles (and compresses) all of your files into a single stream as
it transfers, then untars and uncompresses them on the other side.
Feeding tar the file list through find sidesteps the same argument-list
limit that broke scp.  The transfer will go faster since it's one
stream instead of many small files, and it won't create or leave huge
tar files lying around on your file system.

Keep in mind that you may also enable compression on the ssh/scp
connection itself (scp -C turns it on for a single run).  I usually
have compression set in my .ssh/config file:

host *
    Compression yes

You can get clever and only specify it for non-local hosts, but I have
done some testing in the past and found that it's a decent default for
local hosts, too.
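
If you do want to scope it, a minimal sketch using a host pattern (the
domain here is just a placeholder):

host *.example.com
    Compression yes

Since ssh uses the first value it finds for each option, the specific
block has to come before the catch-all host * block.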

One last thing I will mention is scp -r and rsync.  You could have used
'scp -r dirname id@host:/path' to avoid globbing the filenames on the
command line as well.  Additionally, you can use rsync to do the initial
transfer, and to keep the remote directory in sync with your local copy.
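
The initial transfer might look like this (paths and host are
placeholders, as above):

rsync -avz /location/ id@host:/path/

The trailing slash on the source tells rsync to copy the contents of
/location rather than the directory itself.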

Say you copy over your 16k files and then you change 100 of them.  You
can use rsync to automatically copy over the 100 that changed, leaving
the other 15,900 untouched.  You could even run it periodically from
cron, and only the changed files would ever be copied.
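
A hypothetical crontab entry for that, hourly here, assuming ssh keys
are set up so it can run unattended:

0 * * * * rsync -az /location/ id@host:/path/

Without -v, rsync stays quiet on success, so cron will only mail you
when something actually goes wrong.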

Cheers, 
drue