On Fri, 27 Aug 2010, Mr. MailingLists wrote:

> dd if=/dev/random of=175MBFile bs=1024k count=175
> scp 175MBFile chickenclucker:~/
> 175MBFile 100% 175MB 35.0MB/s 00:05
> scp -C 175MBFile chickenclucker:~/
> 175MBFile 100% 175MB 9.7MB/s 00:18
>
> time gzip -c 175MBFile > 175MBFile.gz
> 0m12.513s
> 183500800 Aug 27 11:58 175MBFile
> 183556808 Aug 27 11:58 175MBFile.gz
>
> dd if=/dev/zero of=175MBFile bs=1024k count=175
> scp 175MBFile chickenclucker:~/
> 175MBFile 100% 175MB 35.0MB/s 00:05
> scp -C 175MBFile chickenclucker:~/
> 175MBFile 100% 175MB 35.0MB/s 00:05
>
> time gzip -c 175MBFile > 175MBFile.gz
> 0m2.552s
> 183500800 Aug 27 12:03 175MBFile
> 178393 Aug 27 12:03 175MBFile.gz
>
> Interesting results, and I learned something new today (I friggin' love it when that happens!).


Me too, because I don't think I've ever used dd.

Related question:  I'm pretty sure there's a way to pipe dd's stdout to ssh 
and have the far end dump it to /dev/null, so you can compare speeds for 
arbitrarily large transfers without creating files on either machine.  
Anyone know?

dd if=/dev/zero bs=1024k count=4000 | ssh ...
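
Something like this ought to work (untested sketch; chickenclucker is just 
the host from the test above, and the remote cat simply throws the bytes 
away):

# raw link speed, nothing written to disk on either side; dd prints the
# elapsed time and throughput on stderr once the pipe closes
dd if=/dev/zero bs=1024k count=4000 | ssh chickenclucker 'cat > /dev/null'

# same pipe with ssh-level compression turned on, for comparison
dd if=/dev/zero bs=1024k count=4000 | ssh -C chickenclucker 'cat > /dev/null'

Swap /dev/zero for a real file or /dev/urandom if you want input that 
doesn't compress down to nothing (though /dev/urandom itself can be slow 
to read, so it may become the bottleneck).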

I think if you were to make your file much bigger, maybe several 
gigabytes, you'd see a bigger benefit from compression on the zero-filled 
file.  It's not a realistic example though, because that file is just the 
same null byte repeated a gazillion times, so it compresses to almost 
nothing.  The random file shows the opposite problem: your gzip run took 
12.5s to compress 175MB, about 14 MB/s, well under what the link can 
carry, which lines up with the 9.7 MB/s you saw from scp -C.  So on your 
network, running at roughly 280 Mbps (the 35 MB/s you measured), you 
probably never want to use compression; the link is already faster than 
gzip can feed it.
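
If you want to measure that compression ceiling directly, with no disk 
write muddying the numbers, something like this should do it (assuming the 
random 175MBFile from your first test is still lying around):

# how fast gzip alone chews through incompressible data; if this rate
# is below the link's 35 MB/s, scp -C can only make the transfer slower
time gzip -c 175MBFile > /dev/null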

Mike