On Jan 10, 2008 2:16 PM, Mike Miller <mbmiller at taxa.epi.umn.edu> wrote:
[snip, with slightly less zeal :^) ]

> I wonder how the various methods compare in speed.  With a lot of files
> they must all be pretty slow, so speed is important.
>

A worthy question!

% find . | wc -l
22531
% time perl -MFile::Find::Rule -MList::Util=max -le \
    'print scalar localtime(max map { (stat($_))[9] } find->in("."))'
Thu Jan 10 15:20:58 2008
real    0m1.122s
user    0m0.918s
sys     0m0.203s

% perl -le 'print 1000000 * 1.122 / 22531'
49.7980560117172

So that's about 50 usec per file.  Comparing with find + awk:

% find . -type f -printf "%T@\n" | \
    awk '{ if ($1 > the_max) { the_max = $1 } } END { print the_max }'
1200000058
% time !!
time find . -type f -printf "%T@\n" | \
    awk '{ if ($1 > the_max) { the_max = $1 } } END { print the_max }'
1200000058

real    0m0.168s
user    0m0.071s
sys     0m0.116s
% perl -le 'print 1000000 * 0.168 / 22531'
7.45639341351915
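If you also want to know *which* file is newest, a small variation on the same pipeline works (assuming GNU find's -printf and GNU sort):

```shell
# Print the newest regular file along with its mtime (epoch seconds),
# by sorting numerically on the timestamp and keeping the last line.
find . -type f -printf "%T@ %p\n" | sort -n | tail -n 1
```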

So something like 7 usec per file.  And just for grins (that's a lot of
zeroes!):

% perl -le 'print scalar localtime (1200000000)'
Thu Jan 10 15:20:00 2008
% perl -le 'print scalar localtime (1300000000)'
Sun Mar 13 01:06:40 2011
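Those conversions also work without Perl, assuming GNU date, whose -d option accepts @N as "N seconds since the epoch":

```shell
# Convert epoch timestamps to local time with GNU date.
date -d @1200000000
date -d @1300000000
```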

Mark your calendars, nerds.