On Tue, 16 Nov 2010, Isaac Atilano wrote:

> On Tue, 16 Nov 2010 01:50 -0600, "Mike Miller" <mbmiller+l at gmail.com>
> wrote:
>
>> Using these tricks I have reduced the processing time from 158 seconds 
>> to 4.7 seconds.  I'll write a script that does this for me.
>>
>> So the finding here, which might be useful in many situations, is that 
>> when searching for a regexp in a big file you can often do much better 
>> by filtering the lines of the big file with a simpler, more inclusive 
>> grep, then running the regexp search on the output of that simple grep.
>
> Mike, my experience is that Perl does text-file processing much faster 
> than grep/sed/awk, and it saves programming time compared with having to 
> use all the neat tricks you've done.

I think the simple grep of the large file is the key to reducing the work 
done by the regexp search -- it tosses out about 99.9% of the file, so 
grep -E has far less work to do.  Perl might do better than grep -E, but I 
really doubt it would beat ordinary grep at filtering the large file.
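
Roughly, the pipeline looks like this (the file name and both patterns 
here are just placeholders, not my real ones):

  # placeholders: cheap fixed-string pass first, regexp pass on the survivors
  grep -F 'cheapstring' bigfile.txt \
    | grep -E 'complicated (regexp|pattern)' > hits.txt

The first grep only has to match a literal string, so it gets through the 
big file quickly, and the more expensive grep -E then sees only the ~0.1% 
of lines that survive the filter.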

What's the trick to using perl for grepping?
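
I'd guess it's something like the one-liner below (pattern and file name 
invented just for the example), but I don't see why that would beat a 
plain grep for the filtering step:

  # hypothetical one-liner; pattern and file name are placeholders
  perl -ne 'print if /complicated (regexp|pattern)/' bigfile.txt > hits.txt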

Mike