Thanks, Jack.  Now the cut-style list of lines works:

seq 10000000 | print_ranges.awk 1-5,55,27

One big problem: in this example your script uses about 1.9 GB of 
memory and takes 20 seconds even when that memory is immediately 
available.  My friend's perl script uses 0.002 GB of memory and 
finishes in essentially zero seconds.  The only difference is that it 
does not reorder the lines.  When the last of the 10 million input 
lines is in the output, the perl script is slower, taking about a 
minute, but it still uses only 0.002 GB of memory.  Your script takes 
the same amount of time and uses the same amount of memory whether it 
has to read to the end or not, because it always slurps the whole 
input into line_arr before printing anything.  Thus both of these use 
1.8 GB of RAM and take 20 seconds:

seq 10000000 | print_ranges.awk 1
seq 10000000 | print_ranges.awk 10000000

The perl script finishes the first one almost instantly.  On the 
second it takes longer than the awk script, but it still uses only 
minimal memory.  Presumably it streams its input instead of storing 
it, stopping as soon as the last requested line has been printed.
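
For comparison, here is a minimal sketch of that streaming approach, 
done in awk rather than perl (the name print_ranges_stream.awk and 
all the details are my guess, not your friend's actual code).  It 
prints matching lines in input order as they arrive, keeps only the 
range bookkeeping in memory, and exits once the highest requested 
line number has gone by:

#!/usr/bin/gawk -f

# print_ranges_stream.awk (hypothetical streaming version)
# usage: seq 10000000 | print_ranges_stream.awk 1-5,27,55
# Prints lines in input order (no reordering) and exits early.

BEGIN {
    range_cnt = split(ARGV[1], ranges, ",");
    ARGV[1] = "";                  # don't treat the arg as an input file
    max_stop = 0;
    for (i = 1; i <= range_cnt; i++) {
        n = split(ranges[i], ss, "-");
        start[i] = ss[1] + 0;
        stop[i] = (n == 1 ? ss[1] : ss[2]) + 0;
        if (stop[i] > max_stop)
            max_stop = stop[i];
    }
}

{
    # print the current line if it falls in any range (at most once)
    for (i = 1; i <= range_cnt; i++) {
        if (NR >= start[i] && NR <= stop[i]) {
            print;
            break;
        }
    }
    if (NR >= max_stop)
        exit;                      # early exit: skip the rest of stdin
}

With something like that, "seq 10000000 | print_ranges_stream.awk 1" 
exits after reading one line, and the 10000000 case still reads 
everything but never holds more than one line in memory at a time.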

Mike


On Mon, 3 Jun 2013, zedlan at invec.net wrote:

> Mike,
>
>
> I made a few changes to the script per your request:
>
>
> #!/usr/bin/gawk -f
>
> # print_ranges.awk
> # usage: takes csv arg string from1-to1,from2-to2, ...
> # ex: cat file | print_ranges.awk 92-97,5-8,23-42,55-71
>
> BEGIN {
>     range_cnt = split(ARGV[1], ranges, ",");
>
>     # slurp all of stdin so the ranges can be printed in the order given
>     while ((getline < "-") > 0) {
>         line_arr[++n] = $0;
>     }
>     close("-");
>
>     for (i = 1; i <= range_cnt; i++) {
>         num = split(ranges[i], start_stop, "-");
>
>         if (num == 1) {            # bare line number, e.g. "55"
>             start = ranges[i];
>             stop = start;
>         } else {                   # range, e.g. "23-42"
>             start = start_stop[1];
>             stop = start_stop[2];
>         }
>
>         for (j = start; j <= stop; j++) {
>             print line_arr[j];
>         }
>     }
> }