Phil Mendelsohn <mend0070 at tc.umn.edu> writes:

> On Thu, 17 May 2001, Matt Waters wrote:
> 
> >     When one of you guys put your SGI Indy up for sale, I told my friend
> > about it, and we began arguing over the power of a RISC processor. He
> > argues that because of a RISC's limited instruction set, a RISC is slower
> > than a comparable x86, since it requires more instructions to accomplish a
> > task. I argue that RISCs have more bang per computational cycle, because
> > each instruction takes fewer clock cycles, and fewer cycles per
> > instruction means the CPU can get through the work more quickly.
> 
> Um, the thing that both of you are leaving out is that we're talking about
> a programmable machine.  The reason it's a religious war starter, as *I*
> understand it, is that it's really hard to compare apples to apples
> anyway.  It depends largely on the task you want to do.  So, you might be
> able to do the same thing equally efficiently, but the algorithm might
> need to be completely different for the two machines.  Like a risc machine
> might win using a bubble sort, but a cisc machine might win using an
> insertion sort, and if you let each play to their strengths, they might
> tie.  
> 
> Another example might be that x86 processors won't move memory to memory
> without involving a register.  Maybe a MIPS (or other risc chip) does
> allow this sometimes.  So for some things that just *move* data, the risc
> processor kills, but slows down when you have to figure out how to make it
> process.

Nope, you won't find memory-to-memory moves on any RISC architecture;
it's completely outside the envelope.  RISC chips are load/store designs:
the only instructions that touch memory at all are loads and stores, and
everything else works register-to-register.
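
The usual illustration is a read-modify-write: an x86 can add straight into
a memory location in one instruction, while a MIPS has to load the value,
add in a register, and store it back.  A rough sketch (Intel-ish syntax for
the x86 line, and glossing over how the address of "counter" actually gets
formed on the MIPS side):

    ; x86: one instruction, operating on memory directly
    add dword [counter], 1

    # MIPS: load/store machine, so it takes three
    lw    $t0, counter      # pull the word into a register
    addiu $t0, $t0, 1       # do the add in the register
    sw    $t0, counter      # put it back in memory

Each of those MIPS instructions is simpler to decode and execute, which is
exactly the trade the RISC designers are making.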

> The only thing I'd say about your statement is that saying that a risc
> proc. has more bang per clock cycle is incorrect as phrased.  The
> principle is that you purposely have *less* bang per cycle, but you "make
> it up in volume."  

You optimize the common operations, and you expose more of the "real"
hardware (so you get things like delayed branch and delayed memory
load). 
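
Delay slots are the clearest example of what "exposing the hardware" means.
On MIPS, the instruction right after a branch always executes, taken or not,
because the pipeline has already fetched it; and on the early MIPS chips you
couldn't use a just-loaded register in the very next instruction.  A sketch
(details vary between chip generations):

    # Branch delay slot: the nop runs whether the branch is taken or not.
    beq   $a0, $zero, skip  # branch if $a0 == 0
    nop                     # delay slot -- a good compiler fills this with
                            # useful work instead of a nop
    ...
skip:

    # Load delay: give the load a cycle before touching $t0.
    lw    $t0, 0($a1)
    nop                     # early MIPS won't interlock here
    addu  $t1, $t0, $t0     # now the loaded value is safe to use

The compiler is expected to schedule around all of this, which is why RISC
leaned so heavily on compiler technology from day one.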

> Doing a little looking at assembly code would be most instructive.  If you
> write a little program in C, 'gcc -S' will compile it to a *.s file, which
> is assembly code.  I think if you do the same with a gcc built for a MIPS
> target, you will get some MIPS assembly code.  You still don't know how
> many cycles each instruction takes, but you'd get an idea of how different
> the paths are to get to the same point.

On a RISC architecture, nearly every instruction is designed to execute in
a single cycle; loads and branches are the classic exceptions, which is
where those delay slots come from.
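
If you actually want to run the gcc -S experiment, it looks roughly like
this.  (A sketch -- I'm guessing the cross compiler is installed as
mips-linux-gcc; use whatever name your MIPS-targeted gcc was actually
built under.)

    /* sum.c -- small enough that the assembly stays readable */
    int sum(int *a, int n)
    {
        int i, s = 0;
        for (i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    $ gcc -S -o sum-x86.s sum.c                # native x86 assembly
    $ mips-linux-gcc -S -o sum-mips.s sum.c    # cross compiler, name is a guess

Put sum-x86.s and sum-mips.s side by side; the interesting part is how
different the instruction mixes are, not just the raw line counts.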

Another theoretical issue, which doesn't work out in this universe in
practice, is that RISC processors are simpler and smaller (smaller die
size).  This should, theoretically, allow them to be fabricated in
newer, faster processes, which aren't yet up to the complexity of a
Pentium 4.  This hasn't worked out in practice because Intel and AMD
are throwing such tremendous resources at making Intel-compatible
chips as fast as possible.  The RISC architectures keep up pretty well
*given the disparity of resources*. 
-- 
David Dyer-Bennet      /      Welcome to the future!      /      dd-b at dd-b.net
SF: http://www.dd-b.net/dd-b/          Minicon: http://www.mnstf.org/minicon/
Photos: http://dd-b.lighthunters.net/