ACM Queue has an interview with two of my absolute computer architecture heroes – John Hennessy and David Patterson. I dug my way through their famous book Computer Architecture: A Quantitative Approach right before my diploma thesis, and the only reason I did not buy it back then was that I feared they would bring out a new edition soon (and because I was cheap at that time). Turns out I was right, and they did bring out a new edition. But that's not the topic of the interview. It's in large part about parallelism and the changes in today's computer architecture related to it.
To whet your appetite a little, here are some quotes from the article:
Now we’re into the explicit parallelism multiprocessor era, and this will dominate for the foreseeable future. I don’t see any technology or architectural innovation on the horizon that might be competitive with this approach.
A fellow believer, it seems 8).
That led to a project involving 10 of us from several leading universities, including Berkeley, Carnegie-Mellon, MIT, Stanford, Texas, and Washington. The idea is to use FPGAs (field programmable gate arrays). The basic bet is that FPGAs are so large we could fit a lot of simple processors on an FPGA. If we just put, say, 50 of them together, we could build 1,000-processor systems from FPGAs.
FPGAs are close enough to the design effort of hardware, so the results are going to be pretty convincing. People will be able to innovate architecturally in this FPGA and will be able to demonstrate ideas well enough that we could change what industry wants to do.
We call this project Research Accelerator for Multiple Processors, or RAMP. There’s a RAMP Web site (http://ramp.eecs.berkeley.edu).
I hear the big players in the microprocessor industry have been doing that for years – simulating their newest processor designs on FPGAs first, in huge and very well-cooled buildings. To do this at a university on a considerable scale, you need big sponsors, though – but when you take a look at the universities involved and at their sponsors, this may not be a big problem for them ;).
Architecture is interesting again. From my perspective, parallelism is the biggest challenge since high-level programming languages. It’s the biggest thing in 50 years because industry is betting its future that parallel programming will be useful.
Industry is building parallel hardware, assuming people can use it. And I think there’s a chance they’ll fail since the software is not necessarily in place. So this is a gigantic challenge facing the computer science community. If we miss this opportunity, it’s going to be bad for the industry.
And we are riding the front of the wave. Ain’t these exciting times?
Parallelism has changed the programming model. It’s way beyond changing the instruction set. At Microsoft in 2005, if you said, “Hey, what do you guys think about parallel computers?” they would reply, “Who cares about parallel computers? We’ve had 15 or 20 years of doubling every 18 months. Get lost.” You couldn’t get anybody’s attention inside Microsoft by saying that the future was parallelism.
In 2006, everybody at Microsoft is talking about parallelism. Five years ago, if you had this breakthrough idea in parallelism, industry would show you the door. Now industry is highly motivated to listen to new ideas.
Good times ahead, I am telling you, for people who are doing their homework now and getting familiar with parallel programming – people just like you!
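If you want to start that homework right now, here is a tiny sketch of my own (not from the interview) of the simplest kind of parallelism they are talking about – spreading independent work items across cores, shown here with Python's multiprocessing module:

```python
from multiprocessing import Pool

def slow_square(n):
    # Stand-in for a CPU-heavy task; each call is independent,
    # which is exactly what makes it easy to parallelize.
    return n * n

if __name__ == "__main__":
    # Pool() starts one worker process per core by default;
    # map() farms the inputs out and collects results in order.
    with Pool() as pool:
        results = pool.map(slow_square, range(10))
    print(results)
```

The interesting (and hard) part, of course, starts when the work items are *not* independent – that is the gigantic challenge Patterson is talking about.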