Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Moore’s Law is Dead – Long Live Moore’s Law!

While not exactly related to parallel programming, I am so sick of hearing this: Moore’s Law no longer holds! Clock speeds are no longer increasing, therefore the world is going parallel! You can read it in blogs or hear it in conversations, and I briefly commented on it a couple of weeks ago. While the second sentence is actually true, the first one is not; it stems from a severe misunderstanding of what Moore actually said and meant. This article comes to the rescue of all the poor victims of this misunderstanding :-).

So let’s approach this the wrong way around and see what many people think Moore’s Law says:

The clock speed of chips doubles every 24 months.

This misunderstanding is so common because that is how chips actually behaved for quite some time. Those times are gone now, as heat problems have forced the chip makers to abandon the MHz race. The sentence itself rests on another severe misunderstanding: that clock speed equals computing power. As could be observed with the Pentium 4 architecture, this is not the case. Computing power depends not merely on how many clock cycles you manage each second, but also on how much work gets done in each clock cycle, which in turn depends heavily on your architecture. But since GHz are so much easier for the marketing department to work with, that part of the equation used to be dropped.

Anyway, that is history now, and many people know it. Another commonly heard version of Moore’s Law is this:

The computing power of chips doubles every 24 months.

Seems logical, right? Computing power is what counts, so Mister Moore must have been talking about that. While this version is better than the first (for the reasons stated above), it is still not accurate. Does the computing power double when you have two cores on a chip? It depends on your problem, and for most problems the answer is unfortunately no (read up on Amdahl’s Law if you don’t know off the top of your head why that is). Therefore this interpretation of Moore’s Law does not hold up in court, either.
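To see why two cores rarely mean double the computing power, here is a minimal sketch of Amdahl’s Law in Python. The function and the numbers are purely illustrative, not from any real measurement:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup for a program whose fraction p parallelizes
    perfectly, run on n processors (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the program parallelized, two cores give
# noticeably less than a 2x speedup:
print(round(amdahl_speedup(0.9, 2), 2))  # 1.82
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71
```

The serial fraction (here 10%) quickly dominates: no matter how many cores you add, the speedup in this example can never exceed 10x.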

That need not concern us, really, because it is not what Moore said either :-). I am not going to hold you off (or bore you :-) ) any longer, but instead tell you what Moore’s Law is really about:

The density of transistors on chips doubles every 24 months.

Actually, that’s the short, easy-to-comprehend version. If you want to know what he really said, you need to look here:

Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (providing such circuit functions can be produced in moderate quantities.) In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000.

But since this is more difficult to work with, I am going to stick to the easy version above, which basically means the same thing. And it is still true today. All that has changed is how these massive numbers of transistors are used: now they are being invested into more cores on a single chip.
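The easy version is just compound doubling, which a quick back-of-the-envelope calculation makes concrete. The starting count below is an assumed round number for illustration, not historical data:

```python
def density_after(initial: int, years: float,
                  doubling_years: float = 2.0) -> float:
    """Transistor count after `years`, assuming it doubles
    every `doubling_years` (the "easy version" of Moore's Law)."""
    return initial * 2 ** (years / doubling_years)

# A hypothetical chip with 100 million transistors would, if the
# trend holds, carry 16x as many after four doublings (8 years):
print(density_after(100_000_000, 8))  # 1600000000.0
```

Note that the law says nothing about clock speed or computing power here, only about how many transistors fit on the chip.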

I have been told by people working for the chip makers that they don’t believe this law will last forever, but they expect it to hold for a couple more years. And who knows what kinds of chips we will have by then – think quantum computing or the like.

In the end I am going to ask a little favor of you: the next time you hear or read somewhere that Moore’s Law is dying, please leave a comment with a link to this article. Or to the Wikipedia one. Or to this FAQ. Or just kindly explain what I have told you here, so we can finally get rid of this misconception. Thank you!

11 Responses to Moore’s Law is Dead – Long Live Moore’s Law! »»


  1. Comment by Petrov Alexander | 2007/07/10 at 15:54:44

Thank you for the insightful post!

  2. Comment by Mark Miller | 2007/07/11 at 03:00:46

    You’ve reminded me of the true meaning of Moore’s Law. I had been under the misimpression that it had to do with speed, because that’s the popular notion of it. Probably the last time I heard this definition was when I was getting my CS degree 14 years ago.

    Moore’s Law = greater speed with time seemed like a good analogy for most people, because that’s what actually happened for so many years.

It seems to me though that there are practical limits being reached, because of the move to multiple cores. It’s not as if we all chose this path. If you ask software developers (me included), none of us are looking upon this with much glee. I’m not sure what’s holding up 64-bit adoption. That would seem to be the next logical step. Maybe it’s the same heat issue. Ideally, we developers would like to see the single-core approach continue until effective methods of parallel computing are developed, beyond the use of threads and semaphores or RPC between multiple processes. This isn’t the world we live in today. The single-core approach has reached its limit, and we’re now having to deal with that.

    Gaining more speed out of these multiple cores is going to be a greater challenge than it was to take advantage of 32-bit processing when that was a new thing in the PC world.

  3. Comment by Michael Suess | 2007/07/11 at 21:46:18

    I know many software developers don’t like the move to multi-cores, because it turns their world upside down. But if you see it as a chance to separate yourself from the crowd, the future may look brighter – and that’s why you are here and reading this, right? The future is parallel: you either accept that and ride the wave, or it rides you ;-)

  4. Comment by Sonila | 2007/07/24 at 00:26:05

    The problem is not as easy as it seems to be. When calculating processing power, more than one factor must be taken into account. Clock speed is one of the metrics, but it is very relative as a concept. In the era of dual-core or Cell processors, computing power makes more sense when speaking about how fast a certain task can be executed.
    Regarding parallel processing, Amdahl’s Law stresses the sequential heritage of the algorithms that can be used on certain problems, and the architecture plays

  5. Comment by Sonila | 2007/07/24 at 00:29:48

    a crucial role. That’s why one needs to be very careful in citing law X or Y without a profound insight into the conditions under which the law might apply.

  6. Comment by Michael Suess | 2007/07/25 at 17:19:10

    @Sonila: I am not sure I can follow your argumentation here. Are you disagreeing with the points raised in my post? You seem to agree with at least some of them, yet I fail to see your point. I might be being dense, but I just cannot twist my head enough to follow your arguments…

  7. Comment by Remi Chateauneu | 2007/08/03 at 16:55:47

    “The density of transistors on chips doubles every 24 month.”
    Yes, higher transistor density is used for more cores, deeper RISC pipelines, more complex caches and instruction processing (branch prediction, hyper-threading), etc.

    But I am surprised by how many new instruction sets (MMX, SSE3, SSE4) appear in the Intel architecture. Can instructions such as MPSADBW (compute eight offset sums of absolute differences) or PHSUBD (packed horizontal subtract) still be qualified as RISC? Or will the trend be a return to CISC-like instructions, as the safest way to get more performance? (Remember the VAX’s POLYD: evaluate polynomial, D_floating…)

  8. Comment by Michael Suess | 2007/08/03 at 21:45:00

    @Remi: I could be wrong here, but isn’t the x86 a CISC architecture? It still is and always has been. Internally, microcode is used to translate the instructions into more RISC-like ones, but the architecture is still CISC. See this for details:

  9. Comment by L505 | 2008/02/04 at 02:30:35

    Moore’s law is not a law. It is a trend he charted. The fact that the foreign exchange or stock market goes up a certain amount in a certain period of time is not a law. It is a trend.

Trackbacks & Pingbacks »»

  1. [...] of variants––I pondered this morning an equation for the infocalypse, derived from Moore’s Law  and including some constants to represent the accelerating construction of mega-server-farms, the [...]

  2. [...] Moore’s law, contrary to what is often thought still holds true, the exponential processor transistor growth predicted by Moore does not always translate into [...]
