Mark Nelson does not believe in the hype about multi-cores. And he is right with several of his arguments. The world is not going to end if we cannot write our applications to allow for concurrency, that’s for sure. Since I work on parallel machines all day, it is easy to become a little disconnected from the real world and think everybody has gotten the message and welcomes our new parallel programming overlords. Some of Mark’s arguments are a little shaky, though, as I hope to show you in this article. Is Mark right? I suspect not, but only time will tell.
Let’s go through his arguments one by one (for this, it helps if you read the article in full first, as my argument is harder to understand without the context).
Linux, OS/X, and Windows have all had good support for Symmetrical Multiprocessing (SMP) for some time, and the new multicore chips are designed to work in this environment.
I completely agree here, the problem is not on the operating system’s side of the equation.
Just as an example, using the spiffy Sysinternals Process Explorer, I see that my Windows XP system has 48 processes with 446 threads. Windows O/S is happily farming those 446 threads out to both cores on my system as time becomes available. If I had four cores, we could still keep all of them busy. If I had eight cores, my threads would still be distributed among all of them.
This argument I don’t understand. He claims that he has enough threads on his system to keep even a four-core system busy. Yet, at the same time, the CPU monitor he depicts shows a CPU usage of merely 14.4% – which proves that those threads are not really doing anything useful most of the time. Most of them are sleeping and will therefore not be a burden on the CPU anyway. As I see it, Mark’s picture shows that he has nowhere near enough work on his system to keep even his dual-core system from going into power-saving mode. It’s not how many threads there are, it’s how much they do that’s important.
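To make the point concrete – this is my own little sketch, not anything from Mark’s article – here is a Python script that spawns 50 threads that do nothing but sleep. Half a second of wall-clock time passes, yet the process consumes almost no CPU time, just like the hundreds of mostly-idle threads on a typical desktop:

```python
import threading
import time

def idle_worker():
    # This thread exists but does no work - it just sleeps,
    # like most of the hundreds of threads on a desktop system.
    time.sleep(0.5)

threads = [threading.Thread(target=idle_worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# process_time() counts only CPU time actually consumed by this process;
# despite 50 threads and 0.5s of wall-clock time, it stays near zero.
cpu_time = time.process_time()
print(f"CPU time used by 50 sleeping threads: {cpu_time:.3f}s")
```

Thread *count* tells you nothing about CPU demand – only runnable threads keep cores busy.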
Modern languages like Java support threads and various concurrency issues right out of the box. C++ requires non-standard libraries, but all modern C++ environments worth their salt deal with multithreading in a fairly sane way.
Right and wrong, if you ask me. The mainstream languages do have support for threads now. Whether or not that support is sane is another matter altogether. 🙂 I know one thing from looking at my students and my own work: parallel programming today is not easy, and it is very easy to make mistakes. I welcome any effort to change this situation with new languages, tools, libraries or whatever magic is available.
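As a small example of how easy it is to get this wrong (my sketch, in Python for brevity, though the same trap exists in Java and C++): the classic lost-update race. Several threads incrementing a shared counter without a lock can interleave their read-modify-write steps and silently lose updates; with a lock, the result is deterministic:

```python
import threading

ITERATIONS = 100_000
counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    for _ in range(ITERATIONS):
        counter += 1  # read-modify-write: threads may interleave here and lose updates

def safe_increment():
    global counter
    for _ in range(ITERATIONS):
        with lock:    # the lock makes the read-modify-write atomic
            counter += 1

def run(worker):
    """Run four copies of `worker` concurrently and return the final count."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment))  # may be less than 400000
print("with lock:   ", run(safe_increment))    # always exactly 400000
```

The nasty part is that the unsafe version often *appears* to work – races like this tend to surface only under load, which is exactly why I say mistakes are so easy to make.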
The task doing heavy computation might be tying up one core, but the O/S can continue running UI and other tasks on other cores, and this really helps with overall responsiveness. At the same time, the computationally intensive thread is getting fewer context switches, and hopefully getting its job done faster.
That’s true. Unfortunately, this does not scale: as we have already seen in the argument above, all the other threads present can run happily on one core without it even running hot. Nobody says that you need parallel programming when you have only two cores. But as soon as you have more, I believe you do.
In this future view, by 2010 we should have the first eight-core systems. In 2014, we’re up to 32 cores. By 2017, we’ve reached an incredible 128 core CPU on a desktop machine.
I can buy an eight-core system today, if I want to. Intel has a package consisting of two quad-core processors and a platform to run them on. I am sure that as soon as AMD gets their act together with their quad-cores, they will follow. I am not so sure anymore when the first multi-cores were shipped, but this press release suggests it was about two years ago, in 2005. I can buy eight cores now from Intel. Or I can buy chips from Sun with eight cores supporting eight threads each. My reader David points me to an article describing a new chip with 64 cores. Does this mean that the number of cores is going to double each year? If you follow this logic, we are at 64 cores in 2010. The truth is probably somewhere in the middle between Mark’s prediction and mine, but I am fairly sure the multi-core revolution is coming upon us a lot faster than he predicts…
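My back-of-the-envelope extrapolation above – a hypothetical doubling every year, starting from the dual-cores of 2005 – works out like this (just a sketch of the arithmetic, not a serious forecast):

```python
# Assumption (mine): mainstream dual-core chips shipped in 2005,
# and the core count doubles every year thereafter.
START_YEAR, START_CORES = 2005, 2

def cores_in(year):
    """Core count under the doubling-every-year extrapolation."""
    return START_CORES * 2 ** (year - START_YEAR)

for year in (2005, 2007, 2010):
    print(year, cores_in(year))
# 2007 gives 8 cores (matching the eight-core systems available today),
# and 2010 gives 64 cores - versus Mark's prediction of 8 cores in 2010.
```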
He also pointed out that even if we didn’t have the ability to parallelize linear algorithms, it may well be that advanced compilers could do the job for us.
Obviously, this has not quite worked out as well as expected.
Maybe 15 or 20 years from now we’ll be writing code in some new transaction based language that spreads a program effortlessly across hundreds of cores. Or, more likely, we’ll still be writing code in C++, Java, and .Net, and we’ll have clever tools that accomplish the same result.
I sure hope he is right on this one. Or, on second thought, maybe I would prefer a funky new language with concurrency support built in, instead of being stuck with C++ for twenty more years. 😛
You have heard my opinion, you have read Mark’s – what’s yours? Is the multi-core revolution just hype? Looking forward to your comments!