Anwar Ghuloum has posted his opinion on What Makes Parallel Programming Hard at the Intel Research Blogs (which are buzzing with activity regarding parallel programming at the moment). The funny thing is, I asked myself the same question last week, because I had just written a section about it in my PhD thesis. And since at least some of my answers are different from his, I have decided to post them here. (more…)
A Blog on Parallel Programming and Concurrency by Michael Suess
I see this mistake quite often: people have a performance or scalability problem with their threaded code and start blaming locks. Because everyone knows that locks are slow and limit scalability (yes, that was ironic). They start to optimize their code by getting rid of the locks entirely, e.g. by messing around with memory barriers, or by not using locks at all. Most of the time the result happens to work on their machine, but has subtle bugs that surface later. Let’s see what the experts say about using locks, in this case Brian Goetz in his excellent book Java Concurrency in Practice:
When assessing the performance impact of synchronization, it is important to distinguish between contended and uncontended synchronization. The synchronized mechanism is optimized for the uncontended case (volatile is always uncontended), and at this writing, the performance cost of a “fast-path” uncontended synchronization ranges from 20 to 250 clock cycles for most systems. While this is certainly not zero, the effect of needed, uncontended synchronization is rarely significant in overall application performance, and the alternative involves compromising safety and potentially signing yourself (or your successor) up for some very painful bug hunting later.
I could not have put it any better, and I can assure you this man knows what he is talking about. Although he speaks about Java specifically, the conclusion is the same for other parallel programming systems: the first step when you have synchronization problems is not to get rid of locks, but to reduce lock contention. This article will show you 10 (+1) ways to do just that.
Update: As Brian correctly notes in his comment below this article, these are advanced techniques and should only be employed if you are absolutely sure that locks are the problem! The only way to be sure, of course, is to use your profiler of choice, or else you will be guilty of the number one sin of every programmer: premature optimization (and you don’t want to go to programmers’ hell for this one, do you?). (more…)
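As a taste of the kind of technique such a list typically covers, here is a minimal C++ sketch of one classic way to reduce contention: lock striping, which Goetz also describes for Java. Instead of one lock guarding an entire shared map, each group of buckets gets its own lock, so threads working on different keys rarely collide. The class and its names are my own illustration under that assumption, not code from the article.

```cpp
#include <cstddef>
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Lock striping: split one big lock into N independent "stripes", each
// guarding its own share of the data. Threads only contend when they
// happen to hash to the same stripe.
class StripedCounter {
public:
    explicit StripedCounter(std::size_t stripes = 16)
        : locks_(stripes), maps_(stripes) {}

    void increment(const std::string& key) {
        std::size_t s = stripe(key);
        std::lock_guard<std::mutex> guard(locks_[s]);  // only this stripe is locked
        ++maps_[s][key];
    }

    long get(const std::string& key) {
        std::size_t s = stripe(key);
        std::lock_guard<std::mutex> guard(locks_[s]);
        auto it = maps_[s].find(key);
        return it == maps_[s].end() ? 0 : it->second;
    }

private:
    // Pick the stripe responsible for this key.
    std::size_t stripe(const std::string& key) const {
        return std::hash<std::string>{}(key) % locks_.size();
    }

    std::vector<std::mutex> locks_;
    std::vector<std::unordered_map<std::string, long>> maps_;
};
```

Note that the locks are still there; we have only made it less likely that two threads need the same one at the same time, which is exactly the spirit of reducing contention rather than removing synchronization.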
Exactly one year ago, the first post on this blog went live. I don’t want to bore you with yet another review of the past year, but I nevertheless have some announcements to make with regard to this site. And of course, if you decide not to read this article because you don’t like announcements for some reason, there is still one thing I want you to know: thanks for being here and reading this! (more…)
It’s time again for a short survey of what has been going on lately on the net with regard to parallel programming. Actually, I wanted to post something different this week, but since this list has been growing so fast and I was not quite satisfied with my other article anyway, I decided to launch this one early. I hope you enjoy what I have dug up for you. And if not, be sure to leave a comment or drop me a note! Also, in case I have forgotten or not found an interesting article, feel free to add it for my benefit and that of your fellow readers. Thanks for caring! (more…)
While not exactly related to parallel programming, I am so sick of hearing this: Moore’s Law no longer holds! Clock speeds are no longer increasing, therefore the world is going parallel! I briefly commented on this a couple of weeks ago. You can sometimes read it in blogs or hear it in conversations. And while the second sentence is actually true, the first one is not, and it comes from a severe misunderstanding of what Moore actually said and meant. This article comes to the rescue of all the poor victims of this misunderstanding. (more…)
I have been quite late with my post this week, and I apologize for that, but I have been sick with a cold (and I still am). Couple that with the fact that I did not find the time to write one or two posts in advance since IWOMP, and you see why I am late this week. I hope my sickness does not influence my writing style too badly. Anyway. From time to time, readers write to me with their problems related to parallel programming, especially with C++ and OpenMP. And while firstname.lastname@example.org is probably the better place to ask those questions, I still try to help as much as I can and as much as my time permits, at least if it’s an interesting problem. The problem I would like to talk about today falls into this category for me, as it requires an interesting workaround: it’s the problem of how to throw an exception out of a parallel loop in OpenMP. Can’t do that, I hear you say? If you have read my past articles about exceptions and OpenMP, you know that it is not trivial. But it’s doable with a little trick. (more…)
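To give an idea of what such a workaround can look like, here is a hedged C++ sketch: since OpenMP does not allow an exception to propagate out of a parallel region, the loop body catches everything, remembers the first exception, and rethrows it once the region has ended. I use C++11’s std::exception_ptr for the remembering step; the article’s own trick may well differ in the details, so treat this as one possible shape of the solution, not as the article’s code.

```cpp
#include <cmath>
#include <exception>
#include <stdexcept>
#include <vector>

// Sketch: an exception must never escape an OpenMP parallel loop body,
// so we catch it inside, store it, and rethrow after the region ends.
double sum_of_sqrts(const std::vector<double>& v) {
    std::exception_ptr error = nullptr;
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(v.size()); ++i) {
        try {
            if (v[i] < 0.0)
                throw std::domain_error("negative input");
            sum += std::sqrt(v[i]);
        } catch (...) {
            // Remember the first exception; the critical section keeps
            // two threads from writing the slot at the same time.
            #pragma omp critical
            if (!error) error = std::current_exception();
        }
    }

    if (error)
        std::rethrow_exception(error);  // safely outside the parallel region
    return sum;
}
```

A nice property of this pattern is that the code also compiles and behaves identically without OpenMP, since the pragmas are simply ignored then and the catch-store-rethrow logic is plain C++.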
It’s been a while since I have done my last news roundup, and some very interesting posts and articles have been produced during that time, especially in the week that I was away in Beijing. Go figure: when I am away for one week, everyone is talking about multicores and concurrency. On the other hand, as always, most of the posts shown here were not produced yesterday (I am not that current in my reporting), but are worth reading even after some time has passed. And now, enough of the preface; here are the articles that have managed to catch my attention: (more…)
It’s been a short while since I came back from IWOMP, but I would still like to share some of the news and experiences I have brought back from there. It’s been a really great experience for me, both at the conference and in the city of Beijing, which is always worth a trip, if you ask me. So here we go: (more…)
I am a little late with my post this week; I apologize for that, but I am still in Beijing at IWOMP with little time to post. Actually, now that the conference is over, I am staying for two more days to see the city, the Great Wall and a couple of other sights, which still leaves me with no time to post. Therefore you are going to have to live with a timeless one that I have recorded beforehand just for this purpose. Here it goes.
It is difficult enough already to teach my students about parallel programming, but when even some of the terms used are misleading, something is seriously wrong. Take this very simple example: the term reentrancy. I have just found the fourth definition of it, all from the field of parallel programming, and all of them mean something (at least slightly) different. (more…)
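To make at least one of the competing definitions concrete: under the classic reading, a function is reentrant if it keeps no hidden state between calls, so it can safely be re-entered from a signal handler, called recursively, or called from several threads with separate state. A purely illustrative C++ pair (my own example, not drawn from any of the four definitions' sources):

```cpp
// NOT reentrant under the classic definition: the static counter is
// hidden, shared state that survives between calls, so two interleaved
// callers interfere with each other.
int next_id_static() {
    static int counter = 0;  // hidden, shared state
    return ++counter;
}

// Reentrant version: all state is owned and passed in by the caller,
// so every call site can use its own independent counter.
int next_id(int* counter) {
    return ++(*counter);
}
```

Part of the terminological confusion is exactly here: the second version is reentrant in this sense, yet it is only thread-safe if callers don’t share one counter, and some of the other definitions conflate those two properties.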
From time to time, one of my readers asks a question via email that just keeps me thinking. And sometimes, when I realize the question is interesting not only for me but maybe for you as well, I decide to post it and make an article out of it. Just like last week, when a reader asked: how much of the software industry will have to deal with the concurrent computing problem? This article contains my thoughts on the subject and also allows me to let my mind wander in search of the applications that this kind of power will allow. (more…)