Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Why OpenMP is the way to go for parallel programming

I guess it’s time once and for all to explain the reasons why I think OpenMP is the future of parallel programming, at least in the short term. I will start with the short version:

  • relatively high level of abstraction
  • performance
  • maturity


Are you still with me? OK, then let’s hear the long version – actually, you will hear the really long version now ;-). When I started my PhD., my supervisor and I quickly agreed that we wanted to do something to make parallel programming easier. Therefore I started looking around, trying to figure out what the present state of the art in parallel programming looked like (you can still see the project page for that subproject here; some more results will be published shortly). I found that the dominant languages for parallel programming were still C/C++ and Fortran. I tried several others (e.g. Java, Haskell, OCaml, Python), but each had shortcomings that made it inferior for parallel programming. I did not want to invent a new language from scratch, simply because I did not have the experience required to do a good job of it, and also because I do not like the idea of investing four years of my life into building something that no one except me will ever use. That may sound overly pessimistic, but judging from all the abandoned webpages I encountered during my search, this seems to be what happens to the majority of newly invented languages.

Since I neither speak Fortran very well nor like it very much, I had to stick with C or C++ and the existing parallel programming systems for them (which eliminated High Performance Fortran from the equation). At that point, OpenMP was the logical choice, because it was designed for ease of use from the start. Its other advantages are the reasons stated at the beginning of this article. To come back to the points made there: first of all, OpenMP has a higher level of abstraction than most of the other parallel programming systems I know. Which is great. Consider the following program snippet as an example:

const int N = 120000;
int arr[N];

#pragma omp parallel for
for (int i = 0; i < N; ++i)
{
    arr[i] = 2 * i;
}

  • you do not see how each and every thread is created and initialized
  • you do not see a function declaration for the code each thread executes
  • you do not see exactly how the array is divided between the threads

In most other parallel programming systems, you have to write code for all of this, and for a lot more as soon as things get more complicated. Or you have to write code for explicitly sending messages between processes if you are using a message-passing system such as MPI or PVM. Not with OpenMP, and that is the very first reason I like it: it makes my life as a programmer easier, because I have to do less. A higher level of abstraction, as I said.

The other two reasons I mentioned earlier are quickly explained: the performance of OpenMP is on par with that of other, lower-level parallel programming systems. Since performance is an important factor (after all, if I did not care about performance, I would not parallelize my application at all, simple as that), this point is of critical importance.

The last advantage OpenMP has over a lot of parallel programming systems is its maturity. Version 2.5 of the specification is out presently, and work on 3.0 is progressing. Since OpenMP has been around since 1997, the compilers are relatively advanced by now (at least for C and Fortran; there are still some problems with C++, which I might describe in more detail in a future article). There is also more than one compiler available (e.g. the Intel, Sun and Portland Group compilers; Visual Studio recently added support, and GCC will gain support with the upcoming version 4.2, to name just a few), which is way more than I can say for most research parallel programming systems.

Because of these three reasons, I chose OpenMP as the basis of my research and have since been active in finding problems with it, coming up with solutions for them, and generally trying to improve the system. You can see part of my work at this project description page, but since I recently joined the very kind folks on the OpenMP language committee, I have done some work there as well, which is not publicly visible.

I guess now you know why, in my humble opinion, OpenMP is the way to go for parallel programming at this point in time, what I am presently doing for my PhD., and why this blog will be heavily biased towards OpenMP.

If you still want to read it, please consider subscribing using either the RSS feeds or the newly added email subscription option, both conveniently located right at the top of the sidebar. The email subscription service is powered by Feedburner, and their privacy policy assures that no email address will ever be sold or misused by them. I have no reason not to believe them on this point – and of course I will not do so either.
