Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Archive for 2006/08

Locality optimization experiments

Chris over at HPCAnswers has written a nice introductory article about locality optimization. He uses matrix multiplication as an example. The rest of this article assumes you have read his post and therefore know at least some basics about loop reordering and locality optimization. I remember very clearly when my supervisor first introduced the concept […]
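As a quick illustration of the idea (a minimal sketch of my own, not code from Chris’s article): for matrices stored row-major, swapping the two inner loops of the textbook triple loop turns the strided accesses to B into unit-stride ones, which is exactly the kind of locality optimization discussed there.

    #include <vector>

    // Minimal sketch of loop reordering for locality: with row-major
    // storage, the ikj order walks both B and C row by row (unit stride),
    // while the textbook ijk order strides through B column by column.
    void multiply_ikj(const std::vector<double>& A,
                      const std::vector<double>& B,
                      std::vector<double>& C, int n) {
        // A, B, C are n x n matrices stored row-major; C starts zeroed.
        for (int i = 0; i < n; ++i)
            for (int k = 0; k < n; ++k) {
                const double a = A[i * n + k];         // reused across the j loop
                for (int j = 0; j < n; ++j)
                    C[i * n + j] += a * B[k * n + j];  // unit-stride access
            }
    }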

Thoughts on Larry O’Brien’s article at devx.com

Larry O’Brien has written an introductory article on parallel programming with OpenMP on Windows and announced it in his blog. I enjoyed reading the article and think it is a really nice resource for people new to parallel programming. I would like to comment on some parts of his article, and since it does not have […]

Scoped locking vs. critical in OpenMP – a personal shootout

When reading any recent book about C++ and parallel programming, you will probably be told that scoped locking is the greatest way to handle mutual exclusion and is in general the greatest invention since hot water. Yet most of these writers are not talking about OpenMP, but about lower-level threading systems. OpenMP has […]
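For readers who have not seen the two alternatives side by side, here is a minimal sketch of my own; the scoped_lock class is a hypothetical RAII wrapper around OpenMP’s lock routines, not something OpenMP itself provides.

    #include <omp.h>

    // Hypothetical RAII wrapper around an OpenMP lock; the destructor
    // releases the lock on every exit path from the scope.
    class scoped_lock {
    public:
        explicit scoped_lock(omp_lock_t& l) : lock_(l) { omp_set_lock(&lock_); }
        ~scoped_lock() { omp_unset_lock(&lock_); }
    private:
        omp_lock_t& lock_;
    };

    int counter = 0;
    omp_lock_t counter_lock;   // must be set up once via omp_init_lock()

    void increment_with_critical() {
        #pragma omp critical   // mutual exclusion handled by the compiler
        ++counter;
    }

    void increment_with_scoped_lock() {
        scoped_lock guard(counter_lock);  // lock held until end of scope
        ++counter;
    }

The attraction of the wrapper is the automatic release even on early returns or exceptions; the attraction of critical is that the compiler manages all of that for you.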

More information on pthread_setaffinity_np and sched_setaffinity

Skimming through the activity logs of this blog, I can see that many people come here looking for information about pthread_setaffinity_np. I mentioned it briefly in my article about Opteron NUMA effects, but barely touched on it because I had found a more satisfying solution for my personal use (taskset). And while I do not have in […]
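For reference, here is a minimal sketch of my own (not code from the Opteron article) showing the typical use of the call on Linux with glibc: pinning the calling thread to CPU 0.

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE   // needed on glibc for pthread_setaffinity_np
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    // Minimal sketch: restrict the calling thread to CPU 0. The same
    // cpu_set_t also works with sched_setaffinity(0, sizeof(set), &set),
    // where a pid of 0 likewise means the calling thread.
    int pin_to_cpu0(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);   // allow CPU 0 only
        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return err;
    }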

More reasons why OpenMP is the way to go for parallel programming

Expanding on my earlier article about Why OpenMP is the way to go for parallel programming, I would like to point out a couple more strengths of OpenMP. And the best thing about this is: I do not have to do it myself this time, because OpenMP evangelist Ruud van der Pas has already done […]

Why OpenMP is the way to go for parallel programming

I guess it’s time once and for all to explain the reasons why I think OpenMP is the future of parallel programming, at least in the short term. I will start with the short version: a relatively high level of abstraction, performance, and maturity.
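To illustrate the first point with a minimal sketch of my own (the function and array names are purely illustrative): a single directive is enough to parallelize a loop, and the code remains valid serial C++ when the directive is ignored.

    // Compile with OpenMP enabled, e.g. g++ -fopenmp. One directive
    // distributes the iterations among threads; a compiler without
    // OpenMP support simply ignores the pragma and runs the loop serially.
    void scale(double* a, int n, double factor) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            a[i] *= factor;
    }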

My views on high-level optimization

This article is about high-level optimization, i.e. I will explain how I usually approach optimizing a program without going into the gory details. There are a million books and web pages out there about low-level optimizations and the tricks involved, so I will not dive into that (could not possibly hope to cover this in a single […]