It has been a while since I have done a news roundup, so it is time for a new one. But before I start, let me pass on a few personal remarks about my present situation. I am in the process of finishing my PhD right now and hope to submit it for review next week. Of course, this also means that I am rather busy at the moment, so my comments on the articles presented here are not as verbose as you may be used to. I have also moved back to Leipzig, the beautiful city where I was born and raised. Starting in October, I will be working at TomTom WORK, a division of TomTom. You may know that company from the label on the navigation system in your car. I will be doing software development in C++. As far as I know, my job has nothing to do with parallel programming, but since I still have my pet project in the works (more on that really soon now) and one article per week is easily sustainable without working in the field directly, I intend to continue this blog as is.
That’s it from me, here are the links I found interesting during the last few weeks:
- In Hitting The Memory Wall, an interesting experience with a parallel sorting implementation is described. I do not have the time to dive in and verify whether the claims hold for the cited examples, but I do know that saturating the memory bus is easily done on today's architectures (especially the Intel ones, which do not have a memory controller per processor).
- Dr. Dobb’s has a whole issue dedicated to high-performance computing, with lots of interesting parallel programming content.
- Uncle Bob over at the Object Mentor blog does not like threadprivate storage; he would rather have taskprivate storage. His arguments do make sense, especially as people start thinking more in terms of tasks when doing parallel programming, with Intel’s TBB advocating tasks heavily and OpenMP 3.0 with task support right around the corner.
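To make the distinction concrete, here is a minimal Python sketch (my own illustration, not Uncle Bob’s code) of what thread-private storage looks like: each OS thread gets its own copy of the data. The catch he points out is that task schedulers multiplex many tasks onto few threads, so two tasks sharing a worker thread would also share this storage — which is exactly why taskprivate would be a different thing.

```python
import threading

# threading.local is the Python analogue of OpenMP's threadprivate:
# each OS thread sees its own independent copy of the attributes.
tls = threading.local()

def worker(value, results, lock):
    tls.counter = value      # private to this thread only
    tls.counter += 1         # no other thread can interfere here
    with lock:
        results.append(tls.counter)

results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(i, results, lock))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [1, 2, 3, 4] -- each thread saw only its own counter
```

If a task library ran two of these "workers" as tasks on the same pool thread, they would stomp on each other's `tls.counter` — the storage is private per thread, not per unit of work.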
- There is a conversation going on about whether or not transactional memory is useful. My take: transactional memory is not a silver bullet. But every small thing helps and that’s why I am still looking forward to it.
- If you are into complexity-theory and parallelism, you may have fun reading this.
- Bob Warfield over at his blog is really starting to get into concurrency. He has issued a call to waste more hardware, and he is actually serious about it. And although many of the HPC people may not like it: I think he has some very valid points.
- A guy called Juergen asks Guido van Rossum (the benevolent dictator of Python) to please get rid of the GIL. And Guido responds. Just in case you are wondering what the heck this infamous GIL is: it is the Global Interpreter Lock, which prevents the Python runtime from properly utilizing multiple cores. Guido’s main argument is that removing the GIL right now would slow down Python dramatically, but he is open to experiments, as long as somebody else does them. When I started my PhD, I actually looked into scripting languages and parallelism, and the GIL was what finally convinced me to look for easy-to-use parallelism elsewhere. But I am also sure that the GIL will not be there forever, especially as scalability is becoming more important than raw performance these days.
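For readers who have never run into the GIL themselves, here is a small sketch of what it means in practice on CPython: a pure-Python, CPU-bound loop gains nothing from being split across threads, because only one thread may execute Python bytecode at a time. (The workload and sizes here are just an illustration.)

```python
import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound loop. The GIL allows only one thread to
    # execute Python bytecode at a time, so splitting this work across
    # threads cannot use a second core.
    while n > 0:
        n -= 1

N = 10_000_000

# Run the whole workload in one thread...
start = time.time()
count_down(N)
sequential = time.time() - start

# ...then split the same workload across two threads.
start = time.time()
threads = [threading.Thread(target=count_down, args=(N // 2,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.time() - start

# On CPython the threaded version is typically no faster, and is often
# slower due to contention for the GIL.
print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

I/O-bound threads are a different story, since the GIL is released while waiting on I/O — which is why the lock hurts exactly the multicore, compute-heavy workloads this blog cares about.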
- And last but not least: insidehpc is dead. Or maybe not. It appears John has managed to raise enough support in the community to keep the site alive, with an impressive list of new contributors. The site has been alive and well during the last few days; let’s hope it stays that way once the initial excitement wears off. So if you want to help and make a name for yourself in HPC, I am sure John appreciates any help and will happily add you to his list of contributors…
This has taken longer than I wanted, but I hope you enjoyed it anyway!