Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Ten Questions with David Butenhof about Parallel Programming and POSIX Threads

This is the fourth post in my Interviewing the Parallel Programming Idols series. I don’t think I need to introduce my interview partner today to anyone who has done threads programming (except maybe to the Windows folks). His name is David Butenhof, and he has left a big footprint on the POSIX Threads standard. David presently works for Hewlett-Packard and can frequently be found on comp.programming.threads (the newsgroup on everything threads-related). And let’s not forget he wrote a great book on POSIX Threads: Programming with POSIX Threads. I have called this book the bible of POSIX Threads in the past, and I will gladly repeat that here.

But since you are not reading this article to hear me praise him, but for his answers, I will start with the interview now. Five questions about parallel programming in general first:

Michael: As we are entering the many-core era, do you think parallel computing is finally going to be embraced by the mainstream? Or is this just another phase, and soon the only people interested in parallel programming will be the members of the high-performance community (once again)?

David: Multiple cores are pointless and useless without “parallel programming”. In fact, the unused communications hardware and supporting firmware cost chip space and cycles. A multi-core chip used strictly for “non-parallel” programming could never compete with a single-core chip of the same technology family on cost, speed, or anything else. But you have to remember that “parallel programming” extends way beyond the use of fine-grained application-level threading APIs.

ALL modern operating systems, including Mac OS X and Windows, were “inherently multiprocessing” long before multi-core chips trickled down from the enterprise to the desktop. And even a user who only launches sequential programs benefits from having multiple cores: OS daemons and drivers, for example, can be scheduled on the other core(s). And many modern support libraries (especially the OS-bundled graphics frameworks) are inherently parallel even if the application doesn’t think it is. Pop up the Mac OS X Activity Monitor, or the Windows Task Manager, and see how many processes have a single thread. Sure, there ARE some; but they are a tiny minority of the processes running on any typical system.

I don’t think any of this is going to go away. After all, it started long before multi-core chips, for completely different reasons, and only ACCIDENTALLY allows trivial exploitation of them.

What will change are the methodologies and techniques by which the capability is exploited at the user/application level. We’ve seen widespread adoption of basic threading API technologies; and even though there are indeed still a lot of programmers hanging back in fear of the unknown, the front wave is pressing on beyond that simplicity into generalized lock-free techniques and alternate language metaphors.

“Always in flux, the future is.” I don’t think the genie is going back into the bottle anytime soon; but the thing about genies is that almost nothing is impossible… but a lot is “inconceivable”.

Michael: From time to time a heated discussion evolves on the net regarding the issue of whether shared-memory programming or message passing is the superior way to do parallel programming. What is your opinion on this?

David: That people like to argue. This is good, because the tradeoffs depend greatly on the specific application AND on the current shifts in technology. Both are tools any programmer should have in the toolbox. You’ll use both, in various combinations and alone, for various jobs. Which jobs go best with which tool will change over time. If I had to choose whether to have oxygen or water, I’d say the choice is moot: the mere fact that I have to make it means I’m in a great deal of trouble. It may be great fun to argue over which is “superior”, but in practice we do, and will, depend on both to such a degree that any distinction becomes academic. Academic arguments are great fun, because they can never be factually or permanently resolved; but unless you have a vested interest in selling one or the other (or just love to argue), my advice is to simply stand back and observe, and learn what you can from both. Use what you can from each, and laugh at all those who claim “the one true path”.

Michael: From your point of view, what are the most exciting developments /innovations regarding parallel programming going on presently or during the last few years?

David: Generalization of lock-free technology has come a long way, to the point where it’s feasible to consider incorporating some basic enabling techniques into language standards. (Though whether this will actually happen is a different matter.) Calmly rational synthesis of lock-based and lock-free mechanisms has a lot of promise for a wide range of parallel workloads.
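
To make that concrete, here is a minimal sketch of the kind of basic enabling technique meant here: a lock-free stack push built on compare-and-swap. It assumes a C11 compiler; the C11/C++11 atomics that appeared after this interview standardized exactly this sort of primitive. The pop side is omitted because it runs into the ABA problem, which is where the hard part of “generalization” lives.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int          value;
    struct node *next;
};

static _Atomic(struct node *) top = NULL;

void push(int value)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->next  = atomic_load(&top);
    /* If another thread moved `top` between our load and the CAS,
     * the CAS fails, updates n->next to the new head, and we retry.
     * No thread ever blocks waiting for another. */
    while (!atomic_compare_exchange_weak(&top, &n->next, n))
        ;
}
```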

Transactional memory has a lot of promise to simplify synchronization and visibility issues across all models of shared memory multiprocessing; but (and?) it’s still young.

Michael: Where do you see the future of parallel programming? Are there any “silver bullets” on the horizon?

David: Sure; but the silver bullets are coming AT you, from all directions, and your job is to dodge or absorb. No, there is no one answer, and the future lies in all possible directions. We’re likely to diverge on API and language paths, and argue for years or decades on which is better and why, before eventually realizing the common principles and formalizing them into something that at least comes close to a unification. But by then the physical technology will have changed (quantum computing?) and it’ll all become academic and open ended again…

Michael: One of the most pressing issues presently appears to be that parallel programming is still harder and less productive than its sequential counterpart. Is there any way to change this in your opinion?

David: I don’t even think it’s true. Large serial programs are hard to build, hard to debug and test, and hard to maintain. The “state of the practice” in parallel programming is a little behind only because it’s more recently come into widespread use, and true “state of the art” practices and technologies take a long time to work their way into actual practice. But the gap isn’t nearly as wide as a lot of people would like to believe. You really just need to cast off your illogical and unnatural preconceptions that only one thing happens at a time (no child would last long in the real world with such ideas!) and embrace asynchrony.

So much for the first part of this interview. Without further ado, here is the second part about POSIX Threads:

Michael: What are the specific strengths and weaknesses of POSIX Threads as compared to other parallel programming systems? Where would you like it to improve?

David: The principal strength is that it’s there. We have a nearly universal fine-grained parallelism API, with the near-exception of the always standards-shy Microsoft. (And the semantics and even syntax of Win32 threads are heavily based on the same ancestry as POSIX threads, with strong influences from UNIX and POSIX. Except for its dependency on archaic “events”, moving back and forth isn’t that difficult.)
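
As an illustration of how close the two models are, here is a hedged sketch of the usual translation of a Win32 “event” into POSIX terms: a condition variable paired with an explicit predicate, because POSIX condition variables carry no state of their own. The names event_set and event_wait are illustrative, not part of either API.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond     = PTHREAD_COND_INITIALIZER;
static bool            signaled = false;  /* the "event" state */

void event_set(void)              /* roughly SetEvent() */
{
    pthread_mutex_lock(&lock);
    signaled = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

void event_wait(void)             /* roughly WaitForSingleObject() */
{
    pthread_mutex_lock(&lock);
    while (!signaled)             /* loop: wakeups may be spurious */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}
```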

Michael: If you could start again from scratch in designing POSIX Threads, what would you do differently?

David: Probably not much, really. Some things might be easier now, but that’s only because POSIX threads exists already and has raised the degree of knowledge and experience across the industry.

But if you go deeper, the biggest flaws in POSIX threads really aren’t in POSIX threads at all, but in the fact that we relied on existing language syntax and semantics to support the API. We did this deliberately, and had no practical choice at the time, because language designers wouldn’t adapt until thread technology had been proven in the real world. THAT needs to be addressed, and still can be. A real parallel-safe memory model for C++ and C, for example, is long overdue. ANSI C needs to borrow some simplistic form of exception technology from C++, even if it’s only based on Microsoft’s minimalist try/except syntax. For consistency and reliability, POSIX cancellation needed to be an exception, as it was in the original CMA. And truly portable POSIX thread code would be a lot easier with a real parallel-safe C memory model.
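
To see why cancellation-as-exception matters, here is a minimal sketch of the workaround that C’s lack of exceptions forces on POSIX cancellation today: cleanup handlers registered by hand, doing the unwinding that a cancellation exception would have done automatically.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void unlock_on_cancel(void *arg)
{
    pthread_mutex_unlock(arg);
}

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);
    /* If the thread is cancelled at the cancellation point below,
     * the pushed handler releases the mutex; otherwise it would stay
     * locked forever. push and pop must pair lexically. */
    pthread_cleanup_push(unlock_on_cancel, &lock);
    pthread_testcancel();      /* an explicit cancellation point */
    pthread_cleanup_pop(1);    /* 1 = also run the handler (unlock) */
    return NULL;
}
```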

If such changes happen, they’ll enable (in fact, require) the POSIX thread standard to become more like what it should have been in the first place.

Michael: Are there any specific tools you would like to recommend to people who want to program in POSIX Threads? IDEs? Editors? Debuggers? Profilers? Correctness Tools? Any others?

David: Tru64’s ladebug and Visual Threads were awesome tools, and ATOM allowed constructing simple analyzers. Nobody else really has anything that comprehensive, despite various gdb add-ons. (Then again, Intel has ladebug… but hasn’t really done anything with it.) TotalView is a great portable thread debugging environment, although the GUI is a bit “opaque”.

There is a wide range of others, but I’ve never really kept up with all of the development, and the landscape changes awfully fast.

The biggest problem people see is asynchronous memory corruption. It’s really hard (and expensive) to monitor and detect. (Though Visual Threads had hooks to help, at a big cost.) Hardware monitoring technologies to express and detect “rule violations” may be essential here. There might even be work on it, though I haven’t seen any. Transactional memory is, in a sense, an attempt to get around all of this; but if it is ever to gain widespread deployment, that day is a long way off.
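
For the curious, here is a minimal sketch of the simplest such corruption: an unsynchronized read-modify-write. The final count is almost always less than 2000000, yet nothing faults at the moment of the damage, which is exactly what makes these bugs so expensive to detect.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;       /* shared and unprotected: the bug */

static void *hammer(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;             /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, hammer, NULL);
    pthread_create(&b, NULL, hammer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("expected 2000000, got %ld\n", counter);
    return 0;
}
```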

Michael: Please share a little advice on how to get started for programmers new to POSIX Threads! How would you start? Any books? Tutorials? Resources on the internet? And where is the best place to ask questions about it?

David: Beware of the net. A lot of thread references are ancient and poorly (or un-) maintained. Many are simply wrong, others wildly misleading. Read critically! There are a lot of good books. Mine, of course. Bil Lewis has a thrice-recycled book on Solaris UI threads, POSIX threads, and Java threads. I’ve seen a couple that look OK even focusing on Windows threading; though that’s a whole area that doesn’t interest me much. 😉

The comp.programming.threads newsgroup is a great resource for anyone involved in threads. The discussions can be fascinating and educational. (I’ve followed it fairly regularly and contributed frequently, though between an extremely hectic schedule and the unreliability of the HP news server I’d been using, I haven’t “been there” much in the past few months.)

Grab a system and experiment as soon as you know anything at all. See what happens, think about why, and figure out experiments to increase your knowledge. “The Scientific Method” applies to engineering, too; and hard-won personal experience is often far more valuable than something you’ve read.
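
In that spirit, a first experiment might be as small as this sketch. Compile with cc -pthread and run it a few times; the varying order of the output is the first lesson in asynchrony.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread prints a greeting; the interleaving differs per run. */
static void *hello(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, hello, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```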

Michael: What is the worst mistake you have ever encountered in a POSIX Threads program?

David: No, no; too many contenders. I couldn’t choose. Nor, I’m afraid, can I really justify the time it’d take to write up even a few anecdotes. It’s easy to do stupid things when programming. It’s easier when you don’t fully understand what you’re doing.

Then again, as the old saying goes, “wisdom comes from experience, and experience comes from lack of wisdom”.

Michael: What a very fine quote to end an interview with, thank you very much for your answers!
