Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Ten Questions with Joe Duffy about Parallel Programming and .NET Threads

This is the fifth and last post in my Interviewing the Parallel Programming Idols series. A lot of other potential interview partners have been suggested by my readers, and I may get back to doing another round of this series after a while. My interview partner today is Joe Duffy, who used to be the concurrency program manager on the Common Language Runtime team at Microsoft. He is now back to being a developer and works on parallel libraries, infrastructure, and programming models in Microsoft’s Developer Division. This puts him in a unique position to talk about threading and .NET, especially since he also has a widely-known blog on the topic and has authored a book on .NET: Professional .NET Framework 2.0. I am therefore very grateful that he agreed to answer my questions!

Five questions about parallel programming in general first:

Michael: As we are entering the many-core era, do you think parallel computing is finally going to be embraced by the mainstream? Or is this just another phase, and soon the only people interested in parallel programming will be the members of the high-performance community (once again)?

Joe: It’s difficult to predict how, where, and when parallel programming will become mainstream. More performance provides a lot of value for many (but not all) people, and in fact helped to drive PC sales (and indirectly, commercial software sales) during the glory days. So, sure, parallelism will absolutely be mainstream. But note that “mainstream” may not mean that every programmer needs to think about fine-grained parallelism when s/he writes application code.

In some sense, however, most software developers wouldn’t be comfortable “embracing” parallelism: it is quite difficult to program parallel computers today (with the technologies most PC developers use, at least), and most people don’t even know what to do with the amount of processing power that the many-core era is bringing to everyday desktop machines anyway. A smaller number of early adopters will learn how to exploit mainstream parallel computers to add customer value, make a lot of money, show the rest of us what great things can be done, and then we will likely see a breakthrough. Abstracting as much of it away as possible will be key. We’re not quite at that point yet.

Michael: From time to time a heated discussion evolves on the net regarding the issue of whether shared-memory programming or message passing is the superior way to do parallel programming. What is your opinion on this?

Joe: Shared-memory vs. message-passing is a red herring. What is the difference between sending and receiving a message (as in message-passing systems) and forking and joining on a data parallel computation (as in a shared-memory system)? Logically they are in fact quite similar, aside from the specific logistics of data management, communication, and access. In terms of parallelism, however, you still have to worry about dependencies, causality, composability, fault tolerance, and so on. Shared memory doesn’t magically do away with these very important (and difficult) issues.
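
To make that comparison concrete, here is a minimal C# sketch (my illustration, not Joe’s code) of a fork/join data-parallel sum in the shared-memory style. The join plays exactly the role a receive would play in a message-passing version, where the partial sums would travel over a queue instead of through shared fields; the dependency structure is identical either way.

```csharp
using System;
using System.Threading;

// Shared-memory style: fork two threads over halves of an array, join,
// then combine. Logically the same shape as send/receive of partial sums.
class ForkJoinSum
{
    static int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };
    static int left, right;   // each field written by exactly one thread

    static void Main()
    {
        Thread t1 = new Thread(delegate() { left = Sum(0, 4); });
        Thread t2 = new Thread(delegate() { right = Sum(4, 8); });
        t1.Start(); t2.Start();   // "fork" -- analogous to sending work out
        t1.Join(); t2.Join();     // "join" -- analogous to receiving results
        Console.WriteLine(left + right);
    }

    static int Sum(int from, int to)
    {
        int s = 0;
        for (int i = from; i < to; i++) s += data[i];
        return s;
    }
}
```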

The important question to me is: how do we unify the two worlds? How do we incorporate the atomicity and isolation properties that messaging often gives you, in a first-class way, into the programming constructs that most developers use? (Or, alternatively, what programming languages should people use for parallel programming if we can’t change the ones they are already using?) Message-passing systems are special in the sense that they are often built with these ideas in mind. I fully believe that immutability and isolation are two things that all modern type systems should support (and encourage the use of!) in a first-class way, even for imperative, shared-memory, C-style languages. Few of them do (yet).
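
C# illustrates the gap Joe describes: immutability there is shallow and opt-in, via readonly, rather than a first-class type-system guarantee. A minimal sketch of the convention (my example, with illustrative names):

```csharp
using System;
using System.Threading;

// Immutability by convention: readonly only freezes the fields after
// construction; nothing in the language enforces *deep* immutability.
sealed class ImmutablePoint
{
    public readonly int X;
    public readonly int Y;

    public ImmutablePoint(int x, int y) { X = x; Y = y; }

    // "Mutation" returns a fresh instance instead of changing this one.
    public ImmutablePoint WithX(int x) { return new ImmutablePoint(x, Y); }
}

class Demo
{
    static void Main()
    {
        ImmutablePoint p = new ImmutablePoint(1, 2);
        // Safe to hand to any number of threads without locks: nobody can
        // change p, so there is nothing to race on.
        Thread t = new Thread(delegate() { Console.WriteLine(p.X + "," + p.Y); });
        t.Start();
        t.Join();
    }
}
```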

Michael: From your point of view, what are the most exciting developments/innovations regarding parallel programming going on presently or during the last few years?

Joe: There’s been a steady explosion of work on program verification techniques, new synchronization mechanisms (like transactional memory), and new ways of expressing fine-grained parallelism (like data parallelism). Aside from their transactional memory support, the Fortress, X10, and Chapel projects are also quite interesting. The Haskell community is doing a lot of neat stuff with data parallelism and NESL-like nesting, so keep an eye out there too. These are all very exciting to me.

Michael: Where do you see the future of parallel programming? Are there any “silver bullets” on the horizon?

Joe: I don’t believe there will be any parallel programming silver bullets. Hardware parallelism is a great example of how we can combine various ILP techniques (wide-issue superscalars, branch prediction, pipelining, etc.) to give a respectable parallel speedup, but the magic is in the combination of these things. The story will be similar for software, I think. We’ll have high-level structure in our programs that permits large independent chunks to run in parallel (probably isolated), moderate-level structure that permits logically independent activities to run in parallel, and very fine-grained parallelism (data- and task-driven) to make the actual statements that comprise the activities run in parallel. These will all come from different abstractions: from libraries to language extensions to runtime smarts.

Michael: One of the most pressing issues presently appears to be that parallel programming is still harder and less productive than its sequential counterpart. Is there any way to change this in your opinion?

Joe: It will always be harder for some select programmers (i.e., the ones providing the new, useful capabilities), but I believe we’ll make a lot of progress toward making it less hard for most programmers through abstraction. There are simply more design and implementation issues to worry about when parallelism enters the picture, and program state machines become dramatically more complex. Programmers actually need to understand the dependencies in their code, and to architect for looser coupling and less sharing. Things will get better with time.

So much for the first part of this interview. Without further ado, here is the second part about .NET threads:

Michael: What are the specific strengths and weaknesses of .NET threads as compared to other parallel programming systems? Where would you like it to improve?

Joe: The main strength of .NET threading is that there are many useful basic building blocks available from which to create other abstractions. The main weakness is that many of the useful abstractions you’d like to have aren’t available “out of the box”.
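
As a concrete example of turning those basic building blocks into a higher-level abstraction, here is a naive parallel-for assembled from ThreadPool, Interlocked, and an event. This is my own sketch with illustrative names, not an API Joe describes; it deliberately omits exception handling and smart partitioning.

```csharp
using System;
using System.Threading;

// A naive data-parallel loop built from .NET 2.0-era primitives.
static class Naive
{
    public delegate void Body(int i);

    public static void ParallelFor(int from, int to, Body body)
    {
        int pending = to - from;      // countdown of outstanding iterations
        if (pending <= 0) return;
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            for (int i = from; i < to; i++)
            {
                int capture = i;      // capture the per-iteration value, not the loop variable
                ThreadPool.QueueUserWorkItem(delegate
                {
                    body(capture);
                    // Last iteration to finish signals the waiting caller.
                    if (Interlocked.Decrement(ref pending) == 0) done.Set();
                });
            }
            done.WaitOne();           // block until all iterations complete
        }
    }
}

class Demo
{
    static void Main()
    {
        Naive.ParallelFor(0, 4, delegate(int i) { Console.WriteLine("iteration " + i); });
    }
}
```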

Michael: If you could start again from scratch in designing .NET threads, what would you do differently?

Joe: It’s hard to say what we could have done differently, but there are a lot of hard issues in the current .NET Framework and Win32 APIs that inhibit parallelism. We’ve grown up in a shared-memory, mutable world, and a lot of subtle dangers have arisen as a result. A select few people deeply understand these dangers and can teach others (or build abstractions that hide them), but I believe we’d have been better off if parallelism had been a first-class concern from day one. The reality of our single-CPU-minded world for the past 20+ years simply made that infeasible to anticipate.
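
One example of the kind of subtle danger Joe means (my illustration, not his): a completion flag read in a loop with no synchronization at all. Without volatile or a lock, the JIT is free to hoist the read out of the loop, so the worker may never observe the writer’s update.

```csharp
using System;
using System.Threading;

class StaleFlag
{
    static bool _stop;                 // BUG: not volatile, no lock around it
    // static volatile bool _stop;     // one fix: forces a fresh read each time

    static void Main()
    {
        Thread worker = new Thread(delegate()
        {
            // An optimizing JIT may read _stop once and spin forever.
            while (!_stop) { /* spin */ }
            Console.WriteLine("worker saw the flag");
        });
        worker.Start();
        Thread.Sleep(100);
        _stop = true;                  // this write may never become visible above
        worker.Join();
    }
}
```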

Michael: Are there any specific tools you would like to recommend to people who want to program in .NET threads? IDEs? Editors? Debuggers? Profilers? Correctness Tools? Any others?

Joe: Visual Studio is pretty good at letting you step through concurrent programs, and WinDbg exposes plenty of useful raw data (like OS threading data structures). The performance profiler that comes with Visual Studio 2005 is also useful for analyzing where the cycles went and for examining various hardware performance counters, like L2 cache misses. Intel’s VTune and related threading tools are also quite useful for analyzing tricky thread interactions, though there are limitations when working with .NET.

Michael: Please share a little advice on how to get started for programmers new to .NET threads! How would you start? Any books? Tutorials? Resources on the internet? And where is the best place to ask questions about it?

Joe: I’d recommend reading my blog, Chris Brumme’s blog, and Vance Morrison’s blog. There is also a great index on the MSDN website of past MSDN Magazine articles on .NET concurrency. Also keep an eye out for my forthcoming book, Concurrent Programming on Windows.

Michael: What is the worst mistake you have ever encountered in a .NET threads program?

Joe: .NET threading is a very low-level and raw way to program. People have difficulties with races, deadlocks, and other reliability-related things, as well as unexpected reentrancy, GUIs, thread affinity, and many other problematic areas. I wrote an article on my blog reviewing many of the most common mistakes library programmers make, which may be interesting and relevant.
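
For readers who haven’t been bitten yet, here is a minimal sketch (mine, not taken from Joe’s article) of the classic lock-ordering deadlock: two threads acquiring the same two locks in opposite orders.

```csharp
using System;
using System.Threading;

class LockOrdering
{
    static readonly object A = new object();
    static readonly object B = new object();

    static void Main()
    {
        Thread t1 = new Thread(delegate()
        {
            lock (A) { Thread.Sleep(50); lock (B) { } }   // takes A, then B
        });
        Thread t2 = new Thread(delegate()
        {
            lock (B) { Thread.Sleep(50); lock (A) { } }   // takes B, then A: deadlock
        });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();          // almost certainly hangs here forever
        Console.WriteLine("rarely reached");
        // The standard fix: agree on one global lock order (always A before B).
    }
}
```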

Michael: Thank you very much for your answers!
