How to judge the best parallel programming paradigm?

Anyone who’s passionate about music has a discussion from time to time where they try to convince someone else that their own favourite album is the greatest album ever. Not just as a matter of opinion, but as fact. They’ll argue the production is superior to anything else, or they’ll emphasise the songwriting or the performance. It ends up a bit like playing a game of Top Trumps where the players aren’t comparing the same item on the card: it’s an argument that can never be won, because if you say nothing beats the production on Tubular Bells, someone else will veto it on the basis that it doesn’t say anything, and surely the lyrics on Dark Side of the Moon earn it the accolade of greatest album ever.

When I was reading Eric Merritt’s overview of concurrency, I found myself wondering how you can judge the best parallel programming paradigm. In his piece, he talks about the advantages and disadvantages of shared-memory communication. Software Transactional Memory (STM) is optimistic, so there is no waiting for resources, but that comes at the cost of performance because the transaction subsystem adds overhead, and in some cases there might be an impact on memory too. Futures and promises are conceptually simple and can help to mitigate the problems of shared memory but, as with STM, they still involve shared memory, Merritt notes. He says message passing is the best solution, “where best is defined as the most conceptually simple and scalable”.
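
To make Merritt’s contrast concrete, here is a minimal sketch of the message-passing style he favours. It is written in Go purely as an illustration (his discussion isn’t tied to any particular language, and the worker/summing example is mine): each goroutine owns its portion of the data and communicates only by sending results over a channel, so there is no shared mutable state to protect with locks or transactions.

    package main

    import "fmt"

    // Each worker owns its slice of the data and sends a partial result
    // back over the channel; nothing is shared, so no locking is needed.
    func sum(nums []int, results chan<- int) {
        total := 0
        for _, n := range nums {
            total += n
        }
        results <- total
    }

    func main() {
        data := []int{1, 2, 3, 4, 5, 6, 7, 8}
        results := make(chan int)

        go sum(data[:4], results) // first half
        go sum(data[4:], results) // second half

        // Combine the two partial sums received from the workers.
        fmt.Println(<-results + <-results)
    }

A shared-memory version of the same program would have both goroutines adding to a single variable under a mutex (or inside an STM transaction); the message-passing version trades that coordination for the conceptual simplicity Merritt is pointing at.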

That raises some interesting questions. At this stage of the multicore industry’s development (and perhaps forever), the choice of technology is going to require a compromise. So where do you place the emphasis? Here are some ideas for how you could judge or differentiate between programming languages, tools and paradigms:

  • Robustness: in some ways this is a given, because I can’t imagine somebody using a solution that isn’t robust. But it’s on the list because it will be a key factor in eliminating candidate tools before the remainder are differentiated using the other criteria in this list.
  • Productivity: how quickly can you code?
  • Performance: how quickly will the software run?
  • Memory: what impact will it have on your available memory?
  • Scalability: how easily can software scale in line with the number of cores?
  • User base: how big is the community of support for the tool or technology? This might be a good indicator of how likely it is to survive.
  • Similarity: how similar is the tool or technology to another one that you already know how to use?
  • Trust: how confident are you in the tool or technology and its ability to solve the problem? This covers the unquantifiable ‘gut instinct’ that underpins many decisions which are only truly rationalised later.
  • Cost: what is the total cost of the software and consequential costs such as training or hardware upgrades?
  • Simplicity: how easy is it to understand the technology, and to communicate it to others?
  • Elegance: how beautiful is the solution? This one is hard to explain, but it’s the idea that the solution should be clean, logical and inspired. You know elegant code or technology when you see it.

There might be other factors too (please leave a comment if any occur to you). Sometimes tools or technologies will tick different boxes, so how do you compare them? Which of these factors will be most influential in the long run?

In the short term, I think elegance and trust will rank highly. Parallel programming will be restricted to advanced programmers over that period, and they tend to want to work with tools that offer conceptually great solutions and that they are confident are worth their investment of time. They might be pressured to emphasise similarity and to treat productivity as a high priority, but in an ideal world, I think they will recommend tools that they believe offer the best technical solution.

In the longer term, as parallel programming becomes much more commercial and much less experimental, I think we’ll see cost, productivity, similarity and simplicity become more important. A business manager will have different priorities to a programmer, and those priorities could be reflected in the tools they require the business to use.

What do you think will become the most influential factors? As technologies evolve, only the strongest will survive. But it all depends on how you define strength.

Oh, by the way. The greatest album ever is Wish You Were Here. Fact ;-)

9 Responses

  1. Have you guys tried parallel programming using XMOS? We would love to hear what you think – let us know.

  2. You forgot expressiveness/flexibility: how likely is it that you will have to rewrite the whole codebase in a new technology after the addition of a new requirement or a change in the algorithm? That would knock out such inflexible technologies as Ct, Erlang, etc.

  3. #0) It has to work on NoC/SoC manycore architectures, a new paradigm where cache philosophy and size must not melt the silicon. These new manycores are in the lab today; which of them will see the light of day, be the solid foundation upon which to build software and systems, and succeed in competitive markets? Maybe this is obvious? Axiomatic?

  4. Good article.

  5. I would order the requirements as:
    1. Robustness
    (one really does not want a non-robust technology)
    2. Performance
    3. Scalability
    (one really does not want a slow technology; otherwise, go single-threaded)
    4. Flexibility
    (one really does not want to switch technology with every requirements change)
    5. Productivity

  6. Thank you all for these great comments.

    Dmitriy – Can’t believe I forgot flexibility. Clearly, this is going to be essential, and it has a significant impact on productivity and cost too. I was interested to see that you ranked performance so highly, which suggests that you’d accept a less flexible and more cumbersome tool if it delivered faster code in the end. I’m not sure how many other coders would be so committed to runtime performance!

    Art – Thank you for your comment. I think the scalability requirement covers this to some extent, but it’s good to mention the upcoming manycore architectures specifically, because what we’re really concerned about is how fluid the change will be as we go to that architecture. So perhaps this is really about ‘future proofing’ today’s investment, so that we don’t have to start over with the new architectures that emerge.

  7. [...] I blogged about how developers can choose between different parallel programming technologies, tools and paradigms. I guess two of the possible answers are “it depends on the program” and “all of [...]

  8. > I was interested to see that you ranked performance so highly, which suggests that you’d accept a less flexible and more cumbersome tool if it delivered faster code in the end.

    Yes, I am committed to performance. It does not necessarily penalize flexibility; it penalizes something else – you know, there is always a choice. My current choice is C/C++, and it’s A+ in:
    – Robustness
    – Performance
    – Scalability
    – Memory
    – Flexibility
    – User base
    – Similarity
    – Trust
    – Cost

    and F in:
    Productivity
    Simplicity

    and, I think, subjective in Elegance.

    I think that the whole thing is senseless without performance/scalability. Maybe you know the long-running Wide Finder 2 multicore programming challenge:
    http://wikis.sun.com/display/WideFinder/Wide+Finder+Home
    Before implementing the concurrent version, I deliberately implemented a reasonably optimized *single-threaded* version; you can see it here, called "Narrow-Finder":
    http://wikis.sun.com/display/WideFinder/Results
    You know, everything below it is of quite questionable status – you were forced to plunge into concurrency, but in the end you are slower than what a single core can do…

  9. Hi Dmitriy

    Thanks for your follow-up comment. I absolutely agree that there’s not much point in parallelising for the sake of it. Some apps (particularly those where the bottleneck is I/O) show little (if any) speedup today, so there’s a cost-benefit analysis to be done on the parallel programming activity if it requires additional work. Sometimes that’s pretty rough and ready, and amounts to the programmer deciding it just isn’t worth it.
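
    To put a rough number on that intuition – this is just a back-of-the-envelope sketch using Amdahl’s law, not a measurement of any particular app – if only a small fraction of the runtime is parallelisable because the rest is waiting on I/O, even a generous core count buys very little:

        package main

        import "fmt"

        // amdahl returns the theoretical speedup when a fraction `parallel`
        // of the runtime can be spread across n cores (Amdahl's law).
        func amdahl(parallel float64, n int) float64 {
            return 1.0 / ((1.0 - parallel) + parallel/float64(n))
        }

        func main() {
            // An I/O-bound app: say only 20% of the time is parallelisable compute.
            fmt.Printf("20%% parallel, 8 cores:  %.2fx\n", amdahl(0.2, 8))  // ~1.21x
            fmt.Printf("20%% parallel, 64 cores: %.2fx\n", amdahl(0.2, 64)) // ~1.25x

            // A compute-bound app, for comparison.
            fmt.Printf("95%% parallel, 8 cores:  %.2fx\n", amdahl(0.95, 8)) // ~5.93x
        }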

    I think you’ve articulated one of the key sentiments in the industry: C++ ticks most of the boxes, so there’s a reluctance to start over with something else just to tick the last few boxes, even if they’re important.
