A friend who is a writer told me about the time he met someone at a party and she said she’d love to try his career. “I’ve always thought I’d give it a go. I think it’s pretty easy, really.” He asked her what she did for a living and she said she was a teacher. “I reckon I could do that,” he said. “It’s only talking to a bunch of kids.”
Sometimes it’s hard to get other people to understand how difficult your work is, and in the case of parallel programming that’s particularly true. Aater Suleman has written an article on what makes parallel programming hard, partly to educate hardware designers, and the story has ignited some debate online.
Suleman points out the challenges presented by artificial inter-task dependencies, which result from code being written with a single-threading mindset. One commenter on his post noted that the most popular languages force us to write with artificial dependencies, because you have to write either x++; y++ or y++; x++ and thereby imply a sequence even when none is needed. Suleman says that removing artificial dependencies can be prohibitively hard work, and risks breaking the program if real dependencies are removed by mistake. His story also covers the issues of debugging non-deterministic programs, the difficulty of optimising for performance, and the challenge of making code scale with the number of cores. The author cites the example of a puzzle-solving program he wrote that actually slowed down when additional cores were added, which is a good illustration of how hard it can be to future-proof parallel code.
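To make the commenter’s point concrete, here is a minimal Python sketch of my own (the function names are invented for illustration, not taken from Suleman’s article). Two computations with no real data dependency must still be written in some textual order, yet either order, or running them at the same time, gives the same result:

```python
# Neither computation reads anything the other writes, so the ordering
# in the serial version below is purely artificial.
from concurrent.futures import ThreadPoolExecutor

def expensive_x():
    return sum(i * i for i in range(100_000))

def expensive_y():
    return sum(i * 3 for i in range(100_000))

# Serial version: an ordering imposed by the source text, not the problem.
x = expensive_x()
y = expensive_y()

# Parallel version: the scheduler is free to overlap the two calls.
with ThreadPoolExecutor(max_workers=2) as pool:
    future_x = pool.submit(expensive_x)
    future_y = pool.submit(expensive_y)
    px, py = future_x.result(), future_y.result()

assert (px, py) == (x, y)  # same answer regardless of execution order
```

The hard part Suleman describes is that in real code the artificial and the real dependencies are tangled together, and telling them apart is the programmer’s job.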
When the story was posted on Slashdot, it provoked 188 comments, which suggests there’s a lot of interest in the subject, but also a lot of disagreement about it. One of my favourite comments used an analogy of getting 50 kids to sort 50 toy cars, or 50 footballers to line up in order of height, and noted that the difficult thing is that it all has to happen at once. One commenter saw the article as supporting the argument that you’re better off recoding from scratch than trying to modify serial code for multithreaded systems. Another said that none of this has been news for 30 years, but the fact that we’re still struggling with these problems, and that the article has provoked such a response, strongly suggests otherwise to me: it’s still news in the sense of being a current and serious problem.
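The “all has to happen at once” point is easy to demonstrate with a toy sketch of my own (not from the thread): several threads incrementing a shared counter can lose updates unless access is coordinated, and because the failure depends on thread timing it comes and goes between runs, which is also why such bugs are so hard to debug.

```python
import threading

N_THREADS, N_INCREMENTS = 8, 10_000

unsafe_total = 0   # incremented with no coordination
safe_total = 0     # incremented under a lock
lock = threading.Lock()

def unsafe_worker():
    global unsafe_total
    for _ in range(N_INCREMENTS):
        unsafe_total += 1  # read-modify-write: interleaved updates can be lost

def safe_worker():
    global safe_total
    for _ in range(N_INCREMENTS):
        with lock:         # the lock serialises each read-modify-write
            safe_total += 1

def run(worker):
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run(unsafe_worker)
run(safe_worker)

# safe_total is always N_THREADS * N_INCREMENTS; unsafe_total may fall
# short on some runs and some Python builds, and that run-to-run
# variation is exactly the non-determinism Suleman warns about.
```

Fifty footballers lining up by height are doing the same thing: each one is reading and updating a shared arrangement at the same time as everyone else.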
Why do you think parallel programming is hard? Do you even agree with the premise that it is?