Did you know there’s a programming language that’s already proven itself in a huge number of sensitive applications and which inherently supports task-based programming? It’s not something that’s just come out of the labs in Silicon Valley. It’s been around for decades, and it’s called Ada. No! Don’t run away. I’ve got some interesting research to share that might persuade you it’s worth a second look.
In a previous blog post about Ada, I asked whether anybody was using Ada for multicore programming, and Karl Nyberg got in touch to share his experience. He published research in 2007 that married Ada’s task-based programming model with multicore hardware (8 cores, with 32 virtual CPUs). For the full details, read his research paper here (PDF).
He compared the serial and task-based performance of two applications: one for word counting and another for calculating checksums. The serial versions ran the same algorithms but used a loop instead of tasks. He recorded the programs’ performance using various numbers of cores (from 1 to 32), which gives some insight into scalability and into any overhead inherent in the tasking model.
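Nyberg’s Ada source isn’t reproduced here, but the shape of the comparison is easy to sketch. Here’s a minimal Python analogue (hypothetical, not from the paper): the same word-counting work done once with a plain loop and once with a pool of worker tasks.

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk: str) -> int:
    """Count whitespace-separated words in one chunk of text."""
    return len(chunk.split())

def serial_count(chunks):
    # Serial baseline: a plain loop over the chunks.
    total = 0
    for chunk in chunks:
        total += count_words(chunk)
    return total

def parallel_count(chunks, workers=4):
    # Task-based version: each chunk is handed to a worker.
    # (In CPython, threads illustrate the structure; true CPU
    # parallelism would want processes, or a language like Ada
    # whose tasks map directly onto cores.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_words, chunks))

chunks = ["the quick brown fox", "jumps over", "the lazy dog"]
assert serial_count(chunks) == parallel_count(chunks) == 9
```

Both versions produce the same count; the interesting question, as in the paper, is how elapsed time changes as the number of workers grows.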
On the word count algorithm, the elapsed time decreased as the number of tasks increased. The speedup wasn’t quite linear, because of the cost of file I/O. The checksum program performed fewer I/O operations, so its elapsed time fell more nearly linearly as more tasks were added. Even so, Nyberg said, tasking overhead limited total throughput to about 50%.
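The paper reports measured numbers; as a back-of-the-envelope way to see why a serial component such as file I/O caps scaling, Amdahl’s law is the usual model. The fractions below are made up for illustration and are not Nyberg’s measurements.

```python
def amdahl_speedup(parallel_fraction: float, n_tasks: int) -> float:
    # Amdahl's law: overall speedup is limited by the fraction
    # of the work that stays serial (here, think file I/O).
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_tasks)

# A job that is 90% parallelisable flattens out well below 32x,
# however many tasks are thrown at it.
for n in (1, 4, 32):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This is consistent with the shape of Nyberg’s results: the I/O-heavy word counter bends away from linear sooner than the checksum program, which has a smaller serial fraction.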
The research was published a couple of years ago, so it’s interesting to think about what’s changed since 2007, and what hasn’t. The paper comes from a time when multicore chips still commanded a price premium that customers might be reluctant to pay. Now they come as standard on the desktop PC, although machines with more than four cores are still relatively rare, and additional cores still carry a premium. For high-performance computing, massively multicore hardware still requires a significant investment, although it’s much easier to get low-cost systems for modelling multicore hardware.
In his 2007 presentation, Nyberg speculated that it would take years for industry to leverage multicore hardware, which turned out to be spot on. We’re still working on that problem, and while the tools and techniques are getting better, multicore programming isn’t rapidly becoming more accessible to the vast majority of programmers. Nyberg noted that Ada was ready in 2007, and it’s still ready now. So why aren’t people using it?
Nyberg believes the reasons are largely cultural: Ada is still tainted by its defence background and lacks cool, he says. It also works in a smaller universe: Ada might be used on hundreds of planes, but Java runs on 5 billion devices, he notes.
The lack of cool is something that probably bothers programmers more than they’d be willing to admit. That isn’t to say they use languages because they’re fashionable; it’s more that they like to be part of a community with plenty of buzz and innovation, and Ada doesn’t offer that as well as other platforms do. And while Ada has tasking built into the core language, adopting it would require many programmers to learn a new language from scratch. Many will find it easier to work with bolt-on parallel programming solutions that extend a language or toolset they already know.
I wonder, though, whether programmers are missing an opportunity here. What do you think?