Thu 19 January 2012
Concurrent programming idioms are on a spectrum of complexity.
Obviously, writing code that isn't concurrent in any way is the easiest. If you never introduce any concurrent tasks, you never have to debug any problems with things running in an unexpected order. But in today's connected world, concurrency of some sort is usually a requirement. Each additional point where concurrency can happen introduces a bit of cognitive overhead: another place where you need to think about what might happen. So as a codebase accumulates more of these points, it becomes more difficult to keep track of them all, and more challenging to understand the subtle nuances of parallel execution.
So, at the simplest end of the spectrum, you have callback-based concurrency. Every time you have to proceed to the next step of a concurrent operation, you have to create a new function and new scope, and pass it to the operation so that the appropriate function will be called when the operation completes. This is very explicit and reasonably straightforward to debug and test, but it can be tedious and overly verbose, especially in Python where you have to think up a new function name and argument list for every step. The extra lines for the function definition and return statement can be an impediment to quickly understanding the code's intentions, so what facilitates understanding of the concurrency model can inhibit understanding of the code's actual logical purpose, depending on how much concurrent stuff it has to do. Twisted's Deferreds make this a bit easier than raw callback-passing without fundamentally changing the execution dynamic, so they're at this same level.
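To make the cost concrete, here's a minimal sketch of the callback style in plain Python (not Twisted; `fetch_user`, `fetch_orders`, and the rest are hypothetical names, and the "asynchronous" operations complete synchronously just to show the shape):

```python
# Callback style: every step needs its own named function passed forward.

def fetch_user(user_id, callback):
    # Stand-in for an async operation; in real code the callback fires later.
    callback({"id": user_id, "name": "alice"})

def fetch_orders(user, callback):
    callback([{"user": user["id"], "item": "book"}])

results = []

def got_user(user):
    # Step two needs yet another function, with its own name and signature.
    fetch_orders(user, got_orders)

def got_orders(orders):
    results.extend(orders)

fetch_user(1, got_user)
print(results)
```

Two logical steps cost three named functions; that's the "tedious and overly verbose" part, even before error handling.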
Then you have explicit concurrency, where every possible switch-point has to be labeled somehow. This is yield-based coroutines, or inlineCallbacks, in Twisted. This is more compact than using callbacks, but also more limiting: for example, a generator can only be resumed from each suspension point once, whereas a callback can be invoked any number of times. However, for a logical flow of sequential concurrent steps, it reads very naturally, and is shorter, as it collapses out the 'def' and 'return' lines, and you have to think of at least two fewer names per step.
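The same two steps in yield style can be sketched with a tiny hand-rolled driver (this is an illustration of the mechanism, not Twisted's actual inlineCallbacks; `run`, `fetch_user`, and `fetch_orders` are all made up for the example):

```python
# Yield style: each switch-point is marked by a 'yield', and a driver
# resumes the generator with each operation's result.

def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}  # stand-in for an async result

def fetch_orders(user):
    return [{"user": user["id"], "item": "book"}]

def run(gen):
    # Minimal trampoline: perform each yielded operation, send the
    # result back into the generator.
    result = None
    try:
        while True:
            operation = gen.send(result)
            result = operation()  # in real code this would be asynchronous
    except StopIteration as stop:
        return stop.value

def main():
    user = yield (lambda: fetch_user(1))
    orders = yield (lambda: fetch_orders(user))
    return orders

print(run(main()))
```

Note how 'def got_user', 'def got_orders', and the plumbing between them collapse into two assignment lines.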
However, that very ease can be misleading. You might gloss over a 'result = yield ...' more easily than a 'def whatever(result): return result; something(whatever)'. Nevertheless, if you have 'yield's everywhere you might swap your stack, then when you have a concurrency bug, you can look at any given arbitrary chunk of code and know that you don't need any locks in it, as long as you can't see any yield statements. Where you do see yield statements, you know that you have some code that needs to be inspected.
To continue down that spectrum, a cooperatively multithreading program with implicit context switches makes every line with any function call on it (or any line which might be a function call, like any operator which can be overridden by a special method) a possible, but not likely, culprit. Now when you have a concurrency bug you have to audit absolutely every line of code you've got, although you still have a few clues which will help you narrow it down and rule out certain areas of the code. For example, you can guess that it would be pathological for 'x = []; ...; x.append(y)' to context switch. (Although, given arbitrary introspection craziness, it is still possible, depending on what "..." is.) This is way more lines than you have to consider with yield, although with some discipline it can be kept manageable. However, experience has taught me that "with some discipline" is a code phrase for "almost never, on real-life programming projects".
All the way at the end of the spectrum of course you have preemptive multithreading, where every line of code is a mind-destroying death-trap hiding every possible concurrency peril you could imagine, and anything could happen at any time. When you encounter a concurrency bug you have to give up and just try to drink your sorrows away. Or just change random stuff in your 'settings.py' until it starts working, or something. I never really did get comfortable in that style. With some discipline, you can manage this problem by never manipulating shared state, and only transferring data via safe queueing mechanisms, but... there's that phrase again.
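That queueing discipline can at least be sketched with the standard library (a deliberately tiny example; the doubling "work" and the None sentinel are just for illustration):

```python
# Preemptive threads, tamed by never sharing mutable state: the worker
# only receives values from one Queue and sends results on another.

import threading
import queue

def worker(inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:  # sentinel value: shut down
            break
        outbox.put(item * 2)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # → [2, 4, 6]
```

No locks appear anywhere because no state is shared; the Queues do all the synchronization. The trick, as noted above, is actually sticking to this everywhere.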
Some programming languages, like Erlang, support efficient preemptive processes with state isolation and built-in super-cheap super-fast queues to transfer immutable values. (Some other languages call these "threads" anyway, even though I would agree with Erlang's classification as "processes".) That's a different programming model entirely though, with its own advantages and challenges, which doesn't land neatly on this spectrum; if I'm talking about left and right here, Erlang and friends are somewhere above or below. I'm just describing Python and its ilk, where threads give you a big pile of shared, mutable state, and you are constantly tempted to splash said state all over your program.
Personally I like Twisted's style best; the thing that you yield is itself an object whose state can be inspected, and you can write callback-based or yield-based code as each specific context merits. My opinion on this has shifted over time, but currently I find that it's best to have a core which is written in the super-explicit callback-based approach with no coroutines at all, and then high-level application logic which wraps that core using yield-based coroutines (@inlineCallbacks, for Twisted fans).
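To show what "the thing you yield is itself an inspectable object" means, here is a toy Deferred-like class with a callback-based core wrapped by a coroutine driver (all of this is a simplified stand-in, not Twisted's real Deferred or @inlineCallbacks; `MiniDeferred`, `get_data`, and `inline_callbacks` are invented for the sketch):

```python
# A toy Deferred: an object whose state ('called', 'result') can be
# inspected, and which can be driven by callbacks or by a generator.

class MiniDeferred:
    def __init__(self):
        self.result = None
        self.called = False
        self._callbacks = []

    def addCallback(self, fn):
        if self.called:
            fn(self.result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        self.called = True
        self.result = result
        for fn in self._callbacks:
            fn(result)

# Callback-based 'core': returns a MiniDeferred.
def get_data():
    d = MiniDeferred()
    d.callback(21)  # synchronous here; real I/O would fire this later
    return d

# Coroutine-style wrapper driving that core, in the spirit of
# @inlineCallbacks: each yielded MiniDeferred resumes the generator.
def inline_callbacks(gen):
    def step(value):
        try:
            d = gen.send(value)
        except StopIteration:
            return
        d.addCallback(step)
    step(None)

results = []

def app_logic():
    data = yield get_data()  # the yielded object is inspectable
    results.append(data * 2)

inline_callbacks(app_logic())
print(results)  # → [42]
```

The core ('get_data') stays in the explicit callback style, while the application logic ('app_logic') reads sequentially; that division of labor is the arrangement described above.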
I hope that in a future post I may explain why, but that would take more words than I've got in me tonight.