It all started a couple days ago, when Ian Bicking posted about his attempt at using generic functions for a simple JSON-ification task.
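For concreteness, here is a minimal sketch of the generic-function style of jsonify() that Ian was experimenting with. RuleDispatch's actual API is different; this uses the standard library's functools.singledispatch as a stand-in, and the handlers shown are made up for illustration:

```python
from functools import singledispatch

# One generic function that library *users* extend by registering
# methods for new types -- the key property of the approach.

@singledispatch
def jsonify(obj):
    raise TypeError("can't jsonify %r" % type(obj))

@jsonify.register(str)
def _(obj):
    return '"%s"' % obj.replace('"', '\\"')

@jsonify.register(int)
@jsonify.register(float)
def _(obj):
    return str(obj)

@jsonify.register(list)
def _(obj):
    return "[%s]" % ", ".join(jsonify(x) for x in obj)

@jsonify.register(dict)
def _(obj):
    return "{%s}" % ", ".join(
        "%s: %s" % (jsonify(k), jsonify(v)) for k, v in obj.items()
    )
```

The point is that third-party code can register a method for its own type without touching, wrapping, or replacing the original function.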
Then, Rene Dudfield posted comments to the effect that generic functions were a poor fit for the task, and slower to boot. He included a benchmark that was supposed to show that generic functions were 30 times slower than a hand-optimized version of the same operation, although the numbers he posted actually showed only a 23.4 times slowdown.
Well, I didn’t think the benchmark was a very good one, but what the heck. I tried it out for myself, made a couple of minor tweaks, and spent 30 minutes or so writing a C version of one part of RuleDispatch that I’d been meaning to get around to anyway, and got the benchmark down to only a 1.37 times slowdown – a mere 37% slower than the hand-tuned version.
But since it’s still not fair to compare a function that’s meant to be extended by adding new methods against a hand-tuned version that already has all the methods it will ever have, I devised a slightly fairer benchmark. Since Rene proposed that monkeypatching – that is, replacing the original function with a new version – was a better way to implement extensibility, I added a couple of types to his version via monkeypatching, and added a couple of types to the generic function version as well.
And then the worm turned: the generic function version was now 35% faster than the monkeypatched version of the hand-tuned function. I was a bit surprised by that; I thought it would’ve taken more layers of monkeypatching first. But no: just one extra layer in the typical case made it the same speed as the generic function, and two layers made it slower. (Presumably, additional layers would continue to degrade performance at a linear rate.)
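To make that overhead concrete, here is a hypothetical sketch of what layered monkeypatching looks like. The function bodies are made up; the call structure is the point:

```python
def jsonify(obj):                 # the original, "hand-tuned" function
    if isinstance(obj, int):
        return str(obj)
    raise TypeError(type(obj))

_prev = jsonify
def jsonify(obj):                 # monkeypatch layer 1: adds str support
    if isinstance(obj, str):
        return '"%s"' % obj
    return _prev(obj)             # fall through to the replaced function

_prev2 = jsonify
def jsonify(obj):                 # monkeypatch layer 2: adds None support
    if obj is None:
        return "null"
    return _prev2(obj)

# A plain int now pays for two extra type checks and two extra function
# calls before reaching the code that actually handles it -- the cost
# grows linearly with the number of patches stacked on top.
```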
Now, before anybody gets the wrong idea, I don’t promote monkeypatching in the general case, and in the specific case of a framework function like Ian’s jsonify(), it would be crazy to recommend that people monkeypatch it. Monkeypatching as a recommended extension technique is little short of lunacy – it’s trivial to accidentally break it or change its semantics due to a change in import order, you can’t import the function normally (e.g. from jsonify import jsonify), and as Rene’s own benchmark shows, it doesn’t even come close to being scalable from a performance perspective.
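The import problem is worth spelling out. Here is a small sketch – using a fake module object to stand in for a hypothetical jsonify module – of how a from-import captures a reference that later monkeypatching never touches:

```python
import types

# Hypothetical stand-in for a real `jsonify` module.
jsonify_mod = types.ModuleType("jsonify")
jsonify_mod.jsonify = lambda obj: str(obj)      # the original function

# A client does the equivalent of `from jsonify import jsonify`,
# copying the *current* function object into its own namespace:
jsonify = jsonify_mod.jsonify

# Later, an extension monkeypatches the module attribute:
jsonify_mod.jsonify = lambda obj: "patched:" + str(obj)

# The client's copy is now stale: it still calls the unpatched
# function, while code using attribute access sees the patch.
```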
But thanks to that benchmark, RuleDispatch users everywhere can now benefit from my speeding up of isinstance() checks, without making any changes to their code. Perhaps other people can now design and post other bogus benchmarks, so that I can then go ahead and spend a few minutes making those cases faster too. Ah, the wonders of blogging and open source. 😉
Seriously, though, I do want to thank Rene for his comments, despite the fact that I think he’s still quite thoroughly missing the point – which is that generic functions are for people creating extensible libraries and application platforms, not for writing one-off scripts or applications. Nonetheless, if he hadn’t taken the time to write his comments, I still wouldn’t have gotten around to writing that bit of C code, and RuleDispatch wouldn’t now be so much faster for isinstance() dispatching. And I wouldn’t be feeling quite as smug right now about how little monkeypatching overhead is required before RuleDispatch kicks some serious ass!