Deciphering Glyph
2L2T: DjangoCon Feedback

Fri 09 September 2011

I've been having a great time over here at DjangoCon, but now that I've had an opportunity to relax and process some feedback from my talk, I've noticed a couple of themes in that feedback.  This isn't really a full article, just a response, but it's too long to tweet.  If you're curious about the talk, I believe it will be showing up on blip.tv under http://blip.tv/djangocon sometime next week.  (I'll try to remember to update this post when it's available.)
For the most part, the talk was exceedingly well-received and I want to thank the Django community both for the opportunity to speak and for the overwhelmingly positive response.  Thanks for making an outsider to your community feel welcome and appreciated.
There have been a couple of misconceptions, though, and perhaps I didn't express myself clearly on a few points.
  1. I realize that there are times – plenty of times, even – when using some component that's in a different language from your main application is the right choice.  I wasn't trying to say "all Python all the time no matter what, no exceptions".  I just want you all to consider that there is a cost to using a component that's in a different language, and you should be aware of that cost.  It's not as simple as a tick-a-box comparison of the features and drawbacks of multiple products.  If I came across as sounding really extreme on this, it was just to provoke a response.
  2. You can have an architecture which is driven by Python and organized by Python without actually having all the implementation be in Python.  For example, an inordinate number of people asked me about memcache.  If you want something like that, sure, use memcache; there's not a lot that it being in Python would buy you.  Some might say that the whole point of memcache is that it isn't very deeply configurable and doesn't have much in the way of behavior.  Plus, it's an internal component, not an externally visible service, so even my usual flimsy "no buffer overflows" argument doesn't really hold up; it's more like a library than a server.  You can incorporate memcache into a Python-in-the-driver's-seat architecture by spawning memcache from your Python process instead of making memcache a configuration dependency.  That way, you don't need a separate configuration file, a separately managed service, or a chef script that boots memcache for you before your application.  This applies equally well to any other, similar services: write their config files from your Python code, and start them automatically.
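To make that second point a bit more concrete, here's a minimal sketch of the "Python in the driver's seat" idea: the Python process builds the child process's invocation from its own settings and owns the child's lifecycle, so there's no separately managed config or init script to keep in sync.  The helper names (`memcached_command`, `spawn`) are mine, not from any particular library; the sketch assumes the `memcached` binary is on your `$PATH`.

```python
import atexit
import subprocess

def memcached_command(port=11211, memory_mb=64):
    # Build the memcached invocation directly from Python-level
    # settings; these flags are the configuration, so there is no
    # separate config file to write or keep in sync.
    return ["memcached", "-p", str(port), "-m", str(memory_mb)]

def spawn(command):
    # Start the service as a child of this Python process.  It lives
    # and dies with the application, rather than being booted ahead
    # of time by a chef script or a system service manager.
    process = subprocess.Popen(command)
    atexit.register(process.terminate)  # shut it down when we exit
    return process
```

At application startup you'd call something like `spawn(memcached_command(port=11212))` and then point your cache client at that port.  For services that really do need a config file, the same idea applies: generate the file from your Python settings just before spawning, so Python remains the single source of truth.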
Finally, thanks to everyone who really thought about what I said, took the time to respond, and prompted me to write this.

Update: The video of my talk is now available on blip.tv.