What's new in Django community blogs?

ImageField and edit_inline revisited

Feb 24 2008 [Archived Version] Published at Flapping Head under tags django

A while back I wrote about using edit_inline with image and file fields. Specifically, I suggested adding an uneditable BooleanField as the core field of the related model. This means you don't have to set the ImageField or FileField to be core (which would cause confusing behaviour). Removing the related model: The downside to [...]


Beautiful and free icons

Feb 23 2008 [Archived Version] Published at zoe.vc under tags beautiful famfamfam free icon icons

While building websites, you often need a beautiful and meaningful icon.
Until now I hadn't seen many icon collections that satisfied me: either the icons are badly drawn or you can't assign an accurate meaning to them.
But it's different with the Silk icons by Mark James...
Somehow I've taken his collection to heart; it's the first place I look when I need an icon - and that only since yesterday. The collection is free (which doesn't happen that often), the 16x16-pixel icons are beautiful and well suited to the things you usually use an icon for ;) There is also an overview of all the icons with their titles, from which you can derive their purpose. All 1000 icons are in PNG format, and the collection is currently at version 1.3.

If you like beautiful icons, have a look!


Hosting a Django Site with Pure Python

Feb 22 2008 [Archived Version] Published at Eric Florenzano's Blog

Developing a site with Django is usually a breeze. You've set up your models, created some views, used some generic views, and even made some spiffy templates. Now it's time to publish that site for everyone to see. If you're not already familiar with Apache, Lighttpd, or Nginx, you're stuck trying to figure out complicated configuration files and settings directives. "Why can't deployment be just as easy as running the development server?", you scream.

It's tempting to just use the development server in production. But then you read the documentation (you do read the documentation, right?) and it clearly says:

DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that's how it's gonna stay. We're in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)

Looks like it's time to fire up Apache, right? Wrong. At least, you don't have to.

CherryPy to the Rescue

One of the features that CherryPy touts quite highly is that it includes "A fast, HTTP/1.1-compliant, WSGI thread-pooled webserver". A lesser-known fact about that webserver is that it can be run completely independently of the rest of CherryPy--it's a standalone WSGI server.

So let's grab a copy of the CherryPy WSGI webserver:

wget http://svn.cherrypy.org/trunk/cherrypy/wsgiserver/__init__.py -O wsgiserver.py

Now that you've got a copy of the server, let's write a script to start it up. Your choices may vary depending on how many threads you want to run, etc.

import os

import django.core.handlers.wsgi
import wsgiserver
# If you're not running it standalone, this can be: from cherrypy import wsgiserver

if __name__ == "__main__":
    os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
    server = wsgiserver.CherryPyWSGIServer(
        ('0.0.0.0', 8000),
        django.core.handlers.wsgi.WSGIHandler(),
        server_name='www.django.example',
        numthreads=20,
    )
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()

Consequences

Now that you've got the server up and running, let's talk about some consequences of this approach.

  1. This is a multithreaded server. Django is not guaranteed to be completely thread safe. Many people seem to be running it fine in multithreaded environments, but thread safety may break at any time without notice. It might be an interesting project to convert cherrypy.wsgiserver to use processing instead of threading and see how the performance changes.
  2. This server is written in Python, and as with any other Python program, it will be difficult for it to match the speed of pure C. For exactly this reason, mod_wsgi is probably always going to be faster than this solution.
  3. You can have a completely self-contained server environment that can be run on Mac, Windows, and Linux with only Python and a few Python libraries installed. Distributing this wsgiserver.py script along with your Django project (or with a Django app, even) could be a great way of keeping the entire program self-contained.

Conclusion

Would I use this instead of a fully-featured web server like Apache or Nginx? Probably not. I would, however, use it for an intranet which demands more performance and security than the built-in development server. In any case, it's a nice nugget of information to have in your deployment toolbox.


Syntax Highlighting

Feb 22 2008 [Archived Version] Published at Eric Florenzano's Blog

Over the past week, I've had several people write me asking how I prefer to do syntax highlighting. It's funny that this question cropped up now, just as I changed the way that it's handled on this blog. The way that I used to do it was what I posted to djangosnippets almost a year ago: use a regular expression to parse out <code></code> blocks, highlight the stuff in-between, and spit it back out.

The problem with that method was that it would require more sophisticated logic now that I'm using RestructuredText to write all of my posts. Unwilling to think any harder than necessary, I did a quick Google search, and the second result was exactly what I was looking for: a RestructuredText directive, ready-made by the Pygments people.

The trick is to put this file somewhere on your Python path. Then, in the __init__.py of one of the Django apps that will use syntax highlighting, just import the file. It's that simple! (I love RestructuredText.) But RestructuredText isn't the only format that benefits from this style of plugin. Markdown, too, has a similar plugin--again provided by the Pygments people.

.. code-block:: python

    print "This is an example of how to use RestructuredText's new directive."

I hope that this answers some of the questions that people had. On a similar note, I'm extremely happy to see that people have been finding the Contact Me link on the right side of the page. Please continue to send me any questions and comments that you have for me!


Test IE6 on PC with IE7

Feb 16 2008 [Archived Version] Published at zoe.vc under tags browser ie6 ie7 internet explorer

If you create webpages, you probably want them to look nearly the same in all browsers. Because there are still many Internet Explorer 6 users out there, you should adjust your site for this browser, too. But that's not so easy if you have IE7 installed, because then you no longer have IE6. A Virtual PC image from Microsoft provides a remedy: it comes with Windows XP and IE6 (or IE7, if you still have IE6 and want to test IE7). Simply open it in the (free) Virtual PC software and enjoy the IE6 browser world!

go to Download


TYPO3: Limit creatable elements per page

Feb 16 2008 [Archived Version] Published at zoe.vc under tags php typo3 typoscript

Sometimes it is necessary to limit the elements that can be created on a page. This is especially useful if you only want to store specific elements within a SysFolder, e.g. within "users" there should only be users.

In the TSConfig of the appropriate page, add the following TypoScript, where db_tablename is the database table of the elements you want to allow:

mod.web_list.allowedNewTables = db_tablename,db_tablename_2


Django Tip: A Denormalization Alternative

Feb 15 2008 [Archived Version] Published at Eric Florenzano's Blog

When creating any website with textual content, you have the choice of writing either plaintext or a markup language of some kind. The immediately obvious choice of markup language is HTML (or XHTML), but HTML is not as human-readable as something like Textile, Markdown, or Restructured Text. The advantage of choosing one of those human-readable alternatives is that content encoded in one of them can be translated very easily into HTML.

When one of my friends started designing his blog using Django, it got me thinking about how best to deal with that translated HTML. It seems like a waste to keep re-translating it every time a visitor views the page, but it also seems like it's redundant to keep the translated HTML stored in the database.

Here's my solution to the problem: cache it. For a month. Here's an example, using Restructured Text:

from django.db import models
from django.contrib.markup.templatetags.markup import restructuredtext
from django.core.cache import cache
from django.utils.safestring import mark_safe

class MyContent(models.Model):
    content = models.TextField()

    def _get_content_html(self):
        key = 'mycontent_html_%s' % str(self.pk)
        html = cache.get(key)
        if html is None:  # cache miss (an empty rendering would be falsy but valid)
            html = restructuredtext(self.content)
            cache.set(key, html, 60*60*24*30)
        return mark_safe(html)
    content_html = property(_get_content_html)

    def save(self, *args, **kwargs):
        if self.id:
            cache.delete('mycontent_html_%s' % str(self.pk))
        super(MyContent, self).save(*args, **kwargs)

What I'm doing here is writing a method which either gets the translated HTML from the cache, or translates it and stores it in the cache for a month. Then, it returns it as safe HTML to display in a template. The last thing that we do is override the save method on the model, so that whenever the model is re-saved, the cache is deleted.

There we go! We now have the HTML-rendered data that we want, and no duplicated data in the database. Keep in mind that this way of doing things becomes more and more useful the more RAM your webserver has.
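The pattern itself is independent of Django's cache backend. Here is a framework-free sketch (a plain dict stands in for the cache, and the render/get_html names are hypothetical) that shows why the expensive render step runs only once per cached lifetime:

```python
_cache = {}          # stands in for django.core.cache.cache
render_calls = []    # tracks how often we really "translate"

def render(text):
    """Stand-in for the expensive markup-to-HTML translation."""
    render_calls.append(text)
    return "<p>%s</p>" % text

def get_html(pk, text):
    key = "mycontent_html_%s" % pk
    html = _cache.get(key)
    if html is None:             # miss: render once, then reuse
        html = render(text)
        _cache[key] = html
    return html

get_html(1, "hello")
get_html(1, "hello")
# render() ran only once; the second call was served from the cache
```

Django's `cache.get`/`cache.set` behave analogously, with the timeout handling expiry for you.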


On Context Processors in Django

Feb 11 2008 [Archived Version] Published at Eric Florenzano's Blog

This started out as a response in the comments to James Bennett's latest post, but I think that there's enough here to warrant its own post. If you haven't yet read it, then I suggest you do--it's a well-put argument for Django's application-level modularity and pluggability.

But I do disagree with him on one point. One of the things that he highlights is about "how easy it is for one Django application to expose functionality to others through things like context processors". I don't find this to be true. Currently there are only two ways of adding processors to the list of context_processors for a particular view:

  1. Adding them as an argument to the RequestContext (per-view).
  2. Adding them to the global context processors list in settings.py (global).
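For readers unfamiliar with the mechanics: a context processor is just a callable that takes the request and returns a dict of variables to merge into the template context. A framework-free sketch of what the per-view option does (the names here are illustrative, not Django's actual API):

```python
# Each "processor" takes the request and returns a dict of context variables.
def user_processor(request):
    return {"user": request.get("user", "anonymous")}

def site_processor(request):
    return {"site_name": "example.com"}

def build_context(request, extra=None, processors=()):
    """Mimics RequestContext: start from the view's own dict,
    then let each processor contribute its variables."""
    context = dict(extra or {})
    for processor in processors:
        context.update(processor(request))
    return context

# A dict stands in for the HttpRequest object.
request = {"user": "eric"}
ctx = build_context(request, {"title": "Hello"},
                    processors=[user_processor, site_processor])
# ctx now contains title, user and site_name
```

The per-app middle ground discussed below would amount to choosing that `processors` list based on which app the view belongs to, rather than per-view or globally.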

What these methods lack is a middle ground: per-app specification of context processors. This is what James Bennett seemingly alludes to, and it simply doesn't exist. What if I'd like all of the views in my blog app, and all of the views in flatpages, to get a certain list of context processors? Currently in Django that is not possible. I do think there is demand for this, and it's something that probably wouldn't be too hard to add to trunk.

But really, if I can think of this particular use case of context processor loading, I'm sure there are other people who could think of others. For example, what about a different set of processors based on URL, or based on IP address, or something even more strange? What Django really needs is a pluggable context processor loader similar to how it loads session backends, authentication backends, database backends, urls, etc. That way, people could provide their own loaders to do any kind of context processing differentiation that they want.

The only thing that this could do is make Django applications more pluggable--and that's always a good thing! The good news is that PyCon is coming up, and I can try to tackle this during the sprinting days.

UPDATE: Malcolm Tredinnick has posted an excellent followup to this post that suggests a simple solution for those who want to do something similar to application-level context processor loading right now.


Twitter user timeline with a Django templatetag

Feb 11 2008 [Archived Version] Published at Nuno Mariz

A simple templatetag that adds a variable with a user's Twitter timeline to the template context.
It uses the CachedContextUpdatingNode snippet by Jacob Kaplan-Moss for caching.
Caching the content is necessary because Twitter limits the number of accesses to its API.
This only works if the cache is enabled in your settings.py.

import urllib
from datetime import datetime
from time import mktime, strptime

from django import template
from django.template import TemplateSyntaxError
from django.utils import simplejson

register = template.Library()

# CachedContextUpdatingNode is the caching snippet by Jacob Kaplan-Moss
# mentioned above.
class TwitterNode(CachedContextUpdatingNode):

    cache_timeout = 1800 # 30 Minutes, maybe you want to change this
    
    def __init__(self, username, varname):
        self.username = username
        self.varname = varname

    def make_datetime(self, created_at):
        return datetime.fromtimestamp(mktime(strptime(created_at, '%a %b %d %H:%M:%S +0000 %Y')))

    def get_cache_key(self, context):
        return 'twitter_user_timeline_cache'

    def get_content(self, context):
        try:
            response = urllib.urlopen('http://twitter.com/statuses/user_timeline/%s.json' % self.username).read()
            json = simplejson.loads(response)
        except:
            return {self.varname : None}
        for i in range(len(json)):
            json[i]['created_at'] = self.make_datetime(json[i]['created_at'])
        return {self.varname : json}
    
@register.tag
def twitter_user_timeline(parser, token):
    bits = token.contents.split()
    if len(bits) != 4:
        raise TemplateSyntaxError, "twitter_user_timeline tag takes exactly three arguments"
    if bits[2] != 'as':
        raise TemplateSyntaxError, "second argument to twitter_user_timeline tag must be 'as'"
    return TwitterNode(bits[1], bits[3])
Usage:
{% twitter_user_timeline username as twitter_entries %}
{% if twitter_entries %}
  {% for entry in twitter_entries %}
  {{ entry.created_at|date:"d M Y H:i" }} - {{ entry.text }}
  {% endfor %}
{% endif %}
Use the source, Luke.


OpenCalais beats me to the punch

Feb 11 2008 [Archived Version] Published at Glenn Fanxman

I've been playing with various entity and information extraction frameworks for the past couple of weeks, with the goal of creating a web service for extracting the major topics from news articles. So far, my work, such as it is, has shown promise, but is not as robust or reliable as I would have hoped. I just noticed that Reuters has apparently been working along the same lines and has opened their work to the public. On the one hand, I feel beaten. On the other, their service does not appear to be much better than my own - although they'll have a larger set of people re-training their decision trees than I'll ever have.

I've been using posts from my own blog as my testing ground, so I thought I'd throw my last post through and see what it churns out:

Relations: PersonProfessional
Organization: Entrepreneur Fund
IndustryTerm: rubber
Person: Len Gilbert, Tom Berreca, Darline Jean, Glenn Franxman, Matthew de Gannon, Sam Parker, Clark, Beth Higbee, Eleanor Cippel, Martha Stewart
City: Naples

Well, I don't even try to extract relations, so that's pretty cool. IndustryTerm is pretty puzzling, although I understand why they were fooled by it. My organization extraction appears to handle this better, insofar as I am extracting things like The Weather Channel, etc. from that post. My city extraction is better insofar as it fetches the state along with the city name. Person extraction is interesting. They got Martha Stewart, whereas I keep tripping over it and call her Martha Stewart Living, pulling out Omnimedia as an Organization.

For comparison's sake, here's the result of my own project:

13LOCATION:Summit 23LOCATION:Naples,Fl 40ORGANIZATION:Ritz-CarltonGolfResort 101PERSON:Matthew 103PERSON:Gannon 105ORGANIZATION:SVP 107ORGANIZATION:TheWeatherChannelInteractive 112PERSON:LenGilbert 117PERSON:Hill 119PERSON:DarlineJean 131ORGANIZATION:Products 133ORGANIZATION:TheWeatherChannelInteractive 153ORGANIZATION:Weather 171ORGANIZATION:TomBerreca 175ORGANIZATION:SVPDigital&Emerging 179PERSON:Media 181PERSON:MarthaStewartLiving 184ORGANIZATION:Omnimedia 186PERSON:SamParker 189ORGANIZATION:CNET 194PERSON:Tom 201ORGANIZATION:Scripps 204PERSON:Martha 257PERSON:EleanorCippel 260ORGANIZATION:Scripps 268PERSON:BethHigbee 271ORGANIZATION:ScrippsNetworksInteractive 367PERSON:Eleanor

Fugly formatting aside, I'm being way more aggressive in my extraction. I think I'm going to need better confidence heuristics. Also, I'm a lot slower than OpenCalais. There are all sorts of interesting differences; many probably come from the body of text they've had to train against. I'm still fooled by names like Matt de Ganon. The most interesting bit, though, is how the two systems treated the phrase "Martha Stewart Living Omnimedia". Oh well. They have a bounty for anyone who can create a WordPress plugin for this. I don't use WordPress, or I would have done it tonight. As it stands, I might plug it into this site or create a JavaScript badge to make it usable anywhere.


Porting to numpy

Feb 09 2008 [Archived Version] Published at orestis.gr

As you may know (you probably don't :), I've built a greeklish to greek converter for my Diploma thesis, using Python.

It worked well enough (for a diploma thesis), but it was slow. I mean, really, really slow. It was coded in Pure Python (tm) and had no optimizations whatsoever. The main part of the code was an implementation of a Hidden Markov Model:

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but variables influenced by the state are visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states.

I was mainly using dicts to map between a state and a probability, but what I really wanted was a matrix. I went and replaced my dicts with lists of lists, which yielded a performance improvement (a dict lookup may be constant-time, but indexing into a list will be faster, because its constant is smaller). I also did some caching and other minor stuff; for example, instead of waiting for KeyErrors or IndexErrors, I went and filled up the "matrix", which also helped. Unfortunately, when all that is done, you realize that Python isn't a language designed for number crunching. You have to use something else to do the heavy lifting.
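The claim about lookup constants is easy to sanity-check with timeit (the numbers vary by machine and Python version, so treat them as indicative only):

```python
# Compare indexing a list against looking up the same integer key in a dict.
# Both are O(1); the list's constant factor is typically smaller.
import timeit

list_time = timeit.timeit("data[500]",
                          setup="data = list(range(1000))", number=100000)
dict_time = timeit.timeit("data[500]",
                          setup="data = {i: i for i in range(1000)}",
                          number=100000)
print("list index: %.4fs  dict lookup: %.4fs" % (list_time, dict_time))
```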

Why numpy?

A usual argument of dynamic language fans when confronted with the "dynamic languages are slow" issue, is, well "you can always rewrite the critical parts in C". Python is supposed to make that easy, using ctypes or SWIG or other libraries. I don't know, I'm not really comfortable doing C, and doing numeric stuff in C seems scary to me, as you'll have to deal with precision and all that weird stuff.

So I decided to use numpy, as the de facto standard when doing numerical applications in Python. It was very straightforward: Instead of using:

a = [[1,2],[3,4]]

you can use:

a = array([[1,2],[3,4]])

and lo and behold, all your code still runs!

Not so fast, mister

Well, actually it runs slower! Huh? How come?

Apparently, just switching to numpy arrays isn't good at all if you're still looping over them in Python. Nooo, you have to rewrite all your functions to take advantage of the numpy speedup. This can be quite tricky and will lead to hair-pulling.

The walkthrough

I'll repeat here the steps I took to implement the forward algorithm as described in L. Rabiner's HMM tutorial. It's simply 3 equations (19, 20, 21), mostly doing sums and multiplications.

I'll spare the irrelevant details. Assuming A, B, pi are the parameters of the HMM, N is the number of hidden states, and obs is a list of observation indices, here is the naive code, mostly copied from the tutorial:

#19
T = len(obs)
a = zeros((T,N))
for i in range(N):
    a[0][i] = pi[i]*B[obs[0]][i]

#20
for t in range(1,T):
    for j in range(N):
        for i in range(N):
            a[t][j] += a[t-1][i]*A[i][j]
        a[t][j] *= B[obs[t]][j]

#21
prob = 0
for i in range(N):
    prob += a[T-1][i]

The pattern is roughly this: the sum (or product) of something over i, from 1 to N, turns into a for i in range(N) loop. Let's see how this can be rewritten to use numpy.

First of all, don't delete the old function! We'll interleave the new code and sprinkle assertions in between to make sure we don't make mistakes. If you have unit tests (you should!), you can make sure later that everything still ticks, but while rewriting it helps to take baby steps.

For function #19, the numpy code is this:

a_ = zeros((T, N))
a_[0] = pi*B[obs[0],:]

assert((a==a_).all())

It's apparent that the conversion "algorithm" is:

  1. Use the numpy element accessor [x,y] instead of [x][y]
  2. Replace "i" (the counter) with ":"
  3. If you have just a vector, remove the square brackets entirely.
  4. Remove the for loop
  5. Assert that each element of the numpy array is the same as in the naive array.

Things to remember:

  1. The "*" operator for numpy arrays is element-wise multiplication, not the dot product.
  2. The ":" accessor works a bit like a slice operator. I like to think of it as a wildcard.

What the above code is doing boils down to:

"Multiply each element of pi with the corresponding element of the slice obs[0] of B, and assign the resultant array (a vector, really) to a_[0]"
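A tiny concrete check of the two "things to remember" above, using made-up numbers:

```python
import numpy as np

pi = np.array([1.0, 2.0])
B = np.array([[3.0, 4.0],
              [5.0, 6.0]])

print(pi * B[0, :])     # element-wise: [3. 8.]
print(np.dot(pi, B))    # dot product:  [13. 16.]
print(B[:, 1])          # ":" as wildcard: the whole second column, [4. 6.]
```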

With that in mind, we can easily deduce that for function #21 the code will be:

prob_ = sum(a_[T-1,:])

assert(prob_==prob)

Which means, "sum the elements of the slice T-1 of array a_"

Function #20 is a bit more complex, and I'll go step-by-step:

First, there is a sum:

for i in range(N):
    a[t][j] += a[t-1][i]*A[i][j]

Which can be easily replaced with

a_[t][j] = sum(a_[t-1,:]*A[:,j])

using the above algorithm. We now have:

for t in range(1,T):
    for j in range(N):
        a_[t][j] = sum(a_[t-1,:]*A[:,j])
        a_[t][j] *= B[obs[t]][j]

There is also a product (a_[t][j] *= B[obs[t]][j]), that can be swapped with a_[t,:] *= B[obs[t],:]

The code now becomes:

for t in range(1,T):
    for j in range(N):
        a_[t][j] = sum(a_[t-1,:]*A[:,j])
    a_[t,:] *= B[obs[t],:]

Only one loop to go! However, that sum is trouble. When I tried to apply the usual "replace-i-with-:" routine, I got an AssertionError (that's why you have assertions!). After some minutes with pen and paper, it occurred to me that the sum operator reduced the array to a single number, when clearly I expected a vector. So I needed an operator that multiplied two arrays, did per-row sums, and returned an array.

OK, duh, that's the dot product. My linear algebra is rusty, I know. So now the code becomes:

for t in range(1,T):
    a_[t,:] = dot(a_[t-1,:],A)
    a_[t,:] *= B[obs[t],:]

Or, in one line:

for t in range(1,T):
    a_[t,:] = dot(a_[t-1,:],A)*B[obs[t],:]
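Putting the pieces together, here is a self-contained check of the whole conversion on a toy HMM. The transition/emission numbers are made up purely for illustration; B is indexed B[observation][state], matching the code above:

```python
import numpy as np

# Hypothetical toy HMM: 2 hidden states, 3 output symbols.
N = 2
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])     # transition probabilities
B = np.array([[0.5, 0.1],
              [0.4, 0.3],
              [0.1, 0.6]])     # emission probabilities, B[obs][state]
pi = np.array([0.6, 0.4])      # initial state distribution
obs = [0, 1, 2]
T = len(obs)

# Naive version, straight from the tutorial's equations 19-21.
a = np.zeros((T, N))
for i in range(N):
    a[0][i] = pi[i] * B[obs[0]][i]
for t in range(1, T):
    for j in range(N):
        for i in range(N):
            a[t][j] += a[t - 1][i] * A[i][j]
        a[t][j] *= B[obs[t]][j]

# Vectorized version after the conversion described above.
a_ = np.zeros((T, N))
a_[0] = pi * B[obs[0], :]
for t in range(1, T):
    a_[t, :] = np.dot(a_[t - 1, :], A) * B[obs[t], :]

assert np.allclose(a, a_)      # same trellis, loop-free inner step
prob = np.sum(a_[T - 1, :])    # P(obs | model)
```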

UPDATE: Of course, when implementing the viterbi algorithm, I came across this piece of code:

    for t in range(1,T):
        for j in range(N):
            max_daij = 0.0

            for i in range(N):
                tmp_daij = d[t-1][i]*A[i][j]
                if tmp_daij >= max_daij:
                    max_daij = tmp_daij
                    psi[t][j] = int(i)
            d[t][j] = max_daij*B[obs[t]][j]

It seems that max and argmax should be used. This translates to:

    for t in range(1,T):
        max_daij=amax(d_[t-1,:]*A.T,1)
        psi_[t,:] = argmax(d_[t-1,:]*A.T,1)
        d_[t,:] = max_daij*B[obs[t],:]

Things to notice: I used from numpy import max as amax so as not to complicate things. Also note the .T syntax: this is the transposed array. It's quite cheap as an operation, since it returns just a transposed view of the data. I should've done this testing months ago. Sigh.


Conclusion

That's not bad. The final result has less cruft, at the expense of more concentration. I actually kept the interleaved version so I can understand what's going on if I revisit the function.

I hope I haven't done anything stupid, since the data I use aren't that complicated, and in my experience, you really need big datasets to test against when developing numerical applications.



OOP and Django

Feb 09 2008 [Archived Version] Published at Eric Florenzano's Blog

Being a senior in college means many things. It means job interviews and upper-level classes, emotional instability and independent living. It also means countless hours of sitting in uninteresting classes whose sole purpose is to fulfill some graduation requirement. For me, that means lots of daydreaming--about anything other than that class. Recently however, during one daydream, I had a brain wave worth typing up: What's the deal with Object-Oriented Programming and Django?

The Convention

Browsing through the views.py file in just about any publicly-available Django-based application will almost certainly reveal nothing more than a bunch of functions. These functions are undeniably specialized: they take in an HttpRequest object (plus possibly some more information), and they return an HttpResponse object. Although these functions may be specialized, nevertheless they are still just functions.

This should come as no surprise to anyone who has used the framework--in fact, it's encouraged by common convention! Not only does the tutorial use plain functions for views, but so does the Django Book, and just about every other application out there. The question now becomes "why"? Why, in a language that seems to be "objects all the way down", does a paradigm emerge for this domain (Django views) wherein functions are used almost exclusively in lieu of objects?

That's not entirely true, sir...

Any time a broad statement like "just about any" is used, the exceptions are what become interesting. The admin application (both newforms-admin and old) is probably the most notable exception to my earlier broad statement. It's interesting because it's Django's shining star! Databrowse and formtools are other great Django apps which use object orientation in their views.

Looking at the apps which use OOP and those which don't reveals an interesting idea: the apps which strive to go above and beyond in terms of modularity tend to be the ones which end up using classes and their methods for views. The same functionality could have been accomplished with plain functions, but it wasn't--it was accomplished using classes and methods.
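A minimal, framework-free sketch of that pattern: a callable instance whose configuration lives in __init__, so the same "view" class can be wired up several times with different state. All names here are hypothetical, and a real Django view would of course take an HttpRequest and return an HttpResponse:

```python
class GreetingView(object):
    """One class, many configurable 'views': the instance holds state
    that a plain view function would have to get from closures or settings."""

    def __init__(self, greeting="Hello"):
        self.greeting = greeting

    def __call__(self, request):
        # Stand-in for building an HttpResponse from the request.
        return "%s, %s!" % (self.greeting, request)

# In urls.py you would then map URL patterns to *instances*:
formal = GreetingView(greeting="Good day")
casual = GreetingView(greeting="Hi")
print(formal("reader"))   # Good day, reader!
print(casual("reader"))   # Hi, reader!
```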

Please keep in mind that what I'm not trying to do is make a value judgement on Object-Oriented programming vs. functional programming vs. any other programming paradigm. Instead, I'm providing an observation about the emergence of a common practice, and trying to analyze its implications.

But wait!

What really is the difference between writing a plain function as a view and Object-Oriented programming? It's completely reasonable to argue that writing a plain function for a view is, in fact, Object-Oriented programming. All class methods take in self as their first positional argument, and all views take in request as their first positional argument. Taking in this argument allows access to state which would otherwise be difficult to access. Changing the order of urlpatterns is equivalent to changing the polymorphic properties of a class and dynamic method lookup.

In essence, one could argue that using a plain function as a view is strictly equivalent to writing a method on the HttpRequest object. Thinking about it in this way, writing a Django application is really nothing more than building up a monolithic HttpRequest object which the user can call different methods on using its API: the URL. To me, this is a really interesting idea!

Off My Rocker

This is the result of extreme classroom boredom--so maybe posts here will continue down this slightly-more-esoteric road for a while. But honestly this was an interesting thought-experiment, and I'd like to get some feedback on what people think as well. Am I totally off base with this analysis? Moreover, do you use true Python "classes" as your views? If so, what benefits does it bring to the table?


django-planet aggregates posts from Django-related blogs. It is not affiliated with or endorsed by the Django Project.
