What's new in Django community blogs?

Two Scoops of Django 1.11 Is Printed!

Jul 13 2017 · Published at pydanny under tags audrey book class-based-views django python

Two Scoops of Django

After longer than we expected, the shiny new print copies of Two Scoops of Django 1.11 are finally ready. We've shipped all pre-orders, and many of you who ordered it in advance should have the book in your hands. We're delighted by how well it's been received, which is due in no small part to the help and encouragement of our readers.

Right now you can order the print version of Two Scoops of Django 1.11 at either:

  • Two Scoops Press (for autographed copies directly from us), or
  • Amazon (this link sends you to your regional Amazon store)

If you purchase the book on Amazon, please leave an honest review. Simply put, Amazon reviews make or break a book, with each one responsible for additional sales. It's our dream that if sales of Two Scoops of Django 1.11 are good enough, we'll be able to justify doing less consulting and instead writing more technical books.

Over 10,000 women have attended a Django Girls workshop!

Jul 12 2017 · Published at Django Girls Blog

If you’re really quiet and listen super hard, you might be able to hear the Django Girls team celebrating the latest milestone…

This month the Django Girls Foundation helped over 10,000 people learn to code with the Django Girls tutorial! Of course, we couldn’t have done it without the hard work and dedication of our 947 volunteers, who organised 441 events over the past three years!

Go team! 

Texting Centre

Jul 11 2017 · Published at Latest Django packages added


Jul 11 2017 · Published at Latest Django packages added

💂🏻 Provide hints to optimize database usage by deferring unused fields


Jul 11 2017 · Published at Latest Django packages added

The best way to highlight active links in your Django app.

Towards Channels 2.0

Jul 11 2017 · Published at Aeracode

Outlining my plan for iterating on Channels' design, and what the future might hold for both Django and Python in general.

It's been around three years since I came up with the current Channels design - pushing everything over a networked "channel layer" and strictly separating protocol handling from business logic - and while it's generally working well for people, I have a feeling it can be improved, and I've been thinking about how for the past few months.

This was brought into sharp focus by recent discussion about writing more of a standard interface for asyncio in particular, but also by Tom Christie's recent work on the same subject, so I've written up where my current thinking is, both to help people understand where I'm going and to be more transparent about how I think about problems like this.

So, let's start by looking at the issues with the current Channels design:

  • You're forced to run everything (and I mean everything) over a networked channel layer. There are good reasons for this - I'll cover them later - but it's a lot to force on people and arguably the biggest dent to smooth scaling of a Channels-based site (and to working towards being a viable WSGI replacement).
  • There's no standard application interface like WSGI you can plug into; you have to listen on channels and do your own event loop. This is the sort of problem frameworks should solve - things that are hard to get right and which most people need.
  • Persisting data for the lifetime of a socket is done using Django session backends. This was one of those hacks that made it into production, and while it works surprisingly well (session backends were made for something close to this access pattern), it sits uneasily with me, especially as it's another hard issue when it comes to scaling.
  • Channel names are often used and passed around directly, which makes it hard to do things like multiplexing and to have the multiplexed consumer written the same as a simplexed consumer (there's no channel name to send to for a multiplexed connection).

The overall picture means you end up with a deployment strategy that necessitates smaller clusters of protocol and worker servers, sharing their session store and channel layer internal to the cluster and having a loadbalancer balance across clusters. This isn't necessarily bad, but I think there's definitely room for improvement here and, crucially, room to massively improve the user experience (especially for small projects) along the way.

That said, the choices that led to these results were deliberate. Channels has a single design goal that led to all of these decisions:

  • You need to be able to trigger a send to a channel from anywhere

Events can happen in other processes (for example, a WebSocket terminated on another machine sends a message that the user connected to your machine needs to see), and you need to be able to react to those. Channels is not just a WebSocket termination framework - Autobahn is good at that by itself - it's a framework to build systems that handle WebSockets in a useful way, and solve the hard problems that come with them, like cross-socket (and thus cross-process) communication.

Hopefully you can see, then, how the system of pushing everything through a channel layer achieves this goal: by putting every process on an equal footing, we let you send to channels from anywhere with ease. Groups build on this by adding a broadcast abstraction that saves you tracking channels yourself.
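To make the "send from anywhere" idea concrete, here's a toy, single-process sketch of a channel layer with groups. This is purely illustrative - the names and API surface are invented for this post and bear little resemblance to the real ASGI channel layer - but it shows how groups are just broadcast built on top of channel tracking:

```python
from collections import defaultdict

# Toy in-memory channel layer (hypothetical, single-process only):
# any code holding a reference to the layer can send to any named
# channel, and groups broadcast by fanning out to tracked channels.
class ToyChannelLayer:
    def __init__(self):
        self.channels = defaultdict(list)  # channel name -> queued messages
        self.groups = defaultdict(set)     # group name -> channel names

    def send(self, channel, message):
        self.channels[channel].append(message)

    def receive(self, channel):
        queue = self.channels[channel]
        return queue.pop(0) if queue else None

    def group_add(self, group, channel):
        self.groups[group].add(channel)

    def group_send(self, group, message):
        # The sender never tracks individual channels itself.
        for channel in self.groups[group]:
            self.send(channel, message)

layer = ToyChannelLayer()
layer.group_add("chat", "websocket.send!abc")
layer.group_add("chat", "websocket.send!def")
layer.group_send("chat", {"text": "hello"})
print(layer.receive("websocket.send!abc"))  # → {'text': 'hello'}
```

Note how the sender of the group message needs no knowledge of which sockets are connected - exactly the property that makes cross-process sends easy, and also the property that makes per-socket customisation of outgoing messages awkward.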

But, looking at it critically (and, crucially, with three years of hindsight), there are some natural flaws with this approach. Sure, everywhere can send to channels, but that means any special send-encoding (like the multiplexer earlier, or anything else you want to add to outgoing messages) has to be implemented in every place that might send a message. This is a pretty bad abstraction; that code should ideally all live in one place.

How do we resolve these two issues, then, and still keep that core design principle?

Moving the Line

Well, these two issues are not necessarily at odds. Yes, we need a cross-process communication layer to pass events around, and yes, we need a central place to put socket-interaction code and per-socket variable storage, but we can have both of them.

The model already exists in Channels - the protocol server. Protocol servers, like Daphne, are code that ties directly against a socket API in order to turn it into a series of events - low-level events, sure, but they contain "custom code" (the stuff that encodes to and from ASGI) and per-socket session storage (to track things like what each socket's channel name is).

What I am proposing, then, is to move the line between the protocol server and the worker server:

  • Users write socket/HTTP handling code that runs against a direct API in the same process, like WSGI. This code tracks its own per-socket variables, does any special send/receive encoding and decoding, and other related things you want to happen on a per-socket basis.
  • Messages on the channel layer are turned from direct send-to-channel to more abstract events; users send messages with their own formatting (e.g. {"room": "general-discussion", "user": "andrew", "text": "Hi all!"}), and the code running directly against the socket can receive these and interpret them as needed.

Not only does this move quite a lot of the handshaking/request/response traffic out of the channel layer, it also provides for much nicer code. No more would you need @channel_session or @enforce_ordering; your code runs directly against the socket in-process, but can still talk to other processes if it needs to.

Rethinking the API

If something like this is to change, then, it means that Channels' API must also change. There needs to be a "protocol handler" abstraction, very much in the Twisted vein of dealing with different types of incoming events, and there needs to still be a good "channel layer" abstraction, allowing you to send and receive messages between processes in a standardised way.

The breaking down of the HTTP and WebSocket protocols as currently defined in Channels still works, though - we could keep the formatting and the rough naming, and just change them from being received on channels to being (for example) callable attributes:

class ChatHandler:

    def connect(self, message):
        return {"accept": True}

    def receive(self, message):
        return {"text": "You said: %s" % message['text']}

Eagle-eyed readers will have spotted the flaw with this sketched API, though, which is that there's no way to send more than one message; in fact, this is another one of the Channels design goals made manifest. The reason that the protocol server and the workers run separately in Channels right now is that they run in different modes; protocol servers are async, serving hundreds of sockets at a time, while worker servers are synchronous, serving just the one request - because Django is, by virtue of history, built as a synchronous framework.

Async APIs are not backwards-compatible, so if the solution was to make Django async we would have to either go all-in or have two parallel APIs - not to mention the mammoth task involved and the number of developer-hours it would take. I would rather build a basic framework upon which both sync and async apps can run, letting us move components over to async as needed (or if it's needed at all; sync code is often easier to write and debug).

So, what are we to do? We need an interface that lets us send and receive messages outside the scope of a synchronous request-response cycle, yet that still allows us to use synchronous code.

Fortunately, the design of Channels consumers helps us here. They are designed to be pretty much non-blocking: they receive a message, do something, and then return. They're not instantaneous; database access and other things slow them down, but the idea is that they live nowhere near as long as the connection does.

This means we can keep the same model going if we run that code in the same process as the protocol termination, though we're probably going to be using threadpools to get decent performance out of synchronous code. This is not that much of a change from the old model in performance terms, though; current Channels workers just run one consumer at a time, synchronously and serially, the only difference being that you could individually scale their capacity compared to the protocol servers.
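A minimal sketch of that threadpool model, assuming nothing about the eventual Channels API (all names here are invented): an async server keeps its event loop free by handing each message to a synchronous consumer running in a worker thread.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# A synchronous consumer: receives a message, does something, returns.
# Imagine Django ORM calls happening in here.
def sync_consumer(message):
    return {"text": "You said: %s" % message["text"]}

async def handle_message(loop, pool, message):
    # The event loop stays free to service other sockets while the
    # synchronous consumer runs in a worker thread.
    return await loop.run_in_executor(pool, sync_consumer, message)

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Three messages handled concurrently from the loop's point of
        # view, even though each consumer body is plain blocking code.
        return await asyncio.gather(
            *[handle_message(loop, pool, {"text": str(i)}) for i in range(3)]
        )

print(asyncio.run(main()))
```

This mirrors the current worker model - one consumer at a time per thread, synchronously and serially - just co-located with the protocol termination instead of behind a channel layer.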

Now that we're combining those two things, deployment changes a little. Before, Channels effectively had a built-in load balancer for WebSockets; protocol servers did very little work, and offloaded all the processing to whatever worker was free at the time. Now, WebSockets are stuck with the process they connect to having to do the work; if one process has a hundred very busy sockets, and one process has a hundred very quiet sockets, the people on the first instance are going to see much, much worse performance than those on the other one.

The solution to this is to build for failure (one of the main things I strive for in software design). It should be OK to close WebSockets at random; people use sites on bad connections, or suspend and resume their machines, and the client-side code should cope with this. Once we have this axiom of design, we can then just say that, if a server gets overloaded with sockets, it should just close a few of them and let the clients reconnect (likely to a different server, if our load-balancer is doing its job).

That lets us continue with a design that might not have particularly high throughput on individual processes when combined with a synchronous framework like Django. If and when fully-developed async components and frameworks emerge - be it from Django or some other part of the Python ecosystem - they should be able to run in a proper async mode, rather than a threadpool, and benefit from the speed increase that would bring.

Communicating Across Processes

So that's most of the worries about the main protocol-terminating code dealt with; if we were just writing an application that served individual users, and didn't do any "real-time" (in the Web sense) communication between them, we could stop there.

However, I care about that stuff; think of the most basic WebSocket or HTTP long-polling examples, and they all rely on live updates - liveblogs, chat, polls, and so on. The very nature of them compared to HTTP is the lower latency, and the ability for a server to push something down to a client when an event happens rather than waiting for another poll.

Channels currently solves this using a combination of channel names for each open socket, so you can send to them from any process, and Groups, which wrap the idea of sending the same message to a lot of sockets. As we discussed above, both of these approaches have issues if you want to add extra information (especially in the Group case, where it's very unlikely you actually want to send identical messages to everyone connected).

So, we need to move the target of these events from the sockets themselves to the code handling them. Luckily for us, that code is already structured to receive incoming events in the form of protocol events, so we can continue the same abstraction to user-generated events too.

What remains is to work out the addressing/routing, where there are two questions:

  • How do we send to a specific socket-handling code instance?
  • How do we broadcast to a whole set of them at once?

The reply_channel abstraction from current Channels continues to work well for the first case, I think; with the recent performance improvements to process-local channels, they're really quite efficient, and we know our handling code is only in one process.

I'm a little less sold on the current design of Groups. The need to handle failure (specifically, of the code that removes channels from groups on disconnect) results in some compromises, to the point where most users could improve upon the design with a database.

Offering something that tries to walk a middle line and ends up being compromised is a bad choice, in my opinion; software should be designed to work well for a certain set of cases, and show its users how to build into those abstractions and design patterns it supports - and crucially, tell them when they should look elsewhere. I often tell people that they should not use Channels or Django if it's clear what they're building won't fit, for example.

The current design of Groups, where the sending code works out a list of destination channels, is not the only kind of broadcast. We could invert things and have consumers listen to central points of broadcast instead; this is the sort of pattern you often see in things like Redis' SUBSCRIBE/PUBLISH or PostgreSQL's LISTEN/NOTIFY.

However, there are some issues bringing that to the rest of the system as designed; not only do these mechanisms enable you to more easily miss messages if you're interleaving them with list reading, but it's asking for an entirely different kind of message transport from the channel layers.

Instead, I think the right move here is to narrow and focus what Groups are for, and not only discourage people from using them for the wrong things but actively make it difficult, and hopefully foster third-party solutions to things like presence and connection count.


There's one last elephant in the room - async support. Newer Python versions support features like async def and await keywords, and HTTP long-polling and WebSocket handling code is a natural fit for the flow of Python async code.

With the move to having consumers run in a single process per socket, that also gives us the freedom to run consumers as a single async function if we want. There's a problem, though - async and sync APIs must necessarily be different, and we still want to allow sync code (not only from earlier Python versions, which is less of a concern as time goes on, but also as it is sometimes simpler to write and maintain).

Channels' design has always been tailored to help avoid deadlock, too - something that's far too common to run into in asynchronous systems. Making it impossible for a consumer to wait on a specific type of event is a deliberate choice which forces you to write code that can deal with events coming in any order, and I want to preserve this design property.

Still, we should allow for async APIs and consumers if not only so that it's possible to do operations like database queries or sending channel messages in a non-blocking way and get better performance out of a single Python process. Exactly how I want to do that is covered below.

Bringing it all together

So, after all this, what is the result? What do I think the future of Channels looks like?

Let me run through each of the main changes with some examples.

Channel Layers

The basic channel layer interface - send, receive, group_add - will remain the same. There's no real need to change this, and several different layers have reached maturity now with the design (plus, it's proven useful even outside the context of HTTP/WebSockets; I've implemented SOA service communication with it, for example).

However, we need to address the async question. As I mentioned above, async APIs must be different methods from sync ones, and so currently we have receive_async and nothing else. This also pushes down into the module's internals; implementing a channel layer that serves both synchronous and asynchronous requests gets very difficult, as you can't rely on the event loop always being around, so everything must be written synchronously.

To that end, I'm proposing that channel layers come in two flavours - synchronous and asynchronous. One package would likely provide both, and they would share a common base class, but have different receive loops/connection management.

Servers would require one or the other to run against, depending on their internal structure and event loop; an "async" class attribute would be used as an easy way to determine what flavour you've been passed without needing to introspect the method signatures.
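A sketch of how those two flavours might be laid out. The post proposes a class attribute called "async"; since that is now a reserved word in Python, this sketch uses `is_async` instead, and the in-memory queueing plus all class names are illustrative, not a real channel layer implementation:

```python
import asyncio

# Shared base class: holds common state, plus the flag servers check.
class BaseChannelLayer:
    is_async = False

    def __init__(self):
        self.queues = {}

class SyncChannelLayer(BaseChannelLayer):
    def send(self, channel, message):
        self.queues.setdefault(channel, []).append(message)

    def receive(self, channel):
        queue = self.queues.get(channel)
        return queue.pop(0) if queue else None

class AsyncChannelLayer(BaseChannelLayer):
    is_async = True

    async def send(self, channel, message):
        self.queues.setdefault(channel, []).append(message)

    async def receive(self, channel):
        queue = self.queues.get(channel)
        return queue.pop(0) if queue else None

def layer_flavour(layer):
    # Servers check the flag rather than introspecting method signatures.
    return "async" if layer.is_async else "sync"

async def demo():
    layer = AsyncChannelLayer()
    await layer.send("chat", {"text": "hi"})
    return await layer.receive("chat")

print(layer_flavour(SyncChannelLayer()))  # → sync
print(asyncio.run(demo()))               # → {'text': 'hi'}
```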

The Consumer interface

Rather than having to listen to specific channels and do your own event loop, as happens now, user/framework code will instead just need to provide a low-level consumer interface. My current proposed interface for this would look like:

class Consumer:

    def __init__(self, type, channel_layer, consumer_channel, send):
        ...

    def __call__(self, message):
        ...

Well, that's the class-based version. The actual contract would be that you pass a callable which returns a callable; it doesn't matter if this is a class constructor, factory function, or something which dispatches to one of several classes depending on the type, which would be something like http, http2 or websocket.

The other arguments to the constructor are:

  • channel_layer, pretty much the same as it is now. How you send to other parts of the system, or add/remove yourself from groups.
  • consumer_channel, the replacement for reply_channel for the rest of the system. Anything sent to this channel will end up being passed into the consumer's callable, just like protocol messages.
  • send, a callable which takes a message in the per-protocol format and sends it down to the client. This is where you would send a HTTP response chunk, or a WebSocket frame.

The subsequent (__call__) callable is the replacement for the current consumer abstraction in Channels; the source channel no longer matters, and instead a type field will be required in messages (in any direction; this was already needed in current Channels anyway). reply_channel is also gone from messages; it's replaced by the send argument to the constructor.
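Since the contract is just "a callable that returns a callable", a plain factory function works as well as a class. This sketch (all names invented for illustration) dispatches on the connection type the server passes in, and uses the explicit type key on every message:

```python
# Hypothetical factory satisfying the proposed contract: called once per
# connection, returns the callable that will receive every message.
def consumer_factory(type, channel_layer, consumer_channel, send):
    def websocket_consumer(message):
        # Messages carry an explicit "type" key in both directions.
        if message["type"] == "websocket.receive":
            send({"type": "websocket.send",
                  "text": "You said: %s" % message["text"]})

    def http_consumer(message):
        if message["type"] == "http.request":
            send({"type": "http.response", "status": 200, "content": b"OK"})

    return {"websocket": websocket_consumer, "http": http_consumer}[type]

# Usage: the server constructs the consumer, then feeds it protocol and
# channel-layer messages alike. Here "send" is just a list append.
outbox = []
consumer = consumer_factory("websocket", None, "consumer!abc", outbox.append)
consumer({"type": "websocket.receive", "text": "hi"})
```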


Async will initially be an all-or-nothing deal - either the server you are running inside uses async and so you need an async consumer, or it's synchronous and your code must be as well (the channel layer type also follows, of course).

This seems the best way to keep a clean API, though it does have the unfortunate side-effect of making two separate ecosystems (but then, this is true of sync and async code in Python in general). I have some hope that we can find a way to adapt async channel layers and APIs to have automatic synchronous versions, but that needs more research (if you have any tips on this, please get in touch).

As for the consumer API, it will be the same for both, except that the consumer's callable (__call__) will be expected to be async in an async consumer. The constructor will not be, as it's not possible (to my knowledge) to have class constructors be async.


Group membership will remain largely the same under the hood; rather than using channel_layer.group_add and channel_layer.group_discard with the reply_channel, they will instead be used with the consumer_channel.

However, the reframing I want to do here is in how it's presented to the developer. Because groups now feed into the Consumer class, rather than direct to the client, they are more useful as general signal broadcasting, with per-client customisation and handling possible thanks to the consumer class.

I also want to suppress any idea that groups have a membership or list of channels internally, and remove the group_members endpoint that's currently on the channel layer but heavily discouraged. They'll instead be presented as pure broadcast, and people who want connection or status tracking can implement that in the consumer code with a separate datastore instead. I'll also look at removing more of the delivery restrictions in the specification so backends can better implement them.
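As a sketch of that "separate datastore" approach (entirely hypothetical names; a plain dict stands in for Redis or a database table), presence tracking can live in consumer code rather than in the channel layer:

```python
# Hypothetical presence tracking kept outside the channel layer: the
# consumer bumps a counter in its own store on connect/disconnect
# instead of asking groups for a membership list.
presence = {}

def on_connect(room):
    presence[room] = presence.get(room, 0) + 1

def on_disconnect(room):
    # Never drop below zero, even if a connect event was missed.
    presence[room] = max(presence.get(room, 0) - 1, 0)

on_connect("general")
on_connect("general")
on_disconnect("general")
print(presence["general"])  # → 1
```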

Cross-process send

This will remain largely the same. Sending to a specific consumer is now channel_layer.send(consumer_channel, message), and with the inclusion of the type key in a message it's easier to deal with a variety of possible incoming messages in the consumer.

Sending to a group remains exactly the same - channel_layer.send_group(group, message) - but the message, again, now goes to consumers rather than directly to the client.

Background tasks

Channels has always presented the idea of running "background tasks", but without much in the way of advice as to what these are. People often think they're a replacement for Celery - which they are not, directly, as the guarantees are different (at-most-once versus at-least-once), along with the APIs (no task status in Channels, for example).

In the same vein as re-framing Groups, I want to make it clear what the design and guarantees are, that you can have named channels to send tasks to, and also add a small bit of API to make it easier to have a loop that listens on a channel and sends responses.
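That "small bit of API" might amount to little more than a listen loop. This is a sketch under stated assumptions - a stdlib queue stands in for a named channel, and the message format is invented - but it shows the shape, and why at-most-once delivery matters: a worker that crashes mid-task simply loses the message, unlike Celery's at-least-once model.

```python
import queue

# Hypothetical worker loop listening on a named "channel" for tasks and
# sending results back on another. max_messages bounds the demo loop.
def worker_loop(tasks, results, max_messages):
    for _ in range(max_messages):
        message = tasks.get()
        # The "type" key says what work this is; no task status exists.
        if message["type"] == "thumbnail.generate":
            results.put({"type": "thumbnail.done", "id": message["id"]})

tasks = queue.Queue()
results = queue.Queue()
tasks.put({"type": "thumbnail.generate", "id": 42})
worker_loop(tasks, results, max_messages=1)
result = results.get()
print(result)  # → {'type': 'thumbnail.done', 'id': 42}
```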

This is a relatively minor bit of Channels, however, and at some point I expect people to use ASGI channel layers more directly if they want a more advanced non-response-based flow (as we do at Eventbrite for SOA transport).

Remote Consumers

Of course, there are valid reasons to run consumers remotely from your protocol termination server, and even with this new interface it will be possible to have a pair of interfaces (a consumer-to-channel-layer bridge, and a channel-layer-to-consumer worker) that let you run Channels in basically the same layout it is now. This won't be a priority, but I like a design that lets you plug in components like this.


So, after all that, what am I proposing to change? Here's a short example of what Django code might look like (with a consumer superclass that handles the basics for us, and dispatches based on message type):

class ChatConsumer(DjangoConsumer):

    type = "websocket"

    # Room name drawn from URL pattern: /chat/room/foobar/
    def websocket_connect(self, message, room_name):
        self.room = Rooms.objects.get(slug=room_name)
        self.room.announce("User %s has joined" % self.user)
        self.send({"type": "websocket.send", "text": "OK"})

    def websocket_receive(self, message):
        ...

    # This method is called by the group when it sends messages of type
    # "group.message" to us.
    def group_message(self, message):
        self.send({"type": "websocket.send", "text": message['text']})

    def websocket_disconnect(self, message):
        ...

Here's a quick summary of the main changes:

  • Splitting the ASGI spec up into three parts: "servers", "channel layers", and "protocols", so that people can use just the server and protocol parts if they only want to build in-process stuff.
  • Consumers turn from a Django-only implementation of Channels to the primary interface between a server and the user's code (getting more generic in the process) - and that code now runs in-process with the protocol server.
  • Because consumers now run in the same process where the socket is terminated, there's no need to use Django sessions to store data; consumers can use normal variables instead.
  • This also means that reply_channel, @enforce_ordering and @channel_session are all gone.
  • You send cross-process messages to consumers, rather than directly to the sockets they are coupled to, so that you can implement send formats for the socket in a single place.
  • Channel layers have a separate asynchronous implementation allowing full async code to run against them, and protocol servers can choose what mode(s) to implement.
  • Messages have an explicit type key so you don't need to guess what they are based on their schema.
  • Deployment will change to be fatter Daphne (or other protocol server) instances with the code embedded, and no worker server processes (unless you want to do background tasks of some kind).

And what stays the same?

  • The protocol formats for HTTP and WebSocket stay near-identical; the main thing will be to drop the path keys in the receive and disconnect messages as this is now provided with the consumer class instance.
  • The channel layer send/receive/group APIs, barring optional extension to provide parallel async versions
  • Routing inside Django will remain very similar to the end user, though we'll move around a few internal pieces to make it a direct new-style consumer. We might swap channel names out for type names and make you write all consumers as classes, but I'm not sure yet.
  • Databinding will likely remain very similar, but you'll be able to handle outgoing events individually in consumers rather than formatting for a single send directly to a socket.
  • Client-side code and interactions won't change at all.

Obviously, this isn't a final specification document - we'll develop that as we go along, changing things slightly as the code is implemented and realities hit, but I wanted to get this post out to give everyone a better idea of what the aim is.

Of course, we'll do our best to maintain backwards compatibility, but there may be some cases where we'll have to provide helpful error messages for old APIs or upgrade guides instead.

I'd love to hear any feedback (positive, or negative with suggestions for alternatives); you can find my email address, IRC handle and Twitter handle on my about page.

This is quite a big change for Channels, but I think it's for the best, and based on talking to people developing with it over the last few years, it should fix a lot of the idiosyncrasies that people run into. It's impossible to know what the best fit is until we get there, and even then nothing will be perfect, but I'm still determined to end up with a good solution for async-capable Python web interfaces, be it directly or by inspiring competition.

Django 1.11+ django.contrib.auth Class Based Views - Part 2 - Password Change and Reset

Jul 10 2017 · Published at GoDjango Screencasts and Tutorials

Now that we can log in and log out, what about managing our passwords? Learn the power of using the built-in Generic Class Based Views now in django.contrib.auth. They are simple to use once you know about them.
Watch Now...


Jul 10 2017 · Published at Latest Django packages added

Distributed Task Queue (development branch)


Jul 10 2017 · Published at Latest Django packages added

FIDO U2F security token support for Django

Domain Name for Django Development Server

Jul 09 2017 · Published at DjangoTricks under tags advanced debugging development domain intermediate


Isn't it strange that when browsing the web you usually access websites by domain name, yet while developing a Django website you usually access it through an IP address? Wouldn't it be handy to navigate your local website by domain name too? Let's have a look at what possibilities there are to access the local development server by a domain name.

Access via IP Address

You probably know the following line by heart since your first day of developing with Django, and can type it with your eyes closed:

(myenv)$ python manage.py runserver

When you run the runserver management command, it starts a lightweight Django development server which by default listens to HTTP requests on your local machine's port 8000, whereas by default HTTP websites run on port 80 and HTTPS websites run on port 443. Enter in a browser and you can click through your Django project.

Note that this is a local address and it is not accessible from other devices in the network. Other people accessing the same address from their computers will see what is provided by web servers on their own machines, if any web server is running there at all.

Each device in a local network has its own Internet Protocol (IP) address. There are two versions of IP addresses: IPv4, typically formed from four decimal numbers separated by dots (e.g., and IPv6, formed from hexadecimal numbers separated by colons (e.g. [fe80::200:f8ff:fe21:67cf]). An IP address can be set automatically and generated dynamically when you connect to the network, or you can set it manually and make it static. For example, a printer in the network will usually have a static address, whereas a mobile phone or tablet will have a dynamically assigned IP address.

If you want to access a responsive website on your computer from another device in the network, I recommend setting the IP address manually in the network settings. It is much more convenient to have an address that doesn't change every time you connect to the same network - you can bookmark it or use it in different configuration files. Just don't let it clash with the IP addresses of other devices in the network.

Then run the local development server, passing the IP address and port 8000:

(myenv)$ python manage.py runserver

The address is a special case. It allows you to access the website through any IP address that is assigned to your computer:,, or the one that is set in your network settings. To access the website through any of those addresses, you will have to list them in your Django setting ALLOWED_HOSTS.
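A minimal settings sketch of what that might look like (the LAN address below is a hypothetical example; substitute your own):

```python
# settings.py (sketch): every host name or IP address you will type into
# the browser must appear in ALLOWED_HOSTS, or Django responds with a
# "400 Bad Request" error page.
ALLOWED_HOSTS = [
    "localhost",
    "",
    "",  # hypothetical static address on the local network
]
```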

Moreover, this allows you to check the website you are building through your computer's IP address, e.g., not only from your computer, but also from any smartphone, tablet, or other computer on the same local network. Through the same IP address you can also access the website from a virtual machine. For example, by installing Windows in Parallels Desktop on a Mac, you can test how Django websites behave in Opera, Microsoft Edge, or Internet Explorer.

Domain Names for Local Host

Sometimes you want to address the website you are developing using a unique host name. This is necessary either when you have subdomains which lead to different parts of the website (e.g. http://aidas.example.com should show my profile), or when you need to test social authentication (e.g. using Python Social Auth).

One of the ways to deal with that is configuring a hosts file, which allows you to map host names to IP addresses manually. Unfortunately, the hosts file doesn't support wildcard entries, such as <anything>.example.com, so for any new subdomain you will need to modify the file as a Super User on Unix-based operating systems or as System Administrator on Windows.

A better way is to use a wildcard domain name that points to the IP address of the local host, You can either set it up yourself at a domain provider, or use one of the available services.

For example, localtest.me by Scott Forsyth allows you to have unlimited wildcard entries pointing to the local host. So all of the following domains would show a website running at the local host:


Whichever domains you need to make work, don't forget to add them to ALLOWED_HOSTS in the Django project settings.

This enables you to use authentication with Facebook or payments through PayPal (except for Instant Payment Notification, which we'll cover a little later).

You can also test subdomain resolution. For example, a Django context processor might parse the subdomain and add some context variables, or a middleware might parse the subdomain and rewrite the path or redirect to a specific view.
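As an illustrative sketch (the class name and behavior here are made up, not from the original article), such a middleware could extract the subdomain and attach it to the request object for later use in views:

    # A hypothetical middleware that parses the subdomain from the
    # requested host name and stores it on the request.
    class SubdomainMiddleware(object):
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            host = request.get_host().split(":")[0]  # drop the port, if any
            parts = host.split(".")
            # e.g. "aidas.example.com" -> subdomain "aidas"
            request.subdomain = parts[0] if len(parts) > 2 else None
            return self.get_response(request)


    # Quick demonstration with a stand-in request object
    class FakeRequest(object):
        def get_host(self):
            return "aidas.example.com:8000"

    request = FakeRequest()
    SubdomainMiddleware(lambda req: None)(request)
    print(request.subdomain)
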

Unfortunately, you can't test the website from an iPhone or iPad using such an address, as it always resolves to, i.e. to the device itself. And setting up your own domain's Address Record (A record) to point to the static IP of a computer in the local network is too inconvenient.

Domain Names for Local IP

There is another service, xip.io, provided by Basecamp, which allows you to use wildcard domain entries pointing to a specific IP address.

Supposing that your computer's IP address is, all of the following domains would show a website served by your computer's local web server:


Add them to ALLOWED_HOSTS in the project settings and you can check the website from any capable device in the local network.

Unless you are using the standard port 80, you will always have to add the port number. Also, your website will be served unsecured over HTTP, not HTTPS, and in some cases you will need to test the Django website under secure conditions, for example, when creating a Facebook canvas app or working with payments.

Tunnelling

Sometimes you want to demonstrate your fresh website to other participants at a hackathon. Or you want to share your website temporarily with interested colleagues or friends. Or you need to test services that use webhooks: HTTP callbacks that post data to your server on specific events, like Instant Payment Notification at PayPal or notifications about sent SMS messages at Twilio.

One way to do that is to have a remote staging website and to deploy to it very often to test the development results. For that you need a specific domain and server, and probably some automation for deployment. You will also need to log all activities and read log files in the Terminal, with no ability to make use of handy visual PyCharm debugging with breakpoints.

This is quite inconvenient. Luckily, alternatives to this method exist.

Tunnels are systems that expose your local host to the public Internet. A tunnel has a frontend (the server through which the website will be accessed) and a backend (your own development machine). By creating a tunnel, you open access through a firewall from the frontend server to local servers running on specified ports.

The best known open source tunnelling systems are ngrok.com, localtunnel.me, and pagekite.net. Let's have a look at each of them.

ngrok

Although it is not under active development now (the last commit was more than a year ago), ngrok is the most popular one. At the time of writing, it has 10573 GitHub stars. The tool was written in the Go programming language.

ngrok is a freemium service, giving you one persistent session and one randomly generated subdomain for free; but if you want to customize the setup or even install it on your own servers, you have to pay an annual fee.

To start a tunnel for a local Django project, you would type the following in the Terminal:

$ ngrok http 8000

Then anybody on the Internet could access your by entering something like https://92832de0.ngrok.io in their browser's address bar.

The default ngrok configuration would also start a special website running at http://localhost:4040 that would show the details of the traffic to and from your Django website.

If you are a paying customer and want to have a custom subdomain for your website, you can start the tunnel typing this in the Terminal:

$ ngrok http -subdomain=myproject 8000

This would create a domain like https://myproject.ngrok.io that would show the content of the Django project on your local host.

Using Canonical Name Records (CNAME records) in DNS configuration, it is also possible to create tunnels within ngrok under custom domain names like https://dev.example.com, and even wildcard entries like https://<anything>.dev.example.com.

To restrict access only to specific users, you can also use the Basic authentication with the following command:

$ ngrok http -auth="username:password" 8000

Localtunnel

This service was created overnight at a hackathon and then published and maintained as it proved to be a useful tool. Localtunnel.me doesn't require any user account, and it creates a temporary access to your localhost under a randomly generated subdomain like https://nkfmosjsgh.localtunnel.me or a custom subdomain like https://myproject.localtunnel.me if it is available. When you close the tunnel, the address is not saved for you for future usage.

Localtunnel is free and open source. If you want or need to, you can install the frontend part on your own server, a so-called "on-premise" installation.

To start a tunnel you would normally type the following in the Terminal:

$ lt --port 8000

If you need a custom domain, you can also type this instead:

$ lt --port 8000 --subdomain myproject

Localtunnel is meant to be relatively simple for quick temporary access. Therefore, CNAME configuration and wildcard subdomains are not possible.

This project is still under active development. It was programmed in Node.js and at the time of writing has received 4832 GitHub stars.

Pagekite

Pagekite is an open-source, Python-based, pay-what-you-want solution. Compared to the previous projects it has only 368 GitHub stars, but it is also worth a try.

You can start a tunnel with Pagekite by entering a command with your private user domain name in the Terminal:

$ pagekite.py 8000 myuser.pagekite.me

This will open a secure access to your local Django project from https://myuser.pagekite.me.

For each project you can then have a separate address, like https://myproject-myuser.pagekite.me, which can be created by starting the tunnel like this:

$ pagekite.py 8000 myproject-myuser.pagekite.me

With Pagekite you can have custom domains like https://dev.example.com for your tunnel using CNAME setting in the domain configuration. It's possible to expose non-web services, for example SSH or Minecraft server, too.

Basic authentication is available using a command like this:

$ pagekite.py 8000 myproject-myuser.pagekite.me +password/username=password

Django Project Configuration

If you want to use tunnelling with your Django project, you will have to do a couple of modifications here and there:

  • Change the URL configuration to show static and media files even in non DEBUG mode:

    # urls.py
    import re
    from django.conf import settings
    from django.conf.urls import url
    from django.views.static import serve

    urlpatterns = [
        # ... the usual URL patterns of the project ...
    ]

    if settings.STATIC_URL.startswith("/"):
        urlpatterns += [
            url(r"^%s(?P<path>.*)$" % re.escape(settings.STATIC_URL.lstrip("/")),
                serve,
                # {'document_root': settings.STATIC_ROOT},
                ),
        ]
    if settings.MEDIA_URL.startswith("/"):
        urlpatterns += [
            url(r"^%s(?P<path>.*)$" % re.escape(settings.MEDIA_URL.lstrip("/")),
                serve,
                {'document_root': settings.MEDIA_ROOT},
                ),
        ]

    If you want the static files to get recognized from various apps automatically, omit the {'document_root': settings.STATIC_ROOT}. Otherwise you will have to run the collectstatic management command every time you change a CSS, JavaScript, or styling image file.

  • Have separate settings for the exposed access.

    # settings.local_exposed
    from .local import *
    DEBUG = False
    ALLOWED_HOSTS = [...] # enter the domains of your tunnel's frontend

    To use those settings run the following in your virtual environment:

    (myenv)$ python manage.py runserver --settings=settings.local_exposed --insecure

    Here the --insecure option forces automatic static file recognition from different places in your project even in non-DEBUG mode. Leave it out if you are serving the static files collected by the collectstatic management command.

Security Recommendations

This list of security recommendations is by no means complete. Use tunnelling at your own risk.

  • Don't keep tunnels running all the time. When not in need, close the connection.
  • Share the frontend URL only with trusted people. If you make the URL easy to remember or guess, set the Basic authentication for the tunnel's frontend.
  • Switch off the DEBUG mode in your Django project.
  • Have frequent backups of your project's code, media files, and database.
  • Don't use production data for development.
  • Don't use sensitive data for testing: no real passwords or API tokens of live system, use sandbox credentials for PayPal or Stripe payments, etc.
  • If you don't trust the tunnelling services, you can set up a tunnelling frontend on your own servers.

Do you see any other security issues about using tunnelling with Django development server? Then please share your thoughts in the comments.

Final Thoughts

When you are developing a responsive website with Django and need to check how it works on a mobile device, you can run the development server with and access it over your Wifi network through the IP address of your computer, or you can use xip.io to check it analogously by a domain name.

When you need to check subdomain resolution, you can use the hosts file, configure your private subdomain pointing directly to your local IP, or use localtest.me, xip.io, or one of the tunnelling services.

When you want to debug webhooks in order to get notified about executed payments, received messages, or completed serverless processes, you can use ngrok.com, localtunnel.me, pagekite.net, or some other tunnelling service. Or, of course, you can set up a staging website with logging, but that makes debugging much more of a hassle.

Perhaps you know some other interesting solutions for dealing with domains and the local development server. If you do, don't hesitate to share your tips in the comments.


Jul 09 2017 [Archived Version] □ Published at Latest Django packages added

Python's Pickles

Jul 07 2017 [Archived Version] □ Published at tuts+

Pickles in Python are tasty in the sense that they represent a Python object as a string of bytes. Many things can actually be done with those bytes. For instance, you can store them in a file or database, or transfer them over a network. 

A file holding the pickled representation of a Python object is called a pickle file. The pickle file can thus be used for different purposes, like storing results to be used by another Python program or writing backups. To get the original Python object back, you simply unpickle that string of bytes.

To pickle in Python, we will be using the pickle module. As stated in the documentation:

The pickle module implements binary protocols for serializing and de-serializing a Python object structure. “Pickling” is the process whereby a Python object hierarchy is converted into a byte stream, and “unpickling” is the inverse operation, whereby a byte stream (from a binary file or bytes-like object) is converted back into an object hierarchy. Pickling (and unpickling) is alternatively known as “serialization”, “marshalling,” or “flattening”; however, to avoid confusion, the terms used here are “pickling” and “unpickling”.

The pickle module allows us to store almost any Python object directly to a file or string without the need to perform any conversions. What the pickle module actually performs is so-called object serialization, that is, converting objects to and from strings of bytes. The object to be pickled will be serialized into a stream of bytes that can be written to a file, for instance, and restored at a later point.

Installing pickle

The pickle module actually comes already bundled with your Python installation. In order to get a list of the installed modules, you can type the following command in the Python prompt: help('modules').

So all you need to do to work with the pickle module is to import pickle!

Creating a Pickle File

From this section onwards, we'll take a look at some examples of pickling to understand the concept better. Let's start by creating a pickle file from an object. Our object here will be the todo list we made in the Python's lists tutorial.

In order to pickle our list object (todo), we can do the following:
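A minimal sketch of that step; the contents of the todo list here are made up, since the original comes from the Python's lists tutorial:

    import pickle

    # Hypothetical todo list (the original was built in the lists tutorial)
    todo = ['write blog post', 'reply to email', 'read in a book']

    # Open a pickle file and dump the todo object into it
    with open('todo.pickle', 'wb') as pickle_file:
        pickle.dump(todo, pickle_file)
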

Notice that we have made an import pickle to be able to use the pickle module. We have also created a pickle file to store the pickled object in, namely todo.pickle. The dump function writes a pickled representation of todo to the open file object pickle_file. In other words, the dump function here has two arguments: the object to pickle, which is the todo list, and a file object where we want to write the pickle, which is todo.pickle.

Unpickling (Restoring) the Pickled Data

Say that we would like to unpickle (restore) the pickled data; in our case, this is the todo list. To do that, we can write the following script:
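A sketch of such a script, recreating the pickle file first so the snippet stands alone (the list contents are again hypothetical):

    import pickle

    # Recreate the pickle from the previous step
    todo = ['write blog post', 'reply to email', 'read in a book']
    with open('todo.pickle', 'wb') as f:
        pickle.dump(todo, f)

    # Unpickle (restore) the todo list from the file
    with open('todo.pickle', 'rb') as pickle_file:
        restored_todo = pickle.load(pickle_file)
    print(restored_todo)
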

The above script will output the todo list items.

As mentioned in the documentation, the load(file) function does the following:

Read a string from the open file object file and interpret it as a pickle data stream, reconstructing and returning the original object hierarchy. This is equivalent to Unpickler(file).load().

Pickles as Strings

In the above section, we saw how we can write/load pickles to/from a file. This is not necessary, however. I mean that if we want to write/load pickles, we don't always need to deal with files—we can instead work with pickles as strings. We can thus do the following:
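For example (hypothetical list contents; note that in Python 3 the result is a bytes object rather than a str):

    import pickle

    todo = ['write blog post', 'reply to email', 'read in a book']

    # Serialize the object to a bytes string instead of a file
    pickled_todo = pickle.dumps(todo)
    print(type(pickled_todo))
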

Notice that we have used the dumps (with an "s" at the end) function, which, according to the documentation:

Returns the pickled representation of the object as a string, instead of writing it to a file.

In order to restore the pickled data above, we can use the loads(string) function, as follows:
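A sketch of that, with the dumps call included so the snippet stands alone:

    import pickle

    todo = ['write blog post', 'reply to email', 'read in a book']
    pickled_todo = pickle.dumps(todo)

    # Restore the object from its pickled bytes representation
    restored_todo = pickle.loads(pickled_todo)
    print(restored_todo)
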

According to the documentation, what the loads function does is that it:

Reads a pickled object hierarchy from a string. Characters in the string past the pickled object’s representation are ignored.

Pickling More Than One Object

In the above examples, we have dealt with pickling and restoring (loading) only one object at a time. In this section, I'm going to show you how we can do that for more than one object. Say that we have the following objects:
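The exact objects from the original tutorials are not reproduced here; as hypothetical stand-ins, assume a list, a dictionary, a tuple, and a string:

    # Hypothetical stand-ins for the four objects
    todo = ['write blog post', 'reply to email', 'read in a book']
    grades = {'math': 90, 'biology': 85}
    languages = ('Python', 'JavaScript')
    note = 'remember the backups'
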

If you would like to learn more about Python dictionaries and tuples, check the tutorials on those topics.

We can simply pickle the above objects by running a series of dump functions, as follows:
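For example (using the hypothetical objects from above):

    import pickle

    todo = ['write blog post', 'reply to email', 'read in a book']
    grades = {'math': 90, 'biology': 85}
    languages = ('Python', 'JavaScript')
    note = 'remember the backups'

    # A series of dump calls writes the objects one after another
    with open('pickled_file.pickle', 'wb') as pickle_file:
        pickle.dump(todo, pickle_file)
        pickle.dump(grades, pickle_file)
        pickle.dump(languages, pickle_file)
        pickle.dump(note, pickle_file)
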

This will pickle all four objects in the pickle file pickled_file.pickle.

There is another wonderful way to write the above script using the Pickler class in the pickle module, as follows:
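A sketch of the Pickler version, again with hypothetical object values:

    import pickle

    todo = ['write blog post', 'reply to email', 'read in a book']
    grades = {'math': 90, 'biology': 85}
    languages = ('Python', 'JavaScript')
    note = 'remember the backups'

    # A single Pickler instance can dump several objects to the same file
    with open('pickled_file.pickle', 'wb') as pickle_file:
        pickler = pickle.Pickler(pickle_file)
        pickler.dump(todo)
        pickler.dump(grades)
        pickler.dump(languages)
        pickler.dump(note)
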

To restore (load) the original data, we can simply use more than one load function, as follows:
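A sketch of that, recreating the pickle file first so the snippet stands alone (object values are hypothetical):

    import pickle

    # Recreate the pickle file with four objects
    objects = [
        ['write blog post', 'reply to email', 'read in a book'],
        {'math': 90, 'biology': 85},
        ('Python', 'JavaScript'),
        'remember the backups',
    ]
    with open('pickled_file.pickle', 'wb') as f:
        for obj in objects:
            pickle.dump(obj, f)

    # Load the objects back, in the same order they were dumped
    with open('pickled_file.pickle', 'rb') as pickle_file:
        todo = pickle.load(pickle_file)
        grades = pickle.load(pickle_file)
        languages = pickle.load(pickle_file)
        note = pickle.load(pickle_file)
    print(todo, grades, languages, note)
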

The output of the above script is the four restored objects.

As with the Pickler class, we can rewrite the above script using the Unpickler class in the pickle module, as follows:
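A sketch of the Unpickler version, self-contained with hypothetical values:

    import pickle

    # Recreate the pickle file with four objects
    objects = [
        ['write blog post', 'reply to email', 'read in a book'],
        {'math': 90, 'biology': 85},
        ('Python', 'JavaScript'),
        'remember the backups',
    ]
    with open('pickled_file.pickle', 'wb') as f:
        for obj in objects:
            pickle.dump(obj, f)

    # A single Unpickler instance reads the objects back one by one
    with open('pickled_file.pickle', 'rb') as pickle_file:
        unpickler = pickle.Unpickler(pickle_file)
        todo = unpickler.load()
        grades = unpickler.load()
        languages = unpickler.load()
        note = unpickler.load()
    print(todo, grades, languages, note)
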

Note that the variables have to be written and read in the same order to get the desired output. To avoid any issues here, we can use a dictionary to administer the data, as follows:
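For example, the objects can be gathered in a dictionary keyed by name, so restoring no longer depends on the dump order (values are hypothetical):

    import pickle

    # A dictionary keyed by name avoids depending on the dump order
    data = {
        'todo': ['write blog post', 'reply to email', 'read in a book'],
        'grades': {'math': 90, 'biology': 85},
        'languages': ('Python', 'JavaScript'),
        'note': 'remember the backups',
    }
    with open('pickled_file.pickle', 'wb') as pickle_file:
        pickle.dump(data, pickle_file)
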

To restore (load) the data pickled in the above script, we can do the following:
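A sketch of that, recreating the pickled dictionary first so the snippet stands alone:

    import pickle

    # Recreate the pickled dictionary (hypothetical values)
    data = {
        'todo': ['write blog post', 'reply to email', 'read in a book'],
        'grades': {'math': 90, 'biology': 85},
        'languages': ('Python', 'JavaScript'),
        'note': 'remember the backups',
    }
    with open('pickled_file.pickle', 'wb') as f:
        pickle.dump(data, f)

    # Load the dictionary and pick the objects out by key
    with open('pickled_file.pickle', 'rb') as pickle_file:
        restored = pickle.load(pickle_file)
    todo = restored['todo']
    grades = restored['grades']
    print(todo, grades)
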

Pickles and Pandas

Well, this seems an interesting combination. If you are wondering what pandas is, you can learn more about it from the Introducing Pandas tutorial. The basic data structure of pandas is called a DataFrame, a tabular data structure composed of ordered columns and rows.

Let's take an example of DataFrame from the Pandas tutorial:
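The values below are placeholders, not the ones from the pandas tutorial:

    import pandas as pd

    # Hypothetical data standing in for the tutorial's example
    df = pd.DataFrame({
        'name': ['Ali', 'Bea', 'Carl'],
        'score': [88, 92, 79],
    })
    print(df)
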

In order to pickle our DataFrame, we can use the to_pickle() function, as follows:
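For example (the DataFrame contents are hypothetical):

    import pandas as pd

    df = pd.DataFrame({'name': ['Ali', 'Bea', 'Carl'], 'score': [88, 92, 79]})

    # Write the DataFrame to a pickle file
    df.to_pickle('dataframe.pickle')
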

To restore (load) the pickled DataFrame, we can use the read_pickle() function, as follows:
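A sketch of that, with the pickle file recreated first so the snippet stands alone:

    import pandas as pd

    # Recreate the pickled DataFrame (hypothetical values)
    df = pd.DataFrame({'name': ['Ali', 'Bea', 'Carl'], 'score': [88, 92, 79]})
    df.to_pickle('dataframe.pickle')

    # Read the DataFrame back from the pickle file
    restored_df = pd.read_pickle('dataframe.pickle')
    print(restored_df)
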

Putting what we have mentioned in this section all together, this is what the script that pickles and loads a pandas object looks like:
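A complete sketch, using the same hypothetical values as above:

    import pandas as pd

    # Build a small DataFrame (hypothetical values)
    df = pd.DataFrame({
        'name': ['Ali', 'Bea', 'Carl'],
        'score': [88, 92, 79],
    })

    # Pickle it ...
    df.to_pickle('dataframe.pickle')

    # ... and load it back
    restored_df = pd.read_pickle('dataframe.pickle')
    print(restored_df)
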


In this tutorial, I have covered an interesting module called pickle. We have seen how easily this module enables us to store Python objects for different purposes, such as using the object with another Python program, transferring the object across a network, saving the object for later use, etc. We can simply pickle the Python object, and unpickle (load) it when we want to restore the original object.

Feel free to see what we have available for sale and for study in the marketplace, and don't hesitate to ask any questions and provide your valuable feedback using the feed below.

Anna Makarudze joins as the Django Girls Fundraising Coordinator!

Jul 06 2017 [Archived Version] □ Published at Django Girls Blog

The Django Girls team is expanding again! We’re super excited to announce the latest addition to our team. Anna Makarudze joins us as our Fundraising Coordinator. Anna lives in Harare, Zimbabwe, and was born and raised in Masvingo. She is an ICT consultant as well as a Python/Django developer.


Anna has been involved with Django Girls since 2016 when she organised the first workshop in her hometown of Masvingo. Since then, Anna has hosted two workshops in Harare, with a third planned in August 2017. Anna has also coached for Django Girls Windhoek in 2016 and 2017!

If that wasn’t enough, Anna is also one of the organisers of PyCon Zimbabwe. The first PyCon Zimbabwe was organised in November last year, and the second is planned for this August. She also started PyLadies Harare December last year and is a volunteer on the DSF CofC Committee. How amazing!!!!!

We’re very happy that Anna is joining our team to help the organisation grow, and become even more awesome!

Good luck Anna :)

If you would like to contact Anna about fundraising for Django Girls, you can email her at [email protected] or [email protected].


Jul 06 2017 [Archived Version] □ Published at Latest Django packages added


Jul 05 2017 [Archived Version] □ Published at Latest Django packages added

DjangoDeploy is a collection of bash scripts which can be used to deploy your Django application onto an Ubuntu 16.04 server with Gunicorn and Nginx.

django-planet aggregates posts from Django-related blogs. It is not affiliated with or endorsed by the Django Project.
