What's new in Django community blogs?

DjangoCon 2017 Recap

Aug 23 2017 [Archived Version] Published at Caktus Blog

Mid-August brought travel to Spokane for several Caktus staff members attending DjangoCon 2017. As a Django shop, we were proud to sponsor and attend the event for the eighth year. Meeting and Greeting We always look forward to booth time as an opportunity to catch up with fellow Djangonauts and make new connections. Caktus was...

Image Filtering in Python

Aug 22 2017 [Archived Version] Published at tuts+

Have you ever come across a noisy image, one that was not clear when you viewed it? We come across such images very often, especially since so many images nowadays are taken with mobile phone cameras or low-resolution digital cameras.

If you had only that noisy image, which means something to you but cannot be viewed properly, would there be a way to recover it from the noise?

This is where image filtering comes into play, and this is what I will be describing in this tutorial. Let's get started!

Image Filtering

Image filtering is a popular tool used in image processing. At the end of the day, we use image filtering to remove noise and undesired features from an image, creating a better, enhanced version of that image. Two types of filters exist: linear and non-linear. Examples of linear filters are the mean and Laplacian filters. Non-linear filters include the median, minimum, maximum, and Sobel filters.

Each of these filters has a specific purpose and is designed either to remove noise or to improve some aspect of the image. But how is filtering carried out? That is what we will see in the next section.

How Do We Perform Image Filtering?

In order to carry out an image filtering process, we need a filter, also called a mask. This filter is usually a two-dimensional square window, that is, a window with equal width and height.

The filter contains numbers. Those numbers are called coefficients, and they are what actually determines the effect of the filter and what the output image will look like. The figure below shows an example of a 3x3 filter with nine values (coefficients).

An example 3x3-filter

To apply the filter, the 3x3 window is slid over the image. This process of sliding a filter window over an image is called convolution in the spatial domain. The window is placed on each pixel in the image (think of a pixel as a cell in a matrix), with the center of the filter overlapping that pixel.

Once this overlap happens, the pixels in the sub-image that the filter covers are multiplied by the corresponding coefficients of the filter. This gives a new matrix of values the same size as the filter (i.e. 3x3). Finally, the central pixel value is replaced by a new value computed with a mathematical operation that depends on the type of filter used (e.g. the median for a median filter).

I know the above paragraph is a bit wordy. Let's take an example to show how an image filter is applied in action. Suppose we have the following sub-image where our filter overlapped (i and j refer to the pixel location in the sub-image, and I refers to the image):

An example sub-image

The convolution of our filter shown in the first figure with the above sub-image will look as shown below, where I_new(i,j) represents the result at location (i,j).

The process is repeated for each pixel in the image, including the pixels at the boundary of the image. But, as you can guess, part of the filter will reside outside the image when it is placed at the boundary pixels. In this case, we perform padding.

Padding simply means that, before the convolution, we insert new pixel values in the sub-image under the part of the filter that falls outside of the image, since that part does not overlap any real pixel values. Those padded pixels could be zeros or a constant value. Other methods exist for setting the padding values, but they are outside the scope of this tutorial.
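To make the sliding-window-plus-padding idea concrete, here is a minimal NumPy sketch (my own illustration, not code from the original article; `apply_filter` is an invented name) that zero-pads an image and applies an arbitrary 3x3 window function at every pixel:

```python
import numpy as np

def apply_filter(image, window_fn, size=3):
    """Slide a size x size window over `image`, zero-padding the borders,
    and replace each pixel with window_fn(window)."""
    pad = size // 2
    padded = np.pad(image, pad, mode='constant', constant_values=0)
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = window_fn(window)
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])
print(apply_filter(img, np.median))  # the center pixel becomes 50
```

Passing `np.median` as the window function gives exactly the median filter discussed below; passing `np.mean` would give the mean filter.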

I think that's enough theory for now, so let's go ahead and get our hands dirty with coding! In this tutorial, I will be explaining the median filter (i.e. non-linear) and the mean filter (i.e. linear) and how we can implement them in Python. 

Median Filter

In the median filter, we choose a sliding window that moves across all the image pixels. We collect the pixel values that fall under the filter and take the median of those values. The result is assigned to the center pixel.

Say our 3x3 filter had the following values after placing it on a sub-image:

An example median-filter

Let's see how to calculate the median. The median, in its essence, is the middle number of a sorted list of numbers. Thus, to find the median for the above filter, we simply sort the numbers from lowest to highest, and the middle of those numbers will be our median value. Sorting the values in our 3x3 window will give us the following:

To find the middle number (the median), we simply count the number of values we have, add 1 to that number, and divide by 2. This gives us the position of the middle value in the sorted list. With nine values, the median is at position (9 + 1)/2 = 5, which here is 59. This value becomes the new value of the pixel under the center of our 3x3 window.
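The figure's exact pixel values are not reproduced in this archived copy, so let's use a hypothetical 3x3 window whose median is also 59 to verify the counting rule in code:

```python
# Hypothetical pixel values for a 3x3 window (not the figure's actual ones).
window = [104, 27, 59, 35, 110, 63, 22, 97, 24]

values = sorted(window)          # [22, 24, 27, 35, 59, 63, 97, 104, 110]
middle = (len(values) + 1) // 2  # (9 + 1) / 2 = 5, i.e. the 5th value
median = values[middle - 1]      # lists are 0-indexed, so subtract 1
print(median)  # 59
```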

This type of filter is used for removing noise, and works best with images suffering from salt and pepper noise. The image below shows an example of a picture suffering from such noise:

An example of salt-and-pepper noise

Now, let's write a Python script that will apply the median filter to the above image. For this example, we will be using the OpenCV library. Kindly check Install OpenCV-Python in Windows and Install OpenCV 3.0 and Python 2.7+ on Ubuntu to install OpenCV.

To apply the median filter, we simply use OpenCV's cv2.medianBlur() function. Our script can thus look as follows:

Notice that I have used argparse, since it is good practice to stay flexible and pass the image we want to filter to our program as a command-line argument.

After passing our image as a command-line argument, we read that image using the cv2.imread() function. We then apply the median filter using the medianBlur() function, passing our image and filter size as parameters. The image is displayed using the cv2.imshow() function, and is saved to the disk using cv2.imwrite().

The result of the above script is as follows:

The result of the processed image

Well, what do you think? Very beautiful—a nice and clean image without noise.

You can download the above code from my median-filter repository on GitHub.

Mean Filter

The mean filter is an example of a linear filter. It replaces each pixel in the output image with the mean (average) value of its neighborhood. This has the effect of smoothing the image (reducing the amount of intensity variation between a pixel and its neighbors) and removing noise from it.

Thus, in mean filtering, each pixel of the image is replaced with the mean value of its neighbors, including the pixel itself. The 3x3 kernel used for mean filtering is shown in the figure below, although other kernel sizes could be used (e.g. 5x5):

An example of the Mean Filter

What the above kernel is actually trying to tell us is that we sum all the elements under the kernel and take the mean (average) of the total.

An important point to mention here is that all the elements of the mean kernel should:

  • sum to 1
  • be the same

Let's take an example to make things more clear. Say we have the following sub-image:

An example of the sub-image mean filter

When applying the mean filter, we would do the following:

The exact result is 44.3, but I rounded the result to 44. So the new value for the center pixel is 44 instead of 91.
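The figure's sub-image values are not reproduced here, so let's check the arithmetic with a hypothetical 3x3 neighborhood (center pixel 91) whose mean comes out to the same 44.3:

```python
# Hypothetical 3x3 neighborhood; 91 is the center pixel being replaced.
window = [34, 21, 12,
          66, 91, 15,
          45, 77, 38]

mean = sum(window) / len(window)  # 399 / 9 = 44.333...
print(round(mean, 1))  # 44.3
print(round(mean))     # 44, the new center-pixel value
```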

Now to the coding part. Let's say we have the following noisy image:

Another noisy image

What we want to do at this point is apply the mean filter on the above image and see the effects of applying such a filter.

The code for doing this operation is as follows:

Notice from the code that we have used a 3x3 kernel for our mean filter. We have also used the filter2D() function to apply the mean filter. The first parameter of this function is our input image, the second is the desired depth of the output image ddepth, and the third parameter is our kernel. Assigning -1 for the ddepth parameter means that the output image will have the same depth as the input image.

After running the code on our noisy image, this was the result I obtained:

A result of the noisy image after a filter

If you observe the output image, you can see that it is smoother than the noisy original. Mission accomplished!

You can download the above code from my mean filter repository on GitHub.


As we have seen in this tutorial, Python allows us to carry out advanced tasks like image filtering, especially through its OpenCV library, in a simple manner.

Additionally, don’t hesitate to see what we have available for sale and for study in the marketplace, and don't hesitate to ask any questions and provide your valuable feedback using the feed below.

Asynchronous I/O With Python 3

Aug 21 2017 [Archived Version] Published at tuts+

In this tutorial you'll go through a whirlwind tour of the asynchronous I/O facilities introduced in Python 3.4 and improved further in Python 3.5 and 3.6. 

Python previously had few great options for asynchronous programming. The new async I/O support finally brings first-class capabilities, including both high-level APIs and standard machinery that aims to unify the multiple third-party solutions (Twisted, Gevent, Tornado, asyncore, etc.).

It's important to understand that learning Python's async IO is not trivial, due to its rapid iteration, its sheer scope, and the need to provide a migration path for existing async frameworks. I'll focus on the latest and greatest to simplify things a little.

There are many moving parts that interact in interesting ways across thread boundaries, process boundaries, and remote machines. There are platform-specific differences and limitations. Let's jump right in. 

Pluggable Event Loops

The core concept of async IO is the event loop. A program may contain multiple event loops, but each thread has at most one active event loop. The event loop provides the following facilities:

  • Registering, executing and cancelling delayed calls (with timeouts).
  • Creating client and server transports for various kinds of communication.
  • Launching subprocesses and the associated transports for communication with an external program.
  • Delegating costly function calls to a pool of threads. 

Quick Example

Here is a little example that starts two coroutines and calls a function in delay. It shows how to use an event loop to power your program:
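The example itself did not survive archiving, so here is a small stand-in sketch in the same spirit (the coroutine and function names are my own): two coroutines run concurrently while a plain function is invoked on a delay.

```python
import asyncio

results = []

async def first():
    results.append('first start')
    await asyncio.sleep(0.3)           # yield to the loop while "working"
    results.append('first end')

async def second():
    results.append('second start')
    await asyncio.sleep(0.3)
    results.append('second end')

def delayed():
    results.append('delayed call')     # a plain function, not a coroutine

async def main():
    # Schedule the plain function 0.1 s from now, then run both
    # coroutines concurrently.
    asyncio.get_running_loop().call_later(0.1, delayed)
    await asyncio.gather(first(), second())

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()

print(results)
# ['first start', 'second start', 'delayed call', 'first end', 'second end']
```

The delayed call fires while both coroutines are suspended in their sleeps, which is exactly the interleaving the event loop is there to provide.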

The AbstractEventLoop class provides the basic contract for event loops. There are many things an event loop needs to support:

  • Scheduling functions and coroutines for execution
  • Creating futures and tasks
  • Managing TCP servers
  • Handling signals (on Unix)
  • Working with pipes and subprocesses

Here are the methods related to running and stopping the loop, as well as scheduling functions and coroutines: run_forever(), run_until_complete(), stop(), is_running(), is_closed(), close(), call_soon(), call_later(), call_at(), time(), and create_task().

Plugging in a new Event Loop

Asyncio is designed to support multiple implementations of event loops that adhere to its API. The key is the EventLoopPolicy class, which configures asyncio and allows control over every aspect of the event loop. Here is an example of a custom event loop called uvloop, based on libuv, which is supposed to be much faster than the alternatives (I haven't benchmarked it myself):
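The uvloop snippet itself is only two lines; since uvloop is a third-party package, the sketch below keeps those lines as a comment and demonstrates the same policy mechanism with a stdlib-only custom policy (MyEventLoopPolicy is my own illustrative name):

```python
import asyncio

# With uvloop installed, plugging it in looks like this:
#   import uvloop
#   asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

# The same mechanism works for any policy that follows the API:
class MyEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
    def new_event_loop(self):
        loop = super().new_event_loop()
        loop.created_by_my_policy = True  # tag loops made by this policy
        return loop

asyncio.set_event_loop_policy(MyEventLoopPolicy())
loop = asyncio.new_event_loop()
print(loop.created_by_my_policy)  # True
loop.close()
asyncio.set_event_loop_policy(None)  # restore the default policy
```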

That's it. Now, whenever you use any asyncio function, it's uvloop under the covers.

Coroutines, Futures, and Tasks

A coroutine is a loaded term. It is both a function that executes asynchronously and an object that needs to be scheduled. You define them by adding the async keyword before the definition:

If you call such a function, it doesn't run. Instead, it returns a coroutine object, and if you don't schedule it for execution then you'll get a warning too:

To actually execute the coroutine, we need an event loop:

That's direct scheduling. You can also chain coroutines. Note that you have to call await when invoking coroutines:
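Putting the last few paragraphs together, here is a self-contained stand-in (add_one and add_two are my own illustrative coroutines): calling a coroutine function only creates a coroutine object; the event loop actually runs it, and chained coroutines are invoked with await.

```python
import asyncio

async def add_one(n):
    await asyncio.sleep(0.01)  # pretend to do asynchronous work
    return n + 1

async def add_two(n):
    # Chaining: invoking another coroutine requires await.
    n = await add_one(n)
    return await add_one(n)

coro = add_two(40)
print(type(coro))  # <class 'coroutine'>; nothing has run yet

loop = asyncio.new_event_loop()
result = loop.run_until_complete(coro)
print(result)  # 42
loop.close()
```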

The asyncio Future class is similar to the concurrent.futures.Future class. It is not thread-safe and supports the following features:

  • adding and removing done callbacks
  • cancelling
  • setting results and exceptions

Here is how to use a future with the event loop. The take_your_time() coroutine accepts a future and sets its result after sleeping for a second.

The ensure_future() function schedules the coroutine, and run_until_complete() waits for the future to be done. Behind the curtain, it adds a done callback to the future.
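A runnable reconstruction of this flow might look as follows (the result value 42 is my own illustration, and the sleep is shortened from the article's one second):

```python
import asyncio

async def take_your_time(future):
    await asyncio.sleep(0.1)   # sleep for a bit...
    future.set_result(42)      # ...then fulfil the future

loop = asyncio.new_event_loop()
future = loop.create_future()

# Schedule the coroutine, then run the loop until the future is done.
asyncio.ensure_future(take_your_time(future), loop=loop)
loop.run_until_complete(future)

print(future.result())  # 42
loop.close()
```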

This is pretty cumbersome. Asyncio provides tasks to make working with futures and coroutines more pleasant. A Task is a subclass of Future that wraps a coroutine and that you can cancel. 

The coroutine doesn't have to accept an explicit future and set its result or exception. Here is how to perform the same operations with a task:
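With a task, the same example might shrink to this (again, the 42 is illustrative):

```python
import asyncio

async def take_your_time():
    await asyncio.sleep(0.1)
    return 42              # the task's result; no explicit future needed

loop = asyncio.new_event_loop()
task = loop.create_task(take_your_time())   # a Task is a Future subclass
result = loop.run_until_complete(task)

print(result)  # 42
loop.close()
```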

Transports, Protocols, and Streams

A transport is an abstraction of a communication channel. A transport always supports a particular protocol. Asyncio provides built-in implementations for TCP, UDP, SSL, and subprocess pipes.

If you're familiar with socket-based network programming then you'll feel right at home with transports and protocols. With Asyncio, you get asynchronous network programming in a standard way. Let's look at the infamous echo server and client (the "hello world" of networking). 

First, the echo client implements a class called EchoClient that derives from asyncio.Protocol. It keeps a reference to its event loop and the message it will send to the server upon connection.

In the connection_made() callback, it writes its message to the transport. In the data_received() method, it just prints the server's response, and in the connection_lost() method it stops the event loop. When a factory that produces EchoClient instances is passed to the loop's create_connection() method, the result is a coroutine that the loop runs until it completes.

The server is similar except that it runs forever, waiting for clients to connect. After it sends an echo response, it also closes the connection to the client and is ready for the next client to connect. 

A new instance of the EchoServer is created for each connection, so even if multiple clients connect at the same time, there will be no problem of conflicts with the transport attribute.
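The original listings are not preserved here, but a self-contained sketch of both sides, runnable in one script, could look like this (port 0 asks the OS for a free port, the message text is illustrative, and the client completes a future rather than stopping the loop, as a standalone client would, so that both ends can share one loop):

```python
import asyncio

class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        # A new EchoServer instance is created per connection, so this
        # attribute never conflicts between concurrent clients.
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)   # send the echo response...
        self.transport.close()       # ...then close, ready for the next client

class EchoClient(asyncio.Protocol):
    def __init__(self, message, done):
        self.message = message
        self.done = done             # completed when the server hangs up
        self.received = []

    def connection_made(self, transport):
        transport.write(self.message.encode())   # send upon connection

    def data_received(self, data):
        self.received.append(data.decode())      # record the response

    def connection_lost(self, exc):
        # A standalone client would call loop.stop() here instead.
        self.done.set_result(True)

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoServer, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]   # port 0: OS picks a free port

    done = loop.create_future()
    client = EchoClient('Hello, echo!', done)
    await loop.create_connection(lambda: client, '127.0.0.1', port)
    await done

    server.close()
    await server.wait_closed()
    return client

client = asyncio.new_event_loop().run_until_complete(main())
print(client.received)   # ['Hello, echo!']
```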

Here is the output after two clients connected:

Streams provide a high-level API that is based on coroutines and provides Reader and Writer abstractions. The protocols and the transports are hidden, there is no need to define your own classes, and there are no callbacks. You just await events like connection and data received. 

The client calls the open_connection() function, which returns the reader and writer objects it then uses directly. To close the connection, it closes the writer.

The server is also much simplified.
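A self-contained sketch of the streams version, with client and server in one script (handle_echo and the message are my own names), might be:

```python
import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)     # wait for the client's message
    writer.write(data)                # echo it back
    await writer.drain()
    writer.close()                    # close the connection to the client

async def main():
    server = await asyncio.start_server(handle_echo, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    # The client: no protocol class, no callbacks; just await the events.
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'Hello, streams!')
    await writer.drain()
    reply = await reader.read(100)
    writer.close()                    # closing the writer closes the connection

    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.new_event_loop().run_until_complete(main())
print(reply)  # b'Hello, streams!'
```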

Working With Sub-Processes

Asyncio covers interactions with sub-processes too. The following program launches another Python process and executes the code "import this". It is one of Python's famous Easter eggs, and it prints the "Zen of Python". Check out the output below. 

The Python process is launched in the zen() coroutine using the create_subprocess_exec() function and binds the standard output to a pipe. Then it iterates over the standard output line by line using await to give other processes or coroutines a chance to execute if output is not ready yet. 
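A reconstruction of the described zen() coroutine, reading the child's output line by line, might look like this:

```python
import asyncio
import sys

async def zen():
    # Launch a child Python process and capture its standard output.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c', 'import this',
        stdout=asyncio.subprocess.PIPE)

    lines = []
    while True:
        # Awaiting each line gives other coroutines a chance to run
        # whenever output isn't ready yet.
        line = await proc.stdout.readline()
        if not line:
            break
        lines.append(line.decode().rstrip())

    await proc.wait()
    return lines

loop = asyncio.new_event_loop()
output = loop.run_until_complete(zen())
print(output[0])  # The Zen of Python, by Tim Peters
loop.close()
```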

Note that on Windows you have to set the event loop to the ProactorEventLoop because the standard SelectorEventLoop doesn't support pipes. 



Python's asyncio is a comprehensive framework for asynchronous programming. It has a huge scope and supports both low-level as well as high-level APIs. It is still relatively young and not well understood by the community. 

I'm confident that over time best practices will emerge, and more examples will surface and make it easier to use this powerful library.


Aug 21 2017 [Archived Version] Published at Latest Django packages added

This Django app adds a new field, ConstrainedFileField, which provides the capability of checking the document's size and type.


Aug 21 2017 [Archived Version] Published at Latest Django packages added

Django model choices as Enum


Aug 19 2017 [Archived Version] Published at Latest Django packages added

Django EventStream

Aug 18 2017 [Archived Version] Published at Latest Django packages added

Reliable, multi-channel Server-Sent Events for Django, using Pushpin or Fanout Cloud

Django Patterns: Fat Models and `cached_property`

Aug 15 2017 [Archived Version] Published at Lincoln Loop

One of my favorite patterns in Django is the combination of "fat" models and cached_property from django.utils.functional.

Fat models are a general MVC concept which encourages pushing logic into methods on your Model layer rather than the Controller ("view" in Django parlance). This has a lot of benefits. It helps maintain the DRY (Don't Repeat Yourself) principle by making common logic easy to find/reuse and makes it easy to break the logic down into small testable units.

One problem with this approach is that as you break your logic down into smaller, more reusable units, you may find yourself calling the same method multiple times within a single response. If the method is particularly resource intensive, this becomes an unnecessary performance hit. A common place you'll find this is in Django templates, with patterns like this:

{% if item.top_ten_reviews %}
  {% for review in item.top_ten_reviews %}
    ...
  {% endfor %}
{% endif %}

This will call the top_ten_reviews method twice. If that method is making database calls, you've now doubled them.

Enter cached_property. This decorator caches the result of a method for the duration of the request and returns it as a property on subsequent accesses. This technique is known as memoization. Let's look at a simple example:

from django.db import models
from django.utils.functional import cached_property

class Item(models.Model):
    @cached_property
    def top_ten_reviews(self):
        """Get the 10 best reviews."""
        return self.review_set.order_by('-rating')[:10]

class Review(models.Model):
    item = models.ForeignKey(Item, on_delete=models.CASCADE)
    rating = models.PositiveIntegerField()

In my code, I can look up an Item object and reference the property item.top_ten_reviews. The first access builds and runs the queryset; subsequent accesses recall the result from an in-memory cache instead.

A couple caveats to cached_property:

  1. Like Python's built-in property, it only works for methods without an argument (other than self).
  2. It may not be thread-safe. If the object's data isn't changing between threads, it's likely safe. If you are using threads, look to the cached-property module for a thread-safe implementation.

Wrapping up, cached_property is one of my favorite "easy wins" when working on un-optimized code. It's a quick and relatively safe way to cut out repeated slow tasks without getting into full-blown caching and the ensuing invalidation challenges.

Support a Great Partnership: PyCharm and Django Team up Again

Aug 15 2017 [Archived Version] Published at The Django weblog

Last June (2016) JetBrains PyCharm partnered with the Django Software Foundation to generate a big boost to Django fundraising. The campaign was a huge success. Together we raised a total of $50,000 for the Django Software Foundation!

This year we hope to repeat that success. During the two-week campaign, buy a new PyCharm Professional Edition individual license with a 30% discount code, and all the money raised will go to the DSF’s general fundraising and the Django Fellowship program.

Promotion details

Up until Aug 28th, you can effectively donate to Django by purchasing a New Individual PyCharm Professional annual subscription at 30% off. It’s very simple:

  1. When buying a new annual PyCharm subscription in our e-store, on the checkout page, click “Have a discount code?”.
  2. Enter the following 30% discount promo code:

Alternatively, just click this shortcut link to go to the e-store with the code automatically applied.

Fill in the other required fields on the page and click the “Place order” button.

All of the income from this promotion code will go to the DSF fundraising campaign 2017 – not just the profits, but the entire sales amount, including taxes and transaction fees. The campaign will help the DSF maintain the healthy state of the Django project and continue contributing to its outreach and diversity programs.

Read more details on the special promotion page.

“Django has grown to be a world-class web framework, and coupled with PyCharm’s Django support, we can give tremendous developer productivity,” says Frank Wiles, DSF President. “Last year JetBrains was a great partner for us in raising money for the Django Software Foundation. On behalf of the community, I would like to extend our deepest thanks for their generous help. Together we hope to make this a yearly event!”

If you have any questions, get in touch with Django at fundraising@djangoproject.com or JetBrains at sales@jetbrains.com.


Aug 15 2017 [Archived Version] Published at Latest Django packages added

ShipIt Day Recap Q3 2017

Aug 14 2017 [Archived Version] Published at Caktus Blog

Caktus recently held the Q3 2017 ShipIt Day. Each quarter, employees take a step back from business as usual and take advantage of time to work on personal projects or otherwise develop skills. This quarter, we enjoyed fresh crêpes while working on a variety of projects, from coloring books to Alexa skills. Technology for Linguistics...


Aug 14 2017 [Archived Version] Published at Latest Django packages added

Search-list class-based views for Django, with form

Unsupervised Learning

Aug 10 2017 [Archived Version] Published at python, under tags: big data, data science, how-tos, machine learning, python

In my last few articles, I've looked into machine learning and how you can build a model that describes the world in some way. All of the examples I looked at were of "supervised learning", meaning that you loaded data that already had been categorized or classified in some way, and then created a model that "learned" the ways the inputs mapped to the outputs. more>>

Simple django backend to send email through SendGrid

Aug 09 2017 [Archived Version] Published at Latest Django packages added

My Experience as a Django Girls Organiser and How I Became One

Aug 07 2017 [Archived Version] Published at Django Girls Blog

This blog post was written by Umar Faruq Adamu. Thank you Umar :) 

On 04 & 05 August 2017, the Django Girls Yola community successfully carried out its first Django workshop in the city of Yola.

The journey to the workshop started when I, Umar Faruq Adamu, the organizer of the workshop, had a call from a friend who wanted to learn web development. She asked if I could fix a time when I was less busy to teach her how to build her own website. I thought about it and had an idea, since many ladies were willing to learn but had not found a way to do so. I decided to conduct a survey, reaching out to ladies in Yola to hear whether they were interested in programming and what was stopping them from pursuing it. To my greatest surprise, out of 70 people, 52 were interested in programming, and what stopped them from coding was a lack of motivation and support. The other 18 of the 70 responded that they thought “programming is for men” or “programming is not easy, I know I can’t understand it”.

It took me some days to convince those 18 people that programming wasn’t as hard as they thought, and I tried HTML with them. After some days I had a call from one of the ladies telling me that she could now write an HTML page and wanted to try back end. She asked if I could come to her school and teach her and her friends. It was then that I said, “I’ve attended a Django Girls workshop before; why shouldn’t I organize one for my community, since there are a lot of people who are interested?”

I then applied to organize one in my city and got approval. The first thing that came to mind was that a lot of people would participate, and my co-organizer and I alone couldn’t successfully deliver the content of the workshop to the expected number of participants, so I decided to look for coaches from the city and beyond. We got 15 coaches from within and outside the state who were willing to volunteer and promote programming among ladies.

With coaches in place, the second thing we settled on was finding sponsors to support the event. Our workshop was successful because of the support we got from awesome foundations and companies like the PSF, djangoproject, and flexiSAF Edusoft, and from BKC, which gave us a free hall and Internet. Your contributions to Django Girls Yola are highly appreciated, and they made a great impact on the attendees.

During the workshop, the coaches did their very best and explained the code in simpler terms for easy understanding. What we enjoyed most during the workshop was debugging errors with the attendees; they tried very hard to understand how Django works. The attendees were awesome; we enjoyed being together and learned a lot from them while we taught.

Organizing a Django workshop is the best thing I’ve ever done. Even though I have organized many events before, this was the best, because it was marked a success: the content was delivered, and the experience we got from it is awesome.

Django Girls is a non-profit that teaches programming to women all around the world. Want to help us? Support us!

See some of our photos 

djangogirls yola

last pic

group pic

After workshop


django-planet aggregates posts from Django-related blogs. It is not affiliated with or endorsed by the Django Project.
