What's new in Django community blogs?

django-video-encoding

Apr 24 2017 [Archived Version] · Published at Latest Django packages added

django-video-encoding helps to convert your videos into different formats and resolutions.


PyDDF Python Spring Sprint 2017

Apr 24 2017 [Archived Version] · Published at eGenix.com News & Events

The following announcement is translated from German, since it concerns a Python sprint in Düsseldorf, Germany.

Announcement

PyDDF Python Spring Sprint 2017 in
Düsseldorf


Saturday, 06.05.2017, 10:00-18:00
Sunday, 07.05.2017, 10:00-18:00

trivago GmbH,  Karl-Arnold-Platz 1A,  40474 Düsseldorf

Information

The Python Meeting Düsseldorf (PyDDF) is holding a Python sprint weekend in May, with the kind support of trivago GmbH.

The sprint takes place on the weekend of 6/7 May 2017 at the trivago office at Karl-Arnold-Platz 1A (not at Bennigsen-Platz 1). We have the following topic areas in mind as suggestions:
  • Openpyxl
Openpyxl is a Python library for reading and writing Excel 2010+ files.

Charlie Clark is a co-maintainer of the package.
  • Telegram bot

Telegram is a chat application used by many people. Telegram supports registering so-called bots: small programs that can be controlled from the chat, e.g. to retrieve information.

During the sprint we want to try writing a Telegram bot in Python.

  • Jython (Python implemented in Java)

    Stefan Richthofer, one of the Jython core developers, will be present and will sprint on a Jython topic, e.g.

    Using Jython:
    - Jython basics
    - Python/Java integration
    - GUIs with JavaFX in Python

    Developing Jython:
    - Jython internals
    - Bug fixes in the Jython core: can we fix a few real bugs?

    Experimental (What is already implemented? Let's try it out!):
    - JyNI
    - Jython 3
Of course, every participant may suggest further topics, e.g.
  • RaspberryPi robot (controlling a robot with a Raspberry Pi)
  • and others

Registration and further information

Everything else, including registration, can be found on the sprint page:

Participants should also sign up on the PyDDF mailing list, since that is where we coordinate:

About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel gives a good overview of the talks; we publish videos of the talks there after each meeting.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-Andre Lemburg, eGenix.com


data-migrator

Apr 24 2017 [Archived Version] · Published at Latest Django packages added

A declarative data-migration package


django-webpacker

Apr 23 2017 [Archived Version] · Published at Latest Django packages added

A Django compressor tool that bundles CSS and JS files into a single CSS file and a single JS file with webpack, and updates your HTML files with the respective file paths.


django-micro

Apr 22 2017 [Archived Version] · Published at Latest Django packages added

Django as microframework


Django ORM operations compared with Peewee

Apr 21 2017 [Archived Version] · Published at charlesleifer.com

I saw a post in my weekly Python newsletter showing Django users how to execute common queries using SQLAlchemy. Here's the Peewee version.

Setup

Assume we have a Django model that looks like the following:

class Country(models.Model):
    name = models.CharField(max_length=255, unique=True)
    continent = models.CharField(max_length=50)

In Peewee, our table definition looks almost identical:

from peewee import *  # Peewee defines __all__, so import * is common practice.


class Country(Model):
    name = CharField(unique=True)  # Default max_length of 255.
    continent = CharField(max_length=50)

Because Peewee does not have a singleton, application-wide database configuration, we also need to associate our model with a database. When working with more than one table, the common convention is to define a base Model class that configures the database; this will save you a lot of typing once you have more than one model:

from peewee import *
from playhouse.pool import PooledPostgresqlDatabase


db = PooledPostgresqlDatabase('my_app')


class BaseModel(Model):
    class Meta:
        database = db


class Country(BaseModel):
    name = CharField(unique=True)  # Default max_length of 255.
    continent = CharField(max_length=50)

To create tables, call the db.create_tables() method, passing in a list of tables. Peewee will re-order the tables so that they are created in the correct order if you have foreign keys.

def create_tables():
    db.connect()
    db.create_tables([Country])
    db.close()  # Ensure we don't leave dangling connections.

Basic SELECTs

With Peewee, fetching all columns for all Countries looks like this:

countries = Country.select()
for country in countries:
    print(country.name, 'is on', country.continent)

With Django it looks quite similar:

countries = Country.objects.all()
for country in countries:
    print(country.name, 'is on', country.continent)

With Peewee, to select only the name of the country:

countries = Country.select(Country.name)
for country in countries:
    print(country.name)

The Django way is fairly similar, but behind the scenes it also selects the id column:

countries = Country.objects.only('name')

Filtering results using the WHERE clause

With Peewee, to fetch only the names of countries in Europe, we write:

countries = Country.select(Country.name).where(Country.continent == 'Europe')

Django uses a keyword-argument hack:

countries = Country.objects.filter(continent='Europe').values('name')

More advanced filtering

If we want to find countries in Europe or Asia:

countries = (Country
             .select(Country.name)
             .where((Country.continent == 'Europe') |
                    (Country.continent == 'Asia')))

# Using "IN"
countries = (Country
             .select(Country.name)
             .where(Country.continent.in_(['Europe', 'Asia'])))

With Django, we introduce a special object called "Q":

from django.db.models import Q

countries = Country.objects.filter(Q(continent='Europe') |
                                   Q(continent='Asia'))

# Using "IN"
countries = Country.objects.filter(continent__in=['Europe', 'Asia'])

Grouping results

We want to get the number of countries per continent:

num_countries = fn.COUNT(Country.name).alias('count')
query = (Country
         .select(Country.continent, num_countries)
         .group_by(Country.continent))
for result in query:
    print('%s has %d countries' % (result.continent, result.count))

With Django we need to import a special "Count" function helper. The rows returned are dictionaries instead of model instances:

from django.db.models import Count

query = Country.objects.values('continent').annotate(count=Count('name'))
for result in query:
    print('%s has %d countries' % (result['continent'], result['count']))

Sorting rows

We want to get a list of countries, ordered by continent, then by name:

query = Country.select().order_by(Country.continent, Country.name)

With Django we use strings to reference the columns we're sorting by (this makes it more difficult to spot errors at runtime -- we'll get a strange SQL error if we make a typo, whereas Peewee will throw an AttributeError indicating the field is not valid).

query = Country.objects.all().order_by('continent', 'name')

Conclusion

Peewee maps more closely to SQL concepts, and it is internally consistent in its representation of tables, columns and functions. With Peewee, combining fields and functions follows predictable patterns. Django requires the use of special methods, special objects (like Q, Count and F), and has different semantics -- sometimes you must use keyword arguments, other times you use strings. Furthermore, those familiar with SQL will find themselves wondering how to translate a query into "Django", whereas with Peewee the APIs are very similar to their SQL counterparts. Then again, as the author of Peewee, I'm probably a little biased in its favor.

For more thoughts on the poorly-designed Django ORM, check out Shortcomings in the Django ORM.

To learn more about Peewee, check out the quick-start guide or view an example Twitter-like site written using Peewee and Flask.


Remote Full Stack Developer - Angular Frontend / Django Backend @ iRazoo / SideMoney

Apr 21 2017 [Archived Version] · Published at Djangojobs.Net latest jobs

About Us:

iRazoo.com is the web's most popular rewards program. We give you free gift cards and cash for the everyday things you already do online. Earn points when you watch videos, complete offers and answer surveys. Redeem points for gift cards to your favorite retailers or get cash back from PayPal.

Requirements:

We need an experienced full stack developer (Angular frontend / Django backend) to update and add new features to our web app (https://app.irazoo.com). We have plenty of work for the right developer and are looking for a minimum commitment of 30 hours per week.

Experience with mobile app development (specifically the Ionic framework for Angular) would be a bonus; otherwise, you must be willing to learn Ionic.

You would be working with a lead developer in a small team environment. We are looking for candidates who are self-starters, can learn new technologies, and can handle the responsibility of working in a fast-paced environment.

You will be asked to answer the following questions when submitting a proposal:

  • What experience do you have with Angular? Please give examples of recent apps you have worked on.

  • What experience do you have with Django and Python? Please give examples.


django-secure-mail

Apr 20 2017 [Archived Version] · Published at Latest Django packages added


Introduction to Parallel and Concurrent Programming in Python

Apr 20 2017 [Archived Version] · Published at tuts+

Python is one of the most popular languages for data processing and data science in general. The ecosystem provides a lot of libraries and frameworks that facilitate high-performance computing. Doing parallel programming in Python can prove quite tricky, though.

In this tutorial, we're going to study why parallelism is hard, especially in the Python context, and we'll go through the following:

  • Why is parallelism tricky in Python (hint: it's because of the GIL—the global interpreter lock).
  • Threads vs. Processes: Different ways of achieving parallelism. When to use one over the other?
  • Parallel vs. Concurrent: Why in some cases we can settle for concurrency rather than parallelism.
  • Building a simple but practical example using the various techniques discussed.

Global Interpreter Lock

The Global Interpreter Lock (GIL) is one of the most controversial subjects in the Python world. In CPython, the most popular implementation of Python, the GIL is a mutex that makes things thread-safe. The GIL makes it easy to integrate with external libraries that are not thread-safe, and it makes non-parallel code faster. This comes at a cost, though. Due to the GIL, we can't achieve true parallelism via multithreading. Basically, two different native threads of the same process can't run Python code at once.

Things are not that bad, though, and here's why: stuff that happens outside the GIL realm is free to be parallel. In this category fall long-running tasks like I/O and, fortunately, libraries like numpy.

Threads vs. Processes

So Python threads can't truly run in parallel. But what is a thread exactly? Let's take a step back and look at things in perspective.

A process is a basic operating system abstraction. It is a program that is in execution—in other words, code that is running. Multiple processes are always running in a computer, and they are executing in parallel.

A process can have multiple threads. They execute the same code belonging to the parent process. Ideally they run in parallel, but not necessarily. The reason processes aren't enough is that applications need to be responsive: listening for user actions while updating the display and saving a file.

If that's still a bit unclear, here's a cheatsheet:

PROCESSES                                      THREADS
Processes don't share memory                   Threads share memory
Spawning/switching processes is expensive      Spawning/switching threads is less expensive
Processes require more resources               Threads require fewer resources (sometimes called lightweight processes)
No memory synchronisation needed               You need synchronisation mechanisms to be sure you're correctly handling the data

There isn’t one recipe that accommodates everything. Choosing one is greatly dependent on the context and the task you are trying to achieve.

Parallel vs. Concurrent

Now we'll go one step further and dive into concurrency. Concurrency is often misunderstood and mistaken for parallelism, but it is not the same thing. Concurrency implies scheduling independent code to be executed in a cooperative manner: take advantage of the fact that a piece of code is waiting on I/O operations, and during that time run a different, independent part of the code.

In Python, we can achieve lightweight concurrent behaviour via greenlets. From a parallelization perspective, using threads or greenlets is equivalent because neither of them runs in parallel. Greenlets are even less expensive to create than threads. Because of that, greenlets are heavily used for performing a huge number of simple I/O tasks, like the ones usually found in networking and web servers.

Now that we know the difference between threads and processes, and between parallel and concurrent, we can illustrate how different tasks are performed under the two paradigms. Here's what we're going to do: we'll run, multiple times, a task outside the GIL and one inside it, serially, using threads, and using processes. Let's define the tasks:
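
A minimal sketch of what the two tasks might look like; only the names only_sleep and crunch_numbers come from the surrounding text, the bodies are an assumed reconstruction:

import os
import time
import threading
import multiprocessing


def only_sleep():
    """Do nothing, wait for a timer to expire (a stand-in for I/O-bound work)."""
    print("PID: %s, Process Name: %s, Thread Name: %s" % (
        os.getpid(),
        multiprocessing.current_process().name,
        threading.current_thread().name,
    ))
    time.sleep(1)


def crunch_numbers():
    """Do some CPU-bound computations."""
    print("PID: %s, Process Name: %s, Thread Name: %s" % (
        os.getpid(),
        multiprocessing.current_process().name,
        threading.current_thread().name,
    ))
    x = 0
    while x < 10000000:
        x += 1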

We've created two tasks. Both of them are long-running, but only crunch_numbers actively performs computations. Let's run only_sleep serially, with multiple threads, and with multiple processes, and compare the results:
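
Here is a sketch of a driver for that comparison, timing four runs of a task serially, with four threads, and with four processes (NUM_WORKERS and run_all_ways are illustrative names, not the article's original code):

import time
import threading
import multiprocessing

NUM_WORKERS = 4


def run_all_ways(task):
    # Serial: one run after the other, all in the main thread.
    start_time = time.time()
    for _ in range(NUM_WORKERS):
        task()
    print("Serial time=", time.time() - start_time)

    # Threads: four threads inside the same process.
    start_time = time.time()
    threads = [threading.Thread(target=task) for _ in range(NUM_WORKERS)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    print("Threads time=", time.time() - start_time)

    # Processes: four separate processes, true parallelism.
    start_time = time.time()
    processes = [multiprocessing.Process(target=task)
                 for _ in range(NUM_WORKERS)]
    for process in processes:
        process.start()
    for process in processes:
        process.join()
    print("Parallel time=", time.time() - start_time)


run_all_ways(only_sleep)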

Here's the output I've got (yours should be similar, although PIDs and times will vary a bit):

Here are some observations:

  • In the case of the serial approach, things are pretty obvious. We're running the tasks one after the other. All four runs are executed by the same thread of the same process.

  • Using processes we cut the execution time down to a quarter of the original time, simply because the tasks are executed in parallel. Notice how each task is performed in a different process and on the MainThread of that process.

  • Using threads we take advantage of the fact that the tasks can be executed concurrently. The execution time is also cut down to a quarter, even though nothing is running in parallel. Here's how that goes: we spawn the first thread and it starts waiting for the timer to expire. We pause its execution, letting it wait for the timer to expire, and in this time we spawn the second thread. We repeat this for all the threads. At one moment the timer of the first thread expires so we switch execution to it and we terminate it. The algorithm is repeated for the second and for all the other threads. At the end, the result is as if things were run in parallel. You'll also notice that the four different threads branch out from and live inside the same process: MainProcess.

  • You may even notice that the threaded approach is quicker than the truly parallel one. That's due to the overhead of spawning processes. As we noted previously, spawning and switching processes is an expensive operation.

Let's do the same routine but this time running the crunch_numbers task:
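
Using the illustrative driver sketched above:

run_all_ways(crunch_numbers)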

Here's the output I've got:

The main difference here is in the result of the multithreaded approach. This time it performs very similarly to the serial approach, and here's why: since it performs computations and Python doesn't perform real parallelism, the threads are basically running one after the other, yielding execution to one another until they all finish.

The Python Parallel/Concurrent Programming Ecosystem

Python has rich APIs for doing parallel/concurrent programming. In this tutorial we're covering the most popular ones, but you have to know that for any need you have in this domain, there's probably something already out there that can help you achieve your goal. 

In the next section, we'll build a practical application in many forms, using all of the libraries presented. Without further ado, here are the modules/libraries we're going to cover:

  • threading: The standard way of working with threads in Python. It is a higher-level API wrapper over the functionality exposed by the _thread module, which is a low-level interface over the operating system's thread implementation.

  • concurrent.futures: A module part of the standard library that provides an even higher-level abstraction layer over threads. The threads are modelled as asynchronous tasks.

  • multiprocessing: Similar to the threading module, offering a very similar interface but using processes instead of threads.

  • gevent and greenlets: Greenlets, also called micro-threads, are units of execution that can be scheduled collaboratively and can perform tasks concurrently without much overhead.

  • celery: A high-level distributed task queue. The tasks are queued and executed concurrently using various paradigms like multiprocessing or gevent.

Building a Practical Application

Knowing the theory is nice and fine, but the best way to learn is to build something practical, right? In this section, we're going to build a classic type of application going through all the different paradigms.

Let's build an application that checks the uptime of websites. There are a lot of such solutions out there, probably the best-known being Jetpack Monitor and Uptime Robot. The purpose of these apps is to notify you when your website is down so that you can quickly take action. Here's how they work:

  • The application goes very frequently over a list of website URLs and checks if those websites are up.
  • Every website should be checked every 5-10 minutes so that the downtime is not significant.
  • Instead of performing a classic HTTP GET request, it performs a HEAD request so that it does not affect your traffic significantly.
  • If the HTTP status is in the danger ranges (400+, 500+), the owner is notified.
  • The owner is notified either by email, text-message, or push notification.

Here's why it's essential to take a parallel/concurrent approach to the problem: as the list of websites grows, going through the list serially won't guarantee that every website is checked every five minutes or so. The websites could be down for hours, and the owner wouldn't be notified.

Let's start by writing some utilities:
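
Here is a minimal sketch of what such utilities could look like, using the requests library; the names (ping_website, notify_owner, check_website, WebsiteDownException) are illustrative, but the behaviour follows the requirements above (HEAD request, 400+ status codes count as down, owner gets notified):

import requests


class WebsiteDownException(Exception):
    """Raised when a website is unreachable or returns an error status."""


def ping_website(address, timeout=20):
    """Send a HEAD request; raise WebsiteDownException on 400+ or errors."""
    try:
        response = requests.head(address, timeout=timeout)
        if response.status_code >= 400:
            raise WebsiteDownException()
    except requests.exceptions.RequestException:
        raise WebsiteDownException()


def notify_owner(address):
    """Stand-in for the real email/text-message/push notification."""
    print('Notifying the owner of %s' % address)


def check_website(address):
    """Check one website and notify the owner if it is down."""
    try:
        ping_website(address)
    except WebsiteDownException:
        notify_owner(address)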

We'll actually need a website list to try our system out. Create your own list or use mine:
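
For example, a stand-in list along these lines (the last two entries are deliberately improbable domains so that some checks fail on every run):

WEBSITE_LIST = [
    'http://www.python.org',
    'https://www.djangoproject.com',
    'https://www.google.com',
    'http://really-cool-available-domain.com',
    'http://another-really-interesting-domain.co',
]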

Normally, you'd keep this list in a database along with owner contact information so that you can contact them. Since this is not the main topic of this tutorial, and for the sake of simplicity, we're just going to use this Python list.

If you paid close attention, you might have noticed two really long domains in the list that are not valid websites (I hope nobody bought them by the time you're reading this to prove me wrong!). I added these two domains to be sure we have some websites down on every run. Also, let's name our app UptimeSquirrel.

Serial Approach

First, let's try the serial approach and see how badly it performs. We'll consider this the baseline.
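
A minimal sketch, reusing the check_website() utility assumed above:

import time

start_time = time.time()
for address in WEBSITE_LIST:
    check_website(address)
print("Time for serial: %ss" % (time.time() - start_time))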

Threading Approach

We're going to get a bit more creative with the threaded approach: we put the addresses into a queue and create worker threads that pull them off the queue and process them. We'll then wait for the queue to become empty, meaning that all the addresses have been processed by our worker threads.
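
A sketch of that queue-based approach, under the same assumptions (the worker count is illustrative):

import time
from queue import Queue
from threading import Thread

NUM_WORKERS = 4
task_queue = Queue()


def worker():
    while True:
        address = task_queue.get()
        check_website(address)
        task_queue.task_done()  # Signal that this address is processed.


# Daemon threads exit together with the main thread.
for _ in range(NUM_WORKERS):
    Thread(target=worker, daemon=True).start()

start_time = time.time()
for address in WEBSITE_LIST:
    task_queue.put(address)
task_queue.join()  # Block until every queued address has been processed.
print("Time for threaded (queue): %ss" % (time.time() - start_time))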

concurrent.futures

As stated previously, concurrent.futures is a high-level API for using threads. The approach we're taking here implies using a ThreadPoolExecutor. We're going to submit tasks to the pool and get back futures, which are results that will be available to us in the future. Of course, we can wait for all futures to become actual results.
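
A sketch with ThreadPoolExecutor, under the same assumptions:

import time
import concurrent.futures

NUM_WORKERS = 4

start_time = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_WORKERS) as executor:
    futures = {executor.submit(check_website, address)
               for address in WEBSITE_LIST}
    concurrent.futures.wait(futures)  # Block until all futures resolve.
print("Time for ThreadPoolExecutor: %ss" % (time.time() - start_time))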

The Multiprocessing Approach

The multiprocessing library provides an almost drop-in replacement API for the threading library. In this case, we're going to take an approach more similar to the concurrent.futures one. We're setting up a multiprocessing.Pool and submitting tasks to it by mapping a function to the list of addresses (think of the classic Python map function).
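
A sketch with multiprocessing.Pool, under the same assumptions (on platforms that spawn rather than fork, this needs to live under an if __name__ == '__main__': guard):

import time
import multiprocessing

NUM_WORKERS = 4

start_time = time.time()
with multiprocessing.Pool(processes=NUM_WORKERS) as pool:
    pool.map(check_website, WEBSITE_LIST)  # Like map(), but across processes.
print("Time for multiprocessing: %ss" % (time.time() - start_time))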

Gevent

Gevent is a popular alternative for achieving massive concurrency. There are a few things you need to know before using it:

  • Code performed concurrently by greenlets is deterministic. As opposed to the other presented alternatives, this paradigm guarantees that for any two identical runs, you'll always get the same results in the same order.

  • You need to monkey patch standard functions so that they cooperate with gevent. Here's what I mean by that. Normally, a socket operation is blocking. We're waiting for the operation to finish. If we were in a multithreaded environment, the scheduler would simply switch to another thread while the other one is waiting for I/O. Since we're not in a multithreaded environment, gevent patches the standard functions so that they become non-blocking and return control to the gevent scheduler.

To install gevent, run: pip install gevent

Here's how to use gevent to perform our task using a gevent.pool.Pool:
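
A sketch along those lines, under the same assumptions; note that the monkey patching has to happen before requests is imported:

from gevent import monkey

# Patch blocking standard-library functions (sockets etc.) so that they
# yield to the gevent scheduler instead of blocking.
monkey.patch_all()

import time
from gevent.pool import Pool

NUM_WORKERS = 4

start_time = time.time()
pool = Pool(NUM_WORKERS)
pool.map(check_website, WEBSITE_LIST)
print("Time for gevent: %ss" % (time.time() - start_time))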

Celery

Celery's approach differs significantly from the ones we've seen so far. It is battle-tested in very complex, high-performance environments. Setting up Celery requires a bit more tinkering than the solutions above.

First, we'll need to install Celery:

pip install celery

Tasks are the central concepts within the Celery project. Everything that you'll want to run inside Celery needs to be a task. Celery offers great flexibility for running tasks: you can run them synchronously or asynchronously, real-time or scheduled, on the same machine or on multiple machines, and using threads, processes, Eventlet, or gevent.

The arrangement will be slightly more complex. Celery uses other services for sending and receiving messages. These messages are usually tasks or results from tasks. We're going to use Redis in this tutorial for this purpose. Redis is a great choice because it's really easy to install and configure, and it's really possible you already use it in your application for other purposes, such as caching and pub/sub. 

You can install Redis by following the instructions on the Redis Quick Start page. Don't forget to install the redis Python library, pip install redis, and the bundle necessary for using Redis and Celery: pip install celery[redis].

Start the Redis server like this: $ redis-server

To get started building stuff with Celery, we'll first need to create a Celery application. After that, Celery needs to know what kind of tasks it might execute. To achieve that, we need to register tasks to the Celery application. We'll do this using the @app.task decorator:
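
A sketch of such a module, named do_celery to match the worker command below; the Redis URLs assume a default local Redis, and the utilities module name is illustrative:

# do_celery.py
import time

from celery import Celery

from utils import check_website, WEBSITE_LIST  # the utilities sketched earlier

app = Celery('do_celery',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')


@app.task
def check_website_task(address):
    return check_website(address)


if __name__ == '__main__':
    start_time = time.time()
    # Queue one task per address, then block on the results.
    results = [check_website_task.delay(address) for address in WEBSITE_LIST]
    for result in results:
        result.get()
    print("Time for Celery: %ss" % (time.time() - start_time))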

Don't panic if nothing is happening. Remember, Celery is a service, and we need to run it. So far we've only placed the tasks in Redis; we haven't started Celery to execute them. To do that, we need to run this command in the folder where our code resides:

celery worker -A do_celery --loglevel=debug --concurrency=4

Now rerun the Python script and see what happens. One thing to pay attention to: notice how we passed the Redis address to our Celery application twice. The broker parameter specifies where the tasks are handed to Celery, and backend is where Celery puts the results so that we can use them in our app. If we don't specify a result backend, there's no way for us to know when a task was processed and what its result was.

Also, be aware that the logs now are in the standard output of the Celery process, so be sure to check them out in the appropriate terminal.

Conclusions

I hope this has been an interesting journey for you and a good introduction to the world of parallel/concurrent programming in Python. This is the end of the journey, and there are some conclusions we can draw:

  • There are several paradigms that help us achieve high-performance computing in Python.
  • For the multi-threaded paradigm, we have the threading and concurrent.futures libraries.
  • multiprocessing provides a very similar interface to threading but for processes rather than threads.
  • Remember that processes achieve true parallelism, but they are more expensive to create.
  • Remember that a process can have more threads running inside it.
  • Do not mistake parallel for concurrent. Remember that only the parallel approach takes advantage of multi-core processors, whereas concurrent programming intelligently schedules tasks so that actual computation can happen while we wait on long-running operations.


Real World Examples of Aurelia’s Compose Tag

Apr 19 2017 [Archived Version] · Published at Nerdy Dork under tags programming & internet

In this post I'll show two real-world examples of where I have used Aurelia's compose tag.

The post Real World Examples of Aurelia’s Compose Tag appeared first on Dustin Davis.


Recap of DjangoConEurope 2017

Apr 17 2017 [Archived Version] · Published at DjangoTricks under tags community conference djangocon florence programming

"DjangoCon, why is everybody wearing this t-shirt?" wondered the security guys in the airport of Florence, Italy, in the beginning of April. The reason for that was DjangoCon Europe 2017 happening there for a week, full of interesting talks in an exciting location.

What I liked was that the conference was not only about technical novelties in the Django world, but also about the human issues that programmers deal with in everyday life.

Interesting Non-tech Topics

According to its manifesto, the conference had the goal of strengthening the Django community and shaping a responsible attitude towards the work done with Django.

Healthy and Successful Community

We have to build stronger communities that include everyone who wants to participate, without discrimination. At first this might be difficult, as people have biases, i.e. prejudices for or against a person or group; but by being empathetic we can accept and include everyone no matter their gender identity or expression, sexual orientation, ethnicity, race, neurodiversity, age, religion, disabilities, geographical location, food diversities, body size, or family status.

Valuing diversity and individual differences is the key to a healthy, positive, and successful community that empowers its members and helps them grow stronger and happier.

Responsibility for How We Use Technology

Information technology companies (Apple, Alphabet, Microsoft, Amazon, and Facebook) are among the most valuable publicly traded companies in the world. IT connects people and their things, automates processes, and stores and processes historical data. Usually you don't need many physical resources to start an IT business. Software developers have the power to shape the future, but should use this power responsibly:

With this, great responsibility is upon us: to make the future a better place, to make the future more evenly distributed, across gender gaps and discriminations, breaking economical, political and geographical barriers.

Business

  • When creating an online business, it is important to think about the business value your product will give to people and about the way you will make money with it. Don't make assumptions without talking to your customers.
  • When choosing employees for your company, give them the freedom to choose how to prove their knowledge: with a quiz, a whiteboard interview, or a take-home programming task. Different people have different ways of best representing their skills.
  • Launch as early as possible. Track usage statistics with Google Analytics or another analytics service. Collect emails for a mailing list. Write about your product in a blog and in personalized emails.
  • Django is an open-source project based on the hard work of many professionals, so if you gain any commercial value from it and appreciate the framework, you should donate to the Django Software Foundation.

Interesting Tech Topics

From the technical point of view, I liked several ideas mentioned in the conference:

Automate your processes

  • For starting new projects, you can prepare boilerplates with the most-used functionality already in place. The Django management command startproject has a --template parameter for that, which accepts a URL to a zip file (see the example after this list).
  • Developers should have troubleshooting checklists for debugging, just like airplane pilots.
  • There are several types of tests. Unit tests check the functionality of individual functions or methods. Integration tests check how different units work together. Functional tests check how the processes of business requirements work from start to end. Finally, there is manual testing, which requires people to click through the website and fill in the forms. Some tests, like the ones involving third-party authentication or mobile phones, are hardly possible to automate. Manual testing is the most expensive in time and resources (besides being boring for the tester); functional tests come next, then integration tests, and lastly unit tests. Although automated testing adds to the development time, in the long run it makes systems more stable and error-proof.
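
For example, pointing startproject at a hypothetical boilerplate archive (the URL is illustrative):

django-admin startproject myproject --template=https://example.com/django-boilerplate.zip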

What about Django

  • You can extend the Django ORM with custom lookups, transactions, and filtered prefetches to make your QuerySets more readable and more capable (see the lookup sketch after this list).
  • Once again, PostgreSQL has more capabilities than MySQL and is more stable. Use the EXPLAIN ANALYZE ... SQL command to find the bottlenecks in your database queries; you can usually fix them by adding indexes.
  • You can have custom indexes for your database tables, to optimize the performance on PostgreSQL (or some other vendor) database.
  • Django 1.11 is out and it's a long-term support version.
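
As a taste of the custom-lookup idea mentioned above, here is the classic "not equal" lookup from the Django documentation; registering it lets you write queries like SomeModel.objects.filter(name__ne='value'):

from django.db.models import Field, Lookup


class NotEqual(Lookup):
    lookup_name = 'ne'

    def as_sql(self, compiler, connection):
        lhs, lhs_params = self.process_lhs(compiler, connection)
        rhs, rhs_params = self.process_rhs(compiler, connection)
        params = lhs_params + rhs_params
        return '%s <> %s' % (lhs, rhs), params


# Register the lookup for all field types.
Field.register_lookup(NotEqual)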

What about Third Party Django Packages

  • After analyzing the 6 most popular model translation packages (parler, hvad, klingon, modeltranslation, nece, and i18nfield) from different angles (database support, integration in django admin and forms, performance, etc.), django-hvad seemed to be the winning approach.
  • You can visually build static websites with very little coded configuration using django-cms and djangocms-cascade. The djangocms-cascade provides an alternative nested-plugins system for Django CMS.

What about Django Projects

  • If you build web apps for developing countries, you have to keep these things in mind: people might be using cell phones instead of computers (you need responsive design with small or no images), Internet connectivity is slow and unstable (websites have to be fast and should preferably have offline versions), the users do not always understand English (the websites should be translated and adapted), and locations where people live do not always have street addresses.
  • Some interesting use cases: tracking the health of the heart with Arduino and Django, providing weather data to the whole Europe using Django, managing a radio station in Birmingham using Django.

Thanks

Finally, thanks to the organizers for making this conference as great as it was. The city was beautiful, the food and coffee were delicious, and the location for the talks was impressive. Looking forward to the next DjangoCon Europe!


django-traffic

Apr 17 2017 [Archived Version] · Published at Latest Django packages added

Django middleware that helps visualize your app's traffic in Kibana


DriverRestore

Apr 15 2017 [Archived Version] · Published at Latest Django packages added


Aurelia – Sharing Data Between Components

Apr 14 2017 [Archived Version] · Published at Nerdy Dork under tags aurelia dependency injection di programming & internet

Looking back, one of the first issues I ran into in Aurelia is “how do I get my components to talk to each other?” This could be parent/child components or sibling components. It turned out the answer was simply “Dependency Injection” (or DI as you might see it referred to). But what exactly is dependency […]

The post Aurelia – Sharing Data Between Components appeared first on Dustin Davis.


Increment has launched.

Apr 14 2017 [Archived Version] · Published at Irrational Exuberance

If you haven't already taken a look, first take a quick trip over to increment.com and take a look at Increment's first issue, covering how any company can set up and succeed at on-call rotations.


Over the past three months, I've had the fairly rare opportunity to watch Susan, Mercedes and Philipp transform Increment from a rough concept into something concrete (there are printed copies!).

It's been educational. Also, impressive.

Having never seen a magazine launch before, I would never have imagined the sheer weight of details. Putting together a style guide. Drafting a process for procuring artwork. Determining the quantity of artwork to include. Picking a number of articles for each issue, and the mix between staff contributions and industry contributions (especially for a first issue, as an unproven concept). Projecting a completion rate for external contributions, and consequently how many pieces to request beyond the minimum quantity.

Then comes the process of creating the content itself. The number of emails, phone calls and reach-outs to gather background information. Determining the content calendar. Picking the article topics, and adapting them as the requested articles veer in unplanned directions. Determining a defined voice (for reasons I still find suspicious, my proposal of writing in the voice of "tech David Attenborough" didn't receive the adoption I hoped for).

Somehow Susan was able to navigate all of that, and much more, and willed this thing into existence. I'm proud of the first issue, and even more excited about what is coming down the pipe.

Overall, this project also does a pretty good job of illustrating why I remain so excited about Stripe: it has retained its ability to run a few interesting experiments, while also focusing the majority of its work on the core mission.

